
lisp is *SLOW*


Sajid Ahmed the Peaceman (Aug 1, 1997)

Fred Haineux wrote:

> In article <33DA50...@capital.net>, peac...@capital.net wrote:
> | Using recursive functions on a computer involves manipulating
> | a stack. Using iterative statements does not. QED.
>
> The point that we have been repeatedly making is that Lisp compilers are
> smart enough to turn some cases of recursive functions into iterative
> ones. By doing so, these functions do not use any stack. Even though you
> write something that looks like it's going to use the stack, it doesn't.
> Period.
>

I agree with you that some Lisp compilers may be smart
enough to turn some functions into iterative code that doesn't use
the stack, with the following conditions:

1. I can only do this with the very simplest functions. Nothing complex.
2. The compilation program is huge.
3. To do the compile, it would take a large amount of time.

>
> But this begs a bigger question: why is it so all-important not to use the
> stack? Because the stack is slow?
>
> So what!
>

If you write simple dinky little programs it wouldn't make a
difference. OTOH, if you're writing larger programs, it makes a
tremendous amount of difference. Computers have a limited amount of
physical memory, and using too much stack will deplete the memory
very quickly, and crash your program, or slow it down to a crawl,
if your O.S. (written in assembly or C, not Lisp) uses virtual memory.

Let's take the example of three way decryption algorithms.
It takes a supercomputer a full month to do the decryption. If you
write a program that's not efficient, it would take years or decades
to do the same task. By the time you get anywhere close to the end
of the program, you'll probably run into a power outage, and have to
start all over again.

> When I write a Lisp program, I start by making a wild guess as to how the
> program should work. Then I play with the program and repeatedly refine it
> until it works right.
>If, in the course of development, I happen to
> express a function as recursive or iterative, I don't care. I just get the
> darn thing working. Whatever seems right, probably is. (I might have an
> AHA! insight later, and redesign the program to be simpler or clearer.)


That is the right way to go. Forget about the recursion. If
you don't need it, get rid of it.
Sounds like you are as aggravated as I was trying to
make everything recursive. Forget about it. Start iterative, end
iterative, and then, if you need to implement something recursive, do
so. You'll think a lot clearer and get more sleep.

Peaceman

Fred Haineux (Aug 1, 1997)

| Fred Haineux wrote:
| > In article <33DA50...@capital.net>, peac...@capital.net wrote:
| > | Using recursive functions on a computer involves manipulating
| > | a stack. Using iterative statements does not. QED.
| > The point that we have been repeatedly making is that Lisp compilers are
| > smart enough to turn some cases of recursive functions into iterative
| > ones. By doing so, these functions do not use any stack. Even though you
| > write something that looks like it's going to use the stack, it doesn't.
| > Period.

| I agree with you that some Lisp compilers may be smart
| enough to turn some functions into iterative code that doesn't use
| the stack, with the following conditions:
|
| 1. I can only do this with the very simplest functions. Nothing complex.
| 2. The compilation program is huge.
| 3. To do the compile, it would take a large amount of time.

The facts prove you wrong.

1) There is no special reason that an optimizer cannot work on big
functions. After all, if you have a recursive optimizer, it will recognize
a great big loop as being a candidate for optimization, because it ignores
the details of the interior of the loop. In reality, Lisp does optimize
big loops.

So you're wrong on that count.

2) Metrowerks Code Warrior Gold is the most popular programming
environment on the Mac. It requires 3.4 MB for the IDE, plus 1.2MB for the
compiler module. Mac Common Lisp is 1.5 MB for the IDE and compiler AND
debugger AND a few other things that MW IDE doesn't include. These sizes
are disk space sizes.

When you compare the RAM requirements, MW requires 3854 K for the IDE,
plus 1 M for the compiler, plus extra RAM for your running program, plus
RAM for the debugger. Mac Common Lisp requires 4859 K for all of the above.
Obviously, the more RAM you give Lisp, the faster it goes, but when you
compare the amount of memory you need, C and Lisp typically require darn
near the same amount of RAM.

Two down. Not looking too good.

3) Lisp compiles are typically as fast as Metrowerks. One page of Lisp
code will typically take less than one second to compile, optimize, and
"link" (with a similar form of fast linking to that which Code Warrior
does).

I don't have figures to support this point, but I can see it to be true
based on two similar-sized programs I've written, one with C, one with
Lisp.

So it looks like you're just plain wrong here. You've given three claims
which are utterly false.


| > But this begs a bigger question: why is it so all-important not to use the
| > stack? Because the stack is slow?
| >
| > So what!
| >
|
| If you write simple dinky little programs it wouldn't make a
| difference. OTOH, if your writing larger programs, it makes a
| tremendous amount of difference. Computers have a limited amount of
| physical memory, and using too much stack will deplete the memory
| very quickly, and crash your program, or slow it down to a crawl,
| if your O.S. (written in assembly or c, not Lisp) uses virtual memory.

This idea that a stack somehow requires a lot of memory is bogus. Unless
you are running a wacky operating system, stack space is usually very
limited -- on the Mac, the default stack size is 12K, and the default app
size is 384K. Stack just doesn't take a lot of space by comparison to the
rest of the program.

| Let's take the example of three way decryption algorithms.
| It takes a supercomputer a full month to do the decryption. If you
| write a program that's not efficient, it would take years or decades
| to do the same task. By the time you get anywhere close to the end
| of the program, you'll probably run into a power outage, and have to
| start all over again.

OK, let's take that example. You picked a very poor one to illustrate your
point, you know, because decryption typically uses a whole lot of math,
and very little anything else. Since Lisp's math is just as good as C's,
the difference in execution time is not going to be 12 or 120
times as long, as you expect, but much smaller. But look:

When you write the program in C, you spend a large amount of time chasing
memory allocation and arithmetic precision errors -- many studies have
shown this to be the case. THEN the program works right. So THEN you go
off and instrument the program, to find out where the bottlenecks are.
Surprise! It spends 90% of its time in the math libraries. So let's say
you stop there, for now.

I write the same program in Lisp. I don't have to chase memory or math
problems, so I'm done before you are. Now I have a slow program that works
right. So I spend some time going through adding declarations and other
compiler hints. So let's say that the entire rest of MY program runs HALF
AS FAST as your C version. But I decide to stop writing, because I am a
lazy-ass Lisp hacker who doesn't realize just how important speed is.

But since we're both using the same math libraries, we both take the same
time in the math loops.

Let's say your program runs in 10 seconds to decode a simple message. Do
some math, and you'll see that mine should run in 11.25 seconds. Yours is
12.5% faster, not twice as fast. Where did the speed go? Into the math
libraries!

Is this extra 12.5% important? Not if the program only takes 10 seconds to
run. If it takes 1000 seconds, though, that's an extra 2 minutes. That's
noticeable, right?

After all, at some point, the amount of time you save running the C
program is going to pay off. Let's find out when it's going to pay off.

Well, let's make a wild-ass guess and say I can write the Lisp in half the
time it takes you to write the C. Well, that means that (and I am just
making this up) I take 4 days, and you take 8.

So at what point do you save time by using C? When the Lisp program takes
4 days longer to execute than the C program. Do the math, you get 36 days
of runtime before breakeven. If your message takes 30 days to decode, yep,
you almost get a savings by using C. But if your message takes 10 seconds
to decode, you need to decode 311,040 messages before you break even.
That's 1,000 messages a day, for almost a year.

Maybe it IS important to make the code as fast as possible. Suppose you
spend an additional 4 days writing code, so that your program spends 100%
of its time in the math libraries; your new program will now take 9
seconds to run. That's 25% faster than mine! Wow! That's a lot faster,
isn't it?

Well, no, not if your time is worth something. In order to break even,
your program has to save 8 days of runtime. So that's still 32 days of
decoding before the extra effort pays for itself.

And if *I* spend four days improving the math call speed by 20%, my new
program will take 9.45 seconds to run, and your new program will take 9
seconds to run. At that point, you've spent 12 days writing code, I've
spent 8, and your program is 5% faster than mine.

Obviously, this is a contrived example. But do you see the point I am
getting at? The point is: if ANY factor other than your own hacker prowess
bottlenecks the program, the advantage of tightly-optimized code
evaporates.

The other point is even more eye-opening: if the time it takes to write
the program "costs" something, and this cost has to be paid back, the
point of diminishing returns for optimizing C code is pretty low.

Are there cases when using a highly-optimized loop is important? OF
COURSE! Any bit-blitting routine, or file I/O routine, or numeric library,
is going to be used so darn often that any speedup will have a noticeable
effect.

But any time you spend optimizing, you have to spend it very carefully, on
places where the program spends "all" its time. If your program spends all
its time in math libraries, and my Lisp math libraries are just as good as
yours (which they are), there's almost no point in optimizing the rest of
the code.

| > When I write a Lisp program, I start by making a wild guess as to how the
| > program should work. Then I play with the program and repeatedly refine it
| > until it works right.
| >If, in the course of development, I happen to
| > express a function as recursive or iterative, I don't care. I just get the
| > darn thing working. Whatever seems right, probably is. (I might have an
| > AHA! insight later, and redesign the program to be simpler or clearer.)
|
|
| That is the right way go. Forget about the recursion. If
| you don't need it, get rid of it.
| Sounds like you are as aggravated as I was trying to
| make everything recursive. Forget about it. Start iterative, end
| iterative, then and if you need to implement something recursive, do
| so. You'll think a lot clearer and get more sleep.

Uhh, you seem to think that everyone "naturally" finds iterative
procedures always clearer and easier to understand.

News: that ain't the case!

Indeed, there are many algorithms that are "naturally" recursive. For
instance, people were recently talking about Quicksort and Heapsort. Both
of those work by "divide and conquer." Indeed, almost every sorting or
searching algorithm talks about "divide and conquer" -- and that's
recursion RIGHT THERE. To write those algorithms in iterative notation can
be much uglier than recursive notation.

In graphics, the utterly common "flood" algorithm is implicitly recursive.

And of course any fractal you care to name is just a recursive curve to be
generated!

But let me say again what I've said at least three times before:

There is NO RULE saying you have to use recursion in Lisp. Lisp supports
iteration just fine -- better than C does, as a matter of fact. You can
use EITHER notation. In many cases, the code that you generate will be
identical, because the Lisp compiler can convert recursion into iteration.

Of course, C supports recursion -- perhaps not as well as Lisp, but the
support is there, nonetheless. Recursion is "just" calling yourself with
different arguments. C, Pascal, C++, Ada, and even some modern FORTRANs
and BASICs allow that.

Yes, recursion can be a difficult concept to understand, at first. But if
you look at the visual representations of some recursive algorithms (for
instance, by looking at a visual "sort" or a flood algorithm slowed down
enough so you can see it), you can see how recursion works straightaway,
and then it's just a question of putting the right arguments into the
function calls. And believe me, THAT'S just as tough in C as it is in
Lisp.

Michael Schuerig (Aug 2, 1997)

Sajid Ahmed the Peaceman <peac...@capital.net> wrote:

> 1. I can only do this with the very simplest functions. Nothing complex.
> 2. The compilation program is huge.
> 3. To do the compile, it would take a large amount of time.

You *may* even be right on these counts. But how much time do you think
it takes to come up with an iterative solution, and how error-prone
would that be? I'd happily put more of a workload on the compiler if it
can churn out something that would take me more time to produce.


> If you write simple dinky little programs it wouldn't make a
>difference. OTOH, if your writing larger programs, it makes a
>tremendous amount of difference. Computers have a limited amount of
>physical memory, and using too much stack will deplete the memory
>very quickly, and crash your program, or slow it down to a crawl,
>if your O.S. (written in assembly or c, not Lisp) uses virtual memory.

You make an unwarranted assumption here: that your iterative algorithm
works in constant space. This seems highly unlikely. Let me venture a
guess: if the algorithm is simple enough that it works in constant space
when written iteratively, a good Lisp compiler is able to produce
constant-space code for the same algorithm written recursively.

Michael

--
Michael Schuerig I was thrown out of college for cheating on the
mailto:uzs...@uni-bonn.de metaphysics exam; I looked into the soul
http://www.uni-bonn.de/~uzs90z/ of the boy next to me. -Woody Allen

Gareth McCaughan (Aug 2, 1997)

"Sajid Ahmed the Peaceman" trolled:

> I agree with you that some Lisp compilers may be smart
> enough to turn some functions into iterative code that doesn't use
> the stack, with the following conditions:
>

> 1. I can only do this with the very simplest functions. Nothing complex.

While you may have this restriction, fortunately the same is not
true of good Lisp compilers.

> 2. The compilation program is huge.

No larger than (say) gcc.

> 3. To do the compile, it would take a large amount of time.

Nope.

> If you write simple dinky little programs it wouldn't make a
> difference. OTOH, if your writing larger programs, it makes a
> tremendous amount of difference. Computers have a limited amount of
> physical memory, and using too much stack will deplete the memory
> very quickly, and crash your program, or slow it down to a crawl,
> if your O.S. (written in assembly or c, not Lisp) uses virtual memory.

You basically don't have a clue about this, do you?

1. A lot of recursion is *tail recursion*. It's clear from other
articles of yours that you either don't know what this is, or
don't understand its implications, or choose to ignore them;
I'm not going to try to educate you on it here. But you can
do tail recursion perfectly well without using up any stack.

2. Recursion doesn't imply "using too much stack".

3. On most modern machines, "too much stack" is actually rather a
lot. Most algorithms that are best expressed using recursion
but not using tail recursion typically use an amount of stack
that is proportional to something like the logarithm of the
size of your problem. If you have 64k of stack (laughably small
as a hard limit) and your stack frames are 256 bytes in size
(way larger than average) and your maximum stack depth varies
like log-to-the-base-1.1-of the problem size, then you can do
problems of size about 39 billion.

4. If you're talking virtual memory, then actually the sort of
pattern of memory use that would come from a very large stack
is just about the best you could possibly hope for, because
its locality is extremely good.

> Let's take the example of three way decryption algorithms.
> It takes a supercomputer a full month to do the decryption. If you
> write a program that's not efficient, it would take years or decades
> to do the same task. By the time you get anywhere close to the end
> of the program, you'll probably run into a power outage, and have to
> start all over again.

What does this have to do with recursion? You surely aren't still
assuming that recursive implies inefficient, are you? Surely no one
can be *that* obtuse.

> Sounds like you are as aggravated as I was trying to
> make everything recursive. Forget about it. Start iterative, end
> iterative, then and if you need to implement something recursive, do
> so. You'll think a lot clearer and get more sleep.

If the arguments you've been displaying in this thread are meant
to be an advertisement for the clarity of thought produced by your
technique, then I hope you'll pardon me for turning it down.

--
Gareth McCaughan Dept. of Pure Mathematics & Mathematical Statistics,
gj...@dpmms.cam.ac.uk Cambridge University, England.

? the platypus {aka David Formosa} (Aug 2, 1997)

In <33E257...@capital.net> Sajid Ahmed the Peaceman <peac...@capital.net> writes:

>Fred Haineux wrote:

[...]

> I agree with you that some Lisp compilers may be smart
>enough to turn some functions into iterative code that doesn't use
>the stack, with the following conditions:

>1. I can only do this with the very simplest functions. Nothing complex.

Of course it can only do tail-recursive code (to my knowledge). However, if
you can't express it as tail recursion, you're going to have to use a stack
anyway.



>2. The compilation program is huge.

Most compilers are.



>3. To do the compile, it would take a large amount of time.

I don't see this as a problem. You don't have to compile Lisp to debug it,
so compilations are rare.

[...]

> Computers have a limited amount of physical memory, and using too much
> stack will deplete the memory very quickly,

Very unlikely. I run my Lisp on a 4 meg Linux box, and I've never had a
Lisp [1] program crash because it was out of memory.

> and crash your program, or slow it down to a crawl,
> if your O.S. (written in assembly or c, not Lisp) uses virtual memory.

In the deep dark days there was an OS written in Lisp. It was before
its time and we still mourn for it.

> Let's take the example of three way decryption algorithms.

I've never heard of such a term. Neither is it referred to in Bruce
Schneier's book nor the sci.crypt FAQ. WTF are you talking about?

>It takes a supercomputer a full month to do the decryption.

Then the cryptosystem stinks; it should take longer than the age
of the universe to break without the key.

[...]

> By the time you get anywhere close to the end
>of the program, you'll probably run into a power outage, and have to
>start all over again.

Big problems like this are normally run on thousands of computers in
parallel (like the DES challenge) or on failure-resistant OSes which can
roll back from such an event.

[...]

> Sounds like you are as aggravated as I was trying to
>make everything recursive. Forget about it. Start iterative, end
>iterative,

That is as bad as making everything recursive. I like recursion
because it allows me to make use of the recursive algorithm-builder
algorithm. For many complex tasks it allows me to find a solution
quickly and easily.


[1] Some programs have crashed because they were infinitely recursive, though.
--
Please excuse my spelling as I suffer from agraphia see the url in my header.
Never trust a country with more people than sheep. Buy Easter bilbies.
Save the ABC Is $0.08 per day too much to pay? ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument

Adam P. Jenkins (Aug 2, 1997)

Sajid Ahmed the Peaceman <peac...@capital.net> writes:
> Sounds like you are as aggravated as I was trying to
> make everything recursive. Forget about it. Start iterative, end
> iterative, then and if you need to implement something recursive, do
> so. You'll think a lot clearer and get more sleep.
>

But who's saying you should try to do everything recursively? I agree
that that could become aggravating. One of the things I found
aggravating about scheme is that it encourages you to do everything
recursively by only providing one loop structure. Though I also have
found it aggravating when I was using a language like Fortran, which
didn't support recursive functions, and I wanted to implement an
algorithm that seemed much clearer to me as a recursive function.

I agree that while any iterative algorithm can be expressed
recursively, it's often much more intuitive to write it as a loop.
But then try to do something like traverse a binary tree with an
iterative algorithm, or find the powerset of a set. The iterative
algorithms end up being much more complicated and less clear. On the
other hand, while "Print 'hello' 10 times" could be expressed
recursively, I think it's much more suited to a loop. Having a
compiler that expands tail recursions just gives you the freedom to
express what you're doing in whichever way seems more intuitive to
you.

--
Adam P. Jenkins
mailto:ajen...@cs.umass.edu

Adam P. Jenkins (Aug 2, 1997)

Sajid Ahmed the Peaceman <peac...@capital.net> writes:
> I agree with you that some Lisp compilers may be smart
> enough to turn some functions into iterative code that doesn't use
> the stack, with the following conditions:
>
> 1. I can only do this with the very simplest functions. Nothing complex.
> 2. The compilation program is huge.
> 3. To do the compile, it would take a large amount of time.
>

This is just wrong. I think you're just guessing about this. Have you
looked at Scheme? The Scheme standard actually guarantees that
implementations will turn tail-recursive functions into iterative code,
and I've never found any of the several Scheme systems I've used to be
particularly slow at compiling code.

While the CL standard doesn't guarantee tail-call elimination, the
technology clearly exists, so I don't see why any decent Lisp system
would not provide it just because the standard doesn't require it.


Rainer Joswig (Aug 2, 1997)

> I agree with you that some Lisp compilers may be smart
> enough to turn some functions into iterative code that doesn't use
> the stack, with the following conditions:
>
> 1. I can only do this with the very simplest functions. Nothing complex.
> 2. The compilation program is huge.
> 3. To do the compile, it would take a large amount of time.

Have you ever used a Common Lisp or Scheme compiler? I guess not.


> If you write simple dinky little programs it wouldn't make a
> difference. OTOH, if your writing larger programs, it makes a

> tremendous amount of difference. Computers have a limited amount of
> physical memory,

Like my Laptop only has 64 MB RAM.

> That is the right way go. Forget about the recursion. If
> you don't need it, get rid of it.


Use recursion where appropriate. It makes programs clearer.

--
http://www.lavielle.com/~joswig/

Martin Rodgers (Aug 3, 1997)

My body fell on Rainer Joswig like a dead horse, thusly:

> > If you write simple dinky little programs it wouldn't make a
> > difference. OTOH, if your writing larger programs, it makes a
> > tremendous amount of difference. Computers have a limited amount of
> > physical memory,
>
> Like my Laptop only has 64 MB RAM.

Plus some of us have happily used Lisp with merely 1 MB - or less. In
my case, 1 MB of RAM was enough to give me a Lisp with an interpreter
and native code compiler. I could probably have run it from a floppy
disk instead of a HD, but I didn't try it. Compilers for most other
languages for that machine needed either a HD or a RAM disk.

CLISP is available for DOS, and only requires a 386 and a pathetically
small amount of RAM. Again, while I've not tried it, I would be
surprised if you couldn't run it from a floppy instead of a HD. When I
last checked this, CLISP plus the editor and compiler was only 1 MB.

CLISP is freely available (see the Lisp FAQ), so anyone with one of
the supported platforms can check the requirements themselves. I do
mean _anyone_. In practice this will probably only apply to those who
are curious enough to FTP the relevant CLISP port, install it, and
then check the diskspace and RAM demands.

It's an unfortunate bias that similar info on compilers is so much
more widely "known", but that can easily be remedied - if everyone
reading this who doesn't know CLISP goes to the minimal trouble of
checking the _facts_ instead of relying on hearsay.

Remember: ignorance can be cured, while stupidity is for life.
Sajid Ahmed the Peaceman demonstrates his stupidity by refusing to
check the facts, and thus remains ignorant _by choice_. What better
proof of stupidity is needed?

Anyone not stupid who wishes to trash Lisp would do so based on the
facts, not myths. Assuming, of course, that there are facts that would
support such trashing. Since nobody who maligns Lisp ever uses facts,
we can either suppose that they can't find such facts, or that these
facts don't exist.

After 5 years of reading this kind of crap, I'm still waiting for a
C++ programmer who knows Lisp well enough to offer any facts. I
hesitate to conclude that all C++ programmers are clueless, but I
wonder why none of the clued up C++ people step in to correct their
misguided fellows? Perhaps they don't wish to get involved (who could
blame them?), but it leaves me wondering if they even exist.

Scary, isn't it?
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"My body falls on you like a dead horse" -- Portait Chinois
Please note: my email address is gubbish

Martin Rodgers (Aug 3, 1997)

My body fell on Sajid Ahmed the Peaceman like a dead horse, thusly:

> 3. To do the compile, it would take a large amount of time.

Perhaps you'd like to FTP a Lisp compiler that supports tail
recursion, write some code that uses tail recursion, compile it, and
then share the results with us? We'll be very interested.

Thanks.

Bryant Brandon (Aug 3, 1997)

Martin Rodgers fell on someone like a dead horse:

[...]

>After 5 years of reading this kind of crap, I'm still waiting for a
>C++ programmer who knows Lisp well enough to offer any facts. I
>hesitate to conclude that all C++ programmers are clueless, but I
>wonder why none of the clued up C++ people step in to correct their
>misguided fellows? Perhaps they don't wish to get involved (who could
>blame them?), but it leaves me wondering if they even exist.

Hello, I'm a C++ programmer who is slowly but surely making the
transition to Lisp. The only reason I haven't stepped in to flame the
peaceman is because I don't want to get sucked into a pointless argument
when I already have plenty going on in the politics groups. That and,
frankly, I don't know enough about Lisp yet to put up a good argument.
While I'm here, would someone kindly tell me the exact difference
between tail-recursion and generic recursion? Told you I didn't know much.
(;

>Scary, isn't it?


>--
><URL:http://www.wildcard.demon.co.uk/> You can never browse enough
> "My body falls on you like a dead horse" -- Portait Chinois
> Please note: my email address is gubbish

B.B. --"My horse falls on you like a dead body."

Martin Rodgers (Aug 4, 1997)

My body fell on Bryant Brandon like a dead horse, thusly:

> While I'm here, would someone kindly tell me the exact difference
> between tail-recursion and generic recursion? Told you I didn't know much.

I first learned of tail recursion while using Basic and assembly
language, in the form of tail calls. You replace the call with a jump.
Imagine what happens if you call a subroutine in Basic using GOTO
instead of GOSUB. In 68K assembly language, you could JMP instead of
JSR. This only makes sense when the last thing your subroutine (or
function, as we refer to them today (-; ) does is call the target
function of the tail call.

If you have an activation record on the stack, you'd need to clean it
up before making the jump. Of course, the assembly language book in
which I first read about this technique tended to use the "pass by
"register" method of parameter passing, without using the stack for an
activation record. This is also a book that gives an example of
"improving" sort performance by rewriting Basic code in assembly
language. The example used integers and bubble sort, which makes
me wonder how realistic it was. Still, the example could have used
QuickSort and the point about calling machine code from Basic on the
TRS-80 would have been made just as well. In fact, the following
example used strings...and bubble sort.

Alas, there's no index, and the structure of the book focuses more on
hardware and assembly language than programming techniques, so there's
no section on control structures. So I can't quote the author's own
explanation of tail calls. Sorry if mine doesn't make much sense, but
I'm not too good at explaining ideas like this.

David Brabant (Aug 4, 1997)

>After 5 years of reading this kind of crap, I'm still waiting for a
>C++ programmer who knows Lisp well enough to offer any facts. I
>hesitate to conclude that all C++ programmers are clueless, but I
>wonder why none of the clued up C++ people step in to correct their
>misguided fellows? Perhaps they don't wish to get involved (who could
>blame them?), but it leaves me wondering if they even exist.

Language wars are completely irrational and totally
useless. I'm certainly not a Lisp expert (but I *do*
have experience with Lisp since I've developed expert
system shells in CL during about three years and Lisp
was a "major option" for my CS degree). I have a lot
more experience in C++ (more than 15 years, in fact).
Both languages have their merits. Even FORTRAN has
its merits :-). "The right tool for the job" is my
motto.

David

--
David BrabaNT, | E-mail: David.Braba...@csl.sni.be
Siemens Nixdorf (SNI), | CIS: 100337(,)1733
Centre Software de Liège, | X-400: C=BE;A=RTT;P=SCN;O=SNI;OU1=LGG1;OU2=S1
2, rue des Fories, | S=BRABANT;G=DAVID
4020 Liège (BELGIUM) | HTTP: www.sni.de www.csl.sni.be/~david

Erik Naggum (Aug 4, 1997)

* Bryant Brandon

| While I'm here, would someone kindly tell me the exact difference between
| tail-recursion and generic recursion?

I'll try.

generally, "recursion" refers to cycles in the call tree. however, the
trivial case when you call yourself to solve a smaller problem than you
were given is more commonly referred to as "recursion" than the general
call-tree cycle.

for instance, if you are given an item and a list of items in which to
search for it, the result is either false if the list is empty, true if the
first element of the list is what you're looking for, and otherwise the
same as the result of searching for the item in the _rest_ of the list,
thus having reduced the problem to two simple cases and a smaller problem
congruent to the original problem. as in:

(defun find-item (item list)
  (cond ((endp list) nil)
        ((eq item (first list)) t)
        (t (find-item item (rest list)))))

we note that this function calls itself at the very end of itself and that
it does nothing with the value returned but return it to its own caller.
obviously, calling yourself with new arguments when the old arguments are
never needed and then immediately returning whatever such a call returns is
a special case. this special case is called "tail-recursion", and it can
be accomplished with a jump to the start of the function. consider the
straightforward rewrite (yes, this _is_ Common Lisp):

(defun find-item (item list)
  (tagbody
   recurse
     (cond ((endp list) (return-from find-item nil))
           ((eq item (first list)) (return-from find-item t))
           (t (setq list (rest list))
              (go recurse)))))

(which, incidentally, is a transformation many C programmers do by hand.)

now consider the case where two functions produce a cycle in the call tree:

(defun evenp (integer)
  (if (zerop integer)
      t
      (oddp (1- integer))))

(defun oddp (integer)
  (if (zerop integer)
      nil
      (evenp (1- integer))))

it is obvious that these functions satisfy the criterion that their
original arguments are never used after the last call, and that they return
the value of the last call directly. however, they don't call themselves,
so the obvious rewrite is not available.
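
for the curious: a compiler that compiles mutual tail calls as jumps
effectively merges the two bodies into a single loop. a hand sketch of
that merged loop (my-evenp is an illustrative name, since EVENP and
ODDP are already standard Common Lisp functions):

(defun my-evenp (integer)
  ;; the variable EVEN tracks which of the two "functions" we are in;
  ;; each former tail call becomes one trip around the loop.
  (let ((even t))
    (tagbody
     next
       (when (zerop integer)
         (return-from my-evenp even))
       (setq integer (1- integer)
             even (not even))
       (go next))))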

at this point, we're moving onto calling conventions in various languages,
and this is a large topic, so I'll just note that Scheme _requires_ a call
whose value is immediately returned to be a _jump_. that is, at assembly
level, what would have been

call <whatever>
return

_must_ be written

jump <whatever>

this is true even if the `return' instruction does a lot of work to clean
up things after itself, but only if that cleanup does not depend on
information local to the function where it occurs. e.g., in the most
common calling convention for C, the calling function must remove arguments
it has pushed on the stack (an optimization to use registers for some of
the arguments does not change this picture), and only the caller can do
this because a function does not know how it was called. this is one of
the legacies of C that are impossible to get rid of (which is why C++ has
it, despite the fact that C++ functions know how many arguments they must
have been called with), and which makes interoperability with other
languages such a hard problem.

ANSI Common Lisp does not impose the requirement to do call/return as jump
on its implementations, but many are very good at detecting cases where it
can do this safely. naturally, there are calling convention "gotchas" that
may inhibit an implementation from doing this.

hope this helps.

#\Erik
--
404 Give me a URL I cannot refuse.

Martin Rodgers

unread,
Aug 4, 1997, 3:00:00 AM8/4/97
to

My body fell on David Brabant like a dead horse, thusly:

> Both languages have their merits. Even FORTRAN has
> its merits :-). "The right tool for the job" is my
> motto.

This is a very sensible attitude. Thanks for letting me know that you
exist!

BTW, at Conscious Computing they say, "Render unto Lisp what Lisp is
due. Render unto C what C is due." I have no connection with these
people, but I do wish that there were more people like them.

Gareth McCaughan

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

"Sajid Ahmed the Peaceman" wrote:

> I would try to write your code into something less recursive, and
> more clearer, but I'm sorry to say, I don't know enough about lisp
> to follow your code.

And yet you have no hesitation about pontificating about how *SLOW*
Lisp is, and how it's impossible to write in Lisp without Evil
Recursion, and so on and on and on. Hmm.

Mukesh Prasad

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

There seems to be a pattern to these debates on Lisp.

Somebody vaguely familiar with Lisp attempts to
discuss an old truth about Lisp, e.g.
Lisp is interpreted
Lisp is slow
Lisp is recursive
etc.

The Lisp community immediately denies this.
The thrust of the denial is
- newcomer is wrong/ignorant
- newcomer does not know all the intricate details
- tons of intricate details

But invariably, the "old truth" is a real
"old truth", not something the newcomer
made up in spare time.

So the newcomer probably goes away with a
feeling that not only is Lisp slow or
whatever, but its user community consists of
hostile political spin gurus.

How about some factual information instead, which
admits to the old truth *AND* adds to it, e.g.

- Lisp used to encourage recursion. This was slow.
- Nowadays, recursion is less used in Lisp.
- All the list-management functions which
were particularly recursion prone, are now
embedded in the run-time environment from
the compiler vendor, and are coded iteratively,
often in the machine language.
- It is ok for learning purposes to duplicate
these list-management functions recursively,
as long as you remember there may be a penalty
if you do this in production code.
- Nowadays, techniques exist for automatically
changing certain types of recursion into iteration.
These techniques were developed for Lisp, but
are now also used in other languages.
- Other reasons the newcomer might want to
continue pursuing his/her interest in Lisp.

Martin Rodgers

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

Mukesh Prasad wheezed these wise words:

> There seems to be a pattern to these debates on Lisp.

I've said this myself. I wish I'd archived the similar threads I've
seen during the last 5 years. However, I am archiving _this_ one. Next
time someone tries this, I'll refer them to:

<URL:http://www.wildcard.demon.co.uk/archives/>

> Somebody vaguely familiar with Lisp attempts to
> discuss an old truth about Lisp, e.g.
> Lisp is interpreted
> Lisp is slow
> Lisp is recursive
> etc.

Either they're unfamiliar with Lisp, or they're feigning it. My guess
is that they've been ignorant and naive enough to assume that their
knowledge is complete. When they were presented with the truth, they
shut up.



> The Lisp community immediately denies this.
> The thrust of the denial is
> - newcomer is wrong/ignorant
> - newcomer does not know all the intricate details
> - tons of intricate details

This used to be done _very_ politely. Since Java has entered the
scene, the mass "attacks" have ceased, but the loners - like the
current pain in the neck - tend to get treated less gently. Or is this
just my fading memory? There's another reason for archives.



> But invariably, the "old truth" is a real
> "old truth", not something the newcomer
> made up in spare time.

Yep. Who checks these things before posting? The FAQ is there for
anyone who wants to read it.



> So the newcomer probably goes away with a
> feeling that not only is Lisp slow or
> whatever, but its user community consists of
> hostile political spin gurus.

This is the current state of affairs. Of course, if a newcomer insists
on remaining ignorant, like a certain somebody, then it might not be
so easy to be patient with them. We should ask ourselves what we're
doing wrong, if anything.



> How about some factual information instead, which
> admits to the old truth *AND* adds to it, e.g.

That would mean admitting that Lisp has not always been perfect, and
that not every implementation today is perfect in every detail! Pure
heresy. I say we do it! A little objectivity may help.



> - Lisp used to encourage recursion. This was slow.
> - Nowadays, recursion is less used in Lisp.
> - All the list-management functions which
> were particularly recursion prone, are now
> embedded in the run-time environment from
> the compiler vendor, and are coded iteratively,
> often in the machine language.
> - It is ok for learning purposes to duplicate
> these list-management functions recursively,
> as long as you remember there may be a penalty
> if you do this in production code.
> - Nowadays, techniques exist for automatically
> changing certain types of recursion into iteration.
> These techniques were developed for Lisp, but
> are now also used in other languages.
> - Other reasons the newcomer might want to
> continue pursuing his/her interest in Lisp.

Yes, that sounds good to me. It would be ideal for a special FAQ,
written for newcomers to Lisp (newbies?), with some doubts about the
language. The structure and content of the comp.lang.lisp FAQ may be
intimidating to non-Lispers. Would a more "newbie friendly" FAQ help?


--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough

"There are no limits." -- ad copy for Hellraiser

Rainer Joswig

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

In article <86afis7...@g.pet.cam.ac.uk>, Gareth McCaughan
<gj...@dpmms.cam.ac.uk> wrote:

> "Sajid Ahmed the Peaceman" wrote:
>
> > I would try to write your code into something less recursive, and
> > more clearer, but I'm sorry to say, I don't know enough about lisp
> > to follow your code.
>
> And yet you have no hesitation about pontificating about how *SLOW*
> Lisp is, and how it's impossible to write in Lisp without Evil
> Recursion, and so on and on and on. Hmm.

And recursive code often is clearer.

But it is obvious from his postings that "Sajid Ahmed the Peaceman"
has little if any programming experience.

There are a lot of slow languages/implementations very popular
nowadays. Lisp itself as a family of languages and implementations
has a wide range of options from small to big, and from
slow to fast. There are optimizing compilers.

But what is really frustrating is that speed often is not an issue
(for example the new Motorola Starmax with the 300 Mhz PowerPC 750
are blazingly fast). Yet we suffer from old-fashioned software technology
(MacOS, Windows, ...).

--
http://www.lavielle.com/~joswig/

Gareth McCaughan

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

Mukesh Prasad wrote:

> How about some factual information instead, which
> admits to the old truth *AND* adds to it, e.g.
>

> - Lisp used to encourage recursion. This was slow.

Perhaps it was, but that's not because recursion is slow.
Old Lisps (1) encouraged recursion and (2) were slow. (2) is
not a consequence of (1).

> - Nowadays, recursion is less used in Lisp.

That's probably true. (But PROGN is pretty old...)

> - All the list-management functions which
> were particularly recursion prone, are now
> embedded in the run-time environment from
> the compiler vendor, and are coded iteratively,
> often in the machine language.

They're often coded recursively, in Lisp, and compiled
to good machine code.

> - It is ok for learning purposes to duplicate
> these list-management functions recursively,
> as long as you remember there may be a penalty
> if you do this in production code.

There may be. Or there may not. It depends on your compiler.

> - Nowadays, techniques exist for automatically
> changing certain types of recursion into iteration.
> These techniques were developed for Lisp, but
> are now also used in other languages.

Very true.

> - Other reasons the newcomer might want to
> continue pursuing his/her interest in Lisp.

This particular newcomer doesn't *have* an interest in Lisp.
He's just been airing his prejudices and dismissing everyone
else as ignorant, while he says things that are known to be
false by everyone who knows anything about computer science,
or computer programming, at all.


Yes, sometimes Lisp bigots go over the top, just as with C bigots,
Java bigots and every other kind of bigot. But that's not what's
happened here. "Sajid Ahmed the Peaceman" has breezed into
comp.lang.lisp and shouted about how slow Lisp is, and given totally
bogus arguments for what he's saying. Even if all existing Lisp
implementations were as slow as he thinks, his arguments would
still be wrong. *That's* what everyone's flaming him for.

Martin Rodgers

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

Rainer Joswig wheezed these wise words:

> And recursive code often is clearer.

True. If you can think recursively. I suspect that some people can't.
Now, this may be coz they had a poor teacher, and Sajid Ahmed the
Peaceman sounds like he's had a very bad experience with teachers.

It could explain a lot.



> But it is obvious from his postings that "Sajid Ahmed the Peaceman"
> has little if any programming experience.

Do a search with Dejanews - it'll be enlightening. My guess is that
he's a support droid, which would explain his experience and knowledge
of PC hardware. It might also explain his familiarity with C. For all
we know, he may be a perfectly competent C programmer.

Quite why he assumes that his knowledge extends to compiler theory,
Lisp, the future of the Mac...is something for us to puzzle over.
Unless he chooses to explain _himself_.



> There are a lot of slow languages/implementations very popular
> nowadays. Lisp itself as a family of languages and implementations
> has a wide range of options from small to big, and from
> slow to fast. There are optimizing compilers.

The variety may be what confuses some people. Not everyone is as smart
as yourself, Rainer. Some people may judge Lisp based on their first
experience with the language, and if that happens to be with a slow
implementation, then that's how they see the language. We can try to
re-educate them, but there will always be one or two individuals who
are determined not to change their opinions, no matter what evidence
is put before them.

It's possible that someone in the position of supporting a particular
platform may find some advantages in clinging to a set of beliefs,
however misguided they may be. Put another way, new ideas are not
necessarily useful in such a job. Instead, they may get by with just
reading the manuals for products they use/sell/support, picking up
tips from support groups, colleagues, etc.

Not that that's an explanation for anyone repeatedly slagging off a tool
that _they don't use_. Could Sajid Ahmed be so bitter, after his bad
experience with Lisp, that he can't bear the thought of anyone else
using the language? Is he afraid that if we use it, then he may
someday find himself facing it again?

> But what is really frustrating is that speed often is not an issue
> (for example the new Motorola Starmax with the 300 Mhz PowerPC 750
> are blazingly fast). Yet we suffer from old-fashioned software technology
> (MacOS, Windows, ...).

The problem is that most people actually want these things. We should
accept that and move on. However, Sajid Ahmed is not complaining about
Lisp Machines, only about recursion; otherwise he'd be advocating
operating systems instead of iteration.

I suspect that his Mac reference is a pure red herring. Don't let it
worry you - it only means that he has no argument with which to refute
your claims. Congratulations. You've won!

Mukesh Prasad

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

Gareth McCaughan wrote:
>
> Mukesh Prasad wrote:
>
[snip]

> > - It is ok for learning purposes to duplicate
> > these list-management functions recursively,
> > as long as you remember there may be a penalty
> > if you do this in production code.
>
> There may be. Or there may not. It depends on your compiler.

As far as I know, *only* tail recursion can be optimized
away. You must be confusing "tail recursion" with "recursion".
Unless there are recent compilers which can optimize
away all recursive calls? (Which compilers would
that be?)

Gareth McCaughan

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

Mukesh Prasad <pra...@removit-polaroid.com> writes:

> > > - It is ok for learning purposes to duplicate
> > > these list-management functions recursively,
> > > as long as you remember there may be a penalty
> > > if you do this in production code.
> >
> > There may be. Or there may not. It depends on your compiler.
>
> As far as I know, *only* tail recursion can be optimized
> away. You must be confusing "tail recursion" with "recursion".
> Unless there are recent compilers which can optimize
> away all recursive calls? (Which compilers would
> that be?)

Er, I didn't say there were any compilers that can optimise away
all recursive calls. You said "there are these routines for doing
things with lists, which are handled iteratively in the library,
and there may be a penalty associated with doing them recursively".
I assumed that since you were talking about things being handled
iteratively you were talking about things that could be
tail-call optimised. If not, then I don't think I understand
what sort of iterative code you're suggesting the library contains.

And no, I'm not confusing "tail recursion" with "recursion".

Bryant Brandon

unread,
Aug 8, 1997, 3:00:00 AM8/8/97
to

In article <33EA60...@capital.net>, peac...@capital.net wrote:

[...]

> I would try to write your code into something less recursive, and
>more clearer, but I'm sorry to say, I don't know enough about lisp

>to follow your code. If you repost in pseudocode, I could make an
>attempt.
>
> Peaceman

Let me get this straight, you're in here complaining about Lisp, and you
can't even read it? What kind of idiot are you? I might as well go
complain that Swahili is too hard to pronounce.
BTW, "more clearer" is not proper English. How many languages are you
unfamiliar with?

B.B.

Bulent Murtezaoglu

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

jos...@lavielle.com (Rainer Joswig) writes:
[...]

> But what is really frustrating is that speed often is not an issue
> (for example the new Motorola Starmax with the 300 Mhz PowerPC 750
> are blazingly fast). Yet we suffer from old-fashioned software technology
> (MacOS, Windows, ...).

I don't entirely agree with this. "Language speed" issues do not really have
much to do with the OS used. People who buy newer, faster computers before
they turn into mass-market commodities usually have a large problem they need
to solve. Common Lisp is fine for those kinds of things: you get rapid
development, and then with the help of a good compiler and judicious use
of declarations you can make your code fast enough in most cases. CMUCL
is a very good example of what can be accomplished in that regard.

If people want to argue "C++ is fast" they'd be much better off going
after CLOS. AFAIK (and I'd love to be corrected on this) once you enjoy
the luxury of fast development using CLOS and get your code working with
generic function calls in your inner loops, you're stuck with large constants.
(Please correct me...)

BM

Richard A. O'Keefe

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Mukesh Prasad <pra...@removit-polaroid.com> writes:
>As far as I know, *only* tail recursion can be optimized
>away.

Wrong.

>Unless there are recent compilers which can optimize
>away all recursive calls?

You don't have to be able to optimise away _all_ recursive calls
to optimise away _some_ of them.

May I make what *ought* to be a blindingly obvious point?
It is possible to inline some of the calls to recursive (not tail recursive)
functions. Henry Baker has a paper on this.
I _think_ one of the Scheme compilers I have here can do it.

Let's take a trivial case:

sum([]) = 0
sum([H|T]) = H + sum(T)

This is not tail recursive. Unfold the recursive call:

sum([]) = 0
sum([H]) = H + 0
sum([H1,H2|T]) = H1 + (H2 + sum(T))

Simplify:

sum([]) = 0
sum([H]) = H
sum([H1,H2|T]) = (H1+H2) + sum(T)

Rewrite using pair? car and cdr:

(define (sum L)
  (if (pair? L)
      (if (pair? (cdr L))
          (+ (+ (car L) (cadr L)) (sum (cddr L)))
          (car L))
      0))

Not *all* of the recursive calls have gone, but *half* of them have.
If recursive calls are a bother, which they aren't usually if you
don't have to use C's procedure calling conventions, then you have
saved half of them, if your language is clean enough that the
compiler writers can afford to do this.
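
The remaining calls can be removed by another standard transformation:
pass an accumulator so that the recursive call becomes a tail call.
A Common Lisp sketch of the same function (a Scheme version using a
named LET would be analogous):

(defun sum (list)
  ;; WALK carries the running total; its self-call is a tail call,
  ;; which a tail-call-optimizing compiler compiles as a jump.
  (labels ((walk (tail acc)
             (if (endp tail)
                 acc
                 (walk (rest tail) (+ (first tail) acc)))))
    (walk list 0)))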
--
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.

Sajid Ahmed the Peaceman

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to


OK smart guy. You think you're so smart? I got a challenge for
you. See if you can answer the following question. I'll warn
you in advance, if you don't know the answer say so, otherwise you'll
end up looking stupid:


One train leaves LA, California travelling at 300 MPH to
New York City. 5 hours later a train leaves NYC travelling at 600
MPH, heading in the opposite direction towards LA, California. When
they meet, which one is closer to NYC?


Peaceman

Sajid Ahmed the Peaceman

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Gareth McCaughan wrote:
>
> And yet you have no hesitation about pontificating about how *SLOW*
> Lisp is, and how it's impossible to write in Lisp without Evil
> Recursion, and so on and on and on. Hmm.
>

No hesitation at all. All the programs I've seen, as well as all
the programs that you "trolls" in the lisp newsgroup have posted, all
majorly involve recursion. The Lisp interpreters I've seen in the past
were all slow, as well as the newer ones posted in the faq. I'm still
waiting to hear your response to my post about real world programming.
Where does Lisp fit into the picture?

Peaceman

Sajid Ahmed the Peaceman

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Marco Antoniotti wrote:
>
> I posted a complete and functioning C binary tree traversal code a few days
> ago, *begging* you to rewrite it in iterative format (sans le stack, of
> course).
>
> Seen nothing yet.
>
> Do you want the code? It's still waiting for you in my disk.
>
> Cheers
> --
> Marco Antoniotti


I never said that you could do it without a stack, that was
somebody else. What I did say, though, was that the iterative version
would be faster than the recursive version, but you would still
need to set up a stack, and maybe put in some inline assembly code
for the push and pop macros. If you didn't have the inline assembly
push and pop macros, then it may be possible to write a
faster recursive version, but if you do, then definitely not.

Peaceman

Martin Rodgers

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Sajid Ahmed the Peaceman wheezed these wise words:

> No hesitation at all. All the programs I've seen, as well as all
> the programs that you "trolls" in the lisp newsgroup have posted, all
> majorly involve recursion.

Of course the code posted here uses recursion. Some of it also uses
iteration. Wake up at the back, please.

> The Lisp interpreters I've seen in the past were
> all slow, as well as the newer ones posted in the faq.

Name them. We'd like to verify your findings.

There are certainly slow implementations, like XLISP. That's because
it's an interpreter. You'll probably also find that a C interpreter
will be slower than an implementation that generates native code.

> I'm still waiting
> to hear your response to my post about real world programming. Where
> does Lisp fit into the picture?

Hmm. Where to start?

Ok. Take a look at <URL:http://www.franz.com/frames/ha.list.html>

You might also like to read Rainer Joswig's Lisp page:
<URL:http://www.lavielle.com/~joswig/lisp.html>

Fred Gilham

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Bryant Brandon wrote:
----------------------------------------

[...]

> I would try to write your code into something less recursive, and
>more clearer, but I'm sorry to say, I don't know enough about lisp
>to follow your code. If you repost in pseudocode, I could make an
>attempt.
>
> Peaceman

Let me get this straight, you're in here complaining about Lisp, and you
can't even read it? What kind of idiot are you? I might as well go
complain that Swahili is too hard to pronounce.
BTW, "more clearer" is not proper English. How many languages are you
unfamiliar with?

----------------------------------------

Well, you have to be careful with that last remark. There's a story
about a physicist who was working on the A-bomb during WWII. He spoke
English poorly and one of the engineers constantly rode him about it.
Finally one day he said, ``You sure have a lot of trouble with
English, don't you?'' Another person remarked, ``Yes, and he has the
same problem with six other languages.''


Back to the topic....

I was reading THE ELEMENTS OF PROGRAMMING STYLE by Kernighan and
Plauger---not a trace of lisp in it, all examples done in Fortran and
PL1---and I ran into the following (Peaceman, please take note!):


Recursion represents no saving of time or storage. Somewhere
in the computer must be maintained a list of all the places a
recursive routine is called, so that the program can
eventually find its way back. But the storage for that list
is shared among many different uses. More important, it is
managed automatically; many of the burdens of storage
management are placed on the compiler, not on the programmer.
And since bookkeeping details are hidden, the program can be
much easier to understand. Learning to think recursively
takes some effort, but it is repaid with smaller and simpler
programs.

Not every problem benefits from a recursive approach, but
those that deal with data that is recursively defined often
lead to very complicated programs unless the code is also
recursive. A list, for example, can be said to consist of two
elements, where each element is either an atom or a list. To
trace through an arbitrary list requires an indefinite amount
of storage to keep track of how to get back. The recursion
mechanism provides this simply and concisely.

Even if you cannot use recursion in a situation, perhaps
because you must stick to Fortran, or because it is too
inefficient (don't believe that until you've tried it), you
will find it valuable to do the original design as a recursive
program. Then unfold the recursion, simulating the recursive
storage with your own explicitly indexed data structures. The
resulting program should be cleaner and easier to understand
than if you start from scratch.

________________________________________

Use recursive procedures for recursively-defined data structures.
________________________________________

(Pg. 77, second edition)


The first paragraph, in particular, sounds like a sales-pitch for both
recursion and garbage-collection.


--
-Fred Gilham gil...@csl.sri.com
``The road to tyranny is paved with human rights....Once the state
sets out to guarantee a right, it will destroy every obstacle that
stands in the way of liberating the oppressed.'' --Thomas Fleming

Sajid Ahmed the Peaceman

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Bryant Brandon wrote:
>
> In article <33EA60...@capital.net>, peac...@capital.net wrote:
>
> [...]
>
> > I would try to write your code into something less recursive, and
> >more clearer, but I'm sorry to say, I don't know enough about lisp
> >to follow your code. If you repost in pseudocode, I could make an
> >attempt.
> >
> > Peaceman
>
> Let me get this straight, you're in here complaining about Lisp, and you
> can't even read it? What kind of idiot are you? I might as well go
> complain that Swahili is too hard to pronounce.
> BTW, "more clearer" is not proper English. How many languages are you
> unfamiliar with?
>
> B.B.


Calling people names shows that you have no argument for your case.

Peaceman

Martin Rodgers

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Fred Gilham wheezed these wise words:

> I was reading THE ELEMENTS OF PROGRAMMING STYLE by Kernighan and
> Plauger---not a trace of lisp in it, all examples done in Fortran and
> PL1---and I ran into the following (Peaceman, please take note!):

[stuff deleted]

> The first paragraph, in particular, sounds like a sales-pitch for both
> recursion and garbage-collection.

It's a most wonderful book, IMHO. I like to recommend it to
programmers using all languages. In fact, I recently did, in another
thread not a million newsgroups from here (elsewhere in the
comp.lang.* hierarchy).

Something tells me that Peaceman has yet to read it. If he _has_ read
it, then I suggest that he re-read it, paying particular attention to
page 77. Thanks for reminding _me_ of that page.

Martin Rodgers

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Sajid Ahmed the Peaceman wheezed these wise words:

> Calling people names shows that you have no argument for your case.

Is it not reasonable to ask a simple question, like how familiar are
you with Lisp? I have a few questions of my own. Where did you learn
Lisp? Was it in a class or did you teach yourself?

You admit that you don't know enough about Lisp to follow some simple
examples, yet you consider that your knowledge of Lisp is enough to
talk about the quality of Lisp implementations. You can't have it both
ways. Either you know enough about Lisp to talk about it with us with
equal authority, in which case you will be able to follow our code, or
you should admit that we know the language - and the implementation
details - far better than you do.

Which is it?

Bryant Brandon

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

>Bryant Brandon wrote:
>>
>> In article <33EA60...@capital.net>, peac...@capital.net wrote:
>>
>> [...]
>>
>> > I would try to write your code into something less recursive, and
>> >more clearer, but I'm sorry to say, I don't know enough about lisp
>> >to follow your code. If you repost in pseudocode, I could make an
>> >attempt.
>> >
>> > Peaceman
>>
>> Let me get this straight, you're in here complaining about Lisp, and you
>> can't even read it? What kind of idiot are you? I might as well go
>> complain that Swahili is too hard to pronounce.
>> BTW, "more clearer" is not proper English. How many languages are you
>> unfamiliar with?
>>
>> B.B.
>
>

> OK smart guy. You think you're so smart? I got a challenge for
>you. See if you can answer the following question. I'll warn
>you in advance, if you don't know the answer say so, otherwise you'll
>end up looking stupid:
>
>
> One train leaves LA, California travelling at 300 MPH to
>New York City. 5 hours later a train leaves NYC travelling at 600
>MPH, heading in the opposite direction towards LA, California. When
>they meet, which one is closer to NYC?

Ooh, witty. I don't give a shit. Why are you here, complaining that
Lisp is slow when you admit that you don't know enough to read it? You
have had people who know Lisp very well explain things to you, but you
still beat your dead horse.

P.S. Don't change the subject so blatantly. If you don't want to answer
the question, go away.

> Peaceman

B.B.

Emergent Technologies Inc.

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

Bryant Brandon wrote in article <5485026EB6413842.EACA535CEB038604.37719035
ECD7...@library-proxy.airnews.net>...


> While I'm here, would someone kindly tell me the exact difference
>between tail-recursion and generic recursion? Told you I didn't know
>much.

You know enough to ask good questions! I'll assume that you're an
experienced programmer.

When the last thing a function does is call another function (or itself),
rather than pushing the return address before transfering control
(by using a CALL instruction), you can just use a JUMP instruction
and let the called function return to whoever called the caller (ignore
for the moment the arguments kicking around on the stack). If
the function you are calling is yourself, you have just transformed a
recursive call into a loop.

This started out as a hack, but Guy Steele elevated it into an important
principle. He pointed out first that you don't push stack because of
function calls per se, but because the result of the function call is
being used as an argument to another call (it is a subproblem). Thus,
*all* function calls are actually GOTOs that pass arguments, and can be
compiled as JUMPs. (The pushes and pops usually associated with a
function call are now made the responsibility of the code that wants to
do some computation after the call.) Second, he pointed out that once
you do this, iterative constructs such as while, for, and do are
trivially implemented as function calls, *and are no more expensive when
implemented that way*, because they are implemented as JUMPs. Note,
however, that while, for, and do loops are less general, in that a
general function call can pass arguments.

In Lisp, a tail recursive call occurs whenever the return value of a
function is computed by another function call. Here is an example:

(defun foo (x) (bar (baz x)))

The call to bar is in a tail recursive position because the value of a
call to foo is computed by a call to bar. Thus the code for foo can
simply JUMP to bar as its last action. In Scheme, it is required to
work this way, but in Common Lisp, it is implementation dependent,
although these days it is more likely than not.

The benefit is more than just changing a CALL to a JUMP. In the case
where the code calls itself, you simply get the same loop you would have
gotten from using the obvious while, for or do loop. However, in mutually
recursive procedures, you can generate an iteration that is non-obvious.
You can even create code that iterates or recurses dynamically depending
upon the input data (for example, it recurses down the CAR's of a tree,
but iterates down the CDR's). *And*, to top it all off, you get all this
*for free*, without even having to think about it --- just write
recursive functions and it happens like magic.
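A small sketch of that last case (my own example, not from the original
post), assuming a tail-call-optimising compiler:

```lisp
;; Hedged sketch: count the atoms in a nested list. The inner call on
;; the CAR is genuine recursion (its result feeds the accumulator),
;; while the outer call on the CDR is in tail position, so a
;; tail-call-optimising compiler runs it as a loop.
(defun count-atoms (tree acc)
  (cond ((null tree) acc)
        ((atom tree) (+ acc 1))
        (t (count-atoms (cdr tree)                       ; tail call
                        (count-atoms (car tree) acc))))) ; real recursion
```

(count-atoms '(1 (2 3) 4) 0) evaluates to 4: the walk down the CDRs runs
as iteration, while each descent into a CAR costs a real stack frame.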

In other posts, I argue that C++ compilers cannot in general compile
code this way because C and C++ explicitly require a certain amount
of cleanup to occur after the last computation in a function (deallocate
the stack frame and run the destructors). So an equivalent statement
in C++:
return bar (baz (x));
can only be turned into a jump if there aren't any cleanups to be done.

Hope this helps.
~jrm


Bryant Brandon

unread,
Aug 9, 1997, 3:00:00 AM8/9/97
to

In article <33ECA3...@capital.net>, peac...@capital.net wrote:

[...]

> Calling people names shows that you have no argument for your case.

I didn't call you an idiot, I merely asked what category you belong in.
Besides, you ignored the point of my message, why are you commenting on a
language you can't even read? I don't care about your personality.

B.B.

Christopher Oliver

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

Sajid Ahmed the Peaceman (peac...@capital.net) wrote:

: Calling people names shows that you have no argument for your case.


Huh? Who was your forensic discourse teacher? While it is certainly
true that an ad hominem attack (more generally a non sequitur) does
not contribute anything to an argument, it hardly invalidates the
other premises. So while you certainly have drawn some quite justified
rebuke, which may not contribute anything to the matter under
discussion, "existence of an iterative form for tail recursion," it
doesn't detract from the validity of the other arguments since they do
not depend on claims of your knowledge or lack thereof.

You asked for an existence argument. I and others gave examples;
this is sufficient for the purposes of this argument. Some people
exceptionally knowledgeable regarding the theory and practice of
programming languages have tried to explain more thoroughly, but you
seem bent and determined to discard the lessons without consideration.
This is the very definition of invincible, not to mention malicious,
ignorance.

--
Christopher Oliver Traverse Communications
Systems Coordinator 223 Grandview Pkwy, Suite 108
oli...@traverse.com Traverse City, Michigan, 49684
"You can even make Emacs look just like vi -- but why?" - C. Musciano

Will Hartung

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

Sajid Ahmed the Peaceman <peac...@capital.net> writes:


> One train leaves LA, California travelling at 300 MPH to
>New York City. 5 hours later a train leaves NYC travelling at 600
>MPH, heading in the opposite direction towards LA, California. When they
>meet,
>which one in closer to NYC?

Hey! I'll bite!

I always sucked at these problems in High School, but I'll give it a
whirl.

I'll assume that NY is 3000 miles away from LA. You didn't say, and I
didn't want to look it up. But, 3000 miles sounds good enough.

I then guesstimated an equation:

x*300 = 3000-(x-5)*600

To find how long it takes before the two met. Then, once you solve for
x, you plug it into the x*300, and see if it's greater than 1500.

I got 6.66 for x, and that times 300 gives me...umm...2000. So, I
would say....hey...wait a minute! This if one of those TRICK
questions!

All that math for nothing...Told you I sucked at these problems.

But d'ya know what I used as a tool to figure out the math? Rather
than relying on Algebra that's 15 years dead?

I got this little calculator called the HP-48GX. Probably one of the
most powerful calculators on the planet. 512K ROM, 128K RAM. It's got
a FOUR (4) Bit processor inside of this thing screaming at 2 or 4 MHz
(I forget). Batteries last about 6 weeks of full pop use.

Anyway, among other things, it solves symbolic equations. I type in the
above equation, just like you see it, and say "gimme X", and it did.
Zowie!

You know what they coded the ROM in? To give it all of its great
power and flexibility? Wanna take a WILD guess?

It's called RPL - Reverse Polish Lisp (ack! The 'L' word!). A cross
between Forth and Lisp. I can frankly say that the HP-48 has one of
THE most incredible development environments on the market today, all
in this little bitty box. It's got all sorts of things: Lists,
strings, matrices, and even first class functions. Garbage collection
and EVAL. Who could ask for more?

More bad news. People USE these things in the Real World(tm). Real
popular with Surveyors. I even have a client that has 30 of these
things floating around in greenhouses collecting job and cost
accounting data. The calculator plugs into a PC and gets its guts
sucked out each night into some evil FoxPRO app that I thankfully
didn't have to write.

I wrote the app for the calculator. I did it with a dream. I wasn't
tasked with "Son, go out and get us some calculators and make 'em jump
through hoops!". Heck, I wasn't even on the team doing the project.

No, I saw the calc on my own. Bought one, on MY dime, took it home,
figured it out, coded up a template app, and presented it to the
companies management because I thought it was a better solution than
what they were looking at. If they didn't bite, I was out $300 for the
calculator plus my time.

Know what? They didn't say "But, Son, that's not coded using
C/Fox/Basic/xxx!". They saw their problem solved, money saved, and
their jobs easier. That's the goal of ANY project, long term.

They could have purchased units that would have been coded in
whatever, but no. In fact, they were prototyping the app on a handheld
in 'C', and they were having nothing but problems. It can be difficult
to write, and especially debug, embedded applications. The vendor they
were dealing with was underwhelmed when he was told that they were
going with "calculators".

WHAT the app is written in is not as important as WHAT IT DOES FOR THEM.

THAT is the Real World(tm).

You may want to try opening your eyes and mind to it someday.

Oh, and as a disclaimer, the HP app was not as fast as perhaps one
written on another handheld. (The HP doesn't perform as well as an 8
MHz DOS compatible handheld - shocking, I know) And, I didn't use a
lick of recursion in the code. Somehow, the client didn't care on
either count.

Share and Enjoy!

--
Will Hartung - Rancho Santa Margarita. It's a dry heat. vfr...@netcom.com
1990 VFR750 - VFR=Very Red "Ho, HaHa, Dodge, Parry, Spin, HA! THRUST!"
1993 Explorer - Cage? Hell, it's a prison. -D. Duck

Gareth McCaughan

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

"Sajid Ahmed the Peaceman" wrote:

> > And yet you have no hesitation about pontificating about how *SLOW*
> > Lisp is, and how it's impossible to write in Lisp without Evil
> > Recursion, and so on and on and on. Hmm.
> >
>

> No hesitation at all. All the program I've seen, as well as all
> the programs that you "trolls" in the lisp newsgroup have posted all
> majorly involve recursion.

As has been pointed out before, it's foolish to mistake the limits
of your own experience for the limits of reality.

Many Lisp programs use recursion, because Lisp makes it easy to
use recursion effectively. Complaining about this is like complaining
that ever so many C programs use pointers[1], or that C++ programs
use classes[2]. Use them incompetently, and recursion will lead to
inefficient programs, pointers to crashed machines and classes to
bloat and incomprehensibility. Use them well, and they can lead to
improved clarity, efficiency and reliability.

And recursion is not rocket science. You can do it in C, you can
do it in Pascal, you can do it in most versions of BASIC. You
can't do it in old versions of FORTRAN, and I expect you can't
do it in COBOL. Are these your languages of choice?

How many times do we have to explain this? (1) Nothing in Lisp forces
you to use recursion if you don't want to. (2) If you do want to, then
Lisp makes recursion a convenient and natural way of expressing many
things. (3) Most Lisp systems do a good job of making recursion
efficient. (4) Some things are much better expressed recursively
than iteratively. (5) Some things *can't* be expressed iteratively
without gross contortions (typically amounting to implementing
recursion by hand, which can *lose* efficiency compared with letting
the compiler do it, for reasons to do with register allocation, for
instance).

And yet you keep complaining that Lisp forces you to use recursion
(which it doesn't); that this is bad because recursion is hard to
understand (it isn't, for those who can be bothered to get used to
it), and because it's inefficient (it isn't, if you use a competent
compiler); and that Lisp is incapable of being used in the "Real
World" (it isn't).

What's the point?

> The Lisp intrepeters I've seen in the past
> were all slow, as well as the newer ones posted in the faq.

Have you used CMU Common Lisp? Have you used Allegro Common Lisp?
Have you used Harlequin LispWorks? Did you remember to compile
things?

> I'm still waiting
> to hear your response to my post about real world programming. Where
> does Lisp fit into the picture?

I already answered. Wherever it, and people who actually know how
to use it, want it to.

You can write scientific number-crunching applications in Lisp.
You can write programmers' utilities in Lisp.
You can write games in Lisp.
You can write mainstream "office" applications in Lisp.
You can write "artificial intelligence" programs in Lisp.
You can write programs for controlling robots in Lisp.
You can write little extensions for other programs in Lisp.
You can write data-slurping database applications in Lisp.
You can write operating systems in Lisp.
You can write compilers in Lisp.
You can write HTTP servers for the WWW in Lisp.

Most of these have been done. A couple haven't yet.

If all you mean is that *more* programs are written in C than in
Lisp, then of course that's true. It just doesn't have a thing to do
with the question at issue, which is "Is Lisp a good language
or not?". (By that metric, classical Greek and Hebrew and Sanskrit
are lousy human languages (more people speak English and Chinese
and French and so on); and the Rolls-Royce is a lousy car (more
people drive Fords and Volkswagens and such); and the DEC Alpha
is a lousy processor (more people use Pentia); and Bach wrote
lousy music; and so on, and on, and on.)


[1] Of course, using pointers isn't always the Right Thing, any
more than using recursion is always the Right Thing.

[2] Likewise for classes.

Martin Rodgers

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

Bryant Brandon wheezed these wise words:

> I didn't call you an idiot, I merely asked what category you belong in.
> Besides, you ignored the point of my message, why are you commenting on a
> language you can't even read? I don't care about your personality.

I suspect it's because he doesn't have an answer to the technical
issues. He may well be completely out of his depth and hasn't realised
it yet. Alternately, he may be very aware of this, but has an ulterior
motive - like some personal interest in slagging off Lisp. I've asked
him about this, but that's another of the questions which he ignores.

Would it be fair for me to suggest an answer, if he won't offer one
himself? I invite Peaceman to comment now. I mean _really_ comment,
not post more trolls, flamebait, and other nonsense.

I won't be surprised if some people reading this thread, never mind
those participating in it, make a few unflattering judgements of him.
By refusing to answer the technical questions asked here, he appears
to be refusing to engage in technical debate, and if he were right
then surely he could offer some proof that _anyone_ could verify for
themselves.

Lisp is available for a wide variety of machines, including Windows.
So is GNU C. Anyone with Internet access to these compilers can test
the claims made here and see for themselves who is telling the truth
and who is lying.

Martin Rodgers

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

Gareth McCaughan wheezed these wise words:

> What's the point?

Hmm. A cheap propaganda exercise?

It looks like it to me. Peaceman refused to accept what we've given
him, information that counters his assertions, plus code that backs up
that info. Instead, he gives us flamebait and trolls.

When Peaceman takes our code and gives us the results from real
compilers that counters _our_ assertions, then we may have a debtate.
Until then, we have a repeatative exchange of posts. Peaceman says
"recursion is expensive", and we reply, "Oh no it isn't". At least,
anyone who thinks like Peaceman could see it like that. Those who can
tell a compiler from their elbow will see something very different.

Martin Rodgers

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

Bryant Brandon wheezed these wise words:

> Ooh, witty. I don't give a shit. Why are you here, complaining that
> Lisp is slow when you admit that you don't know enough to read it? You
> have had people who know Lisp very well explain things to you, but you
> still beat your dead horse.

Perhaps he's just trolling? He's already started a mini OS war, in
another part of this thread. What he won't discuss are the weaknesses
in his argument, so he cheats by posting flamebait and other nonsense.

Steve Atkinson

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

In article <86pvrm2...@g.pet.cam.ac.uk>, Gareth McCaughan
<gj...@dpmms.cam.ac.uk> writes

>and I expect you can't
>do it in COBOL.
Ermm, I think you'll find you can.

--
Steve Atkinson.

"To err is human. To really foul things up, you need a computer"
(Remove antispam from address when replying)

Henry Baker

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

In article <5sjbe0$otc$1...@newsie2.cent.net>, "Emergent Technologies Inc."
<emer...@eval-apply.com> wrote:

> This started out as a hack, but Guy Steele elevated it into an important
> principle.

Well, actually Carl Hewitt pushed this before Guy Steele did. Carl's
'Actor' language(s) had tail-calling as the central principle. Steele
and Sussman put 'tail-calling' back into Lisp.

(If you do a search, you'll find that Carl didn't call it 'tail calling',
but 'actor calling'/'message passing'.)

Rainer Joswig

unread,
Aug 10, 1997, 3:00:00 AM8/10/97
to

In article <wwlvke...@servtech.com>, Bulent Murtezaoglu
<mu...@servtech.com> wrote:

> jos...@lavielle.com (Rainer Joswig) writes:
> [...]
> > But what is really frustrating is, that speed often is not an issue
> > (for example the new Motorola Starmax with the 300 Mhz PowerPC 750
> > are blazingly fast). Yet we suffer from old-fashioned software technology
> > (MacOS, Windows, ...).
>
> I don't entirely agree with this. "Language speed" issues do not really have
> much to do with the OS used. People who buy newer faster computers before
> they turn into mass market commdities usually have a large problem they need
> to solve. Common Lisp is fine for those kinds of things, you get rapid
> development and then with the help of a good compiler and judicious use
> of declarations you can make your code fast enough in most cases. CMUCL
> is a very good example of what can be accomplished in that regard.

"Speed" is often not an issue on modern personal computers. These
machines are most of the time *idle*, the user is just fuzzing
around, for many tasks there is little processing power needed or
there is even hardware support. While I'm writing this mail my 100Mhz PowerPC
is more or less waiting for some keystrokes. My old Apple II was
fast enough for this years ago. In the 1980s a workstation built to
run Lisp fast was really expensive. Prices of $100000 were not unusual.
These machines were able to run an object-oriented operating system
with virtual memory and garbage collection. This was more then ten
years ago. Today's hardware is hundred times faster in certain
areas.

If I have to write a window system and my task is to notify all
open windows of an event, it doesn't really matter (speed wise)
whether you are using a recursive algorithm or not. The constant
obsession with speed is totally misleading. Common Lisp (and
similar languages) is really fast enough for all these tasks.
If somebody thinks Lisp is slow because you can use recursive
functions, which are slow by *his/her* definition - well,
this is complete nonsense. There are much slower software
architectures than compiled Lisp in successful massive use today.


Let's get a list of window names.

First the iterative version:

(defun get-window-titles-1 ()
  (loop for window in (windows)
        collect (window-title window)))

Now a recursive version:

(defun get-window-titles-2 ()
  (labels ((collect (windows)
             (if (null windows)
                 nil
                 (cons (window-title (first windows))
                       (collect (rest windows))))))
    (collect (windows))))

Finally a version which uses a higher-order library function
(mapcar takes a function as an argument).

(defun get-window-titles-3 ()
  (mapcar #'window-title (windows)))

Nine ;-) windows are open.

(time (get-window-titles-1)) -> one millisecond
(time (get-window-titles-2)) -> one millisecond
(time (get-window-titles-3)) -> one millisecond

Surprise. They are all fast enough.

Let's run them ten thousand times (don't want to open 90000 windows).

(defun run-function (fn times)
  (loop repeat times do (funcall fn)))

(time (run-function #'get-window-titles-1 10000)) -> three seconds
(time (run-function #'get-window-titles-2 10000)) -> three seconds
(time (run-function #'get-window-titles-3 10000)) -> three seconds

Surprise. They all take approximately the same amount of
time (and the iteration/recursion is not the time consuming part,
I would guess).

**Maybe** some C code would iterate over 90000 windows in one or two
seconds. I don't care.

People who know nothing about recursive functions (optimized
or not ;-) ) push a totally wrong point of view:
premature optimization.

I often think recent failures (they are more like modern
catastrophes) like Taligent (the failed Apple/IBM adventure)
or Copland (the modern version of MacOS Apple was
working on for years - and failed) happened because the software
architecture used was not capable enough for the creation of a
complex system. Going for speed with C/C++/... is just the wrong
choice today. All you get is either old-fashioned software,
overly complex software, and/or multiple embedded
versions of Lisp features (GC, list processing, dynamic
objects, whatever).


A radical break with current software practice - which
tells people to write C code because it is faster - is needed.


> If people want to argue "C++ is fast" they'd be much better off going
> after CLOS. AFAIK (and I'd love to be corrected on this) once you enjoy
> the luxury of fast development using CLOS and get your code working with
> generic function calls in your inner loops, you're stuck with large constants.
> (Please correct me please...)

What if you call generic functions in inner loops where speed
is really needed?

Hmm?

Don't use them there when unnecessary. Simple as that.
CLOS's generic functions are not the solution to all software
problems. Sadly. ;-) Still you can write this code
in Common Lisp and enjoy the fast development using Common Lisp.

--
http://www.lavielle.com/~joswig/

Mukesh Prasad

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Richard A. O'Keefe wrote:
>
> Mukesh Prasad <pra...@removit-polaroid.com> writes:
> >As far as I know, *only* tail recursion can be optimized
> >away.
>
> Wrong.
>
> >Unless there are recent compilers which can optimize
> >away all recursive calls?
>
> You don't have to be able to optimise away _all_ recursive calls
> to optimise away _some_ of them.
>
> May I make what *ought* to be a blindingly obvious point?
> It is possible to inline some of the calls to recursive (not tail recursive)
> functions. Henry Baker has a paper on this.
> I _think_ one of the Scheme compilers I have here can do it.
>
> Let's take a trivial case:
>
> sum([]) = 0
> sum([H|T]) = H + sum(T)
>
> This is not tail recursive.

No, the correct technological terminology for
this would be "obfuscative-deceptive recursive".

Your example is nothing but

(defun sum (list)
  (if (null? list) 0 (+ (head list) (sum (tail list)))))

Which is entirely tail recursive.

So the trick here is, turn this into something
which looks less tail-recursive. Then
optimize it and claim that a non-tail-recursive
problem has been optimized.

Not that there cannot be ad-hoc cases where
something which is truly not tail-recursive can
get optimized away. But this does not
qualify. Sorry.

Gareth McCaughan

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Mukesh Prasad wrote:

> Gee Whiz... If you can't do a little math, do you have
> to write lots of confusing prose to cover up?

Because he was using it as a hook on which to hang a real point,
namely an example of the use of Lisp in the Real World (tm).

> All of which does not shed any light on *why*
> Sajid wants to discuss Lisp. Nor on why people
> opposing him are prone to obfuscate and confuse,

Reality check: it was *SAtP*, not the Lispoholics, who threw in
the irrelevant thing about the trains.

> e.g. strongly and repeatedly pretending that
> all recursion cases can be eliminated, instead
> of noting that sometimes recursion can
> be eliminated?

The only person who has claimed that in this newsgroup was
SAtP. Other people have claimed that recursion doesn't need to
involve any performance penalty; is that what you mean?

Mukesh Prasad

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Gareth McCaughan wrote:
>
> Mukesh Prasad <pra...@removit-polaroid.com> writes:
>
> > > > - It is ok for learning purposes to duplicate
> > > > these list-management functions recursively,
> > > > as long as you remember there may be a penalty
> > > > if you do this in production code.
> > >
> > > There may be. Or there may not. It depends on your compiler.

> >
> > As far as I know, *only* tail recursion can be optimized
> > away. You must be confusing "tail recursion" with "recursion".

> > Unless there are recent compilers which can optimize
> > away all recursive calls? (Which compilers would
> > that be?)
>
> Er, I didn't say there were any compilers that can optimise away
> all recursive calls. You said "there are these routines for doing
> things with lists, which are handled iteratively in the library,
> and there may be a penalty associated with doing them recursively".
> I assumed that since you were talking about things being handled
> iteratively you were talking about things that could be
> tail-call optimised. If not, then I don't think I understand

So you are saying:

Anything which is recursive and can be manually re-written
iteratively, can be automatically re-written iteratively.

Hmmm. Possible; I have not read up on it. Does this apply to,
say, qsort? I believe iterative versions of qsort can be written.
Would current compilers automatically compile standard
qsort into iterative versions?

Gareth McCaughan

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Mukesh Prasad wrote:

> So you are saying:
>
> Anything which is recursive and can be manually re-written
> iteratively, can be automatically re-written iteratively.

No, I'm not. I'm saying:

For the specific case you were putting forward, namely the
simple list-manipulation functions that would form part of
the library of e.g. a typical Lisp system, there need be no
penalty for writing them recursively because this is the
sort of thing that existing Lisp compiler technology is
already good at handling.

> Hmmm. Possible; I have not read up on it. Does this apply to,
> say, qsort? I believe iterative versions of qsort can be written.

That depends on what you mean by "iterative". You can do it by
implementing your own stack. I wouldn't really call that iterative.

On the other hand, it's easy to implement quicksort so that it ends
with a tail-recursive call; turning this into a jump is well within
the abilities of current compilers. (What they aren't within a million
miles of doing, of course, is noticing that (1) the two recursive
calls commute and so can be taken in either order, and (2) if you
always make the larger one the tail-call then you get an upper bound
on the amount of stack space you need.)
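By hand, though, the recurse-on-the-smaller-half trick looks roughly
like this (my own sketch, with a textbook Lomuto partition; all names
are illustrative):

```lisp
;; Hedged sketch: quicksort a vector in place, recursing only on the
;; smaller partition and looping on the larger, so stack depth stays
;; O(log n) even in the worst case.
(defun partition (vec lo hi)
  ;; Lomuto partition around the last element; returns the pivot index.
  (let ((pivot (aref vec hi))
        (i lo))
    (loop for j from lo below hi
          when (<= (aref vec j) pivot)
            do (rotatef (aref vec i) (aref vec j))
               (incf i))
    (rotatef (aref vec i) (aref vec hi))
    i))

(defun my-qsort (vec lo hi)
  (loop while (< lo hi)
        do (let ((p (partition vec lo hi)))
             (if (< (- p lo) (- hi p))
                 (progn (my-qsort vec lo (1- p))   ; smaller half: recurse
                        (setf lo (1+ p)))          ; larger half: iterate
                 (progn (my-qsort vec (1+ p) hi)
                        (setf hi (1- p)))))))
```

For example, (my-qsort v 0 5) sorts the six-element vector v in place.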

> Would current compilers automatically compile standard
> qsort into iterative versions?

No, but then I wouldn't call the code one writes by hand when
implementing quicksort "iterative" either. The code produced by
a good compiler will be similar to what you could code by hand.
Whether it's better or worse depends on how clever the compiler
is. (A really clever compiler, given a little help, could certainly
in principle[1] notice that hardly any variables actually needed
saving, and so reduce the stack overhead to the same level as
you get by doing it manually. In this case it might even perform
better than the hand-coded version, if the compiler isn't able[2]
to reuse the usual stack-pointer register for other things.)


[1] Do they in practice? I don't know. I wouldn't be surprised either
way.

[2] This needn't imply a stupid compiler. Keeping the stack pointer
valid at all times might be necessary for correct handling of
errors or exceptions or signals or something.

Gareth McCaughan

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Mukesh Prasad wrote:

> No, the correct technological terminology for
> this would be "obfuscative-deceptive recursive".
>
> Your example is nothing but
>
> (defun sum (list)
> (if (null? list) 0 (+ (head list) (sum (tail list)))))
>
> Which is entirely tail recursive.

No it isn't. After doing (sum (tail list)), the calling instance
of SUM has to get control back again to call +.
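For contrast, a version of SUM that really is tail recursive threads an
accumulator through the calls (my own sketch, using standard CL
FIRST/REST rather than the HEAD/TAIL of the quoted code):

```lisp
;; Hedged sketch: the addition happens *before* the recursive call,
;; so the call is the function's last act and can be compiled as a jump.
(defun sum-acc (list acc)
  (if (null list)
      acc
      (sum-acc (rest list) (+ acc (first list)))))
```

(sum-acc '(1 2 3) 0) gives 6, with no work pending across any call.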

> So the trick here is, turn this into something
> which looks less tail-recursive. Then
> optimize it and claim that a non-tail-recursive
> problem has been optimized.

If I were Richard O'Keefe, I'd be feeling slandered at this point.
The function is *not* tail recursive, his presentation of it is
*not* obfuscated, and he is *not* indulging in deception and trickery.

Michael Schuerig

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Mukesh Prasad <pra...@removit-polaroid.com> wrote:

> Hmmm. Possible; I have not read up on it. Does this apply to,
> say, qsort? I believe iterative versions of qsort can be written.

> Would current compilers automatically compile standard
> qsort into iterative versions?

I doubt current compilers are able to do this. Nor would it be sensible
to rewrite qsort iteratively, as you need to keep the state of the
computation somewhere; that is, you'd have to manage a stack manually on
the heap -- presumably much less efficient than using recursive calls.

Michael

--
Michael Schuerig The natural rhythm of human life
mailto:uzs...@uni-bonn.de is routine punctuated by orgies.
http://www.uni-bonn.de/~uzs90z/ -Aldous Huxley

Sajid Ahmed the Peaceman

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Richard A. O'Keefe wrote:
>
> Sajid Ahmed the Peaceman <peac...@capital.net> writes:
>
> > I agree with you that some Lisp compilers may be smart
> >enough to turn some functions into iterative code that doesn't use
> >the stack, with the following conditions:
>
> I have worked on a commercial compiler for a non-Lisp language.
> Let me comment on your conditions.
>
> >1. I can only do this with the very simplest functions. Nothing complex.
>
> This is a fact about *you*. It isn't even remotely true of compilers.
>
> >2. The compilation program is huge.
>
> It isn't. The compiler for that language was *tiny* compared with
> commercial C compilers.
>
> >3. To do the compile, it would take a large amount of time.
>
> Ludicrously false. Our compiler was fast.
>
> In short, it appears that you know nothing about compilers.
>


Well, I'll give you the fact that you probably know more about
compiler design than I do, but I still stand by the conditions that I
mentioned above.
You can't have it both ways. Either the compiler will do a quick
translation into assembly language by putting everything on the stack,
or spend a longer amount of time analyzing a possible tail recursion
optimization. The same goes for the code of the compiler itself: either
it's a small amount of code to push everything on the stack, or
additional code to examine possibilities for TRO.

Peaceman

Martin Rodgers

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Sajid Ahmed the Peaceman wheezed these wise words:

> Well, I'll give you the fact that you probably know more about
> compiler design than I do, but I still stand by the conditions that I
> mentioned above.

Then you're trying to teach your grandmother to suck eggs.

> You can't have it both ways. Either the compiler will do a quick
> translation into assembly language by putting everything on the stack,
> or spend a longer amount of time analyzing a possible tail recursion
> optimization. The same goes with the code for the compiler. Either it's
> a small amount of code to push everything on the stack, or additional
> code to examine possibilities on TRO.

I think that by assembly language, you probably mean machine code.
Some Lisp compilers produce assembly source code, or at least I've
heard of one. I've just not seen one myself. Some others produce C
source code. Most Lisp compilers that I've used or know of produce
native code. A few Lisps compile code to bytecodes. See the Lisp FAQ
for details (I'm sure I've said that before!)

It doesn't take a lot of work to optimise tail calls. Get a good book
and read about how it's done. The last chapter of SICP will do nicely:

Structure and Interpretation of Computer Programs
MIT Press and McGraw-Hill, Inc.

First learn to suck eggs yourself, then talk with us about sucking
eggs, and someday _you_ may be able to teach someone to suck eggs.

As for Lisp compilers...you have a long way to go, grasshopper.

Martin Rodgers

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Gareth McCaughan wheezed these wise words:

> The only person who has claimed that in this newsgroup was
> SAtP. Other people have claimed that recursion doesn't need to
> involve any performance penalty; is that what you mean?

Anyone who can't recall who has claimed what, and doesn't have the
entire thread on their news server or in their newsreader's msgbase,
can check my archive of the thread. ;) I don't have the last few days
archived online yet, but I'm updating it at least once a week.

<URL:http://www.wildcard.demon.co.uk/archives/slow.ZIP>

Sajid Ahmed the Peaceman

unread,
Aug 11, 1997, 3:00:00 AM8/11/97
to

Martin Rodgers wrote:
>
> You admit that you don't know enough about Lisp to follow some simple
> examples, yet you consider that your knowledge of Lisp is enough to
> talk about the quality of Lisp implementations. You can't have it both
> ways. Either you know enough about Lisp to talk about it with us with
> equal authority, in which case you will be able to follow our code, or
> you should admit that we know the language - and the implementation
> details - far better than you do.
>
> Which is it?

You got it. You know lisp better than I do. I gave up on it after
seeing that it was a "recursive" language. I'm glad to see that people
are coming to their senses and making lisp more iterative. Other than
the recursive aspect of Lisp, the only complaint I may have is with all
the parentheses, but that's not a big deal.

The problem is with all the recursive style languages:
lisp, prolog, ml, scheme, etc. It is going in the wrong direction.
Computer languages are not like mathematics. Computer functions
are different from mathematical functions. The experience I've had
(I know, I probably had some bad teachers) is that Lisp, prolog,
ml, etc. are being used like mathematical functions: no iterative code
allowed; functions returning instantaneously; abstract mathematical
worlds, etc. That is not what computing is about. Computers run
with a processor, one instruction at a time, not instantaneous
functions.

When I went to school (graduated in 95), I had a near 4.0
GPA, and I thought that the Lisp course was the easiest CS course in
the computer science curriculum. I was planning on going to grad
school in CS, until I took an upper level course in ML. The stuff
was easy enough, I got an A in the course, but if that's what grad
school is about, all this recursive crap that does not have
any practical application, it's not for me. I then went out, got
a programming job for about a year, and finally started up my own
company, where I am now.

I tried to tell the CS teacher about machine language instructions,
but he shoved it off and tried to say that some processors were designed
to do only recursion; biggest bunch of bologna I've ever heard.

Anyway, I've come to the conclusion that Lisp is bad, Lisp is evil,
and as the subject says lisp is *SLOW*.

Peaceman

Alexey Goldin

Aug 11, 1997

Sajid Ahmed the Peaceman <peac...@capital.net> writes:
>
> When I went to school (graduated in 95), I had a near 4.0
> GPA, and I thought that the Lisp course was the easiest CS course in

Big deal

> the computer science curriculum. I was planning on going to grad
> school in CS, until I took an upper level course in ML. The stuff
> was easy enough, I got an A in the course, but if that's what grad
> school is about, all this recursive crap that does not have
> any practical application, it's not for me. I then went out, got
> a programming job for about a year, and finally started up my own
> company, where I am now.
>
> I tried to tell the CS teacher about machine language instructions,
> but he shoved it off and tried to say that some processors were designed
> to do only recursion; biggest bunch of bologna I've ever heard.
>
> Anyway, I've come to conclusion that Lisp is bad, Lisp is evil,
> and as the subject says lisp is *SLOW*.
>
>
> Peaceman

My lesson: do not teach pig to sing: it is useless and annoys the pig.

Gareth McCaughan

Aug 11, 1997

"Sajid Ahmed the Peaceman" wrote:

> The problem is with all the recursive style languages,
> lisp, prolog, ml , scheme, etc. It is going in the wrong direction.
> Computer languages are not like mathematics. Computer functions
> are different than mathematical functions. The experience I've had
> (I know, I probably had some bad teachers), is where Lisp, prolog,
> ml, etc. is being used like mathematical functions: no iterative code
> allowed;

Mathematical functions can contain iteration. Ever seen a formula
with a summation or product symbol in it? Or a maximum or minimum
operator? Iterative, every one.

> functions returning instantaneously; abstract mathematical
> worlds, etc. That is not what computing is about. Computers run
> with a processor, one instruction at a time, not instantaneous
> functions.

Yep, that's right. And all those people who try to tell you they've
got several processors in their computer, or that their processors
can actually do several things at once, are LYING. You got it.

> I tried to tell the CS teacher about machine language instructions,
> but he shoved it off and tried to say that some processors were designed
> to do only recursion; biggest bunch of bologna I've ever heard.

Well, in this case your CS teacher was wrong.

> Anyway, I've come to conclusion that Lisp is bad, Lisp is evil,
> and as the subject says lisp is *SLOW*.

Er, is this meant to have anything to do with what went before?
Are you going to give any evidence for your claims?
Nope, it's just a troll. Oh well.

Proverbs 27:22, I guess.

Aaron Gross

Aug 11, 1997

Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> Mathematical functions can contain iteration. Ever seen a formula
> with a summation or product symbol in it? Or a maximum or minimum
> operator? Iterative, every one.

Whoa, care to elaborate? I thought summations could be defined as
integrals with respect to counting measure. (The index set itself may
be uncountable, as long as only countably many summands are nonzero.)
Integrals, in turn, are defined more-or-less as suprema over sets of
certain finite sums. Are you saying the addition function from R^n
into R can only be defined recursively (for instance, 1+2+3)? If so,
does that include n=2 terms? Is the special case n=2 really simpler
to define than general n>1? If so, why? If not, where does the
recursion come in?

Mukesh Prasad

Aug 11, 1997

Will Hartung wrote:
>
> Sajid Ahmed the Peaceman <peac...@capital.net> writes:
>

Gee Whiz... If you can't do a little math, do you have
to write lots of confusing prose to cover up?

When trains "meet", they better be at the exact
same distance from NY. (YMMV, depending upon
how you define "meet". E.g. heads meet, but
distance from NY == distance from center of train.
In that case, the train coming from NY is closer.
Now if Sajid Ahmed had said, "cross", you could
define it so the train coming from CA is closer.)

All of which does not shed any light on *why*
Sajid wants to discuss Lisp. Nor on why people
opposing him are prone to obfuscate and confuse,
e.g. strongly and repeatedly pretending that
all recursion cases can be eliminated, instead
of noting that sometimes recursion can
be eliminated?

Maybe like meets like on the newsgroups...

Jon S Anthony

Aug 11, 1997

In article <33ECA3...@capital.net> Sajid Ahmed the Peaceman <peac...@capital.net> writes:

> Calling people names shows that you have no argument for your case.
>

> Peaceman

It's only "name calling" if it's not true. Think about it...

/Jon
--
Jon Anthony
OMI, Belmont, MA 02178, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Jon S Anthony

Aug 11, 1997

In article <86zpqo1...@g.pet.cam.ac.uk> Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> Mathematical functions can contain iteration. Ever seen a formula
> with a summation or product symbol in it? Or a maximum or minimum
> operator? Iterative, every one.

This is at least misleading. Mathematical functions do not have
"action" in them at all - iteration, (computational) recursion or
whatever. A _computation_ of such a function may involve iteration or
recursion.

OTOH, the idea that Lisp or other "functional" languages are somehow
mathematics is just plain wrong and so, for this context, completely
irrelevant (and thus completely in keeping with the cluelessness of
the "peaceman").

Gareth McCaughan

Aug 11, 1997

Aaron Gross wrote:

> > Mathematical functions can contain iteration. Ever seen a formula
> > with a summation or product symbol in it? Or a maximum or minimum
> > operator? Iterative, every one.
>

> Whoa, care to elaborate? I thought summations could be defined as
> integrals with respect to counting measure. (The index set itself may
> be uncountable, as long as only countably many summands are nonzero.)
> Integrals, in turn, are defined more-or-less as suprema over sets of
> certain finite sums. Are you saying the addition function from R^n
> into R can only be defined recursively (for instance, 1+2+3)? If so,
> does that include n=2 terms? Is the special case n=2 really simpler
> to define than general n>1? If so, why? If not, where does the
> recursion come in?

I think you misread. "Mathematical functions can contain ->iteration<-".
SAtP was trying to make a sharp distinction between the Real World (tm)
in which everything is iterative, and the Evil Mathematical World where
everything is recursive and all function evaluations take zero time,
and I was pointing out that the distinction is fuzzier than he thinks.

Mukesh Prasad

Aug 11, 1997

Gareth McCaughan wrote:
>
> Mukesh Prasad wrote:
>
> > No, the correct technological terminology for
> > this would be "obfuscative-deceptive recursive".
> >
> > Your example is nothing but
> >
> > (defun sum (list)
> > (if (null? list) 0 (+ (head list) (sum (tail list)))))
> >
> > Which is entirely tail recursive.
>
> No it isn't. After doing (sum (tail list)), the calling instance
> of SUM has to get control back again to call +.
>
> > So the trick here is, turn this into something
> > which looks less tail-recursive. Then
> > optimize it and claim that a non-tail-recursive
> > problem has been optimized.
>
> If I were Richard O'Keefe, I'd be feeling slandered at this point.
> The function is *not* tail recursive, his presentation of it is
> *not* obfuscated, and he is *not* indulging in deception and trickery.

Er, I erred. My apologies to Richard O'Keefe.
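The distinction Gareth is drawing can be sketched in C (the function
names below are invented for illustration; they are not O'Keefe's
code). In the first version the addition runs after the recursive call
returns, so each stack frame must survive; in the accumulator version
the self-call is the last action, and a compiler that optimises tail
calls can turn it into a jump:

```c
/* Not tail recursive: the "+" happens after the inner call
   returns, so every frame stays live until the end is reached. */
int sum_rec(const int *a, int n) {
    return n == 0 ? 0 : a[0] + sum_rec(a + 1, n - 1);
}

/* Tail recursive: the self-call is the final action, so a
   tail-call-optimising compiler can reuse one stack frame.
   sum_acc(a, n, 0) gives the same result as sum_rec(a, n). */
int sum_acc(const int *a, int n, int acc) {
    return n == 0 ? acc : sum_acc(a + 1, n - 1, acc + a[0]);
}
```

Both return 15 for the array {1, 2, 3, 4, 5}; only the second can be
compiled without stack growth.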

Gareth McCaughan

Aug 11, 1997

Jon Anthony wrote:

> > Mathematical functions can contain iteration. Ever seen a formula
> > with a summation or product symbol in it? Or a maximum or minimum
> > operator? Iterative, every one.
>

> This is at least misleading. Mathematical functions do not have
> "action" in them at all - iteration, (computational) recursion or
> whatever. A _computation_ of such a function may involve iteration or
> recursion.

Sure. I was being sloppy. So sue me :-).

Sajid Ahmed the Peaceman

Aug 11, 1997

Martin Rodgers wrote:
>
> Do a search with Dejanews - it'll be enlightening. My guess is that
> he's a support droid, which would explain his experience and knowledge
> of PC hardware. It might also explain his familiarity with C. For all
> we know, he may be a perfectly competent C programmer.
>

Want to know more about me? Just ask. I'd be glad to tell you.

BTW, taking a look at your home page:

: I get paid to develop for Windows in C/C++, and Java, but in my spare
: (programming) time I like to code in Lisp or Haskell.


Peaceman

Markku Laukkanen

Aug 11, 1997

Sajid Ahmed the Peaceman wrote:

> Markku Laukkanen wrote:
>
> >
> >
> > Hmm, here is a simple :-) code from the compiler type-analysis phase
> > that I once wrote. I would like to hear how to write this as
> > iterative code, so it would be clearer.
> > Remark that the code is recursive in more than one way....
> >
> > Just for fun, take a look at the optimizations of funcall and
> > mapcar.
> >
> > PKY
> > Who said that compiled LISP is slow???
> >
> > ... lisp code deleted....
>
> I would try to write your code into something less recursive and
> clearer, but I'm sorry to say I don't know enough about Lisp
> to follow your code. If you repost in pseudocode, I could make an
> attempt.
>
> Peaceman

That's the catch: the pseudocode for the main function (form-type) is
some three pages long when explained in detail...

PKY

Markku Laukkanen

Aug 11, 1997

Sajid Ahmed the Peaceman wrote:

> Gareth McCaughan wrote:
> >
> > And yet you have no hesitation about pontificating about how *SLOW*
> > Lisp is, and how it's impossible to write in Lisp without Evil
> > Recursion, and so on and on and on. Hmm.
> >
>
> No hesitation at all. All the programs I've seen, as well as all
> the programs that you "trolls" in the lisp newsgroup have posted,
> majorly involve recursion. The Lisp interpreters I've seen in the
> past were all slow, as well as the newer ones posted in the faq.
> I'm still waiting to hear your response to my post about real world
> programming. Where does Lisp fit into the picture?
>
> Peaceman

AAARRGGGGH; Lisp Interpreter ......

Lisp can be developed using an interpreter, but the final code should
ALWAYS be compiled.
The interpreter can be slow, and in some cases is slow, but the time
saved in RAD is enormous if you compare it to the time to develop the
same code in a C/C++ environment.
And when you see that the code works, just compile it. And voila, there
you go....

PKY

Paul Fuqua

Aug 12, 1997

Date: Mon, 11 Aug 1997 08:56:17 -0400
From: Mukesh Prasad <pra...@removit-polaroid.com>

Your example is nothing but

(defun sum (list)
(if (null? list) 0 (+ (head list) (sum (tail list)))))

Actually, no. Mr O'Keefe's example ends up having *two* additions for
each recursive call, and half as many calls.

It is exactly analogous to loop-unrolling in an iterative program:
repeat the body of the loop more than once, and decrease the number of
iterations accordingly. Same amount of work, less loop overhead.

His example does the same amount of work with less recursion overhead.
It doesn't *eliminate* it, of course, but it does reduce it, using a
simple mechanical process that a compiler could do, thus supporting his
argument that it is possible to optimise away *some* recursive calls.

Nothing to do with tail recursion, this time, because there are no
tail-calls in the function.
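O'Keefe's Lisp example is not reproduced in this excerpt, but a rough
C analogue of the unrolling Paul describes might look like this (the
names and details are illustrative, not the original code):

```c
/* Plain recursion: one addition and one recursive call per element. */
int sum1(const int *a, int n) {
    return n == 0 ? 0 : a[0] + sum1(a + 1, n - 1);
}

/* "Unrolled" recursion: two additions per call, half as many
   calls -- same total work, less call overhead. Note there is
   still no tail call: the additions run after the call returns. */
int sum2(const int *a, int n) {
    if (n == 0) return 0;
    if (n == 1) return a[0];
    return (a[0] + a[1]) + sum2(a + 2, n - 2);
}
```

For {1, 2, 3, 4, 5} both return 15; sum2 just gets there with about
half the recursion depth.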

Paul Fuqua
Texas Instruments, Dallas, Texas p...@hc.ti.com

Erik Naggum

Aug 12, 1997

* Sajid Ahmed

| I tried to tell the CS teacher about machine language instructions, but
| he shoved it off and tried to say that some processors were designed to
| do only recursion; biggest bunch of bologna I've ever heard.
|
| Anyway, I've come to conclusion that Lisp is bad, Lisp is evil, and as
| the subject says lisp is *SLOW*.

* Alexey Goldin
| My lesson: do not teach pig to sing: it is useless and annoys the pig.

I think there are a couple other possible lessons.

(1) there are people out there who think they know what computing is all
about, and who will object vehemently to any conflicting suggestions,
especially when they come from someone better educated than themselves.
dismissing such people at a young age will cause them to go mad with
rage, targeting any and all reasonably well-educated people they can
find, preferably somebody working in a field that isn't too popular.
the only other case I have found of this kind of lunatic rage against
better education is Larry Wall. he is also kicking Lisp and academics
even more than this "Peaceman" bozo does, equipped with slightly better
intellectual ammunition (I gotta give him that).

(2) efficiency concerns _are_ actually legitimate at some level, and those
who think low-level code efficiency is the alpha and omega of software
performance can probably be put to good use in optimizing inner loops
and reordering data to increase cache hit rates and the like, after the
designers have gone home and the software actually works (otherwise,
they will just screw it up). I imagine this kind of people would be
useful in writing optimizers for compilers, too, after the real brains
have found new ways and directions in which to optimize.

in both cases, we have classes of people who can do something good and
useful, but which it would not be beneficial to tell much of the big
picture. you can teach a mechanic to fix your car, and you need a damn
good mechanic these days, but it would be a mistake to discuss new metal
alloys that can withstand higher pressures in the combustion chambers or
new kinds of fuel with him, especially if they could replace the engines
that this mechanic knows and on which he has based his career and all his
education from when he was just a boy and loved cars.

for all intents and purposes, I grew up on a DEC-10. I had met other
architectures before it, but this lovely machine was where I became a
programmer. others have argued that other CPUs are their model of beauty
and intelligent elegance, and they typically argue differently than I do,
not infrequently based on their perception of the system software and the
way everything was organized, as far as they could understand it at their
age. the Intel generation, further misguided by Bill Gates' total lack of
appreciation of the DEC-10 on which he pilfered CPU time to defraud his
first customer and ever since has peddled shoddy excuses for software, has
had to deal with a totally different kind of "incubating environment". I
believe that a good environment brings out the best in people, and that a
really bad one brings out the worst. after all, we're still animals that
try very hard to adapt and survive.

yet, I, too, am a deeply concerned "software mechanic" at times, and I
could be exceedingly myopic in my concerns, but in time, I discovered that
the car wasn't invented, much less built, in a run-down garage, tuning and
tweaking doesn't make a better engine, washing and polishing doesn't make a
better car, etc. I wanted to figure out how people who designed these
things thought and how they worked. I wanted to make some _significant_
improvement one day, not just tweak a single engine to yield a little
better, but make _all_ engines yield better, not just polish a single hood
better, but design a metal alloy and coat it with something that would make
all hoods look shinier no matter the weather conditions and air pollutants.

now, there's an important distinction between being satisfied with your
ability to tweak an engine or polish a car, and being deeply frustrated
with the limits of tweaking or the need to polish cars so frequently while
you're looking for that better design that would make such trivial problems
go away. my guess is that there's a frightening lot of frustrated, but
good, mechanics out there. Peaceman the bozo is not among them, obviously,
but I'm afraid that he talks their lingo. "Lisp is evil", the frustrated
mechanic says, and any other mechanic will look at him, they will _feel_
his frustration (which is real enough), and they will murmur "it's all
because of unleaded fuel" or whatever it is that car aficionados lament --
I wouldn't know. I don't actually own a car, because today's automobile
design _sucks_, and the taxes they levy on owning and using one are just
criminally insane (at least in this country). I'm waiting patiently here
while I'm sure they're hard at work designing those nifty "transporters"
from Star Trek. _then_ I'll start going places. fortunately, we already
have programming languages that work like "transporters", taking care of
all the crufty details, so I don't have to wait. but think of it: how
would the automotive and oil industries react to "transporters"? wouldn't
they call them evil and worse? not to mention that you could probably get
a least five kilometers in your highly optimized car before the transporter
could beam you anywhere, and let's not forget that any teenager can learn
to drive, but it would probably still require a pretty solid education
before you could operate a transporter safely. of _course_ transporters
are bad, evil, and slow. sheesh!

it all boils down to this: if Lisp were allowed to win, just how would
society cope with all the criminals who were just writing bad code?

#\Erik
--
404 Give me a URL I cannot refuse.

Martin Rodgers

Aug 12, 1997

Sajid Ahmed the Peaceman wheezed these wise words:

> Want to know more about me? Just ask. I'd be glad to tell you.

Where did you learn anything about compiler theory?



> BTW, taking a look at your home page:
>
> : I get paid to develop for Windows in C/C++, and Java, but in my spare
> : (programming) time I like to code in Lisp or Haskell.

Yep, there are not as many jobs for Lisp or Haskell programmers. There
may also be more jobs for Cobol programmers, but that doesn't mean
anything.

(Paraphrasing Richard Feynman): What do _you_ care what other people
program with?

Name the Lisps that you've used. Are any of them commercial systems,
like LispWorks and Allegro CL? Here's LispWorks:

<URL:http://www.harlequin.co.uk/products/ads/lispworks/lispworks.html>

There's a free version of ACL for you to play with:
<URL:http://www.franz.com/frames/dload.main.html>

Show us some code that runs with ACL/PC (I'm sure you can find a
machine that can run it) that demonstrates the problems that you
describe. Can you do that?

Martin Rodgers

Aug 12, 1997

Sajid Ahmed the Peaceman wheezed these wise words:

> You got it. You know lisp better than I do. I gave up on it after

> seeing that it was "recursive" language. I'm glad to see that people
> are coming to their senses and making lisp more iterative. Other than
> the recursive aspect of Lisp, the only complaint I may have is with all
> the parenthesis, but that's not a big deal.

You're still assuming that recursion has problems. Go back to school,
find a good teacher, and start learning. Obviously there's nothing
more that we can tell you, as you insist on ignoring the facts.

Answer this:

int factorial(int number) {
    return factorial_tail(1, number, 1);
}


.file "fact.c"
.version "01.01"
gcc2_compiled.:
.global .umul
.section ".text"
.align 4
.global factorial_tail
.type factorial_tail,#function
.proc 04
factorial_tail:
!#PROLOGUE# 0
save %sp,-104,%sp
!#PROLOGUE# 1
mov %i0,%o1
mov %i2,%o0
.LL6:
cmp %o1,%i1
bg .LL5
nop
call .umul,0
add %o1,1,%l0
b .LL6 <----------- watch this
mov %l0,%o1
.LL5:
ret
restore %g0,%o0,%o0
.LLfe1:
.size factorial_tail,.LLfe1-factorial_tail
.align 4
.global factorial
.type factorial,#function
.proc 04
factorial:
!#PROLOGUE# 0
save %sp,-104,%sp
!#PROLOGUE# 1
mov 1,%o1
mov 1,%o0
.LL15:
cmp %o1,%i0
bg .LL14
nop
call .umul,0
add %o1,1,%l0
b .LL15
mov %l0,%o1
.LL14:
ret
restore %g0,%o0,%o0
.LLfe2:
.size factorial,.LLfe2-factorial
.ident "GCC: (GNU) 2.7.2"
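The C source for factorial_tail was snipped from the post above. A
hypothetical reconstruction, read back from the assembly (three
arguments, the cmp/bg exit test, the call to .umul), might be the
following; it is an assumption, not the original code, but the point
stands either way: gcc compiles the self-call as the flagged "b .LL6"
branch, i.e. a plain loop.

```c
/* Hypothetical reconstruction of the snipped helper: counts from
   counter up to number, accumulating the product in acc, so
   factorial_tail(1, n, 1) computes n!. */
int factorial_tail(int counter, int number, int acc) {
    if (counter > number)
        return acc;                  /* the bg .LL5 / ret path */
    /* Tail position: gcc turns this self-call into "b .LL6". */
    return factorial_tail(counter + 1, number, counter * acc);
}
```

With this definition, the factorial() wrapper earlier in the post
returns 120 for an input of 5.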

Try reading SICP, esp the last chapter:

Structure and Interpretation of Computer Programs
MIT Press and McGraw-Hill, Inc.

You are denying that recursion can be used efficiently. You're telling
all kinds of lies about Lisp, offering no evidence at all.

Put up or shut up. Simply repeating the same lies over and over is not
impressive. In fact, it just makes you another propagandist, and not
even a particularly talented one.



> The problem is with all the recursive style languages,
> lisp, prolog, ml , scheme, etc. It is going in the wrong direction.

Really? Says you. Other people disagree. Who made you an authority?

> Computer languages are not like mathematics. Computer functions
> are different than mathematical functions. The experience I've had
> (I know, I probably had some bad teachers), is where Lisp, prolog,
> ml, etc. is being used like mathematical functions: no iterative code

> allowed; functions returning instantaneously; abstract mathematical

> worlds, etc. That is not what computing is about. Computers run
> with a processor, one instruction at a time, not instantaneous
> functions.

Have you heard of Alonzo Church? Computing came out of mathematics.
Thanks for admitting that you had some bad teachers. Now, instead of
spreading your ignorance, why not find a new teacher and go back to
school? You can do this by reading books like SICP.



> When I went to school (graduated in 95), I had a near 4.0
> GPA, and I thought that the Lisp course was the easiest CS course in

> the computer science curriculum. I was planning on going to grad
> school in CS, until I took an upper level course in ML. The stuff
> was easy enough, I got an A in the course, but if that's what grad
> school is about, all this recursive crap that does not have
> any practical application, it's not for me. I then went out, got
> a programming job for about a year, and finally started up my own
> company, where I am now.

If it's not for you, what the heck are you doing here? Why make a
fuss? I've asked this before: What do you care what language other
people program in? If you can't cope with recursion, fine. Just don't
use it. You don't have to slag off Lisp or any other language.

If you continue like this, don't blame anyone for suspecting that you
have an ulterior motive. Perhaps you're concerned that you may be
forced to use Lisp or recursion? You could simply ask how to avoid
these things, but you've not done this.

Please explain why you feel a need to attack Lisp and recursion.



> I tried to tell the CS teacher about machine language instructions,
> but he shoved it off and tried to say that some processors were designed
> to do only recursion; biggest bunch of bologna I've ever heard.

You had a bad teacher. Do something about it. Complain. Find another
school, find another teacher, but don't bitch about it. You can easily
solve these problems, but instead you choose to attack a tool that
others have no trouble with.



> Anyway, I've come to conclusion that Lisp is bad, Lisp is evil,
> and as the subject says lisp is *SLOW*.

You are bad, you are evil, you are *SLOW*. Can't you get the message
by now? Get a new teacher, learn something. Don't bitch. You won't win
any respect by whinging about problems that are easy to fix. If you
can't understand Lisp, just don't use it. It's not a big deal.

Would you like to show that you're not clueless? That's also easy.


Name the Lisps that you've used. Are any of them commercial systems,
like LispWorks and Allegro CL? Here's LispWorks:

<URL:http://www.harlequin.co.uk/products/ads/lispworks/lispworks.html>

There's a free version of ACL for you to play with:
<URL:http://www.franz.com/frames/dload.main.html>

Show us some code that runs with ACL/PC (I'm sure you can find a
machine that can run it) that demonstrates the problems that you
describe. Can you do that? Why don't you do it? Could it be that you
_can't_ do it? Tsk.

Andreas Eder

Aug 12, 1997

Jon S Anthony wrote:
>OTOH, the idea that Lisp or other "functional" languages are somehow
>mathematics is just plain wrong and so, for this context, completely
>irrelevant (and thus completely in keeping with the cluelessness of
>the "peaceman").

I wouldn't call this idea 'plain wrong.' Of course, it is not mathematics proper,
but on the other hand, large parts of Lisp and other more functional languages like
Haskell are not very far removed from lambda calculus. (That's where the lambda in
Lisp comes from ;-) ). And that is precisely what makes these languages superior
to others! It is much easier to reason about programs without nasty side-effects.

Andreas

Andreas Eder

Aug 12, 1997

Gareth McCaughan wrote:
>Mathematical functions can contain iteration. Ever seen a formula
>with a summation or product symbol in it? Or a maximum or minimum
>operator? Iterative, every one.

I wouldn't call a formula with a summation sign in front iterative.
In essence it is neither iterative nor recursive. It's just a sum, that's it.
But to define the summation sign, you would do it best in a recursive way :-)

sum(i=0,0, T(i)) = T(0) ; sum(i=0,n+1,T(i)) = T(n+1) + sum(i=0,n,T(i))

Same goes for products. If you define it in most any other way, you get the
mysterious ( :-) ) ... notation. And besides, doing it this way is surely the
most natural one!


Aaron Gross

Aug 12, 1997

Andreas Eder <a...@laphroig.mch.sni.de> writes:

> Gareth McCaughan wrote:
> >Mathematical functions can contain iteration. Ever seen a formula
> >with a summation or product symbol in it? Or a maximum or minimum
> >operator? Iterative, every one.
>
> I wouldn't call a formula with a summation sign in front iterative.
> In essences it is neither iterative nor recursive. It's just a sum, that's it.
> But to define the summation sign, you would do it best in a recursive way :-)
>
> sum(i=0,0, T(i)) = T(0) ; sum(i=0,n+1,T(i)) = T(n+1) + sum(i=0,n,T(i))

And if the index set (the set of your i's) is infinite? How about
uncountable?

>
> Same goes for products. If you define it in most any other way, you get the
> mysterious ( :-) ) ... notation. And besides, doing it this way is surely the
> most natural one!

Not if you studied measure and integration before you studied
Lisp. Recursion's great, but let's not get carried away.

David Thornley

Aug 12, 1997

In article <33EF3E...@capital.net>,

Sajid Ahmed the Peaceman <peac...@capital.net> wrote:
>Martin Rodgers wrote:
>>
>> You admit that you don't know enough about Lisp to follow some simple
>> examples, yet you consider that your knowledge of Lisp is enough to
>> talk about the quality of Lisp implementations. You can't have it both
>> ways. Either you know enough about Lisp to talk about it with us with
>> equal authority, in which case you will be able to follow our code, or
>> you should admit that we know the language - and the implementation
>> details - far better than you do.
>>
>> Which is it?
>
> You got it. You know lisp better than I do. I gave up on it after
>seeing that it was "recursive" language. I'm glad to see that people
>are coming to their senses and making lisp more iterative. Other than
>the recursive aspect of Lisp, the only complaint I may have is with all
>the parenthesis, but that's not a big deal.
>
So what's wrong with recursion?

> The problem is with all the recursive style languages,
>lisp, prolog, ml , scheme, etc. It is going in the wrong direction.

>Computer languages are not like mathematics. Computer functions
>are different than mathematical functions.

If you mean that both x = 1 and x == 1 (in C) are not the same
as x = 1 (in the usual algebraic sense), yes, you're correct.
In other senses, you're wrong. Computer science involves a lot
of mathematics, and computer functions are mathematical functions.
In some respects, we have to treat them as mathematical functions
in order to get large systems built.

Even the assignment statement has a clear mathematical meaning,
which you will find discussed in great detail in Dijkstra's
_A_Discipline_of_Programming_. We rely on that meaning with
pretty much every C program we write. After we have written
x = 1;, we assume that x now has the value 1 (in whatever data
type x is).

We cannot tolerate unlimited complexity in our systems, because
we are merely human. The larger and more complex the building
blocks we can use, the larger and more complex systems we can
build with them. The only way to have large, complex, and
reliable building blocks is to treat them mathematically.

> The experience I've had
>(I know, I probably had some bad teachers), is where Lisp, prolog,
>ml, etc. is being used like mathematical functions: no iterative code
>allowed; functions returning instantaneously; abstract mathematical
>worlds, etc. That is not what computing is about. Computers run
>with a processor, one instruction at a time, not instantaneous
>functions.
>

Either you had bad teachers, or you weren't ready to learn what
they were teaching. It sounds like what they were teaching is
how to build reliable, large, and complex building blocks for
large systems. This is what computing is about, in the large.

Look, when I got my first personal computer (about 1980), it had
a 1.77 MHz Z80 processor. Skipping several steps, I bought a
Mac 6100/60 used a couple of years ago, which does math faster
than the supercomputer I first used. You can't buy anything like
the 6100/60 new any more; the slowest current Macs and clones
are more than twice as fast.

Any application that was painfully slow on my old TRS-80 would be
incredibly fast on my (slow) Mac. On the other hand, I'm pretty
much the same guy I was in 1980. I've learned a lot, but I'm
really no better at handling complexity. Machine efficiency
is becoming less and less important relative to human efficiency,
and I don't see that stopping.

If we can build larger systems by running software half as fast, that's
cool. We can spare a factor of two easy, for most purposes. (We're
doing that now, in most applications, because we can't handle the
complexity. Windows 95 does more than, say, MacOS 5, but requires
a 150 MHz Pentium and 16 MB memory rather than an 8 MHz 68000 with
1 MB memory. That's an incredible increase in processing power.)
What we can't afford, in general, is to increase the asymptotic
time required, but that's in itself a mathematical concept.

(Nor is your statement about instructions and processors completely
correct in a multiprocessing system.)

> When I went to school (graduated in 95), I had a near 4.0
>GPA, and I thought that the Lisp course was the easiest CS course in
>the computer science curriculum. I was planning on going to grad
>school in CS, until I took an upper level course in ML. The stuff
>was easy enough, I got an A in the course, but if that's what grad
>school is about, all this recursive crap that does not have
>any practical application, it's not for me. I then went out, got
>a programming job for about a year, and finally started up my own
>company, where I am now.
>

Currently, it's easy to stay in business in programming with some
pretty lousy systems (the place I'm working now is an excellent
example), so it's easy to get a programming job without understanding
computer science. It probably always will be easy, as the demand
for programmers is likely to go up, and the number of good computer
scientists isn't.

On the other hand, there's a lot of interest nowadays in extremely
complex systems that would be very useful if we could get them running
reliably. You're probably aware that many large software projects
fail horribly and expensively. Anything that allows us to make these
systems more reliably is potentially good.

> I tried to tell the CS teacher about machine language instructions,
>but he shoved it off and tried to say that some processors were designed
>to do only recursion; biggest bunch of bologna I've ever heard.
>

Never heard of recursive-only processors myself, nor can I understand
how they'd do that right now. There are processors that try to do
function calls fast. I suspect there's more to the conversation
than you're telling, possibly more than you remember, unless that was
a very bad CS teacher (and there are enough of those out there).

> Anyway, I've come to conclusion that Lisp is bad, Lisp is evil,
>and as the subject says lisp is *SLOW*.
>

I don't know how I'd recognize a "bad" and "evil" programming language,
if you'll allow me to disregard Intercal for the moment. Lisp does
tend to polarize people into those that love it and those that hate it,
but that doesn't make it bad.

However, Lisp is not slow. Recursion is not necessarily slow. The
only way to show that Lisp is slow is to take a modern Lisp system
and a modern C system (or what have you), and benchmark them. Be
sure that they do equivalent tasks. Normally, C programs do less
than Lisp programs, so you must allow for this. On Unix, large C
programs tend to leak memory like it was going out of style, so you
should either make sure the C program manages memory correctly or
turn off garbage collection for Lisp. Integer overflow in C usually
causes no problems other than plausible wrong answers, while Lisp
will get the right answer at the cost of a performance hit. Either
prove that undetected overflow is impossible in the C system (yes,
that's mathematical) or use declarations in the Lisp program to turn
off overflow checking. Remember to declare all variables by type
in Lisp, just as you'd have to in C.

When you've done all these things, do some timing. I'd be really
surprised if, for a reasonable benchmark, Lisp was less than half
as fast as C. It might be faster; CMU-CL has beaten Fortran systems
in number-crunching. Even if Lisp is half as fast, consider that if
the C program runs OK on my Mac the Lisp program will run better on
a current low-end machine.

David Thornley


Martin Rodgers

Aug 12, 1997, 3:00:00 AM

Aaron Gross wheezed these wise words:

> And if the index set (the set of your i's) is infinite? How about
> uncountable?

Isn't that what lazy evaluation is for? ;-) Well, it can be used to
search an infinite space, providing the search isn't exhaustive.

Aaron Gross

Aug 13, 1997, 3:00:00 AM

Gareth McCaughan <gj...@dpmms.cam.ac.uk> writes:

> Aaron Gross wrote:
>
> > > Mathematical functions can contain iteration. Ever seen a formula
> > > with a summation or product symbol in it? Or a maximum or minimum
> > > operator? Iterative, every one.
> >

> > Whoa, care to elaborate? I thought summations could be defined as
> > integrals with respect to counting measure. (The index set itself may
> > be uncountable, as long as only countably many summands are nonzero.)
> > Integrals, in turn, are defined more-or-less as suprema over sets of
> > certain finite sums. Are you saying the addition function from R^n
> > into R can only be defined recursively (for instance, 1+2+3)? If so,
> > does that include n=2 terms? Is the special case n=2 really simpler
> > to define than general n>1? If so, why? If not, where does the
> > recursion come in?
>
> I think you misread. "Mathematical functions can contain ->iteration<-".
> SAtP was trying to make a sharp distinction between the Real World (tm)
> in which everything is iterative, and the Evil Mathematical World where
> everything is recursive and all function evaluations take zero time,
> and I was pointing out that the distinction is fuzzier than he thinks.

I did misread (sorry), but my comment still stands with "recursion"
replaced by "iteration". I don't see how *either* recursion or
iteration comes into the definition of summation. It seems that
however you define x+y, you could define w+x+y+z just as directly, and
once you have finite sums defined, you go from there to infinite sums
(more generally, integrals) without any reference to iteration or
recursion, either implicit or explicit.

My overall point, which I wanted to make after misreading "recursion"
for "iteration", was that some people seem to think that recursion is
the natural description for practically anything, even in cases where
non-recursive descriptions are more straightforward. This is unrelated
to SAtP's original comments, but then again, who cares any more about
his comments anyway?

Sajid Ahmed the Peaceman

Aug 13, 1997, 3:00:00 AM

David Thornley wrote:
>
> If you mean that both x = 1 and x == 1 (in C) are not the same
> as x = 1 (in the usual algebraic sense), yes, you're correct.
> In other senses, you're wrong. Computer science involves a lot
> of mathematics, and computer functions are mathematical functions.


Not really. Computer science is restricted to the processor
that programs run on, and the instructions that they have. Also the
amount of time it takes to execute these instructions. These
instructions are simple bitwise operations.

> In some respects, we have to treat them as mathematical functions
> in order to get large systems built.
>

> ...

> We cannot tolerate unlimited complexity in our systems, because
> we are merely human. The larger and more complex the building
> blocks we can use, the larger and more complex systems we can
> build with them. The only way to have large, complex, and
> reliable building blocks is to treat them mathematically.
>

You have to treat these building blocks as a group
of processor instructions, not as mathematical functions.

>
> Any application that was painfully slow on my old TRS-80 would be
> incredibly fast on my (slow) Mac. On the other hand, I'm pretty
> much the same guy I was in 1980. I've learned a lot, but I'm
> really no better at handling complexity. Machine efficiency
> is becoming less and less important relative to human efficiency,
> and I don't see that stopping.
>
> If we can build larger systems by running software half as fast, that's
> cool. We can spare a factor of two easy, for most purposes. (We're
> doing that now, in most applications, because we can't handle the
> complexity. Windows 95 does more than, say, MacOS 5, but requires
> a 150 MHz Pentium and 16 MB memory rather than an 8 MHz 68000 with
> 1 MB memory. That's an incredible increase in processing power.)
> What we can't afford, in general, is to increase the asymptotic
> time required, but that's in itself a mathematical concept.
>

	Whether or not you *want* code that is fast and efficient is
a different question. Don't make the mistake of ignoring the speed.
WordPerfect Corp did that with their word processing
program for Windows, and look where they ended up.


> (Nor is your statement about instructions and processors completely
> correct in a multiprocessing system.)
>

	In a multiprocessing system, either there is shared memory,
or data is passed back and forth between the processors. It can be
looked at as a one-processor system with different inputs.


> Currently, it's easy to stay in business in programming with some
> pretty lousy systems (the place I'm working now is an excellent
> example), so it's easy to get a programming job without understanding
> computer science. It probably always will be easy, as the demand
> for programmers is likely to go up, and the number of good computer
> scientists isn't.
>

	I'm glad that you can admit that this so-called good computer
science stuff has no demand in the workplace. It's like the advanced
mathematics that they do research on, that has no real-world
applications. That was one of my frustrations with the graduate
level computer science courses that they had in my school. All this
mathematical and theory crap that you would never see implemented
in a real computer system. Did I tell you about the professor
I once had, who had 'The world's fastest sorting algorithm'?


> However, Lisp is not slow. Recursion is not necessarily slow. The
> only way to show that Lisp is slow is to take a modern Lisp system
> and a modern C system (or what have you), and benchmark them. Be
> sure that they do equivalent tasks. Normally, C programs do less
> than Lisp programs, so you must allow for this. On Unix, large C
> programs tend to leak memory like it was going out of style, so you
> should either make sure the C program manages memory correctly or
> turn off garbage collection for Lisp. Integer overflow in C usually
> causes no problems other than plausible wrong answers, while Lisp
> will get the right answer at the cost of a performance hit. Either
> prove that undetected overflow is impossible in the C system (yes,
> that's mathematical) or use declarations in the Lisp program to turn
> off overflow checking. Remember to declare all variables by type
> in Lisp, just as you'd have to in C.
>
> When you've done all these things, do some timing. I'd be really
> surprised if, for a reasonable benchmark, Lisp was less than half
> as fast as C.

	Get rid of any unnecessary recursion, and you'd get your Lisp
to run even faster. You'll get rid of the overhead needed for repeated
calls to functions.

Peaceman

Marco Antoniotti

Aug 13, 1997, 3:00:00 AM

In article <33ECA2...@capital.net> Sajid Ahmed the Peaceman <peac...@capital.net> writes:

From: Sajid Ahmed the Peaceman <peac...@capital.net>
Newsgroups: comp.lang.lisp,comp.lang.c++,comp.programming
Date: Sat, 09 Aug 1997 13:00:52 -0400
Organization: Logical Net
Reply-To: peac...@capital.net
Lines: 24
NNTP-Posting-Host: dialup033.colnny1.capital.net
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Mailer: Mozilla 3.01 (WinNT; I)
Xref: agate comp.lang.lisp:29827 comp.lang.c++:286538 comp.programming:53822

Marco Antoniotti wrote:
>
> I posted a complete and functioning C binary tree traversal code a few days
> ago, *begging* you to rewrite it in iterative format (sans le stack, of
> course).
>
> Seen nothing yet.
>
> Do you want the code? It's still waiting for you in my disk.
>
> Cheers
> --
> Marco Antoniotti


I never said that you could do it without a stack, that was
somebody else. What I did say, though, was that the iterative version
would be faster than the recursive version, but you would still
need to set up a stack, and maybe put in some inline assembly code
for the push and pop macros. If you didn't have the inline assembly
push and pop macros, then it may be possible to write a
faster recursive version, but if you do, then definitely not.

My memory must be failing me, but I might accept this :) Apart from
the fact that maybe only today you realized the difference between
tail and inherent recursion (as your obfuscated C/C++ factorial
postings show), what you are basically saying is that "if you write
your code in assembly language, your program will be faster".

In Italy we call this a discovery of hot water :)

Cheers
--
Marco Antoniotti
==============================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472

Marco Antoniotti

Aug 13, 1997, 3:00:00 AM

In article <33EF3E...@capital.net> Sajid Ahmed the Peaceman <peac...@capital.net> writes:

From: Sajid Ahmed the Peaceman <peac...@capital.net>
Newsgroups: comp.lang.lisp,comp.lang.c++,comp.programming
Date: Mon, 11 Aug 1997 12:31:35 -0400
Organization: Logical Net
Reply-To: peac...@capital.net

...

When I went to school (graduated in 95), I had a near 4.0
GPA, and I thought that the Lisp course was the easiest CS course in
the computer science curriculum. I was planning on going to grad
school in CS, until I took an upper level course in ML. The stuff
was easy enough, I got an A in the course, but if that's what grad
school is about, all this recursive crap that does not have
any practical application, it's not for me. I then went out, got
a programming job for about a year, and finally started up my own
company, where I am now.


A few suggestions:

1 - Check out "Introduction to Algorithms" by T. H. Cormen and
C. E. Leiserson and R. L. Rivest, MIT Press, (not the best, but a
good one) or any other algorithms book. Hope you get past the
first three chapters, where math abounds.

2 - Check out "The Basic Laws of Human Stupidity" by C. Cipolla.

3 - Tell us the name of your company. You will have to be ready to
suffer from some "market forces". :)

David Thornley

Aug 13, 1997, 3:00:00 AM

In article <33F1F1...@capital.net>,

Sajid Ahmed the Peaceman <peac...@capital.net> wrote:
>David Thornley wrote:
>>
>> If you mean that both x = 1 and x == 1 (in C) are not the same
>> as x = 1 (in the usual algebraic sense), yes, you're correct.
>> In other senses, you're wrong. Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.
>
>
> Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have. Also the
>amount of time it takes to execute these instructions. These
>instructions are simple bitwise operations.
>
All processors and instruction sets are equivalent, in a sense that
can be quantified, to Turing machines. Turing machines are mathematical
constructs. Moreover, a programming language definition can be
treated as a mathematical entity. It is the responsibility, then,
of the people who build the system to make it work according to the
abstract, mathematical, specification.

I've taken a mechanical engineering class, and there was plenty of
math in that. It makes just as much sense to say that Mech E is
restricted to steel and plastic and copper as it does to say that
CSci is restricted to processors.

Suppose we did restrict ourselves to existing processors: how would
we, as computer scientists, get anything done? There are theorems
all over the field that are very useful in practice, because we form
mathematical abstractions that we can reason with. If we had to
prove that this K of machine language instructions does such and
such on, say, a 6502, we'd never be able to generalize. Computers
today would be marginally more useful than they were in 1950.


>
>> In some respects, we have to treat them as mathematical functions
>> in order to get large systems built.
>>
>> ...
>> We cannot tolerate unlimited complexity in our systems, because
>> we are merely human. The larger and more complex the building
>> blocks we can use, the larger and more complex systems we can
>> build with them. The only way to have large, complex, and
>> reliable building blocks is to treat them mathematically.
>>
>
> You have to treat these building blocks as a group
>of processor instructions, not as mathematical functions.
>

Why? Why do I have to do what you say? If I had to treat them as
groups of processor instructions, they'd be far too clumsy to
do anything useful with.

What I do is create modules of some sort in a higher-level programming
language. I use the abstract *mathematical* model of the language to
demonstrate that they will do what I want them to reliably. Most of
the time, the real, physical, systems do what the mathematical model
says; when they don't, I bitch to the vendor and get results. When I
feel like it, I move my modules to another machine, very different,
that also has a Lisp or C or whatever system, and they still work.
I suspect you do much the same thing.

How do you do this while treating modules as groups of instructions?
You would be very hard put to come to any useful conclusions just
studying the machine code. Assuming that after long, painful work
you succeeded, what if you want to upgrade your compiler or move to
another computer or change the optimization level? You'd be screwed.

>>
>> If we can build larger systems by running software half as fast, that's
>> cool. We can spare a factor of two easy, for most purposes. (We're
>> doing that now, in most applications, because we can't handle the
>> complexity. Windows 95 does more than, say, MacOS 5, but requires
>> a 150 MHz Pentium and 16 MB memory rather than an 8 MHz 68000 with
>> 1 MB memory. That's an incredible increase in processing power.)
>> What we can't afford, in general, is to increase the asymptotic
>> time required, but that's in itself a mathematical concept.
>>
>
> Whether or not you *want* code that is fast and efficient is
>a different question. Don't make the mistake of ignoring the speed.
>Word Perfect Corp did that with their word processing
>program for windows, and look where they ended up.
>

Yup. Microsoft released an operating system for a personal computer
that requires far more resources to run fast than the mainframes
I used to work on. I used it on a 80MHz Pentium with slightly slow
memory and it was a dog.

They have gone bankrupt, haven't they?


>
>> (Nor is your statement about instructions and processors completely
>> correct in a multiprocessing system.)
>>
>
> In a multiprocessing system, either there is shared memory,
>or data is passed back and forth between the processors. It can be
>looked at as a one-processor system with different inputs.
>

You are perfectly correct. Now, expand on that insight.

You can look at a certain sort of system and see it as something else,
when it suits you. This is very good. Now, look at a mathematical
model and see it as a physical computer with software, and vice versa.
You're almost there.


>
>> Currently, it's easy to stay in business in programming with some
>> pretty lousy systems (the place I'm working now is an excellent
>> example), so it's easy to get a programming job without understanding
>> computer science. It probably always will be easy, as the demand
>> for programmers is likely to go up, and the number of good computer
>> scientists isn't.
>>
>
> I'm glad that you can admit that this so-called good computer
>science stuff has no demand in the workplace. It's like the advanced
>mathematics that they do research on, that has no real-world
>applications. That was one of my frustrations with the graduate
>level computer science courses that they had in my school. All this
>mathematical and theory crap that you would never see implemented
>in a real computer system. Did I tell you about the professor
>I once had, who had 'The world's fastest sorting algorithm'?
>

OK, let's look at some of this mathematical and theory crap.

The study of algorithms is highly mathematical. The result is that
many of our canned program libraries run swiftly and accurately.
Much of this is built into the software we use daily. Did you
ever wonder what goes into a fast string search?

The study of optimization is highly mathematical. The results are
vital to the production of processors.

Our compilers include the mathematics of finite automata, denotational
semantics, and some of the algorithms above. Producing optimizers
requires a bit of mathematics itself.

The results of mathematics are all over our computers. The stuff that
fuzzy-looking academics were working on when I was an undergrad is
now part of the stuff I can buy at Computer City today.

You've chosen to overlook this, and to do programming grunt work. Right
now, this pays pretty well, since businesses will generally prefer to
pay people a lot of money to create bad custom software than to buy
decent packages. It's likely to pay OK for the rest of your life.
It still might be better to open up your mind and learn something.


>
>> However, Lisp is not slow. Recursion is not necessarily slow. The
>> only way to show that Lisp is slow is to take a modern Lisp system
>> and a modern C system (or what have you), and benchmark them. Be
>> sure that they do equivalent tasks. Normally, C programs do less
>>

>> When you've done all these things, do some timing. I'd be really
>> surprised if, for a reasonable benchmark, Lisp was less than half
>> as fast as C.
>
> Get rid of any unnecessary recursion, and you'd get your Lisp
>to run even faster. You'll get rid of the overhead needed for repeated
>calls to functions.
>

This may be good advice under some circumstances, but what does it
have to do with whether Lisp is fast or slow? Maybe that, recursion
being slow, Lisp compilers are likely to produce fast code because
they generally eliminate more recursion than other compilers?

OK, so you've realized that Lisp can be fast, and that you can look
at a physical system (a multi-processor computer) as if it were
something else. You're making progress.

David Thornley

Christopher B. Browne

Aug 14, 1997, 3:00:00 AM

On Wed, 13 Aug 1997 13:40:00 -0400, Sajid Ahmed the Peaceman
<peac...@capital.net> posted:

>David Thornley wrote:
>>
>> If you mean that both x = 1 and x == 1 (in C) are not the same
>> as x = 1 (in the usual algebraic sense), yes, you're correct.
>> In other senses, you're wrong. Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.

> Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have.

Huh?

You clearly got an unspeakably horrible education in computer science
if you can conclude that from what you learned.

Computer science is "restricted" to machines that reflect reasonable
facsimiles of Turing machines, which are a fundamentally generic sort of
"processor."

Any time you hit computing theory, it is *necessary* to create, whether
in logical or in theoretical terms, whatever "machine" (DFA, PDA, TM)
proves necessary for the task at hand. These "processors" seldom have
any directly isomorphic analogue in the world of "implemented processors."

Which is quite separate from them being of practical value; people seldom
build TM's as the programming model is, while theoretically clean, rather
arcane (and inefficient) for realistic applications. On the other hand,
the various finite state automatons are of tremendous practical value,
used particularly often in engineering applications...

You may have studied computer science, but based on your above claim,
you're clearly no computer scientist.

Based on the other commentaries, it's clear that you can't see the
difference between:
- Tail recursion, which need not consume either time or memory,
- State space search recursion, which *inherently* tends to consume
enormous amounts of memory. (And no doubt you'd have stronger words
to say if you could comprehend what "unification" means...)
and
- Other recursive algorithms, which may or may not behave "efficiently,"
depending on how they branch.

In no case does any of this depend on what language is used for
implementation; an A* algorithm can readily consume 1GB of memory
regardless of whether it is implemented in C++, assembly language,
LISP, or Prolog.

But comprehending *that* would require some knowledge of computer
science...
--
Christopher B. Browne, cbbr...@hex.net, chris_...@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12 D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Q: What does the CE in Windows CE stand for? A: Caveat Emptor...

Bjørn Remseth

Aug 14, 1997, 3:00:00 AM

Sajid Ahmed the Peaceman <peac...@capital.net> writes:

> Not really. Computer science is restricted to the processor
> that programs run on, and the instructions that they have. Also the
> amount of time it takes to execute these instructions. These
> instructions are simple bitwise operations.


Er, in a word "No". Comp. Sci. is quite a bit more than that, in fact
the borders are extremely fuzzy, and rightfully so, since it is not at
all clear where it is most convenient to put them.

--
(Rmz)

Bj\o rn Remseth !Institutt for Informatikk !Net: r...@ifi.uio.no
Phone:+47 22855802!Universitetet i Oslo, Norway !ICBM: N595625E104337

Sajid Ahmed the Peaceman

Aug 14, 1997, 3:00:00 AM

David Thornley wrote:
> All processors and instruction sets are equivalent, in a sense that
> can be quantified, to Turing machines. Turing machines are mathematical
> constructs. Moreover, a programming language definition can be
> treated as a mathematical entity. It is the responsibility, then,
> of the people who build the system to make it work according to the
> abstract, mathematical, specification.
>

	Fine, as long as you keep in mind that you're working with a
processor, not some abstract mathematical world.


>
> Suppose we did restrict ourselves to existing processors: how would
> we, as computer scientists, get anything done? There are theorems
> all over the field that are very useful in practice, because we form
> mathematical abstractions that we can reason with. If we had to
> prove that this K of machine language instructions does such and
> such on, say, a 6502, we'd never be able to generalize. Computers
> today would be marginally more useful than they were in 1950.

	Advanced computer science concepts should be treated the same
way as advanced mathematics concepts. If there are real-world
applications for them, good; if not, it's a complete and utter
waste of time.

> > You have to treat these building blocks as a group
> >of processor instructions, not as mathematical functions.
> >
> Why? Why do I have to do what you say? If I had to treat them as
> groups of processor instructions, they'd be far too clumsy to
> do anything useful with.
>

The reason that you treat them as groups of processor
instructions is because that is in fact what they are.


> What I do is create modules of some sort in a higher-level programming
> language. I use the abstract *mathematical* model of the language to
> demonstrate that they will do what I want them to reliably. Most of
> the time, the real, physical, systems do what the mathematical model
> says; when they don't, I bitch to the vendor and get results. When I
> feel like it, I move my modules to another machine, very different,
> that also has a Lisp or C or whatever system, and they still work.
> I suspect you do much the same thing.


	Good. You're keeping the processor in mind.


>
> How do you do this while treating modules as groups of instructions?
> You would be very hard put to come to any useful conclusions just
> studying the machine code. Assuming that after long, painful work
> you succeeded, what if you want to upgrade your compiler or move to
> another computer or change the optimization level? You'd be screwed.
>

	To the extent that you're driving the processor, using
mathematical functions is fine. Levels above that, like the 'world's
fastest sorting algorithm' of a former professor of mine, as well as
many other so-called computer science mathematical concepts, show
complete ignorance of reality.

...

The other problems about looking at computer functions as
mathematical functions in general is the tendency
to make everything recursive.

> Yup. Microsoft released an operating system for a personal computer
> that requires far more resources to run fast than the mainframes
> I used to work on. I used it on a 80MHz Pentium with slightly slow
> memory and it was a dog.

	I never used Windows in the old days, before they came out with
Win 95, for the same reasons that you stated. Microsoft has pretty much
forced every PC user to go into a GUI with their release of Win95. I've
always wondered why they did it. At first I thought it was because of
the easier interface for new users, but then I realized it was something
else. Before the introduction of Windows 95, you had several companies
making their own version of DOS (DR-DOS, PC DOS, etc.). It was fairly
easy to write one's own O.S. that's compatible with MS-DOS. With Windows
95, it's a completely different story. They pretty much have a lock on
the market. I know OS/2 is there, but making OS/2 compatible with Win95
is no easy task.

>
> They have gone bankrupt, haven't they?

(in reference to WordPerfect Corp)

Naa. They sold out when the going got tough.


Peaceman

Markku Laukkanen

Aug 14, 1997, 3:00:00 AM

Sajid Ahmed the Peaceman wrote:

> Martin Rodgers wrote:
> >
> > You admit that you don't know enough about Lisp to follow some simple
> > examples, yet you consider that your knowledge of Lisp is enough to
> > talk about the quality of Lisp implementations. You can't have it both
> > ways. Either you know enough about Lisp to talk about it with us with
> > equal authority, in which case you will be able to follow our code, or
> > you should admit that we know the language - and the implementation
> > details - far better than you do.
> >
> > Which is it?
>
> 	You got it. You know Lisp better than I do. I gave up on it
> after seeing that it was a "recursive" language. I'm glad to see that
> people are coming to their senses and making Lisp more iterative.
> Other than the recursive aspect of Lisp, the only complaint I may have
> is with all the parentheses, but that's not a big deal.
>

> The problem is with all the recursive style languages,
> lisp, prolog, ml , scheme, etc. It is going in the wrong direction.
> Computer languages are not like mathematics. Computer functions
> are different than mathematical functions. The experience I've had
> (I know, I probably had some bad teachers), is where Lisp, prolog,
> ml, etc. is being used like mathematical functions: no iterative code
> allowed; functions returning instantaneously; abstract mathematical
> worlds, etc. That is not what computing is about. Computers run
> with a processor, one instruction at a time, not instantaneous
> functions.

All right, so it's easier to think in an iterative style than a
recursive style?

So let's take an example of one kind of recursion:

Process A does job A'.
Process B does job B'.
Job A' needs to call B' to get its job done (and in some cases A' too).
B' needs (in some cases) to call A' to get its job done.

So how are you going to implement this without using functions and some
kind of recursion?


>
>
> When I went to school (graduated in 95), I had a near 4.0
> GPA, and I thought that the Lisp course was the easiest CS course in
> the computer science curriculum. I was planning on going to grad

I am very pleased to hear that; I always assumed that Common Lisp
in particular is a somewhat difficult language for an average
programmer to understand.

Could you please explain the following terms (I suppose that an A
grade is enough for this):

unwind-protect
dynamic binding
dynamic scope
differences between let and let*
symbol-macrolet
compiler-let
eval-when
differences between eq/eql/equal/equalp
loop macro
and maybe CLOS

Trivial, I suppose?


> school in CS, until I took an upper level course in ML. The stuff
> was easy enough, I got an A in the course, but if that's what grad
> school is about, all this recursive crap that does not have
> any practical application, it's not for me. I then went out, got
> a programming job for about a year, and finally started up my own
> company, where I am now.

Recursive functions don't have any practical applications?
How about games like chess ?

>
>
> I tried to tell the CS teacher about machine language
> instructions,
> but he shoved it off and tried to say that some processors were
> designed
> to do only recursion; biggest bunch of bologna I've ever heard.

Personally, I am not interested in machine instructions at all. But
sometimes it's nice to be able to debug at the assembler level to see that
the C/C++ compiler REALLY screwed it up.

> Anyway, I've come to the conclusion that Lisp is bad, Lisp is
> evil,
> and as the subject says lisp is *SLOW*.

> Peaceman

C++ is horrible, C++ is HUGE, C++ doesn't have GC, most of the C++
compilers don't support templates properly, and most C++ programmers
don't know anything about OOD.

PKY
C++ Code Development is EXTREMELY slow.

Dennis Weldy

Aug 14, 1997, 3:00:00 AM

Sajid Ahmed the Peaceman wrote in article <33F35D...@capital.net>...

>> Suppose we did restrict ourselves to existing processors: how would
>> we, as computer scientists, get anything done? There are theorems
>> all over the field that are very useful in practice, because we form
>> mathematical abstractions that we can reason with. If we had to
>> prove that this K of machine language instructions does such and
>> such on, say, a 6502, we'd never be able to generalize. Computers
>> today would be marginally more useful than they were in 1950.
>

> Advanced computer science concepts should be treated the same
>way as advanced mathematics concepts. If there are real-world
>applications for it, good; if not, it's a complete and utter waste of time.

Gosh, I'm so glad that Turing, von Neumann, etc. didn't feel that way. If
y'look it up, you'll note that the TM predates the computer by a number of
years. Y'see, many of the concepts we use today (finite automata for
lexical analysis, PDAs for parsing) come from the advanced concepts in CS,
even if it's not clear at the time it's being developed whether it has
"practical" applications or not.

Hmm.. So do you also consider the work by Hawking, et al to be a waste of
time? After all, what are the practical applications of determining whether
or not black holes radiate? ;-)

It may not be clear now, but someday....?


Ingvar Mattsson

Aug 14, 1997, 3:00:00 AM

Sajid Ahmed the Peaceman <peac...@capital.net> writes:
>
> David Thornley wrote:

[SNIP]


> > Any application that was painfully slow on my old TRS-80 would be
> > incredibly fast on my (slow) Mac. On the other hand, I'm pretty
> > much the same guy I was in 1980. I've learned a lot, but I'm
> > really no better at handling complexity. Machine efficiency
> > is becoming less and less important relative to human efficiency,
> > and I don't see that stopping.
> >
> > If we can build larger systems by running software half as fast, that's
> > cool. We can spare a factor of two easy, for most purposes. (We're
> > doing that now, in most applications, because we can't handle the
> > complexity. Windows 95 does more than, say, MacOS 5, but requires
> > a 150 MHz Pentium and 16 MB memory rather than a 8 MHz 68000 with
> > 1 MB memory. That's an incredible increase in processing power.)
> > What we can't afford, in general, is to increase the asymptotic
> > time required, but that's in itself a mathematical concept.
> >
>
> Whether or not you *want* code that is fast and efficient is
> a different problem. Don't make the mistake of ignoring speed.
> WordPerfect Corp did that with their word processing
> program for Windows, and look where they ended up.

Given that the resulting object code is equally fast, is it better to
spend time waiting for compilations or to write code?

[SNIP]


> > Currently, it's easy to stay in business in programming with some
> > pretty lousy systems (the place I'm working now is an excellent
> > example), so it's easy to get a programming job without understanding
> > computer science. It probably always will be easy, as the demand
> > for programmers is likely to go up, and the number of good computer
> > scientists isn't.
> >
>
> I'm glad that you can admit that this so-called good computer
> science stuff has no demand in the workplace. It's like the advanced
> mathematics that they do research on, that has no real-world
> applications. That was one of my frustrations with the graduate
> level computer science courses that they had in my school. All this
> mathematical and theory crap that you would never see implemented
> in a real computer system. Did I tell you about the professor
> I once had, that had 'The world's fastest sorting algorithm'?

What part of the "high-level CS crap" haven't you seen implemented in
a modern computer system? Queueing theory shows up in schedulers, lots of
hairy stuff ends up in optimizers, relational algebra shows up in
database systems, abstraction shows up in maintainable code, and recursion
shows up (in the source) whenever you want to write a parser. Automata
theory shows up in compilers, interpreters, network code, whatever.

[Lisp isn't slow, and suitable test environments sketched]


> >
> > When you've done all these things, do some timing. I'd be really
> > surprised if, for a reasonable benchmark, Lisp was less than half
> > as fast as C.
>
> Get rid of any unnecessary recursion, and you'd get your Lisp
> to run even faster. You'll get rid of the overhead needed for repeated
> calls to functions.

That's what the compiler is there for. If I can win 30% development
time for a moderately low increase in compile time and (say) a 5%
speed loss, that is in general a good trade-off. Not that I say that I
will necessarily lose speed. Since I have a good development tool (and a
good optimizer), I can concentrate on algorithmic optimization,
instead of low-level bitfiddling. When the algorithms have been
revised, I can then concentrate on bit fiddling. All in all, I
probably end up with a better/faster/more maintainable end product.

OK, this isn't (actually) lisp specific. Most of it is general.
But I must honestly say that I find prototyping to be *much* faster in
lisp than in C.

//Ingvar
--
Sysadmin, disgruntled, unpolite.
I don't speak for my employer nor do they speak for me.
Accept this and life will be easier. ing...@idasys.se

Martin Rodgers

Aug 15, 1997, 3:00:00 AM

Sajid Ahmed the Peaceman wheezed these wise words:

> Advanced computer science concepts should be treated the same
> way as advanced mathematics concepts. If there are real-world
> applications for it, good; if not, it's a complete and utter waste of time.

Then I suggest that you take a better look at the world of
mathematics. A lot of it may seem abstract and meaningless, and then
years later somebody notices something curious. Pow, suddenly you have
a whole "new" field, like fractals or chaos theory.

Do you have a use for boolean algebra, lambda calculus? Perhaps not
you personally, but people who write compilers might. Don't invite us to
discard everything that you don't understand. If you personally don't
have a use for something, there may still be another person who _will_
have an application for it.

However, your posts so far have suggested that you can not only
dismiss all these things, but advise the rest of us to do the same.
You need to trust me on one thing only, which is that we don't need to be
"saved" from ourselves. If we've got it all wrong, fine. You won't
suffer at all, will you?

Give it a rest. You've made your point. If you have anything else to
say, then say it. Otherwise, I fail to see what you can gain by
making the same point over and over, with us refuting it each time.

As I've said before, let the jury decide.

? the platypus {aka David Formosa}

Aug 15, 1997, 3:00:00 AM

In <33F1F1...@capital.net> Sajid Ahmed the Peaceman <peac...@capital.net> writes:

>David Thornley wrote:
>> [...] Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.

> Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have.

You have never heard of Turing equivalence? Any computer (given time and
memory) can simulate any other one. If you have an algorithm you can
implement it on any processor you choose.

> Also the amount of time it takes to execute these instructions.

Of course this is true; however, the speed of computers is getting faster
and faster. This, combined with the fact that most CPUs are idle, means
that you are more likely to get efficiency gains by making the OS use the
processor more efficiently.

> These instructions are simple bitwise operations.

I wish this were true. Even RISC architectures have pretty complex instructions.

[...]

>> (Nor is your statement about instructions and processors completely
>> correct in a multiprocessing system.)
>>

> In a multiprocessing system, either there is shared memory,
>or data is passed back and forth between the processors. It can be
>looked at as a one-processor system with different inputs.

If you could turn this abstraction into reality then a whole lot of
computer science would be a lot easier. However, you can't; a
multiprocessor device works differently and has different costs. For
example, poorly written code can't spread itself across multiple
processors and gain the speed that this brings.

>> Currently, it's easy to stay in business in programming with some
>> pretty lousy systems (the place I'm working now is an excellent
>> example), so it's easy to get a programming job without understanding
>> computer science.

[...]

> I'm glad that you can admit that this so called good computer
>science stuff has no demand in the work place.

Reread the statement he made: he said it's easy to get a programming job
where you don't know computer science. However, a CS degree means
that what you write will be good, and seeing the complete trash
that I've read, this is a great pity. People with no idea of efficient
code etc., people who use bubble sorts, people who write without
meaningful names; a good dose of CS would cure all that.

[...]
--
Please excuse my spelling as I suffer from agraphia see the url in my header.
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is $0.08 per day too much to pay? ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument

Sajid Ahmed the Peaceman

Aug 15, 1997, 3:00:00 AM

Martin Rodgers wrote:

>
> Give it a rest. You've made your point. If you have anything else to
> say, then say it. Otherwise, I fail to see what you can gain by
> making the same point over and over, with us refuting it each time.
>

I think you're right. I have way too much stuff to do, and can't
keep up with this thread. You guys will have to keep going without me.

Peaceman

Sajid Ahmed the Peaceman

Aug 15, 1997, 3:00:00 AM

Markku Laukkanen wrote:
>
> So let's take an example of one kind of recursion:
>
> Suppose a process A does job A', and process B does job B'. Job A'
> needs to call B' to get its job done (and in some cases A' too), and
> B' needs (in some cases) to call A' to get its job done.
>
> So how are you going to implement this without using functions and
> some kind of recursion?
>

There's a term for what you describe above. It's called
event-driven programming, which is common in windowed
programming environments.

With event-driven programming, you need to write a handler
for each of the different events that occur. This is a whole different
animal from standard programming. In this case you have to look at
each event handler as a separate program, though they may share some
of the same memory, i.e. variables.

The difference between these event handlers and standard recursive
functions is that they terminate completely, after posting an
event notification on some kind of event queue.
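That style can be sketched as a dispatch loop. This is my own minimal
illustration (in Python, with hypothetical "ping"/"pong" events): each
handler runs to completion and posts any follow-up work to the queue,
so no handler ever sits on the stack waiting for another.

```python
# A minimal event loop: handlers terminate completely after posting
# follow-up events, instead of calling each other recursively.
from collections import deque

events = deque(["ping"])          # hypothetical event queue
log = []

def handle_ping():
    log.append("ping")
    events.append("pong")         # post a notification, then terminate

def handle_pong():
    log.append("pong")            # terminates without posting anything

handlers = {"ping": handle_ping, "pong": handle_pong}

while events:                     # the iterative dispatch loop
    handlers[events.popleft()]()

print(log)                        # → ['ping', 'pong']
```

Note that the stack depth never exceeds one handler call, no matter how
long the chain of events gets.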


>
> > school in CS, until I took an upper level course in ML. The stuff
> > was easy enough, I got an A in the course, but if that's what grad
> > school is about, all this recursive crap that does not have
> > any practical application, it's not for me. I then went out, got
> > a programming job for about a year, and finally started up my own
> > company, where I am now.
>
> Recursive functions doesn't have any practical applications ?
> How about games like chess ?

It may not be easy, but it can be done in a nonrecursive
format.


Peaceman

Christopher B. Browne

Aug 16, 1997, 3:00:00 AM

On Fri, 15 Aug 1997 22:38:36 -0400, Sajid Ahmed the Peaceman
<peac...@capital.net> posted:

>> Recursive functions doesn't have any practical applications ?
>> How about games like chess ?
>
> It may not be easy, but it can be done in a nonrecursive
>format.

And you can do accounting using roman numerals. That makes it neither
sensible nor efficient.

I'd love to see your non-recursive implementation of A* search... Or
frankly *any* implementation of *any* of the important state space search
algorithms that doesn't use recursion either directly or indirectly.

The question is not whether it can be done or not, but rather whether
the *real* implementations use recursion or not.

Can you describe a chess search algorithm in less than 10 lines of
code *without* using recursion?

MIT's "latest" is described at:
MTD(f), a new chess algorithm: http://theory.lcs.mit.edu/~plaat/mtdf.html

If you wish to demonstrate the evils of recursion properly, please
provide a nonrecursive implementation of any algorithm similar to MTD(f)
or Alpha/Beta pruning. And show how your nonrecursive implementation:
a) Takes less time, and
b) Consumes less memory.

Do you think that the IBM developers that wrote Deep Blue used purely
"iterative" algorithms, eschewing recursion? I rather doubt it...


--
Christopher B. Browne, cbbr...@hex.net, chris_...@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12 D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>

Bill Gates to his broker: "You idiot, I said $150 million on **SNAPPLE**!!!"

Norman Bullen

Aug 16, 1997, 3:00:00 AM

Christopher B. Browne wrote:
>
> And you can do accounting using roman numerals. That makes it neither
> sensible nor efficient.
>
> I'd love to see your non-recursive implementation of A* search... Or
> frankly *any* implementation of *any* of the important state space search
> algorithms that doesn't use recursion either directly or indirectly.
>
> The question is not whether it can be done or not, but rather whether
> the *real* implementations use recursion or not.
>
> Can you describe a chess search algorithm in less than 10 lines of
> code *without* using recursion?
>
> MIT's "latest" is described at:
> <a href="http://theory.lcs.mit.edu/~plaat/mtdf.html">MTD(f), a new
> chess algorithm</a>
>
> If you wish to demonstrate the evils of recursion properly, please
> provide a nonrecursive implementation of any algorithm similar to MTD(f)
> or Alpha/Beta pruning. And show how your nonrecursive implementation:
> a) Takes less time, and
> b) Consumes less memory.
>
> Do you think that the IBM developers that wrote Deep Blue used purely
> "iterative" algorithms, eschewing recursion? I rather doubt it...
> --

Without intending to get involved in any debate about the "evils of
recursion" (I use recursion where it's appropriate and try to identify
those algorithms where iteration might be better) I'd like to comment on
the issue of move trees in Chess.

There are good reasons for generating move trees in an iterative
fashion. I have myself written a Checkers player that generated the move
tree without use of recursion. (It wasn't very good, but that's because
I'm not a very good Checkers player; it had nothing to do with the way
the move tree was generated.)

The primary reason for the design of my iterative algorithm was to
achieve a "breadth-first" rather than "depth-first" move tree. A
recursive algorithm will be "depth-first": from any given board
situation it will generate a move and look at the board situation
resulting from that move, and so on, to some limit of recursion depth.
Only after consequences of that first move are examined will it generate
another move at the top level. I wanted an algorithm that would look
first at each move at the top level before looking at the moves possible
from each of those resulting board situations.

I did this by constructing a tree in "heap" memory as opposed to "stack"
memory as recursion normally does. The root of the tree is a
representation of the current board situation. Each node of the tree
must be linked not only to its parent node and all child nodes but also
in a single list of nodes in the order of creation. That list initially
consists of just the root node. Now, iterate along that list. Generate
each possible move and for each of them construct the resulting board
situation and append it to the list. Then step along to the next node on
the list and do the same thing, an iterative process. Note that you may
never get to the end of the list because you keep adding more nodes to
the end of it. Just as a recursive algorithm must have some depth limit
this algorithm must have a limit to the total number of nodes that it
can generate.
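The list-driven generation described above can be sketched like this. It is
my own illustration (in Python), with a hypothetical `moves` function
standing in for a real Checkers move generator; nodes live on an ordered
list in heap memory, and we walk that list while appending children to its
end.

```python
# Iterative, breadth-first move-tree generation: no recursion, just a
# cursor walking a list that grows at its far end.
from collections import namedtuple

Node = namedtuple("Node", "board depth parent")

def moves(board):
    # Hypothetical move generator: each "board" is just a number with
    # two successor positions, to keep the example self-contained.
    return [board * 2, board * 2 + 1]

def build_tree(root_board, max_nodes):
    nodes = [Node(root_board, 0, None)]   # list in creation order
    i = 0
    while i < len(nodes) and len(nodes) < max_nodes:
        parent = nodes[i]
        for b in moves(parent.board):
            if len(nodes) >= max_nodes:   # total-node limit, not a depth limit
                break
            nodes.append(Node(b, parent.depth + 1, i))
        i += 1                            # step along the list, never recursing
    return nodes

tree = build_tree(1, 7)
print([n.depth for n in tree])            # → [0, 1, 1, 2, 2, 2, 2]
```

Because nodes come off the list in creation order, the tree fills level by
level, which is exactly the breadth-first property described above.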

After generating this tree with a certain number of nodes, I did use a
recursive algorithm to walk the tree from the root to do the mini-max
alpha/beta search although I believe I could have incorporated that into
the initial iterative loop as well.

A breadth-first algorithm seems like it might be better for some timed
games, such as Chess and Checkers. You can generate nodes at a deeper
and deeper level until you run out of time without worrying that you
might have missed the best move at the root level simply because you
looked depth-first at other possibilities first.

Also, and this may be more important in Checkers, there are many cases
where different combinations of moves may lead to the same board
situation. If you have the whole tree in memory you can combine leaves
that match (even when they are on different levels) and only generate
the sub-tree once. (This turned out to be REALLY messy in practice!)

Finally, if you have the tree in memory, you may be able to save a
little time by saving the portion of it (probably a small portion) that
results from the move that you make and the move that the opponent
makes.

It does, probably, require more memory to build the move tree this way.

Norm

Bulent Murtezaoglu

Aug 17, 1997, 3:00:00 AM

In article <joswig-ya0231800...@news.lavielle.com>
Rainer Joswig writes:
[quoted parts of my posting deleted]
>"Speed" is often not an issue on modern personal computers. These
>machines are most of the time *idle*, the user is just futzing
>around, for many tasks there is little processing power needed or
>there is even hardware support.

Agreed. I agreed with that observation in your original article too.
My point is: [knowledgeable] people who go after the latest and
fastest usually have a reason _other than_ GUI speed or what not. We
_seem to_ disagree because we're talking about different people, and
the Peaceman clouded everybody's thinking recently. I understand your
point about window systems, and mostly agree with it. Having said
that, I should also point out -- as I'm sure you will agree -- there
are other legitimate uses of computers that do not entail the system
waiting for user input. If the arguments about overheads brought
about by the dynamic nature of Lisp are going to make any sense at all from
an engineering perspective, they will do so for computation-intensive
tasks.

[examples noted and deleted]
>**Maybe** some C code would iterate over 90000 windows in one or two
>seconds. I don't care.

You would care if you weren't iterating over windows but, say, records
of sensor data for a robotics system.

>People who don't know nothing about recursive functions (optimized
>or not ;-) ) enforce a totally wrong point of view:
>Premature optimization.

I'm not one of them. I mostly worry about the big-Oh when I'm designing
the code and then worry about the constants later if I need to. I suspect
that's what most CL programmers do. Recursion and such is a side issue,
IMHO, unless you're writing for the benefit of you-know-who.

BM> If people want to argue "C++ is fast" they'd be much better off going
BM> after CLOS. AFAIK (and I'd love to be corrected on this) once you enjoy
BM> the luxury of fast development using CLOS and get your code working with
BM> generic function calls in your inner loops, you're stuck with large
BM> constants. (Please correct me please...)

>What if you call generic functions in inner loops where speed
>is really needed?
>
>Hmm?

Huh? In that case you get stuck with large constants. Here's the
point I'm trying to make: With CL minus CLOS, one has the means to get
the code working fast and then optimizing the speed-critical parts of
the code w/o radical rewrites. Good compilers help the programmer in
that regard (I used CMUCL as an example because it is outstanding).
OTOH, w/ CLOS If you call CLOS generic functions on CLOS objects in
your inner loops you get stuck with inefficiency. (AFAIK, correct me)
The compiler cannot optimize the slot accesses (offsets are unknown),
you cannot inline your generic function calls etc. The dynamism that
helped you so much during development is now built into your code and
is giving you inefficiency. So while you see web pages proudly displaying
"Lisp is faster than C" examples, you will not see "CLOS is faster than
C++" examples because even if you restrict CLOS functionality to what you
can do in C++, you cannot get comparable efficiency. That's all I'm saying.


>Don't use them there when unnecessary. Simple as that.
>CLOS's generic functions are not the solution to all software
>problems. Sadly. ;-) Still you can write this code
>in Common Lisp and enjoy the fast development using Common Lisp.

Yes, of course, that's exactly what I mean. What I am arguing
against is the attitude (not necessarily yours) that labels people
who bring up efficiency issues as ignorant un-enlightened bozos
when in fact _there are_ efficiency gains in C++ vs. CLOS and
there are legitimate needs for that efficiency. It doesn't have
to be this way, as far as I can see (if we could take some
dynamism out of CLOS in finished code.) But it is.

Are we converging?

BM

smr...@mail.com

Aug 17, 1997, 3:00:00 AM

> The primary reason for the design of my iterative algorithm was to
> achieve a "breadth-first" rather than "depth-first" move tree. A
> recursive algorithm will be "depth-first": from any given board
> situation it will generate a move and look at the board situation

A general way of looking at this is, given a problem, you'll divide it
into subproblems, adding the subproblems to the list. You then continue
by pulling a problem off the list.

If the list is a queue, you are doing a breadth-first search.

If the list is a stack, you are doing a depth-first search.

Or you can sort entries on the list for a prioritised search.

Or you can pluck subproblems randomly from the list.

You can iterate or recurse through the list in any case. If the list is a
stack, you can combine that with the call stack when you use
recursion.
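That worklist framing can be sketched in a few lines. This is my own
illustration (in Python, over a hypothetical toy tree): the same loop
performs a depth-first or breadth-first search depending only on which end
of the list of pending subproblems you pull from.

```python
# One generic worklist loop: a LIFO list gives depth-first search,
# a FIFO list gives breadth-first search.
from collections import deque

def search(root, children, order="bfs"):
    pending = deque([root])           # the list of subproblems
    visited = []
    while pending:
        # stack (pop from the right) → depth-first
        # queue (pop from the left)  → breadth-first
        item = pending.pop() if order == "dfs" else pending.popleft()
        visited.append(item)
        pending.extend(children(item))
    return visited

# Hypothetical toy tree for illustration.
tree = {1: [2, 3], 2: [4, 5], 3: [], 4: [], 5: []}
kids = lambda n: tree[n]

print(search(1, kids, "bfs"))   # → [1, 2, 3, 4, 5]
print(search(1, kids, "dfs"))   # → [1, 3, 2, 5, 4]
```

Sorting `pending` by a priority instead would give the prioritised search
mentioned above; popping at random gives the random variant.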

-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/ Search, Read, Post to Usenet

Erik Naggum

Aug 18, 1997, 3:00:00 AM

* Bulent Murtezaoglu

| So while you see web pages proudly displaying "Lisp is faster than C"
| examples, you will not see "CLOS is faster than C++" examples because
| even if you restrict CLOS functionality to what you can do in C++, you
| cannot get comparable efficiency. That's all I'm saying.

I actually haven't seen any comparisons at all between C++ and CLOS. I'm
sure you have, so I would be interested in looking at them and running them
on my own.

#\Erik
--
man who cooks while hacking eats food that has died twice.

Andreas Eder

Aug 18, 1997, 3:00:00 AM

The Peaceman writes:
>Advanced computer science concepts should be treated the same
>way as advanced mathematics concepts. If there are real-world
>applications for it, good; if not, it's a complete and utter waste of time.

In the first half of the 19th century, group theory was developed by some
mathematicians, notably Evariste Galois. Nobody knew then that it would be
of any use outside the world of mathematics. So it was an utter waste of
time by your definition. Now, about 100 years later, it became a vital tool
for understanding quantum physics and the foundations of modern chemistry.
Very real-world applications, if you ask me. (There wouldn't be any
computers without it.) So, is it an utter waste of time, or isn't it? You
are surely wise enough to foresee all the applications a new mathematical
theory has, or predict that it will never have one, aren't you?

Andreas

Martin Rodgers

Aug 18, 1997, 3:00:00 AM

Andreas Eder wheezed these wise words:

> You are surely wise
> enough to foresee all the applications a new mathematical theory has, or
> predict that it will never have one, aren't you?

And then there's that question about the light bulb, to which the
answer was another question, about the usefulness of a human baby.

Potential is very hard to evaluate at a point where only potential can
be appreciated. Not all of us, if any, have the foresight required.

A little history can teach us a lot. Anyone who thinks otherwise
should be using candles. They might even prefer the darkness!

Richard A. O'Keefe

Aug 18, 1997, 3:00:00 AM

Norman Bullen <nbu...@ix.netcom.com> writes:
>The primary reason for the design of my iterative algorithm was to
>achieve a "breadth-first" rather than "depth-first" move tree. A
>recursive algorithm will be "depth-first": from any given board
>situation it will generate a move and look at the board situation
>resulting from that move, and so on, to some limit of recursion depth.

Wrong, and wrong on two counts.
(a) I've written simple depth first and breadth first searches in a
variety of declarative languages. It is extremely hard to tell
them apart.
(b) There are now quite a few variations on the idea of iterative
deepening, based on the idea of having two (or possibly even more)
searches:
outer search looks for a depth (or other) bound
inner search looks within that bound
The reason for this is to combine the virtues of depth first search
(simplicity and storage economy) with the virtues of breadth first search.

The algorithm you describe can easily and *naturally* be implemented in a
recursive way. Even this bit:

>Also, and this may be more important in Checkers, there are many cases
>where different combinations of moves may lead to the same board
>situation. If you have the whole tree in memory you can combine leaves
>that match (even when they are on different levels) and only generate
>the sub-tree once. (This turned out to be REALLY messy in practice!)

That's quite easy to handle in a declarative language as long as the
implementation supports function caching (memo functions, tabling).
There is, for example, a Common Lisp implementation of memoing available
over the net.
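For illustration only, here is a minimal sketch of such function caching,
using Python's standard library rather than the Common Lisp memoing package
mentioned above: positions reachable by different move orders are evaluated
once and then looked up.

```python
# Memoization (function caching): repeated calls with the same argument
# hit the cache instead of recomputing.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def evaluate(position):
    global calls
    calls += 1                 # counts real evaluations only
    return sum(position)       # stand-in for an expensive board evaluation

# Two move orders reaching the same position share one evaluation.
evaluate((1, 2, 3))
evaluate((1, 2, 3))
print(calls)                   # → 1
```

This is the same trick that collapses duplicate subtrees in the Checkers
discussion earlier in the thread, done at the function-call level.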
--
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.
