lisp is *SLOW*

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: thorn...@visi.com (David Thornley)
Date: 1997/08/12
Subject: Re: lisp is *SLOW*

In article <33EF3E67.6...@capital.net>,
Sajid Ahmed the Peaceman  <peace...@capital.net> wrote:

So what's wrong with recursion?

>    The problem is with all the recursive style languages,
>lisp, prolog, ml , scheme, etc. It is going in the wrong direction.
>Computer languages are not like mathematics. Computer functions
>are different than mathematical functions.

If you mean that both x = 1 and x == 1 (in C) are not the same
as x = 1 (in the usual algebraic sense), yes, you're correct.
In other senses, you're wrong.  Computer science involves a lot
of mathematics, and computer functions are mathematical functions.
In some respects, we have to treat them as mathematical functions
in order to get large systems built.

Even the assignment statement has a clear mathematical meaning,
which you will find discussed in great detail in Dijkstra's
_A_Discipline_of_Programming_.  We rely on that meaning with
pretty much every C program we write.  After we have written
x = 1;, we assume that x now has the value 1 (in whatever data
type x is).
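In C terms, that meaning looks like this (a minimal sketch; the variable names are mine, not Dijkstra's):

```c
/* After "x = 1;" we may assume x holds the value 1 (in x's type):
   that is the mathematical meaning of assignment relied on above.
   "x == 1" is a different thing: an expression yielding 1 or 0. */
int demo(void) {
    int x;
    x = 1;                  /* assignment: x now has the value 1 */
    int is_one = (x == 1);  /* comparison: evaluates to 1 (true) here */
    /* The classic pitfall "if (x = 2)" assigns and then tests the
       assigned value as truth, which is why the algebraic reading
       of "=" does not carry over directly. */
    return is_one;
}
```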

We cannot tolerate unlimited complexity in our systems, because
we are merely human.  The larger and more complex the building
blocks we can use, the larger and more complex systems we can
build with them.  The only way to have large, complex, and
reliable building blocks is to treat them mathematically.

>(I know, I probably had some bad teachers), is where Lisp, prolog,
>ml, etc. is being used like mathematical functions: no iterative code
>allowed; functions returning instantaneously; abstract mathematical
>worlds, etc.  That is not what computing is about. Computers run
>with a processor, one instruction at a time, not instantaneous
>functions.

It sounds like what they were teaching is how to build reliable,
large, and complex building blocks for large systems.  This is what
computing is about, in the large.

Look, when I got my first personal computer (about 1980), it had
a 1.77 MHz Z80 processor.  Skipping several steps, I bought a
Mac 6100/60 used a couple of years ago, which does math faster
than the supercomputer I first used.  You can't buy anything like
the 6100/60 new any more; the slowest current Macs and clones
are more than twice as fast.

Any application that was painfully slow on my old TRS-80 would be
incredibly fast on my (slow) Mac.  On the other hand, I'm pretty
much the same guy I was in 1980.  I've learned a lot, but I'm
really no better at handling complexity.  Machine efficiency
is becoming less and less important relative to human efficiency,
and I don't see that stopping.

If we can build larger systems by running software half as fast, that's
cool.  We can spare a factor of two easy, for most purposes.  (We're
doing that now, in most applications, because we can't handle the
complexity.  Windows 95 does more than, say, MacOS 5, but requires
a 150 MHz Pentium and 16 MB memory rather than an 8 MHz 68000 with
1 MB memory.  That's an incredible increase in processing power.)
What we can't afford, in general, is to increase the asymptotic
time required, but that's in itself a mathematical concept.

correct in a multiprocessing system.)

>    When I went to school (graduated in 95), I had a near 4.0
>GPA, and I thought that the Lisp course was the easiest CS course in
>the computer science curriculum.  I was planning on going to grad
>school in CS, until I took an upper level course in ML. The stuff
>was easy enough, I got an A in the course, but if that's what grad
>school is about, all this recursive crap that does not have
>any practical application, it's not for me. I then went out, got
>a programming job for about a year, and finally started up my own
>company, where I am now.

Currently, it's easy to stay in business in programming with some
pretty lousy systems (the place I'm working now is an excellent
example), so it's easy to get a programming job without understanding
computer science.  It probably always will be easy, as the demand
for programmers is likely to go up, and the number of good computer
scientists isn't.

On the other hand, there's a lot of interest nowadays in extremely
complex systems that would be very useful if we could get them running
reliably.  You're probably aware that many large software projects
fail horribly and expensively.  Anything that allows us to build these
systems more reliably is potentially good.

>    I tried to tell the CS teacher about machine language instructions,
>but he shoved it off and tried to say that some processors were designed
>to do only recursion; biggest bunch of bologna I've ever heard.

Never heard of recursive-only processors myself, nor can I understand
how they'd do that right now.  There are processors that try to do
function calls fast.  I suspect there's more to the conversation
than you're telling, possibly more than you remember, unless that was
a very bad CS teacher (and there are enough of those out there).

>    Anyway, I've come to conclusion that Lisp is bad, Lisp is evil,
>and as the subject says lisp is *SLOW*.

I don't know how I'd recognize a "bad" and "evil" programming language,
if you'll allow me to disregard Intercal for the moment.  Lisp does
tend to polarize people into those that love it and those that hate it,
but that doesn't make it bad.

However, Lisp is not slow.  Recursion is not necessarily slow.  The
only way to show that Lisp is slow is to take a modern Lisp system
and a modern C system (or what have you), and benchmark them.  Be
sure that they do equivalent tasks.  Normally, C programs do less
than Lisp programs, so you must allow for this.  On Unix, large C
programs tend to leak memory like it was going out of style, so you
should either make sure the C program manages memory correctly or
turn off garbage collection for Lisp.  Integer overflow in C usually
causes no problems other than plausible wrong answers, while Lisp
will get the right answer at the cost of a performance hit.  Either
prove that undetected overflow is impossible in the C system (yes,
that's mathematical) or use declarations in the Lisp program to turn
off overflow checking.  Remember to declare all variables by type
in Lisp, just as you'd have to in C.

When you've done all these things, do some timing.  I'd be really
surprised if, for a reasonable benchmark, Lisp was less than half
as fast as C.  It might be faster; CMU-CL has beaten Fortran systems
in number-crunching.  Even if Lisp is half as fast, consider that if
the C program runs OK on my Mac the Lisp program will run better on
a current low-end machine.
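For the timing itself, portable C offers clock(); a minimal harness might look like this (a sketch; the workload function is a stand-in, not one of the benchmarks discussed):

```c
#include <time.h>

/* Stand-in workload; in a real benchmark this would be the
   equivalent task coded in each language under comparison. */
long sum_to(long n) {
    long s = 0;
    for (long i = 1; i <= n; i++)
        s += i;
    return s;
}

/* Time one run of the workload, in seconds of CPU time. */
double time_workload(long n) {
    clock_t start = clock();
    volatile long result = sum_to(n);  /* volatile: keep the call live */
    clock_t end = clock();
    (void)result;
    return (double)(end - start) / CLOCKS_PER_SEC;
}
```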

David Thornley

Newsgroups: comp.lang.lisp
Date: 1997/08/12
Subject: Re: lisp is *SLOW*

Aaron Gross wheezed these wise words:

> And if the index set (the set of your i's) is infinite? How about
> uncountable?

Isn't that what lazy evaluation is for? ;-) Well, it can be used to
search an infinite space, providing the search isn't exhaustive.
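A C approximation of the idea (a sketch: candidates are generated on demand, so the "infinite" space is never materialized, and the search terminates only because it isn't exhaustive; the predicate is a made-up example):

```c
/* Search the "infinite" space 0, 1, 2, ... lazily: each candidate is
   produced only when the previous one has been rejected.  This loop
   terminates only if some candidate satisfies the predicate, which
   is exactly the "not exhaustive" proviso above. */
long lazy_search(int (*pred)(long)) {
    for (long i = 0; ; i++)
        if (pred(i))
            return i;
}

/* Example predicate: first number whose square exceeds 1000. */
int square_exceeds_1000(long n) { return n * n > 1000; }
```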
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"There are no limits." -- ad copy for Hellraiser

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Aaron Gross <aa...@bfr.REMOVETHISPART.co.il>
Date: 1997/08/13
Subject: Re: lisp is *SLOW*

I did misread (sorry), but my comment still stands with "recursion"
replaced by "iteration". I don't see how *either* recursion or
iteration comes into the definition of summation. It seems that
however you define x+y, you could define w+x+y+z just as directly, and
once you have finite sums defined, you go from there to infinite sums
(more generally, integrals) without any reference to iteration or
recursion, either implicit or explicit.

My overall point, which I wanted to make after misreading "recursion"
for "iteration", was that some people seem to think that recursion is
the natural description for practically anything, even in cases where
non-recursive descriptions are more straightforward. This is unrelated
to SAtP's original comments, but then again, who cares any more about

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Sajid Ahmed the Peaceman <peace...@capital.net>
Date: 1997/08/13
Subject: Re: lisp is *SLOW*

David Thornley wrote:

> If you mean that both x = 1 and x == 1 (in C) are not the same
> as x = 1 (in the usual algebraic sense), yes, you're correct.
> In other senses, you're wrong.  Computer science involves a lot
> of mathematics, and computer functions are mathematical functions.

Not really. Computer science is restricted to the processor
that programs run on, and the instructions that they have. Also the
amount of time it takes to execute these instructions. These
instructions are simple bitwise operations.

> In some respects, we have to treat them as mathematical functions
> in order to get large systems built.

> ...
> We cannot tolerate unlimited complexity in our systems, because
> we are merely human.  The larger and more complex the building
> blocks we can use, the larger and more complex systems we can
> build with them.  The only way to have large, complex, and
> reliable building blocks is to treat them mathematically.

You have to treat these building blocks as a group
of processor instructions, not as mathematical functions.

Whether or not you *want* code that is fast and efficient is
a different problem. Don't make the mistake of ignoring the speed.
Word Perfect Corp did that with their word processing
program for windows, and look where they ended up.

> correct in a multiprocessing system.)

In a multiprocessing system, either there is shared memory,
or data is passed back and forth between the processors. It can be
looked at as a one-processor system with different inputs.

> Currently, it's easy to stay in business in programming with some
> pretty lousy systems (the place I'm working now is an excellent
> example), so it's easy to get a programming job without understanding
> computer science.  It probably always will be easy, as the demand
> for programmers is likely to go up, and the number of good computer
> scientists isn't.

I'm glad that you can admit that this so called good computer
science stuff has no demand in the work place. It's like the advanced
mathematics that they do research on, that has no real world
applications. That was one of my frustrations with the graduate
level computer science courses that they had in my school. All this
mathematical and theory crap that you would never see implemented
in a real computer system. Did I tell you about the professor

Get rid of any unnecessary recursion, and you'd get your Lisp
to run even faster. You'll get rid of the overhead needed for repeated
calls to functions.

Peaceman

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: marc...@infiniti.PATH.Berkeley.EDU (Marco Antoniotti)
Date: 1997/08/13
Subject: Re: lisp is *SLOW*

In article <33ECA244.7...@capital.net> Sajid Ahmed the Peaceman <peace...@capital.net> writes:

From: Sajid Ahmed the Peaceman <peace...@capital.net>
Newsgroups: comp.lang.lisp,comp.lang.c++,comp.programming
Date: Sat, 09 Aug 1997 13:00:52 -0400
Organization: Logical Net

Marco Antoniotti wrote:
>
> I posted a complete and functioning C binary tree traversal code a few days
> ago, *begging* you to rewrite it in iterative format (sans le stack, of
> course).
>
> Seen nothing yet.
>
> Do you want the code?  It's still waiting for you in my disk.
>
> Cheers
> --
> Marco Antoniotti

I never said that you could do it without a stack, that was
somebody else. What I did say, though, was that the iterative version
would be faster than the recursive version, but you would still
need to set up a stack, and maybe put in some inline assembly code
for the push and pop macros. If you didn't have the inline assembly
push and pop macros, then it may be possible to write a
faster recursive version, but if you do, then definitely not.
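The explicit-stack version being described might look like this in C (a sketch; the node type and 64-entry stack bound are illustrative assumptions, with the recursive version shown for comparison):

```c
#include <stddef.h>

struct node {
    int value;
    struct node *left, *right;
};

/* Recursive traversal: the call stack does the bookkeeping. */
long sum_recursive(const struct node *t) {
    if (t == NULL)
        return 0;
    return t->value + sum_recursive(t->left) + sum_recursive(t->right);
}

/* Iterative traversal: the same bookkeeping, but on a stack we
   manage ourselves.  The push/pop here are plain array operations,
   the ones Peaceman suggests inlining or hand-coding in assembly. */
long sum_iterative(const struct node *t) {
    const struct node *stack[64];  /* assumes tree depth <= 64 */
    int top = 0;
    long sum = 0;
    if (t != NULL)
        stack[top++] = t;                             /* push */
    while (top > 0) {
        const struct node *n = stack[--top];          /* pop */
        sum += n->value;
        if (n->right != NULL) stack[top++] = n->right;
        if (n->left  != NULL) stack[top++] = n->left;
    }
    return sum;
}
```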

My memory must be failing me, but I might accept this :) Apart from
the fact that maybe only today you realized the difference between
tail and inherent recursion (as you obfuscated C/C++ factorial
postings show), what you are basically saying is that "if you write

In Italy we call this a discovery of hot water :)

Cheers
--
Marco Antoniotti
===========================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: marc...@infiniti.PATH.Berkeley.EDU (Marco Antoniotti)
Date: 1997/08/13
Subject: Re: lisp is *SLOW*

In article <33EF3E67.6...@capital.net> Sajid Ahmed the Peaceman <peace...@capital.net> writes:

From: Sajid Ahmed the Peaceman <peace...@capital.net>
Newsgroups: comp.lang.lisp,comp.lang.c++,comp.programming
Date: Mon, 11 Aug 1997 12:31:35 -0400
Organization: Logical Net

...

When I went to school (graduated in 95), I had a near 4.0
GPA, and I thought that the Lisp course was the easiest CS course in
the computer science curriculum.  I was planning on going to grad
school in CS, until I took an upper level course in ML. The stuff
was easy enough, I got an A in the course, but if that's what grad
school is about, all this recursive crap that does not have
any practical application, it's not for me. I then went out, got
a programming job for about a year, and finally started up my own
company, where I am now.

A few suggestions:

1 - Check out "Introduction to Algorithms" by T. H. Cormen,
C. E. Leiserson, and R. L. Rivest, MIT Press (not the best, but a
good one) or any other algorithms book.  Hope you get past the
first three chapters, where math abounds.

2 - Check out "The Basic Laws of Human Stupidity" by C. Cipolla.

3 - Tell us the name of your company.  You will have to be ready to
suffer from some "market forces". :)

Cheers
--
Marco Antoniotti
===========================================================================
California Path Program - UC Berkeley
Richmond Field Station
tel. +1 - 510 - 231 9472

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: thorn...@visi.com (David Thornley)
Date: 1997/08/13
Subject: Re: lisp is *SLOW*

In article <33F1F170.2...@capital.net>,
Sajid Ahmed the Peaceman  <peace...@capital.net> wrote:

>David Thornley wrote:

>> If you mean that both x = 1 and x == 1 (in C) are not the same
>> as x = 1 (in the usual algebraic sense), yes, you're correct.
>> In other senses, you're wrong.  Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.

>    Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have. Also the
>amount of time it takes to execute these instructions. These
>instructions are simple bitwise operations.

All processors and instruction sets are equivalent, in a sense that
can be quantified, to Turing machines.  Turing machines are mathematical
constructs.  Moreover, a programming language definition can be
treated as a mathematical entity.  It is the responsibility, then,
of the people who build the system to make it work according to the
abstract, mathematical, specification.

I've taken a mechanical engineering class, and there was plenty of
math in that.  It makes just as much sense to say that Mech E is
restricted to steel and plastic and copper as it does to say that
CSci is restricted to processors.

Suppose we did restrict ourselves to existing processors:  how would
we, as computer scientists, get anything done?  There are theorems
all over the field that are very useful in practice, because we form
mathematical abstractions that we can reason with.  If we had to
prove that this K of machine language instructions does such and
such on, say, a 6502, we'd never be able to generalize.  Computers
today would be marginally more useful than they were in 1950.

>> In some respects, we have to treat them as mathematical functions
>> in order to get large systems built.

>> ...
>> We cannot tolerate unlimited complexity in our systems, because
>> we are merely human.  The larger and more complex the building
>> blocks we can use, the larger and more complex systems we can
>> build with them.  The only way to have large, complex, and
>> reliable building blocks is to treat them mathematically.

>    You have to treat these building blocks as a group
>of processor instructions, not as mathematical functions.

Why?  Why do I have to do what you say?  If I had to treat them as
groups of processor instructions, they'd be far too clumsy to
do anything useful with.

What I do is create modules of some sort in a higher-level programming
language.  I use the abstract *mathematical* model of the language to
demonstrate that they will do what I want them to reliably.  Most of
the time, the real, physical, systems do what the mathematical model
says; when they don't, I bitch to the vendor and get results.  When I
feel like it, I move my modules to another machine, very different,
that also has a Lisp or C or whatever system, and they still work.
I suspect you do much the same thing.

How do you do this while treating modules as groups of instructions?
You would be very hard put to come to any useful conclusions just
studying the machine code.  Assuming that after long, painful work
you succeeded, what if you want to upgrade your compiler or move to
another computer or change the optimization level?  You'd be screwed.

Yup.  Microsoft released an operating system for a personal computer
that requires far more resources to run fast than the mainframes
I used to work on.  I used it on an 80 MHz Pentium with slightly slow
memory and it was a dog.

They have gone bankrupt, haven't they?

>> correct in a multiprocessing system.)

>    In a multiprocessing system, either there is shared memory,
>or data is passed back and forth between the processors. It can be
>looked at as a one-processor system with different inputs.

You are perfectly correct.  Now, expand on that insight.

You can look at a certain sort of system and see it as something else,
when it suits you.  This is very good.  Now, look at a mathematical
model and see it as a physical computer with software, and vice versa.
You're almost there.

OK, let's look at some of this mathematical and theory crap.

The study of algorithms is highly mathematical.  The result is that
many of our canned program libraries run swiftly and accurately.
Much of this is built into the software we use daily.  Did you
ever wonder what goes into a fast string search?

The study of optimization is highly mathematical.  The results are
vital to the production of processors.

Our compilers include the mathematics of finite automata, denotational
semantics, and some of the algorithms above.  Producing optimizers
requires a bit of mathematics itself.

The results of mathematics are all over our computers, built into
vital parts of the stuff I can buy at Computer City today.

You've chosen to overlook this, and to do programming grunt work.  Right
now, this pays pretty well, since businesses will generally prefer to
pay people a lot of money to create bad custom software than to buy
decent packages.  It's likely to pay OK for the rest of your life.
It still might be better to open up your mind and learn something.

>> However, Lisp is not slow.  Recursion is not necessarily slow.  The
>> only way to show that Lisp is slow is to take a modern Lisp system
>> and a modern C system (or what have you), and benchmark them.  Be
>> sure that they do equivalent tasks.  Normally, C programs do less

>> When you've done all these things, do some timing.  I'd be really
>> surprised if, for a reasonable benchmark, Lisp was less than half
>> as fast as C.

>    Get rid of any unnecessary recursion, and you'd get your Lisp
>to run even faster. You'll get rid of the overhead needed for repeated
>calls to functions.

This may be good advice under some circumstances, but what does it
have to do with whether Lisp is fast or slow?  Maybe that, recursion
being slow, Lisp compilers are likely to produce fast code because
they generally eliminate more recursion than other compilers?

OK, so you've realized that Lisp can be fast, and that you can look
at a physical system (a multi-processor computer) as if it were
something else.  You're making progress.

David Thornley

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: cbbro...@news.brownes.org (Christopher B. Browne)
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

On Wed, 13 Aug 1997 13:40:00 -0400, Sajid Ahmed the Peaceman
<peace...@capital.net> posted:

>David Thornley wrote:

>> If you mean that both x = 1 and x == 1 (in C) are not the same
>> as x = 1 (in the usual algebraic sense), yes, you're correct.
>> In other senses, you're wrong.  Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.
>    Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have.

Huh?

You clearly got an unspeakably horrible education in computer science
if you can conclude that from what you learned.

Computer science is "restricted" to machines that reflect reasonable
facsimiles of Turing machines, which are a fundamentally generic sort of
"processor."

Any time you hit computing theory, it is *necessary* to create, whether
in logical or in theoretical terms, whatever "machine" (DFA, PDA, TM)
proves necessary for the task at hand.  These "processors" seldom have
any directly isomorphic analogue in the world of "implemented processors."

Which is quite separate from them being of practical value; people seldom
build TM's as the programming model is, while theoretically clean, rather
arcane (and inefficient) for realistic applications.  On the other hand,
the various finite state automatons are of tremendous practical value,
used particularly often in engineering applications...

You may have studied computer science, but based on your above claim,
you're clearly no computer scientist.

Based on the other commentaries, it's clear that you can't see the
difference between:
- Tail recursion, which need not consume either time or memory,
- State space search recursion, which *inherently* tends to consume
  enormous amounts of memory (and no doubt you'd have stronger words
  to say if you could comprehend what "unification" means...), and
- Other recursive algorithms, which may or may not behave "efficiently,"
  depending on how they branch.
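The difference in the first bullet can be sketched in C (whether the tail call actually becomes a jump depends on the compiler and optimization level):

```c
/* Tail-recursive: the recursive call is the very last thing done,
   so a compiler may replace it with a jump and reuse the stack
   frame, consuming constant space. */
long sum_tail(long n, long acc) {
    if (n == 0)
        return acc;
    return sum_tail(n - 1, acc + n);  /* tail call: nothing left to do */
}

/* Not tail-recursive: the addition happens AFTER the call returns,
   so each level must keep its frame alive until the callee finishes. */
long sum_plain(long n) {
    if (n == 0)
        return 0;
    return n + sum_plain(n - 1);
}
```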

In no case does any of this depend on what language is used for
implementation; an A* algorithm can readily consume 1GB of memory
regardless of whether it is implemented in C++, assembly language,
LISP, or Prolog.

But comprehending *that* would require some knowledge of computer
science...
--
Christopher B. Browne, cbbro...@hex.net, chris_bro...@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12  D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Q: What does the CE in Windows CE stand for?  A: Caveat Emptor...

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: r...@tyrfing.ifi.uio.no (Bjørn Remseth)
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

Sajid Ahmed the Peaceman <peace...@capital.net> writes:

>    Not really. Computer science is restricted to the processor
> that programs run on, and the instructions that they have. Also the
> amount of time it takes to execute these instructions. These
> instructions are simple bitwise operations.

Er, in a word "No". Comp. Sci. is quite a bit more than that, in fact
the borders are extremely fuzzy, and rightfully so, since it is not at
all clear where it is most convenient to put them.

--
(Rmz)

Bj\o rn Remseth   !Institutt for Informatikk    !Net:  r...@ifi.uio.no
Phone:+47 22855802!Universitetet i Oslo, Norway !ICBM: N595625E104337

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Sajid Ahmed the Peaceman <peace...@capital.net>
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

David Thornley wrote:
> All processors and instruction sets are equivalent, in a sense that
> can be quantified, to Turing machines.  Turing machines are mathematical
> constructs.  Moreover, a programming language definition can be
> treated as a mathematical entity.  It is the responsibility, then,
> of the people who build the system to make it work according to the
> abstract, mathematical, specification.

Fine, as long as you keep in mind that you're working with a
processor, not some abstract mathematical world.

> Suppose we did restrict ourselves to existing processors:  how would
> we, as computer scientists, get anything done?  There are theorems
> all over the field that are very useful in practice, because we form
> mathematical abstractions that we can reason with.  If we had to
> prove that this K of machine language instructions does such and
> such on, say, a 6502, we'd never be able to generalize.  Computers
> today would be marginally more useful than they were in 1950.

Advanced computer science concepts should be treated the same
way as advanced mathematics concepts. If there are real-world
applications for them, good; if not, it's a complete and utter
waste of time.

> >       You have to treat these building blocks as a group
> >of processor instructions, not as mathematical functions.

> Why?  Why do I have to do what you say?  If I had to treat them as
> groups of processor instructions, they'd be far too clumsy to
> do anything useful with.

The reason that you treat them as groups of processor
instructions is because that is in fact what they are.

> What I do is create modules of some sort in a higher-level programming
> language.  I use the abstract *mathematical* model of the language to
> demonstrate that they will do what I want them to reliably.  Most of
> the time, the real, physical, systems do what the mathematical model
> says; when they don't, I bitch to the vendor and get results.  When I
> feel like it, I move my modules to another machine, very different,
> that also has a Lisp or C or whatever system, and they still work.
> I suspect you do much the same thing.

Good. You're keeping the processor in mind.

> How do you do this while treating modules as groups of instructions?
> You would be very hard put to come to any useful conclusions just
> studying the machine code.  Assuming that after long, painful work
> you succeeded, what if you want to upgrade your compiler or move to
> another computer or change the optimization level?  You'd be screwed.

To the extent that that's what you're doing, using mathematical
functions is fine. Levels above that, like the 'world's fastest
sorting algorithm' of a former professor of mine, as well as many
other so-called computer science mathematical concepts, are complete
ignorance of reality.

...

The other problem with looking at computer functions as
mathematical functions in general is the tendency
to make everything recursive.

> Yup.  Microsoft released an operating system for a personal computer
> that requires far more resources to run fast than the mainframes
> I used to work on.  I used it on an 80 MHz Pentium with slightly slow
> memory and it was a dog.

I never used Windows in the old days, before they came out with
Win 95, for the same reasons that you stated. Microsoft has pretty
much forced every PC user to go into a GUI with their release of
Win95. I've always wondered why they did it. At first I thought it
was because of the easier interface for new users, but then I
realized it was something else. Before the introduction of Windows
95, you had several companies making their own version of DOS
(DR-DOS, PC DOS, etc.). It was fairly easy to write one's own OS
that's compatible with MS-DOS. With Windows 95, it's a completely
different story. They pretty much have a lock on the market.
I know OS/2 is there, but making OS/2 compatible with Win95 is no easy

> They have gone bankrupt, haven't they?

(in reference to wordperfect corp)

Naa. They sold out when the going got tough.

Peaceman

Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Markku Laukkanen <markku.laukka...@research.nokia.com>
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

Sajid Ahmed the Peaceman wrote:

All right, so it's easier to think in an iterative style than a
recursive style?

So let's take an example of one kind of recursion:

Say a process A does job A', and a process B does job B'.
Job A' needs to call B' to get its job done (and in some cases A' too);
B' needs (in some cases) to call A' to get its job done.

So how are you going to implement this without using functions and
some kind of recursion?
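A minimal concrete instance of this A'/B' pattern is a pair of mutually recursive functions (a textbook toy, not from the thread):

```c
#include <stdbool.h>

/* A' and B' each get their job done by calling the other: mutual
   recursion.  Flattening this into a single loop is possible, but
   only by rebuilding the call structure by hand. */
bool is_even(unsigned n);
bool is_odd(unsigned n);

bool is_even(unsigned n) { return n == 0 ? true  : is_odd(n - 1); }
bool is_odd(unsigned n)  { return n == 0 ? false : is_even(n - 1); }
```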

>         When I went to school (graduated in 95), I had a near 4.0
> GPA, and I thought that the Lisp course was the easiest CS course in
> the computer science curriculum.  I was planning on going to grad

I am very pleased to hear that; I always assumed that Common Lisp
in particular is a little bit difficult a language for an average
programmer to understand.

Could you please explain the following terms (I suppose, that the A

unwind-protect
dynamic binding
dynamic scope
differences between let and let*
symbol-macrolet
compiler-let
eval-when
differences between eq/eql/equal/equalp
loop macro
and maybe clos

Trivial, I suppose ?

> school in CS, until I took an upper level course in ML. The stuff
> was easy enough, I got an A in the course, but if that's what grad
> school is about, all this recursive crap that does not have
> any practical application, it's not for me. I then went out, got
> a programming job for about a year, and finally started up my own
> company, where I am now.

Recursive functions don't have any practical applications?
How about games like chess ?

>         I tried to tell the CS teacher about machine language
> instructions,
> but he shoved it off and tried to say that some processors were
> designed
> to do only recursion; biggest bunch of bologna I've ever heard.

Personally I am not interested in machine instructions at all. But
sometimes it's nice to be able to debug at the assembler level, to see
that the C/C++ compiler REALLY screwed it up.

>         Anyway, I've come to conclusion that Lisp is bad, Lisp is
> evil,
> and as the subject says lisp is *SLOW*.
>                                                Peaceman

C++ is horrible, C++ is HUGE, C++ doesn't have GC, most of the C++
compilers don't support templates properly, most of the C++ programmers

C++ code development is EXTREMELY slow.

PKY

More options Aug 14 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: "Dennis Weldy" <nojunkm...@ever.com>
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

Sajid Ahmed the Peaceman wrote in article <33F35D97.6...@capital.net>...

>> Suppose we did restrict ourselves to existing processors:  how would
>> we, as computer scientists, get anything done?  There are theorems
>> all over the field that are very useful in practice, because we form
>> mathematical abstractions that we can reason with.  If we had to
>> prove that this K of machine language instructions does such and
>> such on, say, a 6502, we'd never be able to generalize.  Computers
>> today would be marginally more useful than they were in 1950.

> Advanced computer science concepts should be treated the same
>way as advanced mathematics concepts. If there is real world
>applications
>for it, good, if not, it's a complete and utter waste of time.

Gosh, I'm so glad that Turing, von Neumann, etc. didn't feel that way.
If y'look it up, you'll note that the TM predates the computer by a
number of years. Y'see, many of the concepts we use today (finite
automata for lexical analysis, PDAs for parsing) come from the advanced
concepts in CS, even if it's not clear at the time something is being
developed whether it has "practical" applications or not.

Hmm.. So do you also consider the work by Hawking, et al to be a waste of
time? After all, what are the practical applications of determining whether
or not black holes radiate? ;-)

It may not be clear now, but someday....?

More options Aug 14 1997, 3:00 am
Newsgroups: comp.lang.lisp
From: Ingvar Mattsson <ing...@sunserv.idasys.se>
Date: 1997/08/14
Subject: Re: lisp is *SLOW*

Sajid Ahmed the Peaceman <peace...@capital.net> writes:

> David Thornley wrote:

[SNIP]

Given that the resulting object code is equally fast, is it better to
spend time waiting for compilations or to write code?

[SNIP]

What part of the "high level CS crap" haven't you seen implemented in
a modern computer system? Queueing theory shows up in schedulers, lots
of hairy stuff ends up in optimizers, relational algebra shows up in
database systems, abstraction shows up in maintainable code, recursion
shows up (in the source) whenever you want to write a parser. Automata
theory shows up in compilers, interpreters, network code, whatever.

[Lisp isn't slow, and suitable test environments sketched]

> > When you've done all these things, do some timing.  I'd be really
> > surprised if, for a reasonable benchmark, Lisp was less than half
> > as fast as C.

>    Get rid of any unnecessary recursion, and you'd get your Lisp
> to run even faster. You'll get rid of the overhead needed for repeated
> calls to functions.

That's what the compiler is there for. If I can win 30% development
time for a moderately low increase in compile time and (say) a 5%
speed loss, that is in general a good trade-off. Not that I'm saying I
will necessarily lose speed. Since I have a good development tool (and
a good optimizer), I can concentrate on algorithmic optimization
instead of low-level bit fiddling. When the algorithms have been
revised, I can then concentrate on bit fiddling. All in all, I
probably end up with a better/faster/more maintainable end product.

OK, this isn't (actually) lisp specific. Most of it is general.
But I must honestly say that I find prototyping to be *much* faster in
lisp than in C.

//Ingvar
--
I don't speak for my employer nor do they speak for me.
Accept this and life will be easier.            ing...@idasys.se

 Discussion subject changed to "lisp is *NOT* slow" by Martin Rodgers
More options Aug 15 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
Date: 1997/08/15
Subject: Re: lisp is *NOT* slow

Sajid Ahmed the Peaceman wheezed these wise words:

>    Advanced computer science concepts should be treated the same
> way as advanced mathematics concepts. If there is real world
> applications
> for it, good, if not, it's a complete and utter waste of time.

Then I suggest that you take a better look at the world of
mathematics. A lot of it may seem abstract and meaningless, and then
years later somebody notices something curious. Pow, suddenly you have
a whole "new" field, like fractals or chaos theory.

Do you have a use for boolean algebra, or the lambda calculus? Perhaps
not you personally, but people who write compilers might. Don't invite
us to discard everything that you don't understand. If you personally
don't have a use for something, there may still be another person who
_will_ have an application for it.

However, your posts so far have suggested that you not only dismiss
all these things, but advise the rest of us to do the same. You need to
trust me on one thing only, which is that we don't need to be "saved"
from ourselves. If we've got it all wrong, fine. You won't suffer at
all, will you?

Give it a rest. You've made your point. If you have anything else to
say, then say it. Otherwise, I fail to see what you can gain by making
the same point over and over, with us refuting it each time.

As I've said before, let the jury decide.
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"There are no limits." -- ad copy for Hellraiser

 Discussion subject changed to "lisp is *SLOW*" by ? the platypus {aka David Formosa}
More options Aug 15 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: ? the platypus {aka David Formosa} <dform...@st.nepean.uws.edu.au>
Date: 1997/08/15
Subject: Re: lisp is *SLOW*

In <33F1F170.2...@capital.net> Sajid Ahmed the Peaceman <peace...@capital.net> writes:

>David Thornley wrote:

>> [...] Computer science involves a lot
>> of mathematics, and computer functions are mathematical functions.
>    Not really. Computer science is restricted to the processor
>that programs run on, and the instructions that they have.

You have never heard of Turing equivalence?  Any computer (given time
and memory) can simulate any other one.  If you have an algorithm you
can implement it on any processor you choose.

> Also the amount of time it takes to execute these instructions.

Of course this is true; however, the speed of computers is getting
faster and faster. Combined with the fact that most CPUs are idle most
of the time, this means the better efficiency gains are more likely to
come from making the OS use the processor more efficiently.

> These instructions are simple bitwise operations.

I wish this were true.  Even RISC architectures have pretty complex instructions.

[...]

>> correct in a multiprocessing system.)

>    In a multiprocessing system, either there is shared memory,
> or data is passed back and forth between the processors. It can be
> looked at as a one-processor system with different inputs.

If you could turn this abstraction into reality then a whole lot of
computer science would be a lot easier.  However you can't; a
multiprocessor device works differently and has different costs.  For
example, poorly written code can't spread itself across multiple
processors and gain the speed that this brings.

>> Currently, it's easy to stay in business in programming with some
>> pretty lousy systems (the place I'm working now is an excellent
>> example), so it's easy to get a programming job without understanding
>> computer science.

[...]

>    I'm glad that you can admit that this so called good computer
>science stuff has no demand in the work place.

Reread the statement he made: he said it's easy to get a programming
job where you don't know computer science.  However, a CS degree means
that what you write will be good, and seeing the complete trash that
I've read, this is a great pity.  People with no idea of efficient
code etc., people who use bubble sorts, people who write without
meaningful names; a good dose of CS would cure all that.

[...]
--
Please excuse my spelling as I suffer from agraphia see the url in my header.
Never trust a country with more peaple then sheep. Buy easter bilbies.
Save the ABC Is \$0.08 per day too much to pay?   ex-net.scum and proud
I'm sorry but I just don't consider 'because its yucky' a convincing argument

 Discussion subject changed to "lisp is *NOT* slow" by Sajid Ahmed the Peaceman
More options Aug 15 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Sajid Ahmed the Peaceman <peace...@capital.net>
Date: 1997/08/15
Subject: Re: lisp is *NOT* slow

Martin Rodgers wrote:

> Give it a rest. You've made your point. If you have anything else to
> say, then say it. Otherwise, I fail to see what you can gain you by
> making the same point over and over, with us refuting it each time.

I think you're right. I have way too much stuff to do, and can't
keep up with this thread.  You guys will have to keep going without me.

Peaceman

 Discussion subject changed to "lisp is *SLOW*" by Sajid Ahmed the Peaceman
More options Aug 15 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Sajid Ahmed the Peaceman <peace...@capital.net>
Date: 1997/08/15
Subject: Re: lisp is *SLOW*

Markku Laukkanen wrote:

> So let's take an example of one kind of recursion:

> Process A does job A'.
> Process B does job B'.
> Job A' needs to call B' to get its job done (and in some cases A' too).
> B' needs (in some cases) to call A' to get its job done.

> So how are you going to implement this without using functions & some
> kind of recursion?

There's a term for what you describe above. It's called
event-driven programming, which is common in windows
programming environments.

With event-driven programming, you need to write a handler
for each of the different events that occur. This is a whole different
animal from standard programming. In this case you have to look at each
event handler as a separate program, though they may share some of the
same memory, i.e. variables.

The difference between these event handlers and standard recursive
functions is that they terminate completely after posting an
event notification on some kind of event queue.

> > school in CS, until I took an upper level course in ML. The stuff
> > was easy enough, I got an A in the course, but if that's what grad
> > school is about, all this recursive crap that does not have
> > any practical application, it's not for me. I then went out, got
> > a programming job for about a year, and finally started up my own
> > company, where I am now.

> Recursive functions doesn't have any practical applications ?
> How about games like chess ?

It may not be easy, but it can be done in a nonrecursive
format.

Peaceman

More options Aug 16 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: cbbro...@news.brownes.org (Christopher B. Browne)
Date: 1997/08/16
Subject: Re: lisp is *SLOW*

On Fri, 15 Aug 1997 22:38:36 -0400, Sajid Ahmed the Peaceman
<peace...@capital.net> posted:

>> Recursive functions doesn't have any practical applications ?
>> How about games like chess ?

>    It may not be easy, but it can be done in a nonrecursive
>format.

And you can do accounting using roman numerals.  That makes it neither
sensible nor efficient.

I'd love to see your non-recursive implementation of A* search...  Or
frankly *any* implementation of *any* of the important state space search
algorithms that doesn't use recursion either directly or indirectly.

The question is not whether it can be done or not, but rather whether
the *real* implementations use recursion or not.

Can you describe a chess search algorithm in less than 10 lines of
code *without* using recursion?

MIT's "latest" is described at:
<a href="http://theory.lcs.mit.edu/~plaat/mtdf.html">MTD(f), a new
chess algorithm</a>

If you wish to demonstrate the evils of recursion properly, please
provide a nonrecursive implementation of any algorithm similar to MTD(f)
or Alpha/Beta pruning.  And show how your nonrecursive implementation:
a) Takes less time, and
b) Consumes less memory.

Do you think that the IBM developers that wrote Deep Blue used purely
"iterative" algorithms, eschewing recursion?  I rather doubt it...
--
Christopher B. Browne, cbbro...@hex.net, chris_bro...@sdt.com
PGP Fingerprint: 10 5A 20 3C 39 5A D3 12  D9 54 26 22 FF 1F E9 16
URL: <http://www.hex.net/~cbbrowne/>
Bill Gates to his broker: "You idiot, I said \$150 million on **SNAPPLE**!!!"

 Discussion subject changed to "Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)" by Norman Bullen
More options Aug 16 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Norman Bullen <nbul...@ix.netcom.com>
Date: 1997/08/16
Subject: Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)

Without intending to get involved in any debate about the "evils of
recursion" (I use recursion where it's appropriate and try to identify
those algorithms where iteration might be better) I'd like to comment on
the issue of move trees in Chess.

There are good reasons for generating move trees in an iterative
fashion. I have myself written a Checkers player that generated the move
tree without use of recursion. (It wasn't very good but that's because
I'm not a very good Checkers player; had nothing to do with the way the
move tree was generated.)

The primary reason for the design of my iterative algorithm was to
achieve a "breadth-first" rather than "depth-first" move tree. A
recursive algorithm will be "depth-first": from any given board
situation it will generate a move and look at the board situation
resulting from that move, and so on, to some limit of recursion depth.
Only after consequences of that first move are examined will it generate
another move at the top level. I wanted an algorithm that would look
first at each move at the top level before looking at the moves possible
from each of those resulting board situations.

I did this by constructing a tree in "heap" memory as opposed to "stack"
memory as recursion normally does. The root of the tree is a
representation of the current board situation. Each node of the tree
must be linked not only to its parent node and all child nodes but also
in a single list of nodes in the order of creation. That list initially
consists of just the root node. Now, iterate along that list. Generate
each possible move and for each of them construct the resulting board
situation and append it to the list. Then step along to the next node on
the list and do the same thing, an iterative process. Note that you may
never get to the end of the list because you keep adding more nodes to
the end of it. Just as a recursive algorithm must have some depth limit
this algorithm must have a limit to the total number of nodes that it
can generate.

After generating this tree with a certain number of nodes I did use a
recursive algorithm to walk the tree from the root to do the mini-max
alpha/beta search, although I believe I could have incorporated that
into the initial iterative loop as well.

A breadth-first algorithm seems that it might be better for some timed
games, such as Chess and Checkers. You can generate nodes at a deeper
and deeper level until you run out of time without worrying that you
might have missed the best move at the root level simply because you
looked depth-first at other possibilities first.

Also, and this may be more important in Checkers, there are many cases
where different combinations of moves may lead to the same board
situation. If you have the whole tree in memory you can combine leaves
that match (even when they are on different levels) and only generate
the sub-tree once. (This turned out to be REALLY messy in practice!)

Finally, if you have the tree in memory, you may be able to save a
little time by saving the portion of it (probably a small portion) that
results from the move that you make and the move that the opponent
makes.

It does, probably, require more memory to build the move tree this way.

Norm

 Discussion subject changed to "Lisp *is* *fast*, was Re: lisp is *SLOW*" by Bulent Murtezaoglu
More options Aug 17 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Bulent Murtezaoglu <mu...@servtech.com>
Date: 1997/08/17
Subject: Re: Lisp *is* *fast*, was Re: lisp is *SLOW*

In article <joswig-ya023180001008971648130...@news.lavielle.com>

Rainer Joswig writes:

[quoted parts of my posting deleted]

>"Speed" is often not an issue on modern personal computers. These
>machines are most of the time *idle*, the user is just fuzzing
>around, for many tasks there is little processing power needed or
>there is even hardware support.

Agreed.  I agreed with that observation in your original article too.
My point is: [knowledgeable] people who go after the latest and
fastest usually have a reason _other than_ GUI speed or what not.  We
_seem to_ disagree because we're talking about different people, and
the Peaceman clouded everybody's thinking recently.  I understand your
point about window systems, and mostly agree with it.  Having said
that, I should also point out -- as I'm sure you will agree -- there
are other legitimate uses of computers that do not entail the system
about by dynamic nature of Lisp is going to make any sense at all from
an engineering perspective, they will do so for computation intensive

[examples noted and deleted]

>**Maybe** some C code would iterate over 90000 windows in one or two
>seconds. I don't care.

You would care if you weren't iterating over windows but, say, records
of sensor data for a robotics system.

>People who don't know nothing about recursive functions (optimized
>or not ;-) ) enforce a totally wrong point of view:
>Premature optimization.

I'm not one of them.  I mostly worry about the big-Oh when I'm designing
the code and then worry about the constants later if I need to.  I suspect
that's what most CL programmers do.  Recursion and such is a side issue,
IMHO, unless you're writing for the benefit of you-know-who.

BM> If people want to argue "C++ is fast" they'd be much better off going
BM> after CLOS.  AFAIK (and I'd love to be corrected on this) once you enjoy
BM> the luxury of fast development using CLOS and get your code working with
BM> generic function calls in your inner loops, you're stuck with large
BM> constants.

>What if you call generic functions in inner loops where speed
>is really needed?

>Hmm?

Huh?  In that case you get stuck with large constants.  Here's the
point I'm trying to make: With CL minus CLOS, one has the means to get
the code working fast and then optimizing the speed-critical parts of
the code w/o radical rewrites.  Good compilers help the programmer in
that regard (I used CMUCL as an example because it is outstanding).
OTOH, w/ CLOS, if you call CLOS generic functions on CLOS objects in
your inner loops you get stuck with inefficiency.  (AFAIK, correct me.)
The compiler cannot optimize the slot accesses (offsets are unknown),
you cannot inline your generic function calls etc.  The dynamism that
helped you so much during development is now built into your code and
is giving you inefficiency.  So while you see web pages proudly displaying
"Lisp is faster than C" examples, you will not see "CLOS is faster than
C++" examples because even if you restrict CLOS functionality to what you
can do in C++, you cannot get comparable efficiency.  That's all I'm saying.

>Don't use them there when unnecessary. Simply as that.
>CLOS's generic functions are not the solution to all software
>problems. Sadly. ;-) Still you can write this code
>in Common Lisp and enjoy the fast development using Common Lisp.

Yes, of course, that's exactly what I mean.  What I am arguing
against is the attitude (not necessarily yours) that labels people
who bring up efficiency issues as ignorant un-enlightened bozos
when in fact _there are_ efficiency gains in C++ vs. CLOS and
there are legitimate needs for that efficiency.  It doesn't have
to be this way, as far as I can see (if we could take some
dynamism out of CLOS in finished code.)  But it is.

Are we converging?

BM

 Discussion subject changed to "Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)" by smr...@mail.com
More options Aug 17 1997, 3:00 am
Newsgroups: comp.lang.lisp
From: smr...@mail.com
Date: 1997/08/17
Subject: Re: Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)

> The primary reason for the design of my iterative algorithm was to
> achieve a "breadth-first" rather than "depth-first" move tree. A
> recursive algorithm will be "depth-first": from any given board
> situation it will generate a move and look at the board situation

A general way of looking at this is, given a problem, you'll divide it
into subproblems, adding the subproblems to the list. You then continue
by pulling a problem off the list.

If the list is a queue, you are doing a breadth-first search.

If the list is a stack, you are doing a depth-first search.

Or you can sort entries on the list for a prioritised search.

Or you can pluck subproblems randomly from the list.

You can iterate or recurse through the list in any case. If the list
is a stack, you can combine it with the call stack when you use
recursion.

-------------------==== Posted via Deja News ====-----------------------
http://www.dejanews.com/     Search, Read, Post to Usenet

 Discussion subject changed to "Lisp *is* *fast*, was Re: lisp is *SLOW*" by Erik Naggum
More options Aug 18 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: Erik Naggum <cle...@naggum.no>
Date: 1997/08/18
Subject: Re: Lisp *is* *fast*, was Re: lisp is *SLOW*

* Bulent Murtezaoglu
| So while you see web pages proudly displaying "Lisp is faster than C"
| examples, you will not see "CLOS is faster than C++" examples because
| even if you restrict CLOS functionality to what you can do in C++, you
| cannot get comparable efficiency.  That's all I'm saying.

I actually haven't seen any comparisons at all between C++ and CLOS.  I'm
sure you have, so I would be interested in looking at them and running them
on my own.

#\Erik
--
man who cooks while hacking eats food that has died twice.

 Discussion subject changed to "lisp is *SLOW*" by Andreas Eder
More options Aug 18 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
Followup-To: comp.lang.lisp
From: Andreas Eder <a...@laphroig.mch.sni.de>
Date: 1997/08/18
Subject: Re: lisp is *SLOW*

The Peaceman writes:
>Advanced computer science concepts should be treated the same
>way as advanced mathematics concepts. If there is real world
>applications for it, good, if not, it's a complete and utter waste of time.

In the first half of the 19th century, group theory was developed by
some mathematicians, notably Evariste Galois. Nobody knew then that it
would be of any use outside the world of mathematics. So it was an
utter waste of time, by your definition. Now, about 100 years later, it
became a vital tool for understanding quantum physics and the
foundations of modern chemistry. Very real-world applications, if you
ask me. (There wouldn't be any computers without it.) So, is it an
utter waste of time, or isn't it? You are surely wise enough to foresee
all the applications a new mathematical theory has, or predict that it
will never have one, aren't you?

Andreas

 Discussion subject changed to "lisp is *NOT* slow" by Martin Rodgers
More options Aug 18 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
Date: 1997/08/18
Subject: Re: lisp is *NOT* slow

Andreas Eder wheezed these wise words:

> You are surely wise enough to foresee all the applications a new
> mathematical theory has, or predict that it will never have one,
> aren't you?

And then there's that question about the light bulb, to which the

Potential is very hard to evaluate at a point where only potential can
be appreciated. Not all of us, if any, have the foresight required.

A little history can teach us a lot. Anyone who thinks otherwise
should be using candles. They might even prefer the darkness!
--
<URL:http://www.wildcard.demon.co.uk/> You can never browse enough
"There are no limits." -- ad copy for Hellraiser

 Discussion subject changed to "Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)" by Richard A. O'Keefe
More options Aug 18 1997, 3:00 am
Newsgroups: comp.lang.lisp, comp.lang.c++, comp.programming
From: o...@goanna.cs.rmit.edu.au (Richard A. O'Keefe)
Date: 1997/08/18
Subject: Re: Recursion vs. Iteration in Game Players (was Re: lisp is *SLOW*)

Norman Bullen <nbul...@ix.netcom.com> writes:
>The primary reason for the design of my iterative algorithm was to
>achieve a "breadth-first" rather than "depth-first" move tree. A
>recursive algorithm will be "depth-first": from any given board
>situation it will generate a move and look at the board situation
>resulting from that move, and so on, to some limit of recursion depth.

Wrong, and wrong on two counts.
(a) I've written simple depth first and breadth first searches in a
variety of declarative languages.  It is extremely hard to tell
them apart.
(b) There are now quite a few variations on the idea of iterative
deepening, based on the idea of having two (or possibly even more)
searches:
    the outer search looks for a depth (or other) bound;
    the inner search looks within that bound.
The reason for this is to combine the virtues of depth first search
(simplicity and storage economy) with the virtues of breadth first search.

The algorithm you describe can easily and *naturally* be implemented in a
recursive way.  Even this bit:

>Also, and this may be more important in Checkers, there are many cases
>where different combinations of moves may lead to the same board
>situation. If you have the whole tree in memory you can combine leaves
>that match (even when they are on different levels) and only generate
>the sub-tree once. (This turned out to be REALLY messy in practice!)

That's quite easy to handle in a declarative language as long as the
implementation supports function caching (memo functions, tabling).
There is, for example, a Common Lisp implementation of memoing available
over the net.
--
Four policemen playing jazz on an up escalator in the railway station.
Richard A. O'Keefe; http://www.cs.rmit.edu.au/%7Eok; RMIT Comp.Sci.