I've been told not to continue considering Lisp, because of considerations
such as the above. I would like to have some good arguments available, just
in case I get a chance to argue in favor of Lisp again. The hardest point to
argue against is that the fastest Lisp, Allegro, does not allow distribution
of the compiler with runtimes, and the 2nd fastest Lisp, Xanalys, is several
times slower.
I'm omitting my name because one of my coworkers has a nasty habit of doing
global searches to see what each of us is posting about, and my name is
unique enough to be easy to search on. I don't have a blind box handy for
replies, so please post them. Thanks in advance.
> If a Lisp vendor does not allow distribution of the Lisp compiler with
> runtimes, is there some other way to translate a special purpose programming
> language to Lisp and execute the result? Could the result be made as fast as
> C++ for handling large amounts of simple data?
- one could use some of the CL->C compilers, compile
the Lisp code to C and link the compiled code into the running
Lisp system
- use a self-written (in Lisp) compiler (your language -> C) and
link the resulting code to your running Lisp (this is the way
a sound synthesis program written in Lisp works)
- maybe use CMU CL
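(As a toy sketch of the second option above -- a compiler, written in Lisp, from your special-purpose language to C -- with all names invented for illustration:)

  ;; Translate a tiny expression language into C source text.
  (defun expr->c (expr)
    (if (atom expr)
        (string-downcase (princ-to-string expr))
        (destructuring-bind (op a b) expr
          (format nil "(~A ~A ~A)" (expr->c a) op (expr->c b)))))

  (defun emit-c-function (name args expr)
    (format nil "double ~A(~{double ~A~^, ~}) { return ~A; }"
            name (mapcar #'expr->c args) (expr->c expr)))

  ;; (emit-c-function "f" '(x y) '(+ (* x 2.0) y))
  ;; => "double f(double x, double y) { return ((x * 2.0) + y); }"

The emitted C would then be run through the system C compiler and loaded into the running Lisp via the implementation's foreign-function interface.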
> I've been told not to continue considering Lisp, because of considerations
> such as the above. I would like to have some good arguments available, just
> in case I get a chance to argue in favor of Lisp again. The hardest point to
> argue against is that the fastest Lisp, Allegro, does not allow distribution
> of the compiler with runtimes,
I think you can get a license from Franz for doing that - it may be
expensive, though.
> and the 2nd fastest Lisp, Xanalys, is several
> times slower.
Contact Xanalys with some simple benchmarks. In general I found LispWorks
to be quite fast - maybe you need to change the code a bit to make it perform
better (this is not unusual). Lisp compilers usually have different
ideas of how and when to optimize. CMU CL at full verbosity gives
a lot of hints about where code can be improved - its compiler tells
you what kinds of optimizations it performs and which it does not (and why not). The
Lisp Machine's compiler, for example, ignores almost all declarations -
but there are plenty of special tools to speed up execution.
I guess Xanalys would be happy to improve their compiler
if there is a real problem.
--
Rainer Joswig, Hamburg, Germany
Email: mailto:jos...@corporate-world.lisp.de
Web: http://corporate-world.lisp.de/
>
> I'm omitting my name because one of my coworkers has a nasty habit of doing
> global searches to see what each of us is posting about, and my name is
> unique enough to be easy to search on. I don't have a blind box handy for
> replies, so please post them. Thanks in advance.
This is the most ridiculous thing I have ever read. Why do you
care about a co-worker? It's your life and your opinion. The times of
slavery should be gone forever; if you can't stand up for yourself, then
resign and look for a more open-minded environment.
Why do you post this question here instead of asking the vendors
directly? I'm quite sure that one can find an agreement which suits
both sides.
And for the speed of C++, please have a look at:
http://www-aig.jpl.nasa.gov/public/home/gat/lisp-study.html
Only really good C programmers managed to beat the C++ programs. So at
least in this study, Lisp wins hands down.
And nowhere did you mention what kind of problem you're working on. So
how should anyone have an idea if Common Lisp would suit the
requirements?
Friedrich
--
for e-mail reply remove all after .com
> If a Lisp vendor does not allow distribution of the Lisp compiler with
> runtimes, is there some other way to translate a special purpose programming
> language to Lisp and execute the result? Could the result be made as fast as
> C++ for handling large amounts of simple data?
>
I think that if you want native code compilation at some stage in the
process, then you need the compiler, so your choice is really either
to have the compiler or live with an interpreter. Of course the
interpreter may be faster than C++...
> I've been told not to continue considering Lisp, because of considerations
> such as the above. I would like to have some good arguments available, just
> in case I get a chance to argue in favor of Lisp again. The hardest point to
> argue against is that the fastest Lisp, Allegro, does not allow distribution
> of the compiler with runtimes, and the 2nd fastest Lisp, Xanalys, is several
> times slower.
>
There are other lisp systems which allow distribution of the compiler
and can certainly be competitive in terms of performance. CMUCL is
often pretty good. As far as I'm aware none of the free systems
competes very well with the commercial ones in terms of overall
quality, support, completeness of implementation and number of
platforms supported, but this is only a problem if you need these
things. Significant commercial applications (Yahoo! Store, for example) have
been written using noncommercial lisps.
--tim
Then the right approach is to ask that vendor what it would take to
get permission to do that. All vendors allow distribution of the
Lisp compiler with runtimes, but on different terms than without.
| I've been told not to continue considering Lisp, because of
| considerations such as the above.
Most likely, you will be told not to continue to consider Lisp after
this issue has been resolved, because it is so obviously bogus.
You can most probably do without the compiler. Provided that you
compile enough of the support system, the interpreter can be used to
process a higher-level language in such a way that more than 90% of
the processing time is still spent in compiled code. This means
designing your own application-level language, but that's part of
the fun with programming in a powerful language.
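A minimal, hypothetical sketch of that idea (all names are invented): the application-level language is just Lisp data walked by a tiny interpreter, and every step dispatches straight into ordinary compiled functions, so the interpreter itself contributes almost nothing to the total runtime.

  ;; Compiled "support system": the expensive primitives.
  (defun sum-column (rows index)
    (loop for row in rows sum (elt row index)))

  (defun scale-column (rows index factor)
    (dolist (row rows rows)
      (setf (elt row index) (* factor (elt row index)))))

  ;; Tiny interpreter for the application-level language.
  ;; A "program" looks like: ((scale 2 10.0) (sum 2))
  (defun run-program (program rows)
    (dolist (step program)
      (ecase (first step)
        (scale (scale-column rows (second step) (third step)))
        (sum   (print (sum-column rows (second step)))))))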
| The hardest point to argue against is that the fastest Lisp,
| Allegro, does not allow distribution of the compiler with runtimes,
This is fortunately just plain wrong. Just call them and ask.
| I'm omitting my name because one of my coworkers has a nasty habit
| of doing global searches to see what each of us is posting about,
| and my name is unique enough to be easy to search on.
Please accept my sympathies. Cow-orkers like that should be shot.
(Now, if anyone of mine does the same, he sure won't tell anyone. :)
#:Erik
--
I agree with everything you say, but I would
attack to death your right to say it.
-- Tom Stoppard
>anon...@nosuchisp.com writes:
>things. Significant commercial applications (yahoo store) have
>been written using noncommercial lisps.
Do you know what lisp was used for Yahoo store? O:-) Just curious...
//-----------------------------------------------
// Fernando Rodriguez Romero
//
// frr at mindless dot com
//------------------------------------------------
> Do you know what lisp was used for Yahoo store? O:-) Just curious...
>
CLISP (at least this was true sometime in late 98, when Paul Graham
gave his LUGM talk on it)
--tim
In my (limited) experience, CLISP is often fast enough. On the PC, well
optimized code often runs as fast compiled in CLISP as it does in Corman Lisp
(a machine-language compiler). Corman beats CLISP easily on sloppy code,
though...
Corman is also a very good option for Windows machines. It's not free, but
it's not very expensive either (at $200 it's just about $100 more than what I
want to spend now). Still, Roger Corman is kind enough to let you use the
bare-bones character environment for free.
glauber
--
Glauber Ribeiro
thegl...@my-deja.com http://www.myvehiclehistoryreport.com
"Opinions stated are my own and not representative of Experian"
Sent via Deja.com http://www.deja.com/
Before you buy.
CLISP is often good enough as long as you do not write any functions
of your own that implement any abstractions that require multiple
calls to user-land functions. CLISP's performance dichotomy between
its C-implemented compiled runtime functions and your byte-compiled
Lisp functions leads programmers to optimize at a low abstraction
level because they are penalized for their abstractions. This is
not a good thing for a Lisp environment, where we want to encourage
function calls and make abstractions as inexpensive as possible. We
don't want people _not_ to use Common Lisp because of performance
issues or to think that only built-ins are fast because they are
written in C. Approach CLISP as a good toy implementation of Common
Lisp, and move on to a real compiler if you ever plan to investigate
performance issues.
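(To make the dichotomy concrete, a rough sketch -- the actual numbers will of course vary:)

  (defun my-add (x y) (+ x y))        ; ordinary user-land function

  (defun sum-builtin (list)
    (reduce #'+ list))                ; REDUCE and + are built-ins,
                                      ; implemented in C in CLISP

  (defun sum-userland (list)
    (let ((acc 0))
      (dolist (x list acc)
        (setf acc (my-add acc x)))))  ; one full call into byte-compiled
                                      ; code per element

  ;; (let ((l (make-list 1000000 :initial-element 1)))
  ;;   (time (sum-builtin l))
  ;;   (time (sum-userland l)))

The first version spends its time inside the C-implemented runtime; the second pays the byte-code function-call overhead on every element.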
> CLISP (at least this was true sometime in late 98, when Paul Graham
> gave his LUGM talk on it)
Which subsystems of Yahoo! Store were developed with CLISP?
Paolo
--
EncyCMUCLopedia * Extensive collection of CMU Common Lisp documentation
http://cvs2.cons.org:8000/cmucl/doc/EncyCMUCLopedia/
It is my understanding that there is, essentially, a batch process in CLISP
that is used to create the pages for the individual stores. CLISP isn't used
to actually "run" the site, but rather to generate a set of related static
pages.
Somewhere, perhaps on PG's home page (not that I have the URL, mind you), are
his enumerations about "Web Success". In them he frowns on server-generated
HTML pages, so this would fit in quite well philosophically.
It's also interesting because people have to be thinking "How the heck is
Yahoo! driving a web site with CLISP?!?!??".
Simple: it isn't!
While everyone discusses how to write web servers in Lisp, here's an
application that simply punts on the entire issue.
Of course, not every web application can be met with a block of static
pages, but many can.
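For what it's worth, a toy sketch of that kind of batch generation (the record format and names are invented; this says nothing about Yahoo! Store's actual code):

  ;; Write one static HTML page per product record (a plist), then
  ;; regenerate the whole "site" in one batch run.
  (defun write-product-page (product directory)
    (with-open-file (out (merge-pathnames
                          (format nil "~A.html" (getf product :id))
                          directory)
                         :direction :output :if-exists :supersede)
      (format out "<html><head><title>~A</title></head>~%" (getf product :name))
      (format out "<body><h1>~A</h1><p>Price: $~,2F</p></body></html>~%"
              (getf product :name) (getf product :price))))

  (defun regenerate-site (products directory)
    (dolist (p products)
      (write-product-page p directory)))

  ;; (regenerate-site '((:id "mug-01" :name "Coffee Mug" :price 7.5))
  ;;                  #p"/var/www/store/")

The web server then only ever serves plain files; however slow the generator is, visitors never see it.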
Regards,
Will Hartung
(will.h...@havasint.com)
<http://www.paulgraham.com/mistakes.html>
The thing that seems most relevant:
Dynamically generated HTML is bad, because search engines ignore it.
And he comments that slow loading of pages is bad, and static generation
certainly means that there's no "runtime" cost to a lot of this...
>It's also interesting because people have to be thinking "How the heck
>is Yahoo! driving a web site with CLISP?!?!??".
>
>Simple, it isn't!
>
>While everyone discusses the how to write web servers in Lisp, here's an
>application that simply punts on the entire issue.
If it gets too slow, they need only spawn a few more CLISP processes to
do the batch work. It doesn't necessarily hurt the runtime behaviour, which is
pretty cool.
It's not particularly visible in looking at some of the HTML that is
generated; they don't have any <meta name="generator" content="CLISP">
tags :-). The HTML is very vaguely reminiscent in style of what is
generated by the application in his book "Common Lisp".
>Of course, not every web application can be met with a block of static
>pages, but many can.
This is in effect the same idea as the decision between:
a) Resolving code at compile time, using macros, versus
b) Resolving behaviour at runtime via some sort of dispatching.
Work that is done at compile time need never be done at runtime,
obviously saving that bit of CPU then. It doesn't much matter how
slow CLISP is if the code is run at "compile" time.
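A toy illustration of option (a), with made-up names; the point is only that the work happens at macro-expansion time:

  ;; The table is computed when the macro is expanded (at compile/load
  ;; time); the expansion is just a constant string at runtime.
  (defmacro static-price-table (prices)
    (format nil "<table>~{<tr><td>~A</td><td>~,2F</td></tr>~}</table>"
            (loop for (name price) in prices
                  append (list name price))))

  ;; (static-price-table (("Mug" 7.5) ("Shirt" 19.0)))
  ;; expands to the finished HTML string, built exactly once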
In a sense, this application represents the opposite to the way
people tend to promote Lisp; "Lisp as Dynamic Language."
And I fairly heartily agree.
a) Search Engines Are Your Friend.
b) Web Apps tend to make Really Stupid Use of dynamic evaluation.
When they use JavaScript to do things that would be just as nicely
represented by static links, That's Really Stupid.
--
cbbr...@acm.org - <http://www.hex.net/~cbbrowne/linux.html>
(THASSERT (PLANNER RG))
-- Example of HACKER statement.
> In my (limited) experience, CLISP is often fast enough. In the PC,
> well optimized code often runs as fast compiled in CLISP as it does in
> Corman Lisp (a machine-language compiler). Corman beats CLISP easily
> on sloppy code though...
My limited experience with Corman Lisp suggests that undeclared, general
code runs quite fast on it relative to other implementations (though
declarations don't make much difference). I would not call undeclared
code sloppy, and that's probably not what you meant, so I'm wondering
about the meaning of "sloppiness" that favors one implementation over
the other. Any specifics?
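(For concreteness, the declared/undeclared distinction in source looks roughly like this; whether the declarations buy anything is entirely up to the implementation, which is what the question is about:)

  ;; Undeclared, "general" version: + must handle any kind of number.
  (defun sum-to (n)
    (let ((acc 0))
      (dotimes (i n acc)
        (setf acc (+ acc i)))))

  ;; Declared version: a native-code compiler can open-code the fixnum
  ;; arithmetic; a byte-code compiler may simply ignore the declarations.
  (defun sum-to-declared (n)
    (declare (type fixnum n)
             (optimize (speed 3) (safety 0)))
    (let ((acc 0))
      (declare (type fixnum acc))
      (dotimes (i n acc)
        (declare (type fixnum i))
        (setf acc (the fixnum (+ acc i))))))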
Robert
Also see Philip Greenspun's comments on this issue, and his solution:
<URL:http://www.arsdigita.com/books/panda/publicizing>
[...skip down 2/3 of the way...]
Hiding Your Content from Search Engines (By Mistake)
...
I built a question and answer forum...all the postings were
stored in a relational database. ... The URLs end up looking
like "http://photo.net/bboard/fetch-msg.tcl?msg_id=000037".
...
AltaVista comes along and says, "Look at that question mark.
Look at the strange .tcl extension. This looks like a CGI script
to me. I'm going to be nice and not follow this link even though
there is no robots.txt file to discourage me."
Then WebCrawler says the same thing.
Then Lycos.
I achieved oblivion.
Briefly, his solution was:
Write another AOLServer TCL program that presents all the messages
from URLs that look like static files, e.g., "/fetch-msg-000037.html"
and point the search engines to a huge page of links like that.
The text of the Q&A forum postings will get indexed out of these
pseudo-static files and yet I can retain the user pages with their
*.tcl URLs.
...
(see my discussion of why the AOLserver *.tcl URLs are so good in
the chapters on Web programming; see
http://photo.net/wtr/thebook/bboard-for-search-engines.txt
for the source code).
[Greenspun uses Tcl where many of us would choose Lisp (or even Scheme).]
-Rob
-----
Rob Warnock, 31-2-510 rp...@sgi.com
Network Engineering http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043
>The thing that seems most relevant:
> Dynamically generated HTML is bad, because search engines ignore it.
As long as your link doesn't look like a cgi script, you may fool a
robot into following it and indexing it, but usually it's not a good idea (for
you and the search engine).
> As long as your link doesn't look like a cgi script, you may fool a
> robot into following it and indexing it, but usually it's not a good
> idea (for you and the search engine).
Why? As long as my site provides a finite number of resources, I
don't see how it's anyone else's business how the pages were
generated.
--
chr
As long as your content is limited it's OK, but systematically
following CGI scripts could end up dumping huge databases into the index, as
well as putting too much load on your server.
en> CLISP is often good enough as long as you do not write any
en> functions of your own that implement any abstractions that
en> require multiple calls to user-land functions.
en> Approach CLISP as a good toy implementation of Common Lisp, and
en> move on to a real compiler if you ever plan to investigate
en> performance issues.
I agree with your second statement, but don't see any justification
for the first, unless you implicitly assume that every programmer is
interested in performance on each problem. If I am not particularly
concerned about performance, I will not notice the slowdown from
user-defined functions, so I will not be discouraged from using
abstraction.
--
Eric Marsden <URL:http://www.laas.fr/~emarsden/>
This is a good point. I tend to stress that CLISP is not suitable
if your goal is performance, anyhow, so I should also argue that
it's good enough as long as you don't prioritize performance.
_However_, my experience is that even though you ignore performance
(as long as you get your answers within a reasonable amount of time),
a large number of programmers will want to know the "expensiveness"
of what they do and then soon discover that builtins in CLISP are
very fast (especially bignums) while their own code runs much, much
slower. Even if you ignore performance consciously, I don't think
you can completely ignore the effect of _observing_ that some things
are much faster than others even if you did not set out to find out
about this to begin with.
Hence my cautions. If you know about them and are aware of the
conditions under which CLISP is and is not good enough, I don't
think CLISP is a bad choice (as long as you use the ANSI mode with
the -a option, but that that's not the default is another gripe).
> Hence my cautions. If you know about them and are aware of the
> conditions under which CLISP is and is not good enough, I don't
> think CLISP is a bad choice (as long as you use the ANSI mode with
> the -a option, but that that's not the default is another gripe).
Erik, I'm curious to know if you have a favorite among the free Unix Common
Lisp implementations.
When I looked around for Common Lisp environments back in late 1993
so I could program in a programming language instead of killing
myself with C++, I tried a number of languages, including Ada,
Smalltalk, Scheme and Common Lisp, all of which have reasonably good
free implementations. (MIT Scheme was excellent, but Scheme is one
of those languages that are OK only as long as you live in a dorm
room and are unaware that you will eventually want to buy a house.)
Common Lisp implementations included CLISP and CMUCL at the time and
CLISP was immature and nearly useless at the time, even worse than
KCL, which a friend got his hands on sometime in 1987. CMUCL has
been excellent all the time I have used it, but I was unimpressed
with the CLOS (PCL) performance and hence did not work a lot with
that part of CL until I got Allegro CL. I still have a complete
CMUCL system installed on my system and occasionally use it to
ensure that I know which parts of my code are portable to it, but I
no longer use it in any "production system" sense.
:-)
It's really nice, and it makes you reinvent the wheel each time, but hey,
it's so easy to reinvent the wheel in Scheme that many people don't care.
Some people apparently have a lot of free time.
> Common Lisp implementations included CLISP and CMUCL at the time and
> CLISP was immature and nearly useless at the time, even worse than
> KCL, which a friend got his hands on sometime in 1987. CMUCL has
> been excellent all the time I have used it, but I was unimpressed
> with the CLOS (PCL) performance and hence did not work a lot with
> that part of CL until I got Allegro CL. I still have a complete
> CMUCL system installed on my system and occasionally use it to
> ensure that I know which parts of my code are portable to it, but I
> no longer use it in any "production system" sense.
I downloaded CMUCL thinking maybe I could get it to compile for AIX and
Windows (the platforms I currently work on), but it looks like setting it up
is a daunting task.
CLISP has some pretty strange C source, but it compiled fine on AIX and had a
pre-made Windows binary. I'm glad for CLISP, because if it wasn't for it I
wouldn't have tried to learn Common Lisp (I'd probably be using some version
of Scheme instead). I think it's nice of Franz to let people have a limited
version of Allegro for free, but it's too limited for my needs and I'm just
not going to pay for the full system. So I guess I'll be using "CLISP -a" for
the time being until I have justification to buy a commercial system.
Incidentally, it seems that there's a lot more interest in developing free
Scheme systems than free Lisp systems. I guess it's to be expected, since
Scheme is designed to be simple to implement.
>In article <31815181...@naggum.net>,
> Erik Naggum <er...@naggum.net> wrote:
>> * glauber <thegl...@my-deja.com>
>> | Erik, i'm curious to know if you have a favorite among the free Unix
>> | Common Lisp implementations.
>>
>> When I looked around for Common Lisp environments back in late 1993
>> so I could program in a programming language instead of killing
>> myself with C++, I tried a number of languages, including Ada,
>> Smalltalk, Scheme and Common Lisp, all of which have reasonably good
>> free implementations. (MIT Scheme was excellent, but Scheme is one
>> of those languages that are OK only as long as you live in a dorm
>> room and are unaware that you will eventually want to buy a house.)
>
>
>:-)
>
>It's really nice, and it makes you reinvent the wheel each time, but hey,
>it's so easy to reinvent the wheel in Scheme that many people don't care.
>Some people apparently have a lot of free time.
You're missing the point: Scheme was designed as a teaching and
investigation tool, not for "real world" programming. As a learning language
it has been far more successful than CL has been as a general-purpose language. :-P
> You're missing the point: Scheme was designed as a teaching and
> investigation tool, not for "real world" programming. As a learning language
> it has been far more successful than CL has been as a general-purpose language. :-P
When is a general purpose language successful? How much
in software sales would that be for example?
> I downloaded CMUCL thinking maybe i could get it to compile for AIX and
> Windows (the platforms i currently work in), but it looks like setting it up
> is a daunting task.
Since Windows is not a supported OS, you'd have to write low-level
support for the OS (e.g. signal handling code), etc. first. With AIX
the situation is even worse, since CMUCL has neither support for AIX,
nor a compiler back-end for the POWER processor.
In any case you'd need access to a platform that has a working CMUCL
first, for the cross-compilation.
Regs, Pierre.
--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein
Thanks!
That's exactly what it looked like to me, and I decided to stay away for now.
g
The problem with tools that are only good for the teaching/learning
process is that they are actually teaching many very bad things
amidst all the good stuff. It is good to learn how to build your
own tools. It is bad to learn that you have to, meaning that you
are more likely to spend a lot of time building something that is
already out there rather than try to understand it and use it. It
is good to learn recursion instead of iteration because it expands
the way you think in ways those who don't grasp recursion cannot
fathom. It is bad _not_ to learn iteration and when it is a good
thing, when it is more readable and understandable than recursion,
and above all, when recursion would kill you performance-wise and
iteration would not. It's just like the irrational gotophobia that
some students of programming have (no real programmer _fears_ goto)
because they have never actually seen the hell and spaghetti code
that they have been saved from by structured programming.
| As a learning language it has been far more successful that CL as a
| general purpose language. :-P
Hmpf! But this probably _is_ true. Scheme succeeds because it is
easy to teach to bright people and fun to implement. Hence, we have
about a million implementations of Scheme, and only one of them is
good for anything more than teaching others how to implement Scheme.
I would submit that if Common Lisp were used instead of Scheme,
then instead of the modules using:

  (define (reverse lst)
    (if (pair? lst)
        (append (reverse (cdr lst)) (list (car lst)))
        lst))

they would spend three weeks demonstrating recursion using:

  (defun reverse (list)
    (if (consp list)
        (append (reverse (cdr list)) (list (car list)))
        list))
Instead of spending a couple weeks ignoring the Scheme DO form, they
would spend a couple weeks ignoring DO and LOOP and all the other
iterative functions of CL.
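(For contrast, the iterative versions they would be ignoring look something like this; the names are mine, since portably redefining CL:REVERSE is not allowed:)

  (defun reverse-do (list)
    (do ((rest list (cdr rest))
         (result '() (cons (car rest) result)))
        ((null rest) result)))

  (defun reverse-loop (list)
    (loop with result = '()
          for x in list
          do (push x result)
          finally (return result)))

Both run in linear time, unlike the APPEND-based recursive version above.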
Courses generally don't seem to present the more "advanced" structures
of Scheme, such as hash tables, macros, and records/structures. I
would question why you'd expect them to teach more about CL.
--
aa...@freenet.carleton.ca - <http://www.hex.net/~cbbrowne/lsf.html>
History of Epistemology in One Lesson
"First Hume said "We can't really know anything", but nobody believed
him, including Hume. Then Popper said "Hume was right, but here's what
you can do instead...". Bartley then debugged Popper's code."
-- Mark Miller
I made no comments to that effect. Why do you think I expect that?
I don't _compare_ the two languages -- I'm simply pointing out
serious flaws in Scheme and how it is taught and used. Since we
don't have any data on how CL would be taught if it were where
Scheme is, I fail to see how I could be thought to _expect_ any
improvements just by changing the language being taught.
#:Erik
--
Does anyone remember where I parked Air Force One?
-- George W. Bush
>>
>> You're missing the point: Scheme was designed as a teaching and
>> investigation tool, not for "real world" programming. As a learning language
>> it has been far more successful that CL as a general purpose language. :-P
>
>When is a general purpose language successful? How much
>in software sales would that be for example?
Don't get me wrong, I like CL and I hope that growing usage of Scheme
in colleges will benefit CL: some of the students that were introduced to
Scheme might turn to CL when looking for something more powerful.
> Don't get me wrong, I like CL and I hope that growing usage of Scheme
> in colleges will benefit CL:
Usage of Lisp at schools/colleges/universities is **crucial** for
the success and survival of Lisp. This is the number one source of
contact with Lisp technology (another being the
GNU/Linux/Free Software/Open Source movements).
> some of the students that were introduced to
> scheme
"Introduced to Scheme" doesn't mean they are doing something useful
(-> writing software that's being used by somebody) with it.
I have seen "academic exercises" ("write a routine that reverses a list" -
Common Lisp spoiled the party, since half of the exercises are
already standard in CL and many others got trivial).
So Scheme is often not taught as an introduction to "Scheme and
software development", but as an introduction to Computer Science
and as a kind of academic puzzle. CS students are often
not introduced to the idea that there is a customer for
the software, that he has a problem that has to be solved
and that he might have limited time and money.
> might turn to CL when looking for something more powerful.
Since Scheme has often not been used for anything "useful", students will
later look not for Common Lisp, but for the obvious C++ and friends.
I think students take Common Lisp because they have to.
It is more a question of tradition. Instructors often use Scheme
in basic courses, and in their research projects Common Lisp
becomes an option. They can select the better students, and
in multi-year, multi-university projects Common Lisp often is
(was?) a viable solution.
How could one make CL even more successful at universities?
- all students should get access to the best Lisp environment around.
Many universities have site licenses for Lisp on their
Unix clusters. This should be extended to PCs/Macs/Linux/... .
- Enable application delivery in Lisp (it is technically possible,
but it needs to be painless -> costs? license? distribution options?
platforms? compiler distribution? downloading software?
application toolkits?), so students will learn that
writing APPLICATIONS in Lisp is a viable option.
- provide introductory material, so that students can quickly move on
from Scheme to CL. What are the differences from Scheme? The student
might have seen SICP; how does she apply this knowledge
to CL?
- follow research trends so that Lisp stays an option for research
projects
- somebody has to explain how to actually develop software with
Common Lisp, what the tools are and why they are the way they
are. Nice books were/are "LISP Style and Design" (Miller, Molly M.,
and Eric Benson, Digital Press, no longer available - sigh)
and "Lisp Lore: A Guide to Programming the Lisp Machine"
(Bromley/Lamson, still available I guess, but expensive).
Just as somebody has explained how one might develop software
using Smalltalk in a commercial environment (-> XP).
- make sure the Lisp software developed at universities is
as accessible as possible to the public. Others might
want to use it or learn from it.
> * Christopher Browne
> | Courses generally don't seem to present the more "advanced" structures
> | of Scheme, such as hash tables, macros, and records/structures. I
> | would question why you'd expect them to teach more about CL.
First, while these may be the "advanced structures" of Scheme, they are hardly
advanced by today's standards. This, fundamentally, is the challenge that
faces Lisp today. When I graduated from college, now too many years ago to
remember, it was exactly as Christopher says here: I had special knowledge of
the advanced fabric of computation that would get me farther and faster than
anyone else--I knew how to use Lisp and Lisp gave me a leg up on the kinds of
computations that I'd need to be doing. While others (say, C programmers)
were going to need hash tables in their computations, they had to remember
details of their implementation. Not so with Lisp--I could just ask for a
hash table because I wanted its big-O computational properties for storage
and access and immediately move to the business of USING hash tables. Ditto,
it was a big deal thing to have "records/structures" to help me organize my
thoughts back then. Ideas like inheritance of methods and structure were new
and just knowing them was a leap ahead. No longer so.
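(The Lisp side of that, for anyone who has only seen the C approach, really is just this -- a trivial sketch:)

  ;; "Just ask for a hash table" and get on with using it.
  (let ((counts (make-hash-table :test #'equal)))
    (dolist (word '("lisp" "c" "lisp" "java" "lisp"))
      (incf (gethash word counts 0)))
    (gethash "lisp" counts))            ; => 3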
Lisp provides considerable computational power, but without continual tracking
of what people today think are "advanced structures", we're hosed. This is
why I have made a big point to any Lisp vendor who would listen of saying that
you must not charge for "connective tissue" to other languages. An RMI
interface is essential right now, and nobody better charge for it. In the
meantime, it's a darned shame they're charging for CORBA. Maybe if there were
both it would be ok to charge for CORBA, but the problem is that right now
the modern functionality that people think in terms of is jpegs and gifs,
audio streams, http servers, html and xml data, fonts and font metrics, and so
on. These are programming tasks to the Lisp world, as hash tables were
programming tasks to the C world of my generation, but they are library
operations to Java and well should be to Lisp. If Lisp could access Java for
free, it could leverage all the good work that people have done in library
creation in Java and on top of all that good work could add a rational
language connective glue, because Java truly sucks. However, as much as
Java sucks for programming, as long as it keeps producing high-quality
libraries, the amount of coding you have to do in that sucky language to
get work done is, relatively speaking, small: not for doing the
visionary things Lisp people want to do, but for staying far enough ahead
of the competition to make money. And so Java succeeds while Lisp fails, even
though Lisp has tons to teach Java.
Java people do not talk about record structures as advanced. They might
talk about macros as advanced if only the language had them, but only for a
short while I suspect. Those are glue issues and must never have the word
advanced in front of them because they are the fundamental building blocks of
thought. If teachers of languages don't teach people how to use the building
blocks and insist to people that this is basic stuff, there is no hope people
will ever get to the advanced stuff because they'll rest on their laurels
having achieved the faux superiority of understanding record structures.
Now, back to Erik's comment about the quote above...
> I made no comments to that effect. Why do you think I expect that?
> I don't _compare_ the two languages -- I'm simply pointing out
> serious flaws in Scheme and how it is taught and used. Since we
> don't have any data on how CL would be taught if it were where
> Scheme is, I fail to see how I could be thought to _expect_ any
> improvements just by changing the language being taught.
I'm not entirely sure I agree with Erik here. The way I'd say it is that
the languages are structured in a way that leads naturally to a certain
teaching. Scheme is ABOUT being spartan. It is about saying that the
language eschews libraries and in some ways barely tolerates, as an
orderly addition, syntactic abstraction. It is entirely about a teaching
order in which people learn to think in the language without abstraction
and then learn to think of the abstraction as an "add-on".
Lisp, taught properly, is about the more democratic (as Erik called it
earlier) and less orderly notion that macros are really in there from
the getgo, and even though one can go back and retroactively explain the
bootstrap order in terms of simpler and more complicated mechanisms, the
fact is that the intended way for people to THINK is to say that macros are
always there, and record structures are always there, and inheritance is
always there, and hash tables are always there. These are not library
structures. A conscious decision was made not to go the ANSI C way of saying
there is the language and there is a library but there might be other libraries
because that leads to deviation of thinking. A conscious decision was made to
say "it's just all there to start with and the whole language/library issue
is purely an issue of efficiency of implementation, subprimitive to the
language itself".
A teacher of Lisp can, and teachers routinely do, confuse CL with Lisp 1.5
and insist that people need to learn to program in terms of car/cdr/cons
and recursion, but those are not acceptable production solutions for commercial
quality CL programmers, and there had better be repair later. I prefer to
think that the problem is that people confuse "algorithms courses" (learning
how to build up useful abstractions from more primitive list and vector
abstractions) with "lisp courses" which should be about using, not
implementing, lisp. The whole POINT of modern lisp which distinguishes it
from early lisp is that it is a USER commodity, designed to hide from the
casual user the horror of implementing COUNT or SEARCH or POSITION or even
LOOP. Those things are not "cheating"--they are what you are intended to do.
And so courses that teach you that their use is wrong are not teaching LISP,
or at least certainly are not teaching CL, they are teaching algorithms and
doing CL the disservice of saying that, just because it is computationally
powerful enough to implement low-level algorithms, it is offered
by us, the designers and/or implementors, FOR that purpose (and often, it is
falsely suggested, either implicitly or explicitly, for no other).
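(Concretely, the "non-cheating" style being defended here is simply:)

  (count #\a "banana")                                  ; => 3
  (position 7 '(4 9 7 2))                               ; => 2
  (search "needle" "haystack with a needle")            ; => 16
  (loop for x in '(1 2 3 4) when (evenp x) collect x)   ; => (2 4)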
What I do agree with Erik on is the second of his assertions--which is that
we don't have huge amounts of data about how people would teach CL if it had
books that were as popular as Structure and Interpretation of Computer
Programs (S&ICP). I have personally never liked S&ICP, and have never
recommended it to the up and coming reader in spite of its being packed with
very important insights, primarily because it promotes a horrendous programming
style. Historically, and it even seemed so at the time, I believe the reason
was that there was so much "Fortran-esque" indoctrination that people got that
S&ICP intended merely to offer "balance" to the world. However, in offering
balance it did not preach balance. Like the adversarial court system of the
US, it assumed that balance would naturally arise from two people saying the
opposite things--the Fortran people saying that iteration was a process of
stepping through something n times, the Scheme people saying that iteration
was a process of doing something once and then recursing into the problem
with one fewer repetition remaining to be done. It amounts to the same thing. And
a BALANCED approach would teach RESPECT for both approaches. But S&ICP never
went back and did that, and most Scheme courses graduate either people who
have no idea what they learned and hate Lisp/Scheme for being so obtuse or else
zealots who think they have been taught that Fortran-esque iteration is a
confusion to be left aside as foolhardy. Neither of these is a useful outcome.
Like all things in life, good teaching should be monotonic, adding to what
one already knows and providing additional options, not evangelical, replacing
"bad thought" with so-called "good thought". Sometimes, the result of good
teaching will be that the learner will decide they can get away with collapsing
out two ways of thinking about something and using only one, but that should
be a personal choice of the learner, not a choice of the teacher. The teacher
should provide options.
CL is about options. Within it, it provides multiple ways to do
something. The Schemers often hate that. My impression is that they
see any form of redundancy as a weakness. (I think that's why they don't
like our two-namespace system, too.) I see it as a strength, and
as a form of tolerance and enlightenment...
> Scheme succeeds because it is
> easy to teach to bright people and fun to implement. Hence, we have
> about a million implementations of Scheme, and only one of them is
> good for anything more than teaching others how to implement Scheme.
OK, I'll ask. Which one?
g.