
Oft-shared (perhaps?) Impressions of a Lisp Newbie


nef...@gmail.com

Apr 11, 2008, 10:09:23 PM
Hello!

In need of something to occupy my brain this week, I decided to start
reading the excellent Practical Common Lisp book and trying my hand at
the language.

I am new to programming, and have really only been doing it a couple
years, with experience in Ruby, C#, C, PHP, a bit of Java, and more
recently Perl. I fancy myself an open-minded newbie, and have tried to
approach each language without being biased by what I have heard about
that language. And really, I have enjoyed seeing how each has its own
particular strengths and gives me more perspective as a programmer.
Even when I don't care for using a language (like PHP) I appreciate
the things that it makes simple or simpler. (The only language that's
really rubbed me wrong, actually, is C++... for some reason,
everything I've done in it has seemed unnecessarily complicated. I
imagine, as a newbie, I'm incapable of yet grasping its strengths,
which I imagine come in forms like 3D game engine development, which
are too much for my poor brain to handle.)

My first glimpse at Lisp came a few weeks ago when I started learning
Emacs. It looked quite foreign to me, but I did appreciate how
integrated it was with the editor--programming in a programmable
editor is pretty neat :)

And after making it through a few chapters of Practical Common Lisp,
it's been cool to see that Lisp is really so modern in a lot of ways,
supporting lots of the features I love from other languages. (P.S.,
SLIME is awesome.)

My concern, however, as I proceed through the book, is the increasing
number of things it seems to require me to keep in my brain. It has me
worried that the pure effort of memorization will be staggering, as
the built-in functions and macros seem quite unintuitive to me. setf
or getf or setq or eq or eql or defvar or defparameter or funcall or
apply or return or return-from or let or let* or...

Now, I don't doubt for a minute that that all becomes obvious once you
have all those names memorized, and I realize that the whole point of
the book I am reading is to explain such things. I'm simply worried
that there seems to be a general pattern of a very complicated "API",
with a lot of similarly-named constructs, arbitrary abbreviations,
etc.

Learning about a language is easy and fun--learning a language well is
an investment. As an extension of my brain--a tool to translate my
thoughts into instructions--I think a language will have to "mesh"
with my style in order for me to be an effective programmer in it. I
fought for a long time with the "Ruby way" of doing things, as it was
different from what I learned in C#. But I felt it was instructive and
helpful in the long run.

I would therefore like to ask of you, Lispers, 2 things:

1) Is there a pretty steep "memorization curve" for the "standard
library" of Common Lisp? (i.e., are my first impressions correct?) If
so, what tools are out there to aid in overcoming that? (SLIME is
great at helping once you already know what function/macro/whatever is
right for the job, but until then...)

2) Is the Lisp way of programming truly worthwhile to my overall skill
and mindset as a programmer? More directly, is it worth sinking the
hours into for this purpose? If you were in my shoes, as a new
programmer in 2008 with few preferences or barriers with regard to
what to learn, would you learn Lisp? If so, how would you attack it?


I know this has been long and probably rambling; sorry! But I greatly
appreciate any response--even the "get back to PHP if your widdle
bwain hurts, noob" ones :)

-J

Pascal Bourguignon

Apr 11, 2008, 11:53:50 PM
nef...@gmail.com writes:

No, the curve is not steep. It just goes up and up for a long time.
Plan to spend ten years on lisp before starting to leave the newbie
state.

That is not as bad as it sounds. First, for a lot of domains, you
really need ten years to master them. Then, Common Lisp is actually a
language + a library (+ a lot of third-party libraries). You don't
need to know everything before becoming proficient in CL.


> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer?

Yes.

> More directly, is it worth sinking the
> hours into for this purpose?

Definitely.

> If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp?

I would learn only Lisp. I only regret having lost my time on all the
other programming languages (except perhaps 680x0 assembler).

> If so, how would you attack it?

Well, you're on the right path. Finish PCL. Have some fun writing
some code. Be sure to make intensive use of the HyperSpec reference.
And start reading SICP:

Structure and Interpretation of Computer Programs
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html
http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/

But write some code!
This is the best way to learn and understand why all these operators
exist.

GETF is unrelated to SETF, despite the similar name: GETF reads a
value from a property list.

You can mostly forget SETQ and always use SETF. SETQ is like SETF,
only it's limited to simple symbols as the places to be assigned. It
cannot do anything that SETF cannot do.

DEFVAR is very different from DEFPARAMETER. DEFVAR assigns the
variable it defines only if the variable is not already bound;
DEFPARAMETER always assigns it.

(defvar *x* expr)       =~= (progn (declaim (special *x*))
                                   (unless (boundp '*x*)
                                     (setf *x* expr)))

(defparameter *x* expr) =~= (progn (declaim (special *x*))
                                   (setf *x* expr))

Imagine you define variables in a file, then load this file, run
some functions which change the state of these variables, and then
reload the file. For some of these variables you may want the value
to be reset, but for others you may rather want to keep the changed
value. For example, if a variable holds a database storing user
input, you may not want to lose it just because you reload the file
to pick up a debugged function. DEFVAR is for those variables you
don't want to reset every time you load the file; otherwise you use
DEFPARAMETER. (In fact, by default I use DEFPARAMETER.)

Another case is when you define some global parameters for your
program. Perhaps the user will want to set these parameters before
loading the program (e.g. in the initialization file of the Lisp
implementation). Then, when you load your program file, you don't
want to reset the user's settings, so you'll use DEFVAR instead of
DEFPARAMETER (yes, I also find that the names feel swapped).
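To make the reload behaviour concrete, here is a small sketch (the
variable names are invented for illustration):

```lisp
;; Sketch of the reload behaviour described above; names are invented.
(defparameter *debug-level* 1)   ; always (re)assigned on load
(defvar *user-db* '())           ; assigned only if not already bound

;; The running program changes both:
(setf *debug-level* 3)
(setf *user-db* (list :alice :bob))

;; "Reloading" the file re-evaluates the definitions:
(defparameter *debug-level* 1)   ; back to 1
(defvar *user-db* '())           ; still (:ALICE :BOB)
```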


You can define FUNCALL with APPLY:

(defun funcall (func &rest args) (apply func args))

but you cannot define APPLY with FUNCALL. APPLY is more primitive.

But it is easier to write (funcall operation 1 2 3) than (apply
operation (list 1 2 3)). That's why CL defines FUNCALL: to avoid
having every programmer define his own funcall function. Remember, CL
is a _practical_ programming language; it tries to provide the
niceties and useful tools that some other languages leave to
libraries or to divergent implementations.
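A quick REPL illustration of the relationship between the two:

```lisp
(funcall #'+ 1 2 3)        ; => 6
(apply #'+ (list 1 2 3))   ; => 6

;; APPLY also accepts individual arguments before the final list:
(apply #'+ 1 2 (list 3))   ; => 6
```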


Similarly, RETURN is defined as:

(defmacro return (result) `(return-from nil ,result))

Since there are a lot of blocks named NIL implicitly defined by the
various flow-control operators, most returns are done from blocks
named NIL, and it would be tiring to write (return-from nil expr)
every time...
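For example, DOLIST wraps its body in an implicit block named NIL, so
a plain RETURN exits the loop:

```lisp
(defun first-even (list)
  (dolist (x list)
    (when (evenp x)
      (return x))))   ; equivalent to (return-from nil x)

(first-even '(1 3 4 5))   ; => 4
```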


The difference between LET and LET* is also important.
(let* ((a1 e1)
       (a2 e2)
       ...
       (aN eN))
  body)

is equivalent to:

(let ((a1 e1))
  (let ((a2 e2))
    ...
    (let ((aN eN))
      body)))

and when any of the expressions ei refers to a variable aj with j<i,
it's very different from:

(let ((a1 e1)
      (a2 e2)
      ...
      (aN eN))
  body)

which evaluates all the expressions BEFORE creating the variables aj.

(lambda (a b)
  (if (< a b)
      (print `(< ,a ,b))
      (let ((a b)
            (b a)) ; This refers to the parameter a of the function,
                   ; not the variable bound on the preceding line.
        (print `(< ,a ,b)))))

(let ((a 0) (b 0) (c 0))
  (let* ((c 1)
         (b (+ c 2))
         (a (+ b 3)))
    (list a b c)))   --> (6 3 1)

(let ((a 0) (b 0) (c 0))
  (let ((c 1)
        (b (+ c 2))
        (a (+ b 3)))
    (list a b c)))   --> (3 2 1)

So LET and LET* are very different, and depending on the case, you may
want to use one or the other. But to know that, you must write enough
code for these cases to occur to you.


> I know this has been long and probably rambling; sorry! But I greatly
> appreciate any response--even the "get back to PHP if your widdle
> bwain hurts, noob" ones :)

--
__Pascal Bourguignon__ http://www.informatimago.com/
Wanna go outside.
Oh, no! Help! I got outside!
Let me back inside!

D Herring

Apr 12, 2008, 12:20:49 AM
nef...@gmail.com wrote:
> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer? More directly, is it worth sinking the
> hours into for this purpose? If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?

It depends.

Take me. I've tinkered with Lisp at home; but I'm paid to develop C++
(and occasionally Java or Matlab) at work.

Today, you can hear the screams of "it's 2008 and C++ still doesn't
have {lambda functions, multi-methods, macros, runtime function
redefinition, garbage collection, ...}?!?"

Before learning Lisp, these were mere murmurs of "C++ is hard."

Half my assignment for the past two years has been an object lesson in
Greenspun's Tenth Rule.

Back on topic: Do you prefer hammering away with primitive tools,
a) not knowing any better
or
b) because the power tools aren't OSHA approved
?

Yeah, Common Lisp is missing big pieces of useful functionality -- but
that's due more to the small, often eccentric community than to the
language itself.

You just have to learn to look past the parentheses. See code for
what it is -- data which the computer manipulates and eventually
executes. Most languages try to hide this simple truth. They pretend
the compiler is some invariant virtual machine which shouldn't be touched.

- Daniel

Ken Tilton

Apr 12, 2008, 12:36:20 AM

> worried that the pure effort of memorization...

"memorization"? Did you say "memorization"? Awesome, I am submitting a
proposal to O'Reilly tomorrow to publish a set of Lisp flash cards so we
can memorize the thousand-page index.

Silly me, I have been using apropos and hitting F1 to automatically look
things up!

>... will be staggering, as
> the built-in functions and macros seem quite unintuitive to me.

Thank god you started with "I know nothing about programming" or I would
be sitting here trying to figure out how the most intuitive library
extant could be construed as counter-intuitive.

> setf
> or getf or setq or eq or eql or defvar or defparameter or funcall or
> apply or return or return-from or let or let* or...

well, ok...

>
> Now, I don't doubt for a minute that that all becomes obvious once you
> have all those names memorized,

You just said that word again. Could you please stop it or even better
completely go away until you have programmed even a goddamn tower of
hanoi? Jesus H! I have the same problem with Chinese, I took ten classes
and I am worried about how I am going to /memorize/ the whole thing!

And what about the guitar? I bought one and hit some of the strings and
I am worried about how I am going to memorize all the motions it will
take to be like Hendrix! Do they have flash cards for guitars??? I can
only hope...

>... and I realize that the whole point of
> the book I am reading is to explain such things. I'm simply worried
> that there seems to be a general pattern of a very complicated "API",
> with a lot of similarly-named constructs, arbitrary abbreviations,
> etc.

I recommend 6502 assembler. Ok, three addressing modes, but beyond that
you could /memorize/ it in a night. <sigh>

>
> Learning about a language is easy and fun--learning a language well is
> an investment. As an extension of my brain--a tool to translate my
> thoughts into instructions--I think a language will have to "mesh"
> with my style in order for me to be an effective programmer in it.

Your style being "I have no intention of actually programming anything
so there is very little chance I will get fluent in any language"?

> I
> fought for a long time with the "Ruby way" of doing things, as it was
> different from what I learned in C#. But I felt it was instructive and
> helpful in the long run.

Oh, good, go back to Ruby, quick!

>
> I would therefore like to ask of you, Lispers, 2 things:
>
> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp? (i.e., are my first impressions correct?) If
> so, what tools are out there to aid in overcoming that?

Sorry, just heard back from O'Reilly: No Lisp submissions welcome. :(

>(SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)

Try (apropos "APROPOS"), see what turns up. <sigh>

>
> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer?

There's a fat pitch. Thanks!

> More directly, is it worth sinking the
> hours into for this purpose?

If you were not a complete waste of time you would be Actually
Programming, not grandstanding in this NG with your handwringing.

> If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?

STFU and write some code? Too easy? Oh, I forgot, you have no interest
in programming.

> I know this has been long and probably rambling; sorry! But I greatly
> appreciate any response--even the "get back to PHP if your widdle
> bwain hurts, noob" ones :)

Sh*t, if I had read this far I could have saved myself a lot of trouble.
Really, you are not cut out for this game, I recommend tennis--the new
racquets are awesome.

hth, kenny

--
http://smuglispweeny.blogspot.com/
http://www.theoryyalgebra.com/

"I've never read the rulebook. My job is to catch the ball."
-- Catcher Josh Bard after making a great catch on a foul ball
and then sliding into the dugout, which by the rules allowed the
runners to advance one base costing his pitcher a possible shutout
because there was a runner on third base.

"My sig is longer than most of my articles."
-- Kenny Tilton

Paul Donnelly

Apr 12, 2008, 3:39:47 AM
nef...@gmail.com writes:

> My concern, however, as I proceed through the book, is the increasing
> number of things it seems to require me to keep in my brain. It has me
> worried that the pure effort of memorization will be staggering, as
> the built-in functions and macros seem quite unintuitive to me. setf
> or getf or setq or eq or eql or defvar or defparameter or funcall or
> apply or return or return-from or let or let* or...

Once you've internalized these common functions (hint: write some code),
I think you'll find that Lisp actually requires you to keep fewer things
in your brain, since its syntax is so regular. Don't worry about
remembering the proper arguments or argument order to functions like
GETHASH -- SLIME will remind you. Similarly, there's no need to remember
every function in the language. You just want to know them well enough
that something in the back of your mind tells you to check the library
before you go rewriting half of it.
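A couple of standard tools for this kind of discovery at the REPL
(output varies by implementation, so none is shown here):

```lisp
;; List the symbols whose names contain "HASH":
(apropos "HASH")

;; Read a function's documentation string, if the implementation provides one:
(documentation 'gethash 'function)

;; DESCRIBE prints what the implementation knows, including the argument list:
(describe #'gethash)
```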

> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp? (i.e., are my first impressions correct?) If
> so, what tools are out there to aid in overcoming that? (SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)

Not especially. You can write useful code with only a handful of
functions. It's easy to learn enough of them to get going, and the
others will just seep in as you go.

> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer? More directly, is it worth sinking the
> hours into for this purpose? If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?

Common Lisp, for all the "Lisp is different", is a practical
language. If I were a new programmer and learning Lisp was some kind of
nebulous self-improvement exercise I'd put it on the back burner next to
Haskell. But Lisp is useful and actually fun to use, so I'd learn it.

joseph....@googlemail.com

Apr 12, 2008, 3:58:41 AM

> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp? (i.e., are my first impressions correct?) If
> so, what tools are out there to aid in overcoming that? (SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)

I had a pretty similar viewpoint to yours when I began learning (from
Paul Graham's ANSI Common Lisp): there seemed to be an awful lot of
things that were counterintuitive and hard to remember.

What I did was go away and start learning Scheme, which has much
simpler naming conventions but the same basic syntax as Common Lisp.
When I finished reading about Scheme I went back to Common Lisp, and
I'm finding it a lot easier now.

You can find the excellent `Teach Yourself Scheme in Fixnum Days' for
free all over the web; just Google it.

Hope it helps!
Joseph

John Thingstad

Apr 12, 2008, 4:27:17 AM
På Sat, 12 Apr 2008 04:09:23 +0200, skrev <nef...@gmail.com>:

> Hello!


>
>
> I would therefore like to ask of you, Lispers, 2 things:
>
> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp? (i.e., are my first impressions correct?) If
> so, what tools are out there to aid in overcoming that? (SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)
>

Well, I remember that when I was learning Common Lisp the task was
somewhat daunting.

You might want to read "Teach Yourself Programming in Ten Years" by Peter
Norvig
http://norvig.com/21-days.html

In the beginning there is bound to be some memory overload.
There are a few things that might help.

1. Learn to look things up (context-sensitively) in the HyperSpec from
SLIME.

2. Look at other people's code. Find something that almost does what
you want and modify it.

3. Learn from experts.
Kent Pitman, Edi Weitz, Pascal Costanza, Pascal Bourmignon, Russel
Crowe, Ron Garreth
might be some of the names on comp.lang.lisp to look out for.
After a while you will learn to see this yourself.

4. Ask questions. You will find people on comp.lang.lisp can provide
guidance, reviews and suggestions for improvement. Note that questions
along the lines of "What is wrong here?" or "Is there a better way to
do this?" get better treatment than requests of the form "Solve this
for me." Effort counts.

5. Write code. Remember: if it seems simple, it's hard; if it seems
hard, it's damn near impossible. So adjust your ambitions accordingly.
Start simple. Do lots of small programs. Write code to test whether
you understood some part, etc. Then, as your confidence grows, scale
up.

6. Use the REPL. If in doubt, try running the program section in the
REPL and see if it does what you expect. "Write a function, test a
function" seems to work pretty well.

7. As for "Tools" what about
CL-COOKBOOK http://cl-cookbook.sourceforge.net/
cliki http://www.cliki.net/index
Norvig http://norvig.com

SLIME tutorial video
http://video.google.com/videoplay?docid=-2140275496971874400

Http server writing video
http://homepage.mac.com/svc/LispMovies/index.html
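Point 6's "write a function, test a function" cycle is just this,
typed form by form at the REPL (the function is invented for
illustration):

```lisp
(defun average (numbers)
  (/ (reduce #'+ numbers) (length numbers)))

(average '(1 2 3 4))   ; => 5/2
```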

> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer? More directly, is it worth sinking the
> hours into for this purpose? If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?
>

Lisp is a programming language for people who like to program.
In the end only you can say whether you find it worth it.

--------------
John Thingstad

Tayssir John Gabbour

Apr 12, 2008, 8:32:38 AM
On Apr 12, 4:09 am, nefi...@gmail.com wrote:
> My concern, however, as I proceed through the book, is the increasing
> number of things it seems to require me to keep in my brain. It has me
> worried that the pure effort of memorization will be staggering, as
> the built-in functions and macros seem quite unintuitive to me. setf
> or getf or setq or eq or eql or defvar or defparameter or funcall or
> apply or return or return-from or let or let* or...

The situation isn't as linear as it may seem; these things aren't
equally used. For instance, I almost never use return and return-from.
It is perhaps better to get a vague sense of what things do, then look
them up when needed.

Also, there are a couple of sources which I've found very helpful in
segmenting the soup of operators:
* "Common Lisp the Language, 2nd ed"
* the appendix of Paul Graham's "ANSI Common Lisp"

True, Common Lisp doesn't nicely segment built-in operators into
separate packages, as in many other languages.


> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp? (i.e., are my first impressions correct?) If
> so, what tools are out there to aid in overcoming that? (SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)

The memorization doesn't have to be rote or "intellectual". I'm
reminded of Alan Kay's claims in "Doing with images makes symbols."
http://video.google.com/videoplay?docid=-533537336174204822#51m0s


> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer? More directly, is it worth sinking the
> hours into for this purpose? If you were in my shoes, as a new
> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?

Unclear. If you are familiar with Emacs, Slime and ASDF, then you've
greatly lowered the obstacles to using Lisp. (Potential obstacles
include "batteries not included," social/community problems, library
incompleteness, etc.)

Personally, I probably would've been driven to learn Lisp (by that I
mean a family of languages culminating in Common Lisp, such as those
used on the Lisp machines) because... in certain ways it's saner than
other programming languages. I don't have much time to explain this
further.


> I know this has been long and probably rambling; sorry! But I greatly
> appreciate any response--even the "get back to PHP if your widdle
> bwain hurts, noob" ones :)

Actually, I suspect other languages may impose more of an intellectual
burden. Certain things are harder in those languages, because you're
pretty much stuck with the special operators they offer (for loops,
etc) unless you go outside the language.


Tayssir

viper-2

Apr 12, 2008, 9:43:21 AM
On Apr 12, 4:27 am, "John Thingstad" <jpth...@online.no> wrote:

> På Sat, 12 Apr 2008 04:09:23 +0200, skrev <nefi...@gmail.com>:
>

> 3. Learn from experts.
> Kent Pitman, Edi Weitz, Pascal Costanza, Pascal Bourmignon, Russel
> Crowe, Ron Garreth
> might be some of the names on comp.lang.lisp to look out for.

John, yes these are some of my favourite Lisp gladiators too - but
don't you mean "Alan Crowe" and not "Russell Crowe" ? :-)

agt

John Thingstad

Apr 12, 2008, 10:32:21 AM
På Sat, 12 Apr 2008 15:43:21 +0200, skrev viper-2
<visi...@mail.infochan.com>:

lol... Yes, I just took it from the top of my head.
(But I liked him in "Far side of the World"!)

--------------
John Thingstad

Ken Tilton

Apr 12, 2008, 12:06:21 PM

You, sir, have a beautiful mind.

Meanwhile, I can't believe how nice everyone is being to the OP. Why
doesn't someone just admit that Lisp is hard, then we can all go shopping?

viper-2

Apr 12, 2008, 12:21:00 PM
On Apr 12, 10:32 am, "John Thingstad" <jpth...@online.no> wrote:
> På Sat, 12 Apr 2008 15:43:21 +0200, skrev viper-2
> <visio...@mail.infochan.com>:

>
> > On Apr 12, 4:27 am, "John Thingstad" <jpth...@online.no> wrote:
> >> På Sat, 12 Apr 2008 04:09:23 +0200, skrev <nefi...@gmail.com>:
>
> >> 3. Learn from experts.
> >> Kent Pitman, Edi Weitz, Pascal Costanza, Pascal Bourmignon, Russel
> >> Crowe, Ron Garreth
> >> might be some of the names on comp.lang.lisp to look out for.
>
> > John, yes these are some of my favourite Lisp gladiators too - but
> > don't you mean "Alan Crowe" and not "Russell Crowe" ? :-)
>
> lol.. Yes just took it from the top of my head.
> (But I liked him in "Far side of the World"!)
>

I've been a bit on the far side of the Lisp world myself lately -
learning Java, of all things!! :-)

Seriously, though, PCL is a good book from all the reviews I've seen,
but I think newbies would be better off combining the gems in PCL with
an introductory book that has loads of "homework" problems. LISP (3rd
edition) worked very well for me, but I gather this might not be a
popular choice. How about "Common Lisp: A Gentle Introduction to
Symbolic Computation" by David Touretzky (AGI) http://www.cs.cmu.edu/~dst/LispBook/
?

I haven't actually used this book, just briefly skimmed sections of
it; but I've seen good reviews from time to time.

agt

viper-2

Apr 12, 2008, 12:29:59 PM
On Apr 12, 12:06 pm, Ken Tilton <kennytil...@optonline.net> wrote:
> viper-2 wrote:
> > On Apr 12, 4:27 am, "John Thingstad" <jpth...@online.no> wrote:
>
> >>På Sat, 12 Apr 2008 04:09:23 +0200, skrev <nefi...@gmail.com>:
>
> >>3. Learn from experts.
> >> Kent Pitman, Edi Weitz, Pascal Costanza, Pascal Bourmignon, Russel
> >>Crowe, Ron Garreth
> >> might be some of the names on comp.lang.lisp to look out for.
>
> > John, yes these are some of my favourite Lisp gladiators too - but
> > don't you mean "Alan Crowe" and not "Russell Crowe" ? :-)
>
> > agt
>
> You, sir, have a beautiful mind.

Why, thank you Kenny. You are one of my favourite gladiators too, John
just forgot to mention you. ;-)

>
> Meanwhile, I can't believe how nice everyone is being to the OP. Why
> doesn't some one just admit that Lisp is hard, then we can all go shopping?
>

Well, if you work hard at it, like everything else, it becomes easier.

agt

John Thingstad

Apr 12, 2008, 1:15:11 PM
På Sat, 12 Apr 2008 18:29:59 +0200, skrev viper-2
<visi...@mail.infochan.com>:

> On Apr 12, 12:06 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>> viper-2 wrote:
>> > On Apr 12, 4:27 am, "John Thingstad" <jpth...@online.no> wrote:
>>
>> >>På Sat, 12 Apr 2008 04:09:23 +0200, skrev <nefi...@gmail.com>:
>>
>> >>3. Learn from experts.
>> >> Kent Pitman, Edi Weitz, Pascal Costanza, Pascal Bourmignon, Russel
>> >>Crowe, Ron Garreth
>> >> might be some of the names on comp.lang.lisp to look out for.
>>
>> > John, yes these are some of my favourite Lisp gladiators too - but
>> > don't you mean "Alan Crowe" and not "Russell Crowe" ? :-)
>>
>> > agt
>>
>> You, sir, have a beautiful mind.
>
> Why, thank you Kenny. You are one of my favourite gladiators too, John
> just forgot to mention you.;-)
>

That would be in the Newbie Warning! section..
(See Naggum complex.)
:)

--------------
John Thingstad

Robert Maas, http://tinyurl.com/uh3t

Apr 12, 2008, 1:17:36 PM
> From: nefi...@gmail.com
> I ... have tried to approach each language without being biased

> by what I have heard about that language. And really, I have
> enjoyed seeing how each has its own particular strengths and gives
> me more perspective as a programmer. Even when I don't care for
> using a language (like PHP) I appreciate the things that it makes
> simple or simpler.

The only really good thing about PHP is that it's the only
server-side scripting language available on some free Web-hosting
services, and it's easy to use MySQL on such services. (Also, it
has nice regular-expression facilities built-in, but Perl has
essentially the same services.)

But I like your attitude of looking for what's good about each
language: asking why anybody in their right mind would ever want to
use this language. Please take a look at this:
<http://www.rawbw.com/~rem/HelloPlus/hellos.html#s4outl>
skip to this section:
* Lesson 6: Briefly comparing the various programming languages
+ Why ever use another language except Lisp?
If you see any really major advantage to any particular non-Lisp
language that I've overlooked, please contact me with your
suggestion, and I might add your info to that section.

> The only language that's really rubbed me wrong, actually, is
> C++... for some reason, everything I've done in it has seemed
> unnecessarily complicated.

Hmm, I took one class in C++, mostly to satisfy requirements for
C++ knowledge in job ads, and although I got an 'A' in the class I
didn't much like it either, although I'd be willing to do some
small programs in C++ if somebody paid me. If you're willing to
deal with destructors, instead of having a decent garbage collector
as with Lisp and Java, C++ might be tolerable.

> My first glimpse at Lisp came a few weeks ago when I started
> learning Emacs.

Beware! That's Emacs-lisp, not Common Lisp, major differences, even
if the core stuff like car/cdr/cons is all the same. Emacs-Lisp is
fine for just getting started at the basics, but you really need to
learn Common Lisp eventually. It's so much better for programming a
wide range of data-processing applications. If you like Emacs as
an environment for development, where you don't have to
copy-and-paste from one window to another but just C-X C-E or
somesuch to execute the current s-expression, then you might
consider implementing a pipe from Emacs to a Common Lisp, whereby
any expresssion you mark in Emacs you can transfer to the Common
Lisp for read-eval-print. The way I did that (to Java BeanShell,
not to Common Lisp) on my laptop (running Linux) was to use a fifo
as the link between the two programs. Anything that Emacs wrote to
the fifo was then read-eval-printed by the other program (Java in
my case, Common Lisp in your future case).
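On the Emacs side, the fifo idea might look something like this sketch
(Emacs Lisp; the fifo path and the reader loop are assumptions for
illustration, not tested code):

```lisp
;; Emacs Lisp sketch.  Assumes you created the fifo with: mkfifo /tmp/cl-fifo
;; and that the Common Lisp side reads and evaluates forms from it, e.g.:
;;   (with-open-file (in "/tmp/cl-fifo") (loop (print (eval (read in)))))
(defun send-region-to-cl (start end)
  "Append the region to the fifo for the Common Lisp process to evaluate."
  (interactive "r")
  (write-region start end "/tmp/cl-fifo" t))
```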

> It looked quite foreign to me, but I did appreciate how
> integrated it was with the editor--programming in a programmable
> editor is pretty neat :)

Yeah, that's the big win about Emacs lisp. (That was a big win
about Macintosh Allegro Common Lisp too, which had Fred as the
editor, basically similar to Emacs, and R-E-P in other window.)
But with my idea for pipe between Emacs and Common Lisp, you can
have that nice Emacs interface with that nicer version of Lisp too.

> My concern, however, as I proceed through the book, is the
> increasing number of things it seems to require me to keep in my
> brain. It has me worried that the pure effort of memorization will
> be staggering, as the built-in functions and macros seem quite
> unintuitive to me.

Writing software is *not* like a closed-book exam in college, where
you are forbidden to use any sort of crib sheets/notes, where you
must memorize everything ahead of time or flunk the test. You are
*encouraged* to make liberal use of notes!! I suggest you
copy&paste sample code from whereever you find them (or key them in
manually if you're copying from a book) into a text file, using
whatever editor you like best, rearrange and organize them however
you like, maybe even set up a Web page so that you can browse
through links by just a click instead of having to manually search
for keyword strings. At first you could have the really basic stuff
in your sample-code notes, such as:
  Assignment:         (setq varName expressionToEvaluateForNewValue)
  CONS pairs, access: (car pr) (cdr pr)
              modify: (setf (car pr) expressionToEvaluateForNewValue)
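Tried out at the R-E-P loop, those first crib entries might look like
this (the variable names here are made up for illustration):

```lisp
;; Assignment: SETQ evaluates the expression and binds the symbol.
(setq x (+ 40 2))          ; => 42

;; CONS pairs: build with CONS, access with CAR and CDR.
(setq pr (cons 1 "one"))   ; => (1 . "one")
(car pr)                   ; => 1
(cdr pr)                   ; => "one"

;; Modifying a pair in place: SETF with the accessor as the "place".
(setf (car pr) 99)
(car pr)                   ; => 99
```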
Some of the Lisp experts won't like this, but I recommend starting
with lots of SETQs, a new SETQ at every step, explicitly naming
every intermediate result when you assign to that new variable,
then fetching that value later by referring to that variable again.
In this way the R-E-P loop lets you debug one line of code at a
time. Then when you're ready to define a function, wrap (PROG () ...)
around all your lines of code, copy the names of the variables into
that () at the top of the PROG, and see if that block of code works
as a whole; then wrap (DEFUN fnName (parameters) ...) around your
PROG, wrap RETURN around the result to be returned, and see if
that works. Worry about refactoring into LET* or nested expressions
after you get the basic PROG version working. With punch cards
(1964-1977), refactoring syntax was a royal pain. But with text
editors on your personal computer (1989-present) that allow moving
sub-expressions around easily, refactoring from a PROG to something
more "nice" is so trivial you should plan on refactoring as a
regular course of programming. So do the first draft of an
algorithm the way that makes debugging easiest, then refactor into
the "nice" version later after you're confident of the algorithm.

Me? Yeah, I use my own software as a crib sheet for copy&paste. I
remember something I did before that's similar to what I want to do
now, use grep to find where I did it, load that source into editor,
find where I did it, and copy&paste to my new code I'm working on,
then change the parts that need changing.

By the way, in addition to that book you're reading, and the Common
Lisp hyper-reference, you might also want to glance at my
<http://www.rawbw.com/~rem/HelloPlus/CookBook/Matrix.html>
which tries to organize how-to-do-stuff in six different languages
(including Lisp) according to the one or two datatypes primarily
involved. I stopped work on it because nobody else thought it was
worth the effort, but if you like the basic idea I might get back
to finishing it.

> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp?

No. All you need is to (1) understand the basic idea of CONS-cells
and how they relate to nested lists and general binary trees, and
what Read, Eval, and Print do (Read parses text syntax to produce
CONS-cell structures, Print does the inverse, Eval is the one you
must understand fully), and how the R-E-P loop works, then (2)
build up your crib sheet of templates for doing stuff, starting
with the most basic, and working your way forward through that book
or whatever else you're trying to learn about Lisp.
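You can even watch Read, Eval, and Print do their separate jobs at the
R-E-P loop (a sketch; READ-FROM-STRING stands in for Read on a string):

```lisp
;; READ: text syntax -> CONS-cell structure (no evaluation yet).
(setq form (read-from-string "(+ 1 (* 2 3))"))  ; => (+ 1 (* 2 3))
(car form)              ; => the symbol +
(cadr form)             ; => 1

;; EVAL: CONS-cell structure -> value.
(eval form)             ; => 7

;; PRINT: structure -> text, the inverse of READ.
(prin1-to-string form)  ; => "(+ 1 (* 2 3))"
```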

> 2) Is the Lisp way of programming truly worthwhile to my overall
> skill and mindset as a programmer?

It isn't *the* Lisp way! Lisp supports several different ways to program:
- Procedural: Assign variables, call functions by name.
- Object-oriented: Define classes of objects with methods, invoke methods.
- Functional: Define and use higher-order functionals as first-class citizens.
- Continuation/Iterator: Define and use lazy-evaluated stream-like objects.
- Probably a few more styles that I can't think of at the moment.
and you can intermix all those styles any time you want any way you want.

> More directly, is it worth sinking the hours into for this purpose?

Yes. What other language lets you mix all those styles, and *all*
available within an integrated read-eval-print loop (via stdio or
emacs) for ease of debugging line-at-a-time?

> If you were in my shoes, as a new programmer in 2008 with few
> preferences or barriers with regard to what to learn, would you
> learn Lisp?

Yes.

> If so, how would you attack it?

By making liberal use of crib sheets for snippets of code to copy&paste.
By making liberal use of text editor for organizing the snippets
any way you want, as plain-text edit or as Web page with links.

Mikael Jansson

Apr 12, 2008, 1:50:40 PM

For homework-style reading, I recommend PAIP (Paradigms of AI
Programming) by Peter Norvig. Short exercises to help you understand
what you've just read about, while assuming you do understand the
basics of programming (just not Lisp).

--
Mikael Jansson
http://mikael.jansson.be

Robert Maas, http://tinyurl.com/uh3t

Apr 12, 2008, 2:04:07 PM
> From: Ken Tilton <kennytil...@optonline.net>

> I am submitting a proposal to O'Reilly tomorrow to publish a set
> of Lisp flash cards so we can memorize the thousand-page index.

No need. I have an online program that is much more effective for
memorizing flashcards than doing it with physical cards. Try my free
demo sometime (I have existing decks for missing English common
words, English-->Spanish missing common words,
English-->Mandarin Pinyin single words, and some ESL etc. stuff.)
Log in under the guest account and give it a try.
(Click on Contact Me to see info about guest account, then back at
the main page click on the login form.)

> Silly me, I have been using apropos and hitting F1 to
> automatically look things up!

That works if you happen to remember the right keyword per the
specific jargon of Emacs-Lisp. But an organized outline of stuff
would be useful for browsing in cases where you don't yet know the
jargon keyword.

> I have the same problem with Chinese, I took ten classes and I am
> worried about how I am going to /memorize/ the whole thing!

For the past several months I've been proposing an alternative, for
people who see Chinese text (in newspapers left around in public
places, and on some UHF TV stations that have Chinese-language
programs with Chinese text captions), namely a Web site that lets
you draw a character on your screen and submit it to a lookup
engine which then tells you what that character means, and includes
a link to the UniCode chart for that character which has lots of
additional information about that character. Maybe you'd work with
me to design and test such a system? (I'd do all the actual
Web-site programming.)

Robert Maas, http://tinyurl.com/uh3t

Apr 12, 2008, 2:28:30 PM
> From: Tayssir John Gabbour <tayssir.j...@googlemail.com>

> True, Common Lisp doesn't nicely segment built-in operators into
> separate packages, as in many other languages.

Especially Java perhaps?
Nicely annotated listing of packages:
<http://java.sun.com/j2se/1.3/docs/api/overview-summary.html>
Alphabetical listing of all individual classes:
<http://java.sun.com/j2se/1.3/docs/api/allclasses-frame.html>

For Lisp, I like the idea of organizing according to the primary data type:
- Numbers
- Characters
- Strings and other sequences
- Symbols and the stuff hanging from them (print name, property list, etc.)
- Arrays
- Hash tables
- Streams
- Structures (similar to Pascal or C structs)
- CLOS classes and objects and methods and generic functions
etc.
This is of course after you get past the basic stuff about defining
and calling functions, using symbols as variables, read-eval-print
loop, CONS-based lists and other tree structures, and simple
control structures such as COND LOOP etc.

I've also been toying with organizing according to *two* datatypes:
<http://www.rawbw.com/~rem/HelloPlus/CookBook/Matrix.html>

Robert Maas, http://tinyurl.com/uh3t

Apr 12, 2008, 2:52:23 PM
> From: Ken Tilton <kennytil...@optonline.net>

> I can't believe how nice everyone is being to the OP.

Except for you. Your earlier followup was somewhat un-nice to the OP,
accusing him of not wanting to really write a program, just complain.

> Why doesn't some one just admit that Lisp is hard, then we can
> all go shopping?

Because that would be a lie. Lisp is easy!! All you need to learn is:
- Relation between syntax (s-expressions) and internal form (CONS-cell trees).
- How Read-Eval-Print works.
- A few simple templates for doing simple things:
- Literals for numbers and strings: 42 "This is a test."
- Call a function: (functionName arg1 arg2 ...)
- Assign a value to a variable: (SETQ varName newValue)
- Functions for building and decomposing CONS-cell tree-structures:
- (CONS leftPart rightPart) => pr
- (CAR pr) => leftPart
- (CDR pr) => rightPart
- Nesting one template/form inside another, for example:
(SETQ pr (CONS 42 "This is a test."))
(SETQ z (+ (car pr) 624))
Then you can sit at the REP playing with building and decomposing
structure as long as it takes to get a deep feeling for it.

Then you decide what type of data you want to play with, for
example numbers or strings or arrays, and look in the chapter on
that data type to see a whole lot more functions you can play with.

For example, in the chapter on arrays, first you do (make-array ...),
then you can play with that array using all the other functions in
the chapter. Likewise (make-hash-table ...), (make-string ...), etc.

It's all easy, except perhaps for those very first two steps.
Once you get past them, it's all easy!!
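For example, a first play session from the array and string chapters
might go like this (the variable names are invented):

```lisp
;; Arrays: make one, then read and write elements with AREF and SETF.
(setq arr (make-array 3 :initial-element 0))   ; => #(0 0 0)
(setf (aref arr 1) 42)
(aref arr 1)                                   ; => 42

;; Strings: MAKE-STRING, then CHAR as the read/write accessor.
(setq str (make-string 3 :initial-element #\x))  ; => "xxx"
(setf (char str 0) #\a)
str                                              ; => "axx"
```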

Ken Tilton

Apr 12, 2008, 5:50:09 PM

Robert Maas, http://tinyurl.com/uh3t wrote:
>>From: Ken Tilton <kennytil...@optonline.net>
>>I can't believe how nice everyone is being to the OP.
>
>
> Except for you. Your earlier followup was somewhat un-nice to the OP,
> accusing him of not wanting to really write a program, just complain.

But I was right. Can you not read minds? Actually, all you need to be
able to do is read, both what the OP wrote and the great honking neon
arrow I pointed to his use of the word "memorize".

What real programmer ever thinks of learning a computer language as
memorizing?

You people really need to turn up the gain on your antennae.

John Thingstad

Apr 12, 2008, 7:29:09 PM
On Sat, 12 Apr 2008 23:50:09 +0200, Ken Tilton
<kenny...@optonline.net> wrote:

>
>
> Robert Maas, http://tinyurl.com/uh3t wrote:
>>> From: Ken Tilton <kennytil...@optonline.net>
>>> I can't believe how nice everyone is being to the OP.
>> Except for you. Your earlier followup was somewhat un-nice to the OP,
>> accusing him of not wanting to really write a program, just complain.
>
> But I was right. Can you not read minds? Actually, all you need to be
> able to do is read, both what the OP wrote and the great honking neon
> arrow I pointed to his use of the word "memorize".
>
> What real programmer ever thinks of learning a computer language as
> memorizing?
>
> You people really need to turn up the gain on your antennae.
>
> hth, kenny
>

The hyper-spec is useless if you don't know what to look for.
It takes time to learn the naming conventions and hyper-spec structure
well enough to be able to find the right thing.
I seem to remember spending hours writing code which would have taken
me 20 minutes in C++, simply finding the functions and understanding
the behaviour. The spec can also be difficult to understand in places.
Then also it doesn't explain how functions work together or what
other options you have.
I'd say you need to memorize a good number of function names. PCL does
go a long way toward filling that gap.
However I think he might have to read it a couple of times before all
the important finer details, like the difference between defvar and
defparameter or setf and setq, sink in. (Yes, they are in fact explained
in the book.) PCL is a 500-page book and introduces hundreds of functions.
I'd say anyone would find remembering all of it a bit daunting.

--------------
John Thingstad

Russell McManus

Apr 12, 2008, 10:47:14 PM
"John Thingstad" <jpt...@online.no> writes:

> I'd say you need to memorize a good number of function names. PCL
> does go a long way of filling that gap.

All Kenny is saying is that you don't memorize function names. You
struggle to write programs, until you know what the primitives are,
what idioms are useful, what garbage code looks like, and what good
code looks like. If you're good, you learn some of this by reading
other people's code.

It's like when you see a spelling mistake of an obscure word in your
native language. You may not be able to spell the word properly, but
you sure know that the way it's spelled on the page is wrong. You
don't memorize words, you encounter them in books, they get burned
into your brain, and then you couldn't forget them if you tried.

It's true that Common Lisp is a big language, so there are a lot of
things to learn. That's cool though, because once you know your way
around, the tool you need is always handy.

-russ

Duane Rettig

Apr 12, 2008, 11:09:17 PM
"John Thingstad" <jpt...@online.no> writes:

> On Sat, 12 Apr 2008 23:50:09 +0200, Ken Tilton
> <kenny...@optonline.net> wrote:
>
>>
>>
>> Robert Maas, http://tinyurl.com/uh3t wrote:
>>>> From: Ken Tilton <kennytil...@optonline.net>
>>>> I can't believe how nice everyone is being to the OP.
>>> Except for you. Your earlier followup was somewhat un-nice to the OP,
>>> accusing him of not wanting to really write a program, just complain.
>>
>> But I was right. Can you not read minds? Actually, all you need to
>> be able to do is read, both what the OP wrote and the great honking
>> neon arrow I pointed to his use of the word "memorize".
>>
>> What real programmer ever thinks of learning a computer language as
>> memorizing?
>>
>> You people really need to turn up the gain on your antennae.
>>
>> hth, kenny
>>
>
> The hyper-spec is useless if you don't know what to look for.

Unless you have the right tools:

http://www.franz.com/search/index.lhtml#ansispec

I try never to memorize what I can just look up.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Ken Tilton

Apr 13, 2008, 2:27:58 AM

Duane Rettig wrote:
> "John Thingstad" <jpt...@online.no> writes:
>
>
>>On Sat, 12 Apr 2008 23:50:09 +0200, Ken Tilton
>><kenny...@optonline.net> wrote:
>>
>>
>>>
>>>Robert Maas, http://tinyurl.com/uh3t wrote:
>>>
>>>>>From: Ken Tilton <kennytil...@optonline.net>
>>>>>I can't believe how nice everyone is being to the OP.
>>>>
>>>> Except for you. Your earlier followup was somewhat un-nice to the OP,
>>>>accusing him of not wanting to really write a program, just complain.
>>>
>>>But I was right. Can you not read minds? Actually, all you need to
>>>be able to do is read, both what the OP wrote and the great honking
>>>neon arrow I pointed to his use of the word "memorize".
>>>
>>>What real programmer ever thinks of learning a computer language as
>>>memorizing?
>>>
>>>You people really need to turn up the gain on your antennae.
>>>
>>>hth, kenny
>>>
>>
>>The hyper-spec is useless if you don't know what to look for.

God I love a non sequitur! What is this idiot Thinging? No wonder he is
in my killfile. Do I need killfile by association? Anyone quoting anyone
in my killfile is a goner? That would cover everyone, I think, free up
even more time for freecell.

Read my lips, Thing: no programmer has ever memorized anything.

>
>
> Unless you have the right tools:
>
> http://www.franz.com/search/index.lhtml#ansispec
>
> I try never to memorize what I can just look up.
>

Right. I never memorized C precedence, I dog-eared that one page in K&R
and/or threw in a pair of air-bag parens and skipped the lookup altogether.

peace. out. kzo.

John Thingstad

Apr 13, 2008, 4:02:47 AM
On Sun, 13 Apr 2008 04:47:14 +0200, Russell McManus
<russell...@yahoo.com> wrote:


Yes, but I think Kenny has gotten all worked up over a choice of word.
Perhaps "Fingerspitzengefühl" is better. That is, you need to work the
functions into your fingers.
There's obviously no purpose in memorizing an index line by line. We are
programmers, not actors.
But I am pretty sure that is not what he means.
Like I said, he looks for a fight. Sees something he disagrees with and
misses the rest of the text.
Thus he misses the point.

--------------
John Thingstad

Giorgos Keramidas

Apr 13, 2008, 11:52:51 AM
On Sun, 13 Apr 2008 02:27:58 -0400, Ken Tilton <kenny...@optonline.net> wrote:
>Duane Rettig wrote:
>>"John Thingstad" <jpt...@online.no> writes:
>>>The hyper-spec is useless if you don't know what to look for.
>>[...]

> God I love a non-sequitor! What is this idiot Thinging? No wonder he
> is in my killfile. Do I need killfile by association? Anyone quoting
> anyone in my killfile is a goner? That would cover everyone, I think,
> free up even more time for freecell.

Seeing that this has already been implemented in certain newsreaders,
it's probably a good idea. At least someone found it useful enough to
sit down and write the code to do it.

Triple yay for freecell! :-)

Robert Maas, http://tinyurl.com/uh3t

Apr 13, 2008, 2:52:58 PM
> From: "John Thingstad" <jpth...@online.no>

> The hyper-spec is useless if you don't know what to look for.

Agreed. That's why I prefer a manual that is organized according to
the data-type. Mostly I still use CLtL1. It doesn't include CLOS,
so if I want to read about CLOS I use Google to find a CLOS
tutorial somewhere.

Now if I can guess the name of the function I want, I have three
options:
- Index in back of CLtL1
- (describe 'nameOfFunction)
- HyperSpec

Unfortunately the info in (describe ...) in CMUCL is often quite
deficient/incomplete, so I really need to read CLtL1 or HyperSpec
to get the full story. But (describe ...) is so much more
convenient, so I often try it before switching to the HyperSpec.

> It takes time to know naming conventions and hyper-spec structure
> well enough to be able to find the right thing.

Agreed. Same remarks as above.

> I seem to remember spending hours writing code with [sic] would have


> taken me 20 minutes in C++ simply finding the functions and
> understanding the behaviour.

It would have taken only 5 minutes in Common Lisp if you could
quickly find the relevant useful function. Too bad nobody
showed any interest in my CookBook/Matrix:
<http://www.rawbw.com/~rem/HelloPlus/CookBook/Matrix.html>
I started out to accomplish two milestones:
-1- Document all the routines available in the standard C library
-2- Show how to do the same things, and better, in Common Lisp
but I got bogged down in -1- and without anybody showing any
interest I lost the energy to continue. But do you like the *idea*
of what it was meant to accomplish, how if it *were* completed you
*might* be able to find what you want really quickly, and
copy&paste the relevant code snippets directly from the document to
your in-edit source? After I finish my current project (my first
real development of ProxHash for practical benefit, hence first
real test of the technology), would you like to encourage me to
finish the CookBook/Matrix, or not?

> The spec can also be difficult to understand in places.

Yes, but those places are rare IMO.

> Then also it doen't explain how functions work togeter [sic] or
> what other otions [sic] you have.

Yes. That's why there are "CookBooks" for various programming
languages. Unfortunately most of the CookBooks show completed
programs for totally useless stuff such as the Tower of Hanoi or
Eight Queens. More useful would be snippets of how to do really
basic tasks, such as parse an integer from a string in a totally
safe way suitable for ServerSide Web applications via CGI:
<http://www.rawbw.com/~rem/HelloPlus/CookBook/h4s.html>
or how to read lines from a file and process them sequentially,
gracefully exiting the loop after the last line before EOF has been
processed, or how to break a string into "words" separated by some
delimiter such as whitespace or comma-with-optional-whitespace.
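For instance, the safe integer parse described there can be sketched
around PARSE-INTEGER (this is a sketch of the idea, not the code on
that page; the function name is invented):

```lisp
(defun safe-parse-integer (str)
  "Return the integer in STR, or NIL if STR is not entirely an integer.
Never signals an error, so it is suitable for untrusted CGI input."
  (and (stringp str)
       ;; PARSE-INTEGER signals an error on junk; IGNORE-ERRORS
       ;; turns that error into a NIL result instead.
       (ignore-errors (parse-integer str))))
```

So (safe-parse-integer "42") yields 42, while "42abc", "", or a
non-string argument all yield NIL instead of crashing the script.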

> I'd say you need to memorize a good number of function names.

I'd say you don't have to memorize even one. You just look at the
listing of datatypes, and try to think which would be suitable for
your purpose, and click on the link for details of how to do
various things with that datatype, and browse that listing of
details to find the service you need, or something close, and
copy&paste the code snippet, and adapt it to your needs, and test
it in the REP loop until you achieve the effect you wanted. Where
does memorizing happen here??? You do have to memorize how to use a
mouse to click the link and sweep the sample code, and where the
scroll bar on your browser is located, and it helps to know that
Copy and Paste are in the Edit menu, nothing else, right?

> PCL does go a long way of filling that gap.
> However I think he might have to read it a couple of times before
> the all important finer details like the difference between defvar
> and defparameter or setf and setq sink in.

My advice: Don't bother with defvar etc. at all when you're first
writing and testing one line of code at a time in the REP loop.
Ignore those annoying warnings about undeclared variables. When
those warnings get too annoying, make sure you know the difference
between globals you intended and globals that happened during
line-at-a-time testing that go away when you build the code into
functions. For *only* the deliberate globals, make them all defvar,
at the top of your set-up script, and explicitly setq their initial
values from your set-up script. Later when you're more comfortable
with Common Lisp and cleaning up your code, you can revisit all
those defvars to see which ought to really be defconstants or
defparameters. You don't have to memorize anything. Just sit
looking at your variables, with your **comments** as to what each
of them is supposed to be used for, and in another part of your
visual scene the section in the book that explains the difference
between them, and right there while everything is in front of you
and you're not distracted by developing new code, decide for each
defvar whether it is fine as-is or should really be changed to
defconstant or defparameter. You never need to memorize that info.
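When you do get to that cleanup pass, the operative difference is easy
to see right at the R-E-P loop (variable names invented):

```lisp
;; DEFVAR initializes only if the variable is not already bound,
;; so re-loading a file does not clobber a value you set interactively.
(defvar *cache-size* 100)
(defvar *cache-size* 999)      ; ignored: *cache-size* is already bound
;; *cache-size* is still 100

;; DEFPARAMETER always re-assigns, so re-loading picks up the new value.
(defparameter *debug-level* 1)
(defparameter *debug-level* 2)
;; *debug-level* is now 2
```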

As for setq and setf, always use setq for assigning values to
symbols used as variables. Use setf only when you are reading about
how the way to modify something is via setf. For example, in the
chapter on hashtables, it says (in CLtL1) that you use gethash to
look for a key in a hash table, and it says you can make new
entries in a hash table by using setf with gethash, and by looking
at the examples you can guess that you say
(setf (gethash key table) newvalue)
So just copy&paste that to your source code at that moment, modify
it to have the key and table and new value you want to use, and
nothing has to be *memorized*!!! You can even forget about setf
until the next time you learn that the way to change something in
an array or whatever is by using setf.
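That copied-and-adapted template in full (the key and table names here
are made up):

```lisp
;; EQUAL test so string keys compare by contents, not identity.
(setq table (make-hash-table :test #'equal))
;; Make a new entry by using SETF with GETHASH as the place:
(setf (gethash "apples" table) 3)
(gethash "apples" table)     ; => 3, T     (value and found-flag)
(gethash "pears" table)      ; => NIL, NIL (no such key)
;; The rest of the SETF family works on that place too, e.g. INCF:
(incf (gethash "apples" table))
(gethash "apples" table)     ; => 4, T
```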

> PCL is 500 page book and introduces 100's of functions. I'd say
> anyone would find remembering all of it a bit daunting.

Anybody who would try to memorize all that is either a fool, or a
liar trying to talk his way through a job interview that requires
experience with Common Lisp which the applicant doesn't have but is
pretending to have. Better to write some useful utility and show
your interviewer the source listing and be prepared to answer any
questions about how your code works to prove you really wrote that
code (or at least understand how it works if you stole it). For
example, here's a utility I wrote:
- read-lines-of-file
;Read each line of file into one string, and pack all the strings into
; one array, except if keeplist is true then don't convert to array.
- load-whole-file-to-string
;Read the entire contents of a text file into one large string.
- ;Inline read one s-expression from a file (the file presumably contains
; just this one huge s-expression)
- ;Inline read all s-expressions from a file, return list of them.
- load-file-by-method
;Given name of file, and method of loading, just loads that file in
; and returns the result.
;method = :BIGREAD | :ALLREADS | :BIGSTR | :ALLLINES
load-file-by-method can then be called from an auto-loading
mechanism that works from a table that simply lists each file and
corresponding mode of loading and what to do with the result, such
as setting a global to that value.
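A minimal sketch of what such a read-lines-of-file might look like
(this is a reconstruction from the comment above, not the poster's
actual code):

```lisp
(defun read-lines-of-file (filename &optional keeplist)
  "Read each line of FILENAME into one string, and pack all the strings
into one array, except if KEEPLIST is true then keep the list as-is."
  (with-open-file (stream filename :direction :input)
    (let ((lines (loop for line = (read-line stream nil nil)
                       while line            ; NIL at EOF ends the loop
                       collect line)))
      (if keeplist lines (coerce lines 'vector)))))
```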

So long as we have computers, and the InterNet, and electricity to
run them all, it will make more sense to look up details like which
function finds the index of a sub-string within a longer string:
<http://www.rawbw.com/~rem/HelloPlus/CookBook/Matrix.html#StrInt>
  "Find substring in string: Given strings needle,haystack, find first
  location where needle exactly matches a substring of haystack"
rather than memorize every such bit of useful trivia.
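In Common Lisp that cookbook entry maps onto the standard SEARCH
function, which returns the starting index of the first match, or NIL:

```lisp
;; h-a-y-s-t-a-c-k-sp-w-i-t-h-sp-a-sp -> "needle" starts at index 16.
(search "needle" "haystack with a needle in it")  ; => 16
(search "nail"   "haystack with a needle in it")  ; => NIL
```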

And if we ever do *not* have computers or electricity, you won't
need this information anyway, so stop memorizing already!

Griff

Apr 13, 2008, 11:30:22 PM
> 2) Is the Lisp way of programming truly worthwhile to my overall skill
> and mindset as a programmer?

Yes.

> More directly, is it worth sinking the
> hours into for this purpose?

Yes.

> If you were in my shoes, as a new


> programmer in 2008 with few preferences or barriers with regard to

> what to learn, would you learn Lisp? If so, how would you attack it?

Start by learning Scheme. If you start with Common Lisp you will just
end up obsessing over the Common Lisp Object System.

1) Read _The Scheme Programming Language 3rd edition_. Do as many of
the problems as you can, skip whatever you don't get. Read it a second
time, but do all of the problems this time.
2) Read _The Little Schemer_.
3) By now you will have seen the majority of new concepts that Lisp
provides.
4) Learn Common Lisp.

Mikael Jansson

Apr 14, 2008, 5:00:17 AM
On Apr 14, 5:30 am, Griff <gret...@gmail.com> wrote:
> [snip]

>
> Start by learning Scheme. If you start with Common Lisp you will just
> end up obsessing over the Common Lisp Object System.
>
> 1) Read _The Scheme Programming Language 3rd edition_. Do as many of
> the problems as you can, skip whatever you don't get. Read it a second
> time, but do all of the problems this time.
> 2) Read _The Little Schemer_.
> 3) By now you will have seen the majority of new concepts that Lisp
> provides.
> 4) Learn Common Lisp.

... and unlearn names like "lst" ...

Robert Uhl

Apr 14, 2008, 2:24:03 PM
Ken Tilton <kenny...@optonline.net> writes:
>
> What real programmer ever thinks of learning a computer language as
> memorizing?

I think maybe syntax makes it feel less like memorisation to some
folks. Instead of learning a bunch of functions, they're learning a
bunch of syntax--and that seems easier.

--
Robert Uhl <http://public.xdi.org/=ruhl>
I intend to smoke a good cigar to the glory of God before I go to bed
to-night. --Rev. Charles Spurgeon

Kaz Kylheku

Apr 15, 2008, 2:13:06 AM
On Apr 11, 7:09 pm, nefi...@gmail.com wrote:
> My concern, however, as I proceed through the book, is the increasing
> number of things it seems to require me to keep in my brain. It has me
> worried that the pure effort of memorization will be staggering, as
> the built-in functions and macros seem quite unintuitive to me. setf
> or getf or setq or eq or eql or defvar or defparameter or funcall or
> apply or return or return-from or let or let* or...

Some of the software projects that people work on dwarf the complexity
of programming languages.

Today I performed a three-way merge on a codebase consisting of 25,000
source files, taking up around half a gigabyte.

> 1) Is there a pretty steep "memorization curve" for the "standard
> library" of Common Lisp?

A professional developer works with lots of tools, all of which have
some kind of memorization curve.

If something isn't in the language, it will be in some common library,
or perhaps in a common code generation tool (or macro library, in the
case of Lisp). If it's not in a common library or macro, you will whip
it together yourself. And other programmers will themselves whip
together something similar, but different and incompatible.

You can't reduce your memorization curve by eliminating something
essential in one place. The complexity shows up elsewhere. If
eliminating one unit of complexity over here causes ten units to crop
up elsewhere, you are losing.

As a rule, pushing complexity into programming language reduces
complexity by reducing the number of wheels that are reinvented.

In a large C project, you might find M linked list libraries and N
character string libraries. I'd rather learn one list manipulation
library containing 50 functions, than 10 libraries containing 10
functions.

> (i.e., are my first impressions correct?)

You're only a newbie once (hopefully; alas, unfortunately, there do
exist permanent newbies). So one day you will not even remember your
first impression. It's not a question of whether or not it is correct,
but rather that it is not something of value that will stick with you.

> so, what tools are out there to aid in overcoming that? (SLIME is
> great at helping once you already know what function/macro/whatever is
> right for the job, but until then...)

The symbols in the Common Lisp language are organized into subsystems.
So for instance everything having to do with the lexical analysis of
the printed representation of Lisp objects (and thus Lisp source code)
is part of the reader, which is covered in Chapter 23 of the Common
Lisp HyperSpec. You don't have to memorize everything about the
reader; you just have to develop an intuition for what types of
problems are solved in a way that somehow involves the reader, and
where to find the documentation for that part of Lisp.

Similarly, if you are working with sequences (lists or vectors) of
items, there is a sequences library. It's good to know it, but you
don't have to remember everything. If some problem requires the
extraction of data from a sequence, or some other such manipulation,
you can glance through the Sequences Dictionary (17.3) to see whether
anything rings a bell.

The more years you spend as a programmer, the more you learn ways to
avoid memorizing reams of stuff. You need to have the information
summarized in your head, in a kind of structure of hierarchical
detail. There are two somewhat competing goals: 1) avoid spending too
much time searching for something and 2) do not get caught in some
``local maximum'' whereby you think you have found the best solution
but are overlooking something.

To the newbie, it looks as if total memorization must be the answer to
satisfy these goals at the same time. How do you know that you aren't
overlooking something if you don't have everything in your head, and
do not recursively traverse every shred of documentation?

What happens is that you end up internalizing architectural patterns.

Here is an analogy. All of the world's airports are different. But
after a bit of traveling, you can find your way around in them quite
easily.

> 2) Is the Lisp way of programming truly worthwhile to my overall skill

> and mindset as a programmer? More directly, is it worth sinking the
> hours into for this purpose? If you were in my shoes, as a new


> programmer in 2008 with few preferences or barriers with regard to
> what to learn, would you learn Lisp? If so, how would you attack it?

If I were a new programmer in 2008, I have no clue what that would be
like, sorry. I was an ``old'' programmer in 2001, and the answer was
yes, to plunge in the Lisp thing.

Scott Burson

Apr 15, 2008, 1:07:47 PM
On Apr 14, 11:13 pm, Kaz Kylheku <kkylh...@gmail.com> wrote:
> A professional developer works with lots of tools, all of which have
> some kind of memorization curve.
>
> If something isn't in the language, it will be in some common library,
>
> As a rule, pushing complexity into programming language reduces
> complexity by reducing the number of wheels that are reinvented.
>
> What happens is that you end up internalizing architectural patterns.

All excellent points. I have never understood those who complain
about the size of the Common Lisp libraries. (That is, I've never
understood complaints about size per se. Complaints that the same
functionality could be provided by a more elegant basis set of
primitives have some truth to them, I think, in some cases.)

-- Scott

Pascal Costanza

Apr 15, 2008, 3:20:09 PM

No, not really, and I believe (!) that Scheme currently actually proves
otherwise. Yes, the core language of Scheme (roughly the subset defined
in R3RS/R4RS) is somewhat cleaner and more elegant than CL, but that
didn't prevent full-sized Scheme implementations from becoming complex
either. I would actually argue that R6RS is more complex and bloated
than CL. So a clean and elegant core language doesn't seem to help.

To the best of my knowledge, nobody has ever succeeded in growing a
complex, big system from a simple and orthogonal core language. I am
convinced that this is a pipe dream that cannot be achieved - the
requirements at the macro level are simply different from those at the
micro level, and the belief that everything can look more or less the
same at every level is a myth, IMHO.

Smalltalk seems to be a counter example to a certain degree: The
language remained essentially the same since the early 80's, but
Smalltalkers seem to be successful at writing large and complex systems
with a relatively simple core language. Alas, not everyone is happy with
the lack of change over there either...

Pascal

--
1st European Lisp Symposium (ELS'08)
http://prog.vub.ac.be/~pcostanza/els08/

My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/

Pertti Kellomäki

Apr 16, 2008, 2:50:04 AM
Pascal Costanza wrote:
> To the best of my knowledge nobody ever succeeded to grow a complex big
> system from a simple and orthogonal core language.

That is basically what Guy Steele is trying to do with Fortress.

> I am convinced that
> this is a pipe dream that cannot be achieved - the requirements at the
> macro level are simply different than at the micro level, and the belief
> that everything can look more or less the same at every level is a myth,
> IMHO.

One thing that seems to be different is that libraries can provide
syntax as well as semantics, so things need not look the same at
different levels. So who knows, maybe the Age of Aquarius will
finally be upon us ;-)
--
Pertti

Ken Tilton

Apr 16, 2008, 8:56:05 AM

Pertti Kellomäki wrote:
> Pascal Costanza wrote:
>
>> To the best of my knowledge nobody ever succeeded to grow a complex
>> big system from a simple and orthogonal core language.
>
>
> That is basically what Guy Steele is trying to do with Fortress.

The poster boy for failed languages? The Darth Vader of the Lisp Way?

Constraints programming was linear dataflow gone mad, Scheme was
minimalism mistaken for a design objective, and Java was a productivity
dead-end through excessive safeguarding (but he still crows about the
productivity gain made before hitting the ceiling -- I can see him in a
museum for suits of armor thinking "great idea!").

Actually Cells might be great for Fortress since they automatically lay
bare dependency information needed for parallelizing code automatically,
but he'd probably just cock it up again.

kt

Scott Burson

Apr 16, 2008, 4:15:38 PM
On Apr 16, 5:56 am, Ken Tilton <kennytil...@optonline.net> wrote:
> Pertti Kellomäki wrote:
> > Pascal Costanza wrote:
>
> >> To the best of my knowledge nobody ever succeeded to grow a complex
> >> big system from a simple and orthogonal core language.
>
> > That is basically what Guy Steele is trying to do with Fortress.
>
> The poster boy for failed languages? The Darth Vader of the Lisp Way?

And the author of CLtL and CLtL2.

> Constraints programming was linear dataflow gone mad, Scheme was
> minimalism mistaken for a design objective

Scheme played a key role in expanding the computer science community's
understanding of programming language design and implementation. The
language itself was intended primarily as a teaching language, a job
it has filled very well.

If Scheme hadn't existed, Common Lisp would very likely have lacked
lexical closures.

Okay, the constraints thing didn't work out that well.

> and Java was a productivity
> dead-end through excessive safeguarding

Huh. I'd rather be working in Java than the C++ I have to use in my
day job.

-- Scott

Griff

Apr 16, 2008, 6:23:00 PM
> No, not really, and I believe (!) that Scheme currently actually proves
> otherwise. Yes, the core language of Scheme (roughly the subset defined
> in R3RS/R4RS) is somewhat cleaner and more elegant than CL, but that
> didn't prevent full-sized Scheme implementations from becoming complex
> either. I would actually argue that R6RS is more complex and bloated
> than CL. So a clean and elegant core language doesn't seem to help.

Scheme's simplicity is essential to how it removes barriers to
learning.

I've never heard anyone claim that those features are going to take
you from "programming the small" to "programming in the large".

Ken Tilton

Apr 16, 2008, 6:36:48 PM

Scott Burson wrote:
> On Apr 16, 5:56 am, Ken Tilton <kennytil...@optonline.net> wrote:
>
>>Pertti Kellomäki wrote:
>>
>>>Pascal Costanza wrote:
>>
>>>>To the best of my knowledge nobody ever succeeded to grow a complex
>>>>big system from a simple and orthogonal core language.
>>
>>>That is basically what Guy Steele is trying to do with Fortress.
>>
>>The poster boy for failed languages? The Darth Vader of the Lisp Way?
>
>
> And the author of CLtL and CLtL2.

?? That just means he has no excuse.

>
>
>>Constraints programming was linear dataflow gone mad, Scheme was
>>minimalism mistaken for a design objective
>
>
> Scheme played a key role in expanding the computer science community's
> understanding of programming language design and implementation. The
> language itself was intended primarily as a teaching language, a job
> it has filled very well.

Tell the Schemers their language is just a research/teaching device. :)

>
> If Scheme hadn't existed, Common Lisp would very likely have lacked
> lexical closures.

Doing Scheme magically made them think up lexical closures? Nope.
CL cannot do lexical closures? Nope.
So Scheme helped CL? Nope.
Scheme fragmented the Lisp world? Yep.

>
> Okay, the constraints thing didn't work out that well.
>
>
>>and Java was a productivity
>>dead-end through excessive safeguarding
>
>
> Huh. I'd rather be working in Java than the C++ I have to use in my
> day job.

Was that supposed to be saying something good about Java? :) And are you
sure you could live without even a preprocessor?

Ken Tilton

Apr 16, 2008, 7:27:39 PM

Btw, Steele has two meta-problems. First, he always thinks you need a
new language to try a new idea. One of the first things I noticed about
Cells is that I did not have to write a compiler, editor, debugger, etc,
etc. I mean, there is a reason we like programmable programming
languages, yes?

Second, his knee-jerk instinct for all-x-all-the-time. His constraints
language used constraints for assignment to variables. They discovered
they should stop that because it was intractably slow. But why were they
even trying?

Hey, X is great! (fine)
Let's use it for /everything!/ (death. examples: prolog, smalltalk, F#)
First, we'll make a new language... (oh, go away)

Meanwhile Steele famously claims Java is halfway to Lisp. Perhaps he
meant starting from the stone axe?

kenny

Pascal Costanza

Apr 16, 2008, 8:12:38 PM
Griff wrote:
>> No, not really, and I believe (!) that Scheme currently actually proves
>> otherwise. Yes, the core language of Scheme (roughly the subset defined
>> in R3RS/R4RS) is somewhat cleaner and more elegant than CL, but that
>> didn't prevent full-sized Scheme implementations from becoming complex
>> either. I would actually argue that R6RS is more complex and bloated
>> than CL. So a clean and elegant core language doesn't seem to help.
>
> Scheme's simplicity is essential to how it removes barriers to
> learning.

True.

> I've never heard anyone claim that those features are going to take
> you from "programming the small" to "programming in the large".

Read the intro of the Scheme specification, for example
http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-3.html#node_chap_Temp_3

Pascal Costanza

Apr 16, 2008, 8:15:12 PM
Ken Tilton wrote:
>
>
> Scott Burson wrote:
>> On Apr 16, 5:56 am, Ken Tilton <kennytil...@optonline.net> wrote:
>>
>>> Pertti Kellomäki wrote:
>>>
>>>> Pascal Costanza wrote:
>>>
>>>>> To the best of my knowledge nobody ever succeeded to grow a complex
>>>>> big system from a simple and orthogonal core language.
>>>
>>>> That is basically what Guy Steele is trying to do with Fortress.
>>>
>>> The poster boy for failed languages? The Darth Vader of the Lisp Way?
>>
>>
>> And the author of CLtL and CLtL2.
>
> ?? That just means he has no excuse.
>
>>
>>
>>> Constraints programming was linear dataflow gone mad, Scheme was
>>> minimalism mistaken for a design objective
>>
>>
>> Scheme played a key role in expanding the computer science community's
>> understanding of programming language design and implementation. The
>> language itself was intended primarily as a teaching language, a job
>> it has filled very well.
>
> Tell the Schemers their language is just a research/teaching device. :)
>
>>
>> If Scheme hadn't existed, Common Lisp would very likely have lacked
>> lexical closures.
>
> Doing Scheme magically made them think up lexical closures? Nope.
> CL cannot do lexical closures? Nope.
> So Scheme helped CL? Nope.
> Scheme fragmented the Lisp world? Yep.

You seem to have skipped the history classes...

Scott Burson

Apr 16, 2008, 8:34:36 PM
On Apr 16, 3:36 pm, Ken Tilton <kennytil...@optonline.net> wrote:

> Scott Burson wrote:
> > If Scheme hadn't existed, Common Lisp would very likely have lacked
> > lexical closures.
>
> Doing Scheme magically made them think up lexical closures? Nope.

The existence of Scheme, and its demonstration of the relevant
implementation techniques, was key to persuading the nascent Common
Lisp community that lexical closures were the right thing.

In fact, there was a time when ZetaLisp -- the Lisp Machine dialect --
had only dynamic closures, a concept that has fortunately been
consigned to the dustbin of history. (It was a way to capture special
variable bindings and reinstate them later.)
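
For readers who haven't seen the distinction: here is a minimal sketch
in standard Common Lisp (which of course postdates ZetaLisp; MAKE-ADDER
and ADD-BASE are invented names for illustration). A lexical closure
captures a binding itself, while a special variable behaves like what a
"dynamic closure" captured and reinstated:

```lisp
;; Lexical closure: X is captured by the returned function itself,
;; independent of any bindings in effect when it is later called.
(defun make-adder (x)
  (lambda (y) (+ x y)))

;; Special variable: callers see whichever binding of *BASE* is
;; dynamically live at call time.
(defvar *base* 10)

(defun add-base (y)
  (+ *base* y))

(funcall (make-adder 1) 5)        ; => 6, always
(let ((*base* 100))
  (add-base 5))                   ; => 105, the dynamic binding wins
```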

Kent has also posted here (or somewhere) about how there was quite a
bit of resistance in some quarters of the nascent Common Lisp
community to lexical scoping. I don't know exactly what that
resistance consisted of -- MacLisp, for instance, had lexically scoped
local variables in compiled code, so it wasn't a completely foreign
concept -- but I think it had to include lexical closures.

(Kent, are you listening? You could help me out a little here :)

> Scheme fragmented the Lisp world? Yep.

Not even close. When Scheme came into being the Lisp world was
already fragmented. The very first Lisp Machines were still being
designed and/or built, to supplant MacLisp, around that time. The NIL
project was roughly contemporaneous. I think Interlisp was in use at
Stanford and Xerox PARC. Common Lisp wasn't even the proverbial gleam
in the eye at that point.

> > Huh. I'd rather be working in Java than the C++ I have to use in my
> > day job.
>
> Was that supposed to be saying something good about Java? :)

Sort of :) Seriously, it's saying that as languages designed for
mainstream programmers go, I prefer Java to C++.

> And are you
> sure you could live without even a preprocessor?

I don't think I would miss the preprocessor -- we don't use it much
anyway. But I expect I would rather work in Scala than Java, for the
lambda expressions if nothing else.

Anyway, back to the original point: you are not displaying the grasp
of the history of Lisp that would be required to support your
criticisms of someone like Guy Steele.

-- Scott

Ken Tilton

Apr 16, 2008, 10:09:41 PM

Scott Burson wrote:
> On Apr 16, 3:36 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>
>>Scott Burson wrote:
>>
>>>If Scheme hadn't existed, Common Lisp would very likely have lacked
>>>lexical closures.
>>
>>Doing Scheme magically made them think up lexical closures? Nope.
>
>
> The existence of Scheme, and its demonstration of the relevant
> implementation techniques, was key to persuading the nascent Common
> Lisp community that lexical closures were the right thing.

So if they had added lexical closures to a lisp it would not have been
persuasive? Did they agonize over the decision to start a new language
or add them to lisp? I doubt it, especially since a prime directive
seems to have been to get to a small core and shake off all the mud, get
a fresh start, do a lisp-1. That impetus is the same as the decision to
make a new language, so I guess in this case it is just bad design, not
starting a new language I would beat him (and Sussman) up for.

>
> In fact, there was a time when ZetaLisp -- the Lisp Machine dialect --
> had only dynamic closures, a concept that has fortunately been
> consigned to the dustbin of history. (It was a way to capture special
> variable bindings and reinstate them later.)
>
> Kent has also posted here (or somewhere) about how there was quite a
> bit of resistance in some quarters of the nascent Common Lisp
> community to lexical scoping.

I heard. Non sequitur.

> I don't know exactly what that
> resistance consisted of -- MacLisp, for instance, had lexically scoped
> local variables in compiled code, so it wasn't a completely foreign
> concept -- but I think it had to include lexical closures.
>
> (Kent, are you listening? You could help me out a little here :)

I would say the full range of design mistakes they wanted to make along
with the good one (lexical closures) justifies the new language.

>
>
>>Scheme fragmented the Lisp world? Yep.
>
>
> Not even close. When Scheme came into being the Lisp world was
> already fragmented.

No doubt. You are the one who read "was exclusively responsible for"
into my sentence. I was leaning towards "added to the fragmentation".

> The very first Lisp Machines were still being
> designed and/or built, to supplant MacLisp, around that time. The NIL
> project was roughly contemporaneous. I think Interlisp was in use at
> Stanford and Xerox PARC. Common Lisp wasn't even the proverbial gleam
> in the eye at that point.
>
>
>>>Huh. I'd rather be working in Java than the C++ I have to use in my
>>>day job.
>>
>>Was that supposed to be saying something good about Java? :)
>
>
> Sort of :) Seriously, it's saying that as languages designed for
> mainstream programmers go, I prefer Java to C++.

Your objective was to say something good about Java that would then
reflect favorably on Steele and all you do is keep repeating that you
prefer Java to C++.

I mean, my position was that Steele did not have a good track record and
you are trying to hold up Java as a counterexample without even saying
anything good about Java in an absolute sense. Hmmmm.

>
>
>>And are you
>>sure you could live without even a preprocessor?
>
>
> I don't think I would miss the preprocessor -- we don't use it much
> anyway.

Oh, my.

> But I expect I would rather work in Scala than Java, for the
> lambda expressions if nothing else.
>
> Anyway, back to the original point: you are not displaying the grasp
> of the history of Lisp that would be required to support your
> criticisms of someone like Guy Steele.

My criticisms of him are for designing CPL, the minimalist choice with
Scheme, and the strait-jacket thing with Java, a language that is
absolutely awful and wholly opposed to The Lisp Way no matter how much
you hate C++. Also for two fatal flaws: always wanting to make a new
language, and always falling for all-X-all-the-time.

Some history.

kenny

Griff

Apr 17, 2008, 7:55:54 PM
On Apr 16, 7:12 pm, Pascal Costanza <p...@p-cos.net> wrote:

> Griff wrote:
> > I've never heard anyone claim that those features are going to take
> > you from "programming the small" to "programming in the large".
>
> Read the intro of the Scheme specification, for example:
> http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-3.html#node_chap_Temp_3

Come on Pascal, you know that R6RS is an experiment to address
programming language features that people need "in the large".

When people talk about the features of Scheme, they are talking about
everything pre-R6RS.

From what I know about R6RS, you would be hard-pressed to find someone
who would argue that R6RS is ready to compete with CL when it comes
to "programming in the large".

Kent M Pitman

Apr 18, 2008, 12:40:18 AM
Scott Burson <FSet...@gmail.com> writes:

> Kent has also posted here (or somewhere) about how there was quite a
> bit of resistance in some quarters of the nascent Common Lisp
> community to lexical scoping.

Yes.

> I don't know exactly what that resistance consisted of

Fear of the unknown by people who didn't think they'd experienced it.
(In fact, most of those people had worked in languages with lexical
variables, they just hadn't seen it played out in Lisp.)
Lexical scoping first appeared in programming in Algol 60, which was
one of the big languages for teaching programming for many years
(until replaced by Pascal, then Scheme, then Java, ...?)

> -- MacLisp, for instance, had lexically scoped local variables
> in compiled code, so it wasn't a completely foreign concept
> -- but I think it had to include lexical closures.

Uh, we often called them that, but I often called it "compiling away
the names". It was kind of kludgey in places and certainly didn't
have lexical closures. You could sort of get downward closures,
but not upward ones.
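
In modern terms (sketched here in standard Common Lisp rather than
MacLisp; the function names are invented for illustration), a downward
funarg is a closure passed into a call and used only while its creating
frame is live, while an upward funarg escapes that frame:

```lisp
;; Downward funarg: the closure over THRESHOLD is called only while
;; COUNT-OVER's activation is still on the stack.
(defun count-over (threshold list)
  (count-if (lambda (x) (> x threshold)) list))

;; Upward funarg: the closure outlives MAKE-COUNTER's activation, so N
;; cannot simply live in a stack frame. Supporting this case is what
;; requires full lexical closures.
(defun make-counter ()
  (let ((n 0))
    (lambda () (incf n))))

(count-over 2 '(1 2 3 4))         ; => 2
(let ((c (make-counter)))
  (funcall c)
  (funcall c))                    ; => 2
```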

> (Kent, are you listening? You could help me out a little here :)
>
> > Scheme fragmented the Lisp world? Yep.
>
> Not even close. When Scheme came into being the Lisp world was
> already fragmented. The very first Lisp Machines were still being
> designed and/or built, to supplant MacLisp, around that time. The
> NIL project was roughly contemporaneous. I think Interlisp was in
> use at Stanford and Xerox PARC. Common Lisp wasn't even the
> proverbial gleam in the eye at that point.

CL started a couple years later. But not hugely much. And no one saw
Scheme as an alternative for programming for many years. Its early
use was just as a teaching tool. I don't think there were any
seriously deployable implementations for quite some time. In fact,
that's one reason we did T at Yale, which I think kicked off in 1982.
By that time, MIT had Scheme on some machine (the Bobcat?) but it
wasn't widely available. Yale was buying some other kind of machine
that had no viable computational option. The T Project was paid for
out of the "hardware budget," if I recall correctly: the savings in
hardware over their fallback choice was enough to pay for the entire
language design and development, provided we made them a T for
whatever machine they wanted to use.

> > > Huh. I'd rather be working in Java than the C++ I have to use in my
> > > day job.
> >
> > Was that supposed to be saying something good about Java? :)
>
> Sort of :) Seriously, it's saying that as languages designed for
> mainstream programmers go, I prefer Java to C++.

Try C#. It's quite good--much better than I remember Java being, though
I know Java has been improving in recent years too; my Java experience
isn't very recent.

It's not surprising that C# is doing cool things given the Lisp influence
that's been seeping into it from people like Patrick Dussud (who worked
on the TI Explorer, one of the branches of the Lisp Machine family of
machines and contributed to the design of CL and CLOS):
* http://blogs.msdn.com/patrick_dussud/archive/2006/11/21/how-it-all-started-aka-the-birth-of-the-clr.aspx
* http://www.international-lisp-conference.org/2005/speakers.html
and Jim Miller (long-time member of the Lisp and Scheme community):
* http://www.theserverside.net/tt/talks/videos/JimMiller/interview.tss?bandwidth=dsl
and perhaps others I don't know of.

C# is a static language, and very different from Lisp. That means it
loses something for me. But if you have to work in the static language
family, it's my hands-down favorite. It's really very thoughtfully
designed and seems to be continuing to improve in innovative and
thoughtful ways.

> > And are you
> > sure you could live without even a preprocessor?
>
> I don't think I would miss the preprocessor -- we don't use it much
> anyway. But I expect I would rather work in Scala than Java, for the
> lambda expressions if nothing else.

Btw, on C#, there's something very like lambda expressions going into C# 3.0.
http://www.developer.com/net/csharp/article.php/3598381



> Anyway, back to the original point: you are not displaying the grasp
> of the history of Lisp that would be required to support your
> criticisms of someone like Guy Steele.

This I would agree with for several reasons. First, I don't think
it's exactly fair to characterize either Java or Scheme as failed
languages, so let's dispose of that as silly right out of the box,
though I know Kenny likes being provocative, so I also won't fault him
for it. But second, he conspicuously omitted CL in the list of those
"failed" languages, probably because it would have weakened the point.
(If CL is not failed, then certainly neither are those other languages.)

But finally, beyond all that, there was something that Erik Naggum
once pointed out to me (in private email) that has always stuck with me
as a fascinating observation. He was talking about the value of the US
in the world scene and he was saying that the thing that makes it such
an interesting place commercially was the degree of forgiveness it has
for failure. I don't have the text here in front of me, but I recall taking
his meaning that in other countries, if you fail, you're done. But people
who are inventors don't always succeed on the first try. It often takes
having some wild ideas to get things right. And the US contains a lot of
infrastructure for allowing people to screw up and survive. It's easy to
write that off as wasteful or undisciplined, and many countries do. But if
it's fundamental to the invention process, then one has to tolerate it.
And the US does. And one might say when Kenny says that Steele is

"The poster boy for failed languages? The Darth Vader of the Lisp Way?"

he is missing the big picture that it's important to have idea people.
And Steele is such a person.

I went to a Fortress talk at MIT a few months back (which may be
online for all I know--it was at the Media Lab and probably sponsored
by the Boston Computer Society or some such thing, but I'm too lazy to
search). Anyway, he was remarking that one reason it has the syntax
it does is that he likes languages he makes to look distinctive, and
not just to be the same old thing. He seemed to be wanting to leave
it to someone else to redesign the same old thing. And while I'm sure
he wants to see Fortress succeed, the thrust of his talk seemed to be
that it was more important for research to be bold and try something new
than just to be timid and crank out one more of something that people
already have. I think he's done that pretty well.

Scott Burson

Apr 18, 2008, 1:42:29 AM
On Apr 16, 7:09 pm, Ken Tilton <kennytil...@optonline.net> wrote:
> Scott Burson wrote:
> > On Apr 16, 3:36 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>
> >>Scott Burson wrote:
>
> >>>If Scheme hadn't existed, Common Lisp would very likely have lacked
> >>>lexical closures.
>
> >>Doing Scheme magically made them think up lexical closures? Nope.
>
> > The existence of Scheme, and its demonstration of the relevant
> > implementation techniques, was key to persuading the nascent Common
> > Lisp community that lexical closures were the right thing.
>
> So if they had added lexical closures to a lisp it would not have been
> persuasive? Did they agonize over the decision to start a new language
> or add them to lisp? I doubt it, especially since a prime directive
> seems to have been to get to a small core and shake off all the mud, get
> a fresh start, do a lisp-1. That impetus is the same as the decision to
> make a new language, so I guess in this case it is just bad design, not
> starting a new language I would beat him (and Sussman) up for.

Scheme's original purpose was to show how a small but powerful
language could be built around the lambda calculus formalism. It
served that purpose exceedingly well.

> > Kent has also posted here (or somewhere) about how there was quite a
> > bit of resistance in some quarters of the nascent Common Lisp
> > community to lexical scoping.
>
> I heard. Non sequitor.

No, it's not a non sequitur. Scheme played a major role in overcoming
that resistance.

> >>Scheme fragmented the Lisp world? Yep.
>
> > Not even close. When Scheme came into being the Lisp world was
> > already fragmented.
>
> No doubt. You are the one who read "was exclusively responsible for"
> into my sentence. I was leaning towards "added to the fragmentation".

So what??? You seriously would prefer we were all still using Lisp
1.5?!?

No. Like any language, Lisp had to fragment to make real progress.
Scheme was a huge, massive step forward. Maybe you don't think it's
the ultimate answer... but again, so what?

> Your objective was to say something good about Java that would then
> reflect favorably on Steele and all you do is keep repeating that you
> prefer Java to C++.
>
> I mean, my position was that Steele did not have a good track record and
> you are trying to hold up Java as a counterexample without even saying
> anything good about Java in an absolute sense. Hmmmm.

First off, the originator of Java was not Guy Steele. The senior Sun
computer scientist who bears the greatest responsibility for Java is
James Gosling. Read some of the early history of Java; Steele is not
mentioned. Yes, I'm sure he helped a lot with the formalization of
the language specification, but that was well after the language had
been created and a lot of decisions had been made. I don't know, and
I doubt you know either, exactly what Steele thought of the language
then or thinks of it now. Yes, he wrote some of the books -- remind
me -- who is it who writes his paycheck?

And secondly, Java itself deserves more credit than you're giving it.
Java is the language that has brought garbage collection to the
programming masses. Not APL, not Smalltalk, not CL, not Modula-3, not
ML, and not Dylan. Java. My comments about C++ really weren't off
the track: if it weren't for Java, C++ would have no serious
competition. And that, I think we agree, would be a damn shame.

Am I crazy about Java? No. I've written some small stuff in it, and
I thought it was usable. I haven't written anything large in it, and
unless and until I do that, I don't think I have a basis to either
praise or damn it.

> My criticisms of him are for designing CPL, the minimalist choice with
> Scheme, and the strait-jacket thing with Java, a language that is
> absolutely awful and wholly opposed to The Lisp Way no matter how much
> you hate C++. Also for two fatal flaws: always wanting to make a new
> language, and always falling for all-X-all-the-time.

I don't think this is really about Guy Steele. I think this is really
about trying to pump up your ego by putting down someone who has done
far more for computer science than you could ever dream of doing.

-- Scott

Kaz Kylheku

Apr 18, 2008, 1:45:12 AM
On Apr 16, 1:15 pm, Scott Burson <FSet....@gmail.com> wrote:
> On Apr 16, 5:56 am, Ken Tilton <kennytil...@optonline.net> wrote:
>
> > Pertti Kellomäki wrote:
> > > Pascal Costanza wrote:
>
> > >> To the best of my knowledge nobody ever succeeded to grow a complex
> > >> big system from a simple and orthogonal core language.
>
> > > That is basically what Guy Steele is trying to do with Fortress.
>
> > The poster boy for failed languages? The Darth Vader of the Lisp Way?
>
> And the author of CLtL and CLtL2.

Don't forget that he had his fingers in Java, as a coauthor of the
first edition of the Java Language Specification.

Can you imagine what utter crap Java would have been without
Steele's involvement?

> If Scheme hadn't existed, Common Lisp would very likely have lacked
> lexical closures.

Then someone clearly needs to correct this:

``Lexical scoping was first introduced in Lisp 1.5 (via the FUNARG
device developed by Steve Russell, working under John McCarthy) and
added later into Algol 60 (also by Steve Russell), and has been picked
up in other languages since then.''

[ source: http://en.wikipedia.org/wiki/Scope_(programming) ]

Steve Russell is the same fellow who realized that EVAL could be a task
given to the computer, and the rest is history.

Pascal Costanza

Apr 18, 2008, 3:11:36 AM
Kaz Kylheku wrote:
>> If Scheme hadn't existed, Common Lisp would very likely have lacked
>> lexical closures.
>
> Then someone clearly needs to correct this:
>
> ``Lexical scoping was first introduced in Lisp 1.5 (via the FUNARG
> device developed by Steve Russell, working under John McCarthy) and
> added later into Algol 60 (also by Steve Russell), and has been picked
> up in other languages since then.''
>
> [ source: http://en.wikipedia.org/wiki/Scope_(programming) ]

Yes, that needs to be corrected. Lisp 1.5 had dynamic closures, not
lexical closures.

Kent M Pitman

Apr 18, 2008, 3:21:37 AM
Kaz Kylheku <kkyl...@gmail.com> writes:

> Can you imagine what utter crap Java would have been without
> Steele's involvement?

Ah, so you're saying it's GLS's fault that Dylan wasn't able to win
the day in the marketplace. ;)

> Then someone clearly needs to correct this:
>
> ``Lexical scoping was first introduced in Lisp 1.5 (via the FUNARG
> device developed by Steve Russell, working under John McCarthy) and
> added later into Algol 60 (also by Steve Russell), and has been picked
> up in other languages since then.''
>
> [ source: http://en.wikipedia.org/wiki/Scope_(programming) ]
>
> Steve Russel is the same fellow who realized that EVAL could be a task
> given to the computer, and the rest is history.

I think I said in another post on this same thread that it was Algol
60 where I thought lexical scoping originated. The Scheme report
traditionally has credited Algol 60 for many of its ideas. And I'm
certainly willing to believe it goes farther back, as explained here;
I have no reason to disbelieve this--it's all before my time. Well,
before I was programming anyway. It may be that the idea came earlier
and Algol 60 just did a better job of working out or describing the
details.


Pascal Costanza

Apr 18, 2008, 8:12:08 AM
Griff wrote:
> On Apr 16, 7:12 pm, Pascal Costanza <p...@p-cos.net> wrote:
>> Griff wrote:
>>> I've never heard anyone claim that those features are going to take
>>> you from "programming the small" to "programming in the large".
>> Read the intro of the Scheme specification, for example:
>> http://www.r6rs.org/final/html/r6rs/r6rs-Z-H-3.html#node_chap_Temp_3
>
> Come on Pascal, you know that R6RS is an experiment to address
> programming language features that people need "in the large".

I like the word "experiment". ;)

> When people talk about the features of Scheme, they are talking about
> everything pre-R6RS.

Of course. It's not clear yet that R6RS has "won." It's more likely that
the Scheme community will fragment into pre-R6RS and post-R6RS fans.

> From what I know about R6RS, you would be hard-pressed to find someone
> who would argue that R6RS is ready to complete with CL when it comes
> to "programming in the large".

I wouldn't be so sure...

Pascal Costanza

unread,
Apr 18, 2008, 8:13:28 AM4/18/08
to
Kent M Pitman wrote:
> Kaz Kylheku <kkyl...@gmail.com> writes:
>
>> Can you imagine what utter crap Java would have been without
>> Steele's involvement?
>
> Ah, so you're saying it's GLS's fault that Dylan wasn't able to win
> the day in the marketplace. ;)

To a certain extent, there is some truth in that statement, sadly.
Whenever a 'famous' person puts his/her name on a book cover (like the
Java Language Specification), people misunderstand that as an endorsement.

(No, I'm not saying that it's his fault...)

Zach Beane

unread,
Apr 18, 2008, 8:39:14 AM4/18/08
to
Pascal Costanza <p...@p-cos.net> writes:

> To a certain extent, there is some truth in that statement,
> sadly. Whenever a 'famous' person puts his/her name on a book cover
> (like the Java Language Specification), people misunderstand that as
> an endorsement.

That was certainly my intent on Practical Common Lisp! (Kenny's too, I
bet.)

Zach

Kent M Pitman

unread,
Apr 18, 2008, 10:16:08 AM4/18/08
to
Pascal Costanza <p...@p-cos.net> writes:

> Kaz Kylheku wrote:
> >> If Scheme hadn't existed, Common Lisp would very likely have lacked
> >> lexical closures.
> > Then someone clearly needs to correct this:
> > ``Lexical scoping was first introduced in Lisp 1.5 (via the FUNARG
> > device developed by Steve Russell, working under John McCarthy) and
> > added later into Algol 60 (also by Steve Russell), and has been picked
> > up in other languages since then.''
> > [ source: http://en.wikipedia.org/wiki/Scope_(programming) ]
>
> Yes, that needs to be corrected. Lisp 1.5 had dynamic closures, not
> lexical closures.

Oh, I see. I had assumed that people were meaning just that compiling
(lambda (x) (f x)) might not let F have access to the lambda's arguments
in Lisp 1.5 ... people sometimes call that "lexical scoping" in Maclisp.
Lisp 1.5 certainly didn't have lexical closures, so if that's what was
meant, I think you're right.

People sometimes talk about Maclisp like it had lexical scoping when
compiled just because when you compiled some functions, the effect was
that you couldn't get at their variables from outside of them... but
it was a pretty weak constraint. It had no closures.

So I usually assume the bar is pretty low on the nebulous term lexical
scoping and maybe some people are reading different things into it.
Your analysis of what the bug in Wikipedia is sounds right. It hadn't
occurred to me that people were confusing dynamic and lexical closures,
though, in fairness, I think some of the original implementors also had
some confusions in that regard. They were just exploring what an
implementation could do. They didn't have the roadmaps we have now.

But that probably argues for either being conservative or much more
detailed in Wikipedia just to not confuse people.
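The distinction Kent draws — a true lexical closure versus mere variable-hiding in compiled code — comes down to whether a returned function carries its defining environment with it. A minimal sketch (in Python rather than Lisp only so it stays short and self-contained; the function names are invented for illustration):

```python
def make_counter(start):
    # 'count' lives in the lexical environment of make_counter;
    # the returned function closes over that binding, so it survives
    # after make_counter returns -- the defining property of a
    # lexical closure, which Lisp 1.5 compiled code did not have.
    count = start
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

c1 = make_counter(0)
c2 = make_counter(100)
c1()
c1()
# Each closure carries its own captured environment:
print(c1())  # 3
print(c2())  # 101
```

Under dynamic scoping, by contrast, the free variable would be looked up in whatever bindings happen to be active at call time, not in the environment where the function was written down.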

Kent M Pitman

unread,
Apr 18, 2008, 10:17:39 AM4/18/08
to
Pascal Costanza <p...@p-cos.net> writes:

> Of course. It's not clear yet that R6RS has "won." It's more likely
> that the Scheme community will fragment into pre-R6RS and post-R6RS
> fans.

And in true Scheme community fashion, they will insist there should not be
two namespaces for the two different uses of the name.

Sorry, I couldn't resist. ;)

Pascal Costanza

unread,
Apr 18, 2008, 10:50:25 AM4/18/08
to

LOL

Ken Tilton

unread,
Apr 21, 2008, 10:14:49 PM4/21/08
to

Scott Burson wrote:
> On Apr 16, 7:09 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>
>>Scott Burson wrote:
>>
>>>On Apr 16, 3:36 pm, Ken Tilton <kennytil...@optonline.net> wrote:
>>
>>>>Scott Burson wrote:
>>
>>>>>If Scheme hadn't existed, Common Lisp would very likely have lacked
>>>>>lexical closures.
>>
>>>>Doing Scheme magically made them think up lexical closures? Nope.
>>
>>>The existence of Scheme, and its demonstration of the relevant
>>>implementation techniques, was key to persuading the nascent Common
>>>Lisp community that lexical closures were the right thing.
>>
>>So if they had added lexical closures to a lisp it would not have been
>>persuasive? Did they agonize over the decision to start a new language
>>or add them to lisp? I doubt it, especially since a prime directive
>>seems to have been to get to a small core and shake off all the mud, get
>>a fresh start, do a lisp-1. That impetus is the same as the decision to
>>make a new language, so I guess in this case it is just bad design, not
>>starting a new language I would beat him (and Sussman) up for.
>
>
> Scheme's original purpose was to show how a small but powerful
> language could be built around the lambda calculus formalism. It
> served that purpose exceedingly well.

Nonsense. Every Scheme user piled back in all the mud (incompatibly,
each picking different batches of soil and water (god how I love
argument by analogy!)) to get any work done (you know, the definition of
"power"). The Scheme minimalist experiment failed miserably. Every last
proponent counters with his big fat pile of Scheme libraries when
challenged on the impotence of minimalism, never seeing the
non-counteringlyness.

>>>Kent has also posted here (or somewhere) about how there was quite a
>>>bit of resistance in some quarters of the nascent Common Lisp
>>>community to lexical scoping.
>>
>>I heard. Non sequitur.
>
>
> No, it's not a non sequitur. Scheme played a major role in overcoming
> that resistance.

You might be right. Marketing is a bitch. Taking one of the Lisps and
adding lexical scope might not have worked -- maybe it /had/ to be a
whole frickin language to get the point across. Maybe I should create a
/language/ called Cells to help you morons catch a clue.

>>>>Scheme fragmented the Lisp world? Yep.
>>
>>>Not even close. When Scheme came into being the Lisp world was
>>>already fragmented.
>>
>>No doubt. You are the one who read "was exclusively responsible for"
>>into my sentence. I was leaning towards "added to the fragmentation".
>
>
> So what??? You seriously would prefer we were all still using Lisp
> 1.5?!?

Wow, you are good at this! Ron Garret, meet your master!

> Am I crazy about Java? No. I've written some small stuff in it, and
> I thought it was usable. I haven't written anything large in it, and
> unless and until I do that, I don't think I have a basis to either
> praise or damn it.

Oh, my.

>>My criticisms of him are for designing CPL, the minimalist choice with
>>Scheme, and the strait-jacket thing with Java, a language that is
>>absolutely awful and wholly opposed to The Lisp Way no matter how much
>>you hate C++. Also for two fatal flaws: always wanting to make a new
>>language, and always falling for all-X-all-the-time.
>
>
> I don't think this is really about Guy Steele. I think this is really
> about trying to pump up your ego by putting down someone who has done
> far more for computer science than you could ever dream of doing.

You don't like Cells?

Meanwhile, pump up my ego?! I already need a frickin Wide Load sign to
walk across a room, why would I want it any bigger?

Cells is bigger than Scheme, Lisp, and the lambda calculus. Cells and
its kazillion prior and concurrent arts addresses something that
matters, a profound omission by the first language designers. Everything
from Sketchpad to Visicalc to Flapjax tries to fix this omission by
endowing higher-order development tools with cause and effect and by
focusing on /change/ in data as paramount. ECLM had presenter after
presenter talking about tools manifesting this capability, most of them
talking about Cells without knowing about it and with Kristoffer Kvello
making the connection explicit.

It's fun flaming, but the important thing is better software through
Lisp and through mechanisms that imbue development tools with the power
of cause and effect and give developers a window on change: Cells.

Ken Tilton

unread,
Apr 21, 2008, 10:21:23 PM4/21/08
to

> Pascal Costanza <p...@p-cos.net> writes:
>
>
>>To a certain extent, there is some truth in that statement,
>>sadly. Whenever a 'famous' person puts his/her name on a book cover
>>(like the Java Language Specification), people misunderstand that as
>>an endorsement.
>

What I saw was a defense of Java as being halfway to Lisp and the bit
about him having a chart trying to close all the possible holes where
behavior was unspecified. True Lispers laugh in the face of unspecified.
Hell, we pay extra for it.

Pascal Costanza

unread,
Apr 22, 2008, 3:06:09 AM4/22/08
to
Ken Tilton wrote:
> Cells is bigger than Scheme, Lisp, and the lambda calculus. Cells and
> its kazillion prior and concurrent arts addresses something that
> matters, a profound omission by the first language designers. Everything
> from Sketchpad to Visicalc to Flapjax tries to fix this omission by
> endowing higher-order development tools with cause and effect and by
> focusing on /change/ in data as paramount. ECLM had presenter after
> presenter talking about tools manifesting this capability, most of them
> talking about Cells without knowing about it and with Kristoffer Kvello
> making the connection explicit.

No, they weren't talking about Cells, they were talking about the
underlying concepts and principles, which are shared among a couple of
approaches, including, amongst others, Cells.

> It's fun flaming, but the important thing is better software through
> Lisp and through mechanisms that imbue development tools with the power
> of cause and effect and give developers a window on change: Cells.

You're taking yourself too seriously.

The idea behind Cells is relatively simple, the details are hard to get
right, though (like with most ideas). Cells in particular isn't well
documented, it's hard to figure out what the 'definitive' version at any
moment in time is (it's basically a moving target). On top of that, it
suffers from idiosyncrasies of its developers (aka "I like
abbreviations") and seems to be badly implemented.

Yes, the underlying idea is good and probably the right way to go for a
certain restricted set of applications. Beyond, why anybody should pay
attention to Cells in particular is not clear (and you never seem to
give good reasons).

Frank "frgo" a.k.a DG1SBG

unread,
Apr 24, 2008, 3:42:47 PM4/24/08
to
Pascal Costanza <p...@p-cos.net> writes:

Now, while I do like your stuff like closer-mop etc, this seems not
to be the best of your posts, Pascal.

So, what's wrong with abbrevs? You don't have to use them - I can
tell: I actually have used Cells for some real customer shipment.

> Yes, the underlying idea is good and probably the right way to go for
> a certain restricted set of applications. Beyond, why anybody should
> pay attention to Cells in particular is not clear (and you never seem
> to give good reasons).

Actually it seems as if you had not been at the ECLM last weekend.

So, specifically to your arguments:

1. Certain restricted set of applications

What application does not handle data?
What application does not have to make sure any change is done
consistently?
If your application does have to ensure data consistency and has to
handle data then your application qualifies for Cells use.

2. Why should anybody pay attention to Cells?

Well, let's tackle this in two layers:

a) The underlying principle

Reading your post I reckon it's clear that data consistency and data
dependency modeling is the core of every application (at least I know
of). Oh, btw, Excel is the most popular application used to manage
data. Why? Because it's so damn easy to formulate dependencies!

In fact, the first thing I do when designing an application (nowadays,
after 2 years of Cells under my belt) is to write down the dependency
rules between (what I call) Information Objects (in the user's language,
not in a normalized form). How do I do that? Well, that's easy:

(defmd information-object ()
  slot-1
  slot-2

  :slot-1 (c-in "1")
  :slot-2 (c? (conc$ (^slot-1) "-2")))

Brief, clear. Working code. Even accepted by customers as a
spec. That's what I need as productivity for real deadlines to be met.

One of the core productivity levers for us is also the fact that we
don't need to change the class hierarchy to change the behavior of
information objects. A change in behavior often is done by defining
new dependency rules. Assign them at the right point in time and:
Changed behavior for existing instances of objects without mop-ery
being involved.

b) Cells in particular

Well, Cells is readily available. There's certainly still a wish list
for it, but documentation is not on it. I learned Cells by code
reading and tracing... and I learned a lot more than just Cells by
reading the Cells code.

Cells has been around for many years now and has been
refined and extended on an ongoing basis. I actually enjoy great
support from Kenny. Admittedly I am more of a User type of Lisp
programmer - take what's there and build real applications with it.

So, I dare say: Cells is robust, performs well, runs stable even in
long-running applications (uptime of the TIBCO Bus Monitor now more
than three months - restarted due to Christmas downtime of customer's
servers) - and: It's fun to use, fun to see an app "just run" after
kicking off the Cells machinery.

There is a learning curve associated with it. As with Lisp. I very
much liked Marc Battyani's comparison of FPGA being the counterpart of
Lisp in the analogy he gave - Cells would be the third in that round
in terms of thinking a concept to the end and making it work. - Just
like ContextL, btw.

Long story short:

* Use Cells - it will open up a new way of thinking
* Get over Kenny's way of posting here in c.l.l.
* Have fun

No, I didn't want to speak up - it just jumped out of my fingers and I
hadn't posted for a very long time.

Ok - I am ready to get flamed ;-)

Best,
Frank

--
Frank Goenninger

frgo(at)mac(dot)com

"Don't ask me! I haven't been reading comp.lang.lisp long enough to
really know ..."

Pascal Costanza

unread,
Apr 24, 2008, 4:29:27 PM4/24/08
to
Frank "frgo" a.k.a DG1SBG wrote:
> Now, while I do like your stuff like closer-mop etc, this seems not
> to be the best of your posts, Pascal.

I can't always win. ;-)

> So, what's wrong with abbrevs? You don't have to use them - I can
> tell: I actually have used Cells for some real customer shipment.

It's not clear to me how these two statements are related. If this works
for you, then it's good for you. But in general, abbreviations are just
gratuitous and don't buy you a lot, but make the code unnecessarily
harder to read for other people.

>> Yes, the underlying idea is good and probably the right way to go for
>> a certain restricted set of applications. Beyond, why anybody should
>> pay attention to Cells in particular is not clear (and you never seem
>> to give good reasons).
>
> Actually it seems as if you had not been at the ECLM last weekend.

Yes, I have been there, but haven't been very impressed by Kenny's
presentation.

> So, specifically to your arguments:
>
> 1. Certain restricted set of applications
>
> What application does not handle data?
> What application does not have to make sure any change is done
> consistently?
> If your application does have to ensure data consistency and has to
> handle data then your application qualifies for Cells use.

Your description is too broad. There are other ways to ensure other
forms of data consistency (like transaction handling), so you have to be
more specific in your description. Cells provides dataflow abstractions,
and not all applications need dataflow abstractions. GUI applications
seem to indeed benefit from dataflow abstractions, but not all
applications are GUI applications.

> 2. Why should anybody pay attention to Cells?
>
> Well, let's tackle this in two layers:
>
> a) The underlying principle

The underlying principle has been implemented elsewhere as well, so is
not a particularly strong reason for using Cells.

> b) Cells in particular
>
> Well, Cells is readily available. There's certainly still a wish list
>> for it, but documentation is not on it. I learned Cells by code
> reading and tracing... and I learned a lot more than just Cells by
> reading the Cells code.

I haven't studied the Cells code in detail, but just browsed through it,
and saw some horrible things in there. Based on what I have seen, I
wouldn't trust it. But I am not a simple application programmer, so what
do I know... ;)

Ken Tilton

unread,
Apr 24, 2008, 6:32:58 PM4/24/08
to

Pascal Costanza wrote:
> Frank "frgo" a.k.a DG1SBG wrote:
>> So, what's wrong with abbrevs? You don't have to use them - I can
>> tell: I actually have used Cells for some real customer shipment.
>
>
> It's not clear to me how these two statements are related. If this works
> for you, then it's good for you. But in general, abbreviations are just
> gratuitous and don't buy you a lot, but make the code unnecessarily
> harder to read for other people.

No, the general rule is always abbreviate the things one uses a lot.
That's how we got tick for quote and octothorpe for function and then
the macro lambda so we did not even need that.

But, hey, use c-formula if you want to build up your LOC.

>
>>> Yes, the underlying idea is good and probably the right way to go for
>>> a certain restricted set of applications. Beyond, why anybody should
>>> pay attention to Cells in particular is not clear (and you never seem
>>> to give good reasons).

If you have not programmed with the paradigm you cannot imagine a
problem being solved in that paradigm and so might not be a good judge
of the range of applications better solved in that paradigm.

Garbage collecting languages offer automatic memory management, dataflow
hacks offer automatic state management and automatic change management.
Who can live without that?
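What "automatic state management" amounts to can be sketched in a few lines — a toy, not Cells itself, with all class and method names invented for illustration (in Python for self-containment): input cells hold values, formula cells declare how they depend on other cells, and a change to an input propagates without the application writing any update code.

```python
class Cell:
    """Toy dataflow cell: holds a plain value, or a formula over other cells."""
    def __init__(self, value=None, formula=None):
        self.formula = formula          # None for input cells
        self.value = value
        self.dependents = []            # cells whose formulas read this one
        if formula is not None:
            self.recompute()

    def recompute(self):
        self.value = self.formula(self)

    def read(self, other):
        # Reading another cell inside a formula records the dependency.
        if self not in other.dependents:
            other.dependents.append(self)
        return other.value

    def set(self, value):
        # Setting an input cell propagates the change automatically.
        self.value = value
        for dep in self.dependents:
            dep.recompute()
            for d2 in dep.dependents:   # naive two-level propagation; a real
                d2.recompute()          # engine walks the full dependency graph

# Usage: celsius drives fahrenheit, spreadsheet-style.
celsius = Cell(value=0)
fahrenheit = Cell(formula=lambda self: self.read(celsius) * 9 / 5 + 32)
celsius.set(100)
print(fahrenheit.value)  # 212.0
```

The hard parts a real engine must solve — consistent propagation order, avoiding glitches when one change reaches a cell along two paths, observers with side effects — are exactly what the toy omits.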

>>
>>
>> Actually it seems as if you had not been at the ECLM last weekend.
>
>
> Yes, I have been there, but haven't been very impressed by Kenny's
> presentation.

Yeah, me neither. Working on doing what I can to make up for that. Stay
tuned.

>
>> So, specifically to your arguments:
>>
>> 1. Certain restricted set of applications
>>
>> What application does not handle data?
>> What application does not have to make sure any change is done
>> consistently?
>> If your application does have to ensure data consistency and has to
>> handle data then your application qualifies for Cells use.
>
>
> Your description is too broad.

But that is the precise width covered by Cells, so I would say it is
exactly broad.

> There are other ways to ensure other
> forms of data consistency (like transaction handling), so you have to be
> more specific in your description. Cells provides dataflow abstractions,
> and not all applications need dataflow abstractions.

You are still guessing at the range and hoping it is small.

> GUI applications
> seem to indeed benefit from dataflow abstractions, but not all
> applications are GUI applications.

RoboCells for example is not a GUI application, nor was Thomas Burdick's
compiler verification thingy (pardon the technical jargon). How were
Cells applied to those problems? Don't know? Exactly.

>
>> 2. Why should anybody pay attention to Cells?
>>
>> Well, let's tackle this in two layers:
>>
>> a) The underlying principle
>
>
> The underlying principle has been implemented elsewhere as well, so is
> not a particularly strong reason for using Cells.

Yes, there have been general treatments such as COSI which was
proprietary and things like Screamer which are hard to program (tho
House Designer seems to have done very well with it) and many
applications offer the same thing as part of the package, such as CAD
apps. Over in Python land Philip Eby has taken the Cells concept as a
starting point and done Trellis, a general solution. Let's not forget
all the Lispniks who liked Cells and immediately dashed off to write
their own, so no, there is no reason to use Cells, but one should use
something like it. Aside from Trellis and I guess KR... well, we just
heard how KB has a Cellsy feel or vice versa...

Look, resistance is futile. OpenLaszlo, Adobe Adam, functional reactive
programming -- like Lisp the principle of automatic state management is
popping up more and more, and the more that happens the more people will
start to see we had it wrong in the beginning, we missed that what we
were doing was modelling and that modelling is about change and
interdependence of state. Languages should have had Cells (the class)
from the beginning. Now they do, Cells has been ported to C++, Java,
Python, and now RDF. You are welcome. Now learn the paradigm, it's fun.


>
>> b) Cells in particular
>>
>> Well, Cells is readily available. There's certainly still a wish list
>> for it, but documentation is not on it. I learned Cells by code
>> reading and tracing... and I learned a lot more than just Cells by
>> reading the Cells code.
>
>
> I haven't studied the Cells code in detail, but just browsed through it,
> and saw some horrible things in there.

I was busy! I am always busy, I write applications, Cells is just there
to make that easier. It would be fun to do a code cleanup, change all
those dolists to loop, but unlike academics I am not out there trying to
impress people with my code, I am trying to help them be more productive.

> Based on what I have seen, I
> wouldn't trust it.

No, it works. I just fixed that problem with the :owning attribute. I
just checked the regression test, all is well. I am no TDD maniac, but
with something as tricky and rich as Cells I think the regression test
has to be the driver. Innocent till proven guilty and all that.

kt

Ken Tilton

unread,
Apr 24, 2008, 8:29:11 PM4/24/08
to

Pascal Costanza wrote:
>>> Yes, the underlying idea is good and probably the right way to go for
>>> a certain restricted set of applications. Beyond, why anybody should
>>> pay attention to Cells in particular is not clear (and you never seem
>>> to give good reasons).

Let's not forget that you learned Lisp only to prove that it was not all
that great. The good news is that you did recognize a good thing when
you saw it, the bad news is that you did not learn that your instincts
for software... well, you might have learned always to try things before
telling the world about them.

Now you run around like some big Lisp expert -- maybe someday you can be
a dataflow expert if you decide to prove me wrong instead of asserting
it without foundation.

Here's a pop quiz.

#1. Two applications I have worked with over the years. One was an FX
system handling millions of trades a year. Users needed rapid on-line
reporting from this database broken down by a dozen criteria: currency,
time, salesperson, customer, sales office, country, whatever, but not
all on the same report of course, each broke things down different ways.
We figured out the millions of trades could be aggregated five ways and
then with alternate segmented keys on those five files (providing needed
sort orders and further aggregation by subtotaling on breaks during ISAM
browsing) provide all the inquiries they needed. As each trade was
written to the database, it was also fed to an aggregating function that
picked out the right keys for each of the five files and updated the
correct bucket with the trade amount.

How would Cells have changed that?
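The write-time fan-out Kenny describes — each trade updating the right bucket in several keyed summaries as it is recorded — might look roughly like this sketch (Python, with invented field names and only two of the breakdowns; the real system used five keyed ISAM files):

```python
from collections import defaultdict

# One summary table per aggregation breakdown; a real system kept
# five keyed files, here reduced to two hypothetical breakdowns.
by_currency_day = defaultdict(float)
by_salesperson = defaultdict(float)

def record_trade(trade):
    """Write-time fan-out: update every aggregate the trade belongs to."""
    by_currency_day[(trade["currency"], trade["date"])] += trade["amount"]
    by_salesperson[trade["salesperson"]] += trade["amount"]

record_trade({"currency": "EUR", "date": "2008-04-24",
              "salesperson": "jones", "amount": 1_000_000.0})
record_trade({"currency": "EUR", "date": "2008-04-24",
              "salesperson": "smith", "amount": 250_000.0})
print(by_currency_day[("EUR", "2008-04-24")])  # 1250000.0
```

This sketches the system as described, not any Cells-based answer to the quiz question.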

#2. Another application cleaned data by comparing it against all the
other data it had seen, and to some extent went with most-popular is
most-correct. But a design objective was that changes over time in what
was correct should be adjusted for, in the event that, say, something's
name changed. Yep. Similar to #1. So is the question.

#3. How do Cells make driving a framework (things like Celtk, Cells-Gtk,
and OpenAIR) a breeze? Corrollary: why did Celtk cover all of Tk before
LTk? What fundamental principle enunciated in my talk answers this question?

Feel free to work with others on these if you haven't a clue what I am
talking about.

Ken Tilton

unread,
Apr 24, 2008, 11:35:05 PM4/24/08
to

Ken Tilton wrote:
> Feel free to work with others on these if you haven't a clue what I am
> talking about.

In return, feel free to tell me about any application programming you
may have done, I will tell you if I spot any ways Cells might have been
used.

kt

Pascal Costanza

unread,
Apr 25, 2008, 2:17:48 AM4/25/08
to
Ken Tilton wrote:
>
>
> Pascal Costanza wrote:
>>>> Yes, the underlying idea is good and probably the right way to go for
>>>> a certain restricted set of applications. Beyond, why anybody should
>>>> pay attention to Cells in particular is not clear (and you never seem
>>>> to give good reasons).
>
> Let's not forget that you learned Lisp only to prove that it was not all
> that great. The good news is that you did recognize a good thing when
> you saw it, the bad news is that you did not learn that your instincts
> for software... well, you might have learned always to try things before
> telling the world about them.

It's annoying that you're always getting personal in such discussions,
instead of trying to stay objective and just talk about the subject matter.

> Now you run around like some big Lisp expert -- maybe someday you can be
> a dataflow expert if you decide to prove me wrong instead of asserting
> it without foundation.

I didn't claim to be a dataflow expert. You may want to read my posting
again in order to understand it.

>> Feel free to work with others on these if you haven't a clue what I
>> am talking about.
>

> In return, feel free to tell me about any application programming you
> may have done, I will tell you if I spot any ways Cells might have
> been used.

I have already implemented applications where I can tell in retrospect
that dataflow abstractions would have been really useful.

I wouldn't use Cells, though, and I have already said why not.

Frank "frgo" a.k.a DG1SBG

unread,
Apr 25, 2008, 7:10:16 AM4/25/08
to
Pascal Costanza <p...@p-cos.net> writes:

> Ken Tilton wrote:
>>
>>
>> Pascal Costanza wrote:
>>>>> Yes, the underlying idea is good and probably the right way to go for
>>>>> a certain restricted set of applications. Beyond, why anybody should
>>>>> pay attention to Cells in particular is not clear (and you never seem
>>>>> to give good reasons).
>>
>> Let's not forget that you learned Lisp only to prove that it was not
>> all that great. The good news is that you did recognize a good thing
>> when you saw it, the bad news is that you did not learn that your
>> instincts for software... well, you might have learned always to try
>> things before telling the world about them.
>
> It's annoying that you're always getting personal in such discussions,
> instead of trying to stay objective and just talk about the subject
> matter.
>
>> Now you run around like some big Lisp expert -- maybe someday you
>> can be a dataflow expert if you decide to prove me wrong instead of
>> asserting it without foundation.
>
> I didn't claim to be a dataflow expert. You may want to read my
> posting again in order to understand it.

Oh my God, Kenny, we really should sit down and talk about marketing
some day ;-) ...

>
>>> Feel free to work with others on these if you haven't a clue what I
>>> am talking about.
>>
>> In return, feel free to tell me about any application programming you
>> may have done, I will tell you if I spot any ways Cells might have
>> been used.
>
> I have already implemented applications where I can tell in retrospect
> that dataflow abstractions would have been really useful.
>
> I wouldn't use Cells, though, and I have already said why not.

... which is sad because I'd really like to see some code cleanup
suggestions with all your ContextL and Closer to MOP experience.

Well, I tried.

Thanks for being open and posting your thoughts.

Now on to fund raising for the next Cells app: Master Data Management
and Manufacturing Execution System for the Semiconductor Industry. Due
in 2 years. So, be patient.

Cheers
Frank

--

Frank Goenninger

http://www.goenninger.net

Ken Tilton

unread,
Apr 25, 2008, 8:36:44 AM4/25/08
to

Pascal Costanza wrote:

>> Let's not forget that you learned Lisp only to prove that it was not
>> all that great. The good news is that you did recognize a good thing
>> when you saw it, the bad news is that you did not learn that your
>> instincts for software... well, you might have learned always to try
>> things before telling the world about them.
>
>
> It's annoying that you're always getting personal in such discussions,
> instead of trying to stay objective and just talk about the subject matter.
>
>> Now you run around like some big Lisp expert -- maybe someday you can
>> be a dataflow expert if you decide to prove me wrong instead of
>> asserting it without foundation.
>
>
> I didn't claim to be a dataflow expert. You may want to read my posting
> again in order to understand it.
>
>>> Feel free to work with others on these if you haven't a clue what I
>>> am talking about.
>>
>>
>> In return, feel free to tell me about any application programming you
>> may have done, I will tell you if I spot any ways Cells might have
>> been used.
>
>
> I have already implemented applications where I can tell in retrospect
> that dataflow abstractions would have been really useful.
>
> I wouldn't use Cells, though, and I have already said why not.

You said a little but not enough, which is how your credentials (not
you) got challenged.

You said the code was so bad you would not trust it (fine) but did not
demonstrate how it was so bad (not fine). Without backing up a technical
opinion with hard evidence you (not me) implicitly inject into the
discussion your credentials as an algorithms expert.

Had you said "this bit of code sucks for this reason" we would be
talking about that code instead of your credentials, and we could start
doing that now if you care to back up your characterization of Cells.

I would seriously love to see which algorithms you find horrific.

I might agree, or I might find something to be documented -- some of
those huge comments in there are reminders to myself of quite hard
problems I had to solve to achieve the data integrity documented (yep)
in cells-manifesto.txt. Maybe you found some undocumented bizarritudes.

But that code is quite solid and close to Deeply Correct. I know because
it has not changed much in years and handles new requirements
effortlessly, generally by /taking out/ code that was enforcing
disciplines which turned out not to be necessary (and in Lisp we hold
inalienable the right to shoot off our own toes).

When code stops changing and starts handling new requirements unchanged,
it is done and I take it away from the monkeys (who would like a word
with you).

So bring it on. The lurkers here would much rather see an intense Lisp
algorithms discussion. Maybe we can drag computed-class into the brawl
as well.

hth, kenny

Pascal Costanza

unread,
Apr 25, 2008, 8:53:35 AM4/25/08
to

Nonsense. It's much simpler than that: Either I'm wrong, or I'm right.
Everyone can check this for themselves, because the code is publicly
available.

Ken Tilton

unread,
Apr 25, 2008, 10:59:32 AM4/25/08
to

Pascal Costanza wrote:


> Ken Tilton wrote:
>> You said the code was so bad you would not trust it (fine) but did not
>> demonstrate how it was so bad (not fine). Without backing up a
>> technical opinion with hard evidence you (not me) implicitly inject
>> into the discussion your credentials as an algorithms expert.
>
>
> Nonsense. It's much simpler than that: Either I'm wrong, or I'm right.
> Everyone can check this for themselves, because the code is publicly
> available.

Wow. C'mon, cut and paste, let's see it. You wrote:

"I haven't studied the Cells code in detail, but just browsed through
it, and saw some horrible things in there. Based on what I have seen, I
wouldn't trust it."

So horrible "you would not trust it" (that is pretty bad) even though it
has been driving enterprise applications since 2001, and that was before
Data Integrity <ta-dum>.

That means there is an unmistakeable smoking gun disaster in the code
readily identifiable without "studying in detail". Should be three
minutes work to cut and paste that.

Back it up or back down, dude -- well, your response above already makes
that choice implicitly, you can just let it stand.

:)

Pascal Costanza

unread,
Apr 25, 2008, 12:25:10 PM4/25/08
to
Ken Tilton wrote:
>
>
> Pascal Costanza wrote:
>> Ken Tilton wrote:
>>> You said the code was so bad you would not trust it (fine) but did
>>> not demonstrate how it was so bad (not fine). Without backing up a
>>> technical opinion with hard evidence you (not me) implicitly inject
>>> into the discussion your credentials as an algorithms expert.
>>
>>
>> Nonsense. It's much simpler than that: Either I'm wrong, or I'm right.
>> Everyone can check this for themselves, because the code is publicly
>> available.
>
> Wow. C'mon, cut and paste, let's see it. You wrote:
>
> "I haven't studied the Cells code in detail, but just browsed through
> it, and saw some horrible things in there. Based on what I have seen, I
> wouldn't trust it."
>
> So horrible "you would not trust it" (that is pretty bad) even though it
> has been driving enterprise applications since 2001, and that was before
> Data Integrity <ta-dum>.
>
> That means there is an unmistakeable smoking gun disaster in the code
> readily identifiable without "studying in detail". Should be three
> minutes work to cut and paste that.
>
> Back it up or back down, dude -- well, your response above already makes
> that choice implicitly

No, I just have more important things to do.

EL

unread,
Apr 25, 2008, 3:23:54 PM4/25/08
to
nef...@gmail.com schrieb:

> I am new to programming, and have really only been doing it a couple
> years, with experience in Ruby, C#, C, PHP, a bit of Java, and more
> recently Perl.

So you are not really /new/ to programming.

> And after making it through a few chapters of Practical Common Lisp,
> it's been cool to see that Lisp is really so modern in a lot of ways,
> supporting lots of the features I love from other languages.

I guess there will be features that you don't find in other languages
and you will like them even more... maybe

> My concern, however, as I proceed through the book, is the increasing
> number of things it seems to require me to keep in my brain.

I can't say more than the people before me: Don't memorize, practice!
How did you learn all the other languages before - did you memorize them?
If so, then yes, I believe you are new to programming.

> It has me
> worried that the pure effort of memorization will be staggering, as
> the built-in functions and macros seem quite unintuitive to me. setf
> or getf or setq or eq or eql or defvar or defparameter or funcall or
> apply or return or return-from or let or let* or...

You are probably too used to popular infix languages. Now you have an
unpopular prefix language, which works differently.
What you could do is learn Tcl first (www.tcl.tk). It is also an
unpopular prefix language, but not quite as strict as Lisp. It also
allows you to create your own control structures by means of commands,
though not quite as powerful as Lisp's. It has great libraries and Tk
(as an aside), and is extensible. But most usefully: it is darn easy to
learn and use, with 18 core commands and 12 syntax rules. This should
be possible for you to memorize ;).
I have quite a bit of experience with Tcl. Now, Lisp (or actually Scheme)
is nothing new to me - I feel almost at home with the prefix notation
and with the necessity of quoting, delayed evaluation, etc. However, the
Lisp languages add a few goodies that you don't have in Tcl, like
macros, lexical scoping and closures. That makes it interesting for me
to have a look.
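For a concrete taste of two of those goodies - lexical closures and
macros, neither of which plain Tcl offers - here is a minimal Common
Lisp sketch (MAKE-COUNTER and WHILE are illustrative names, not
standard operators):

```lisp
;; A closure: the returned LAMBDA captures N lexically, so every
;; counter made by MAKE-COUNTER carries its own private state.
(defun make-counter (&optional (n 0))
  (lambda () (incf n)))

;; A macro: WHILE is not built into Common Lisp, but adding it is a
;; one-liner that expands into the standard LOOP/RETURN machinery.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))
```

Calling (funcall (make-counter)) repeatedly through the same closure
yields 1, 2, 3, ..., and (while (< i 10) ...) reads exactly like a
built-in control structure.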

BTW, I also do know enough popular infix languages to be sick of them.


--
Eckhard

Rob Warnock

unread,
Apr 25, 2008, 9:05:22 PM4/25/08
to
EL <eckhar...@gmx.de> wrote:
+---------------
| What you could do is learn Tcl first (www.tcl.tk). ... darn easy to
| learn and use, with 18 core commands and 12 syntax rules. This should
| be possible for you to memorize ;).
| I have quite a bit of experience with Tcl. Now, Lisp (or actually Scheme)
| is nothing new to me - I feel almost at home with the prefix notation
| and with the necessity of quoting, delayed evaluation, etc. However, the
| Lisp languages add a few goodies that you don't have in Tcl, like
| macros, lexical scoping and closures.
+---------------

Don't forget lots of real datatypes other than just strings & numbers[1],
better speed even when "just" interpreting[2], and compilers to native
machine code that give "close enough to C" speed[3]!!
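The "close enough to C" point can be seen directly in any
native-compiling CL (SBCL, CMUCL, etc.): add type declarations and the
compiler emits tight machine code you can inspect with DISASSEMBLE.
A small illustration (DOT is a made-up example function):

```lisp
;; With the arrays declared as specialized double-float vectors and
;; SPEED favored, a native CL compiler open-codes the float arithmetic
;; instead of going through generic dispatch.
(defun dot (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3)))
  (loop for x across a
        for y across b
        sum (* x y) of-type double-float))
```

Evaluating (disassemble 'dot) afterwards shows the generated machine
code - no interpreter, no bytecode.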


-Rob

[1,2] Which is what moved me to convert my user-mode hardware debugging
tools from Tcl to Scheme circa 1992. I could finally run my PIO
'scope loops fast enough to see without all the lights off! ;-}

[3] What moved me from Scheme to Common Lisp circa 2000. The other
    motivations were: (a) I got tired of re-inventing for myself in
    Scheme all the little functions that are already in Common Lisp;
    (b) I got tired of the various Schemes that I was using changing
    out from under me and breaking stuff. For better or worse, ANSI CL
    is "ANSI CL forever".

-----
Rob Warnock <rp...@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

EL

unread,
Apr 26, 2008, 5:05:36 AM4/26/08
to
Rob Warnock schrieb:

> Don't forget lots of real datatypes other than just strings & numbers[1],
> better speed even when "just" interpreting[2], and compilers to native
> machine code that give "close enough to C" speed[3]!!

Which is to say that Tcl's speed has considerably improved since 1992,
and it's also bytecode-compiled now. Datatypes have been expanded -
everything still /appears/ to be a string, but internally values are
handled as different, more appropriate types.
Notably, Tcl is still evolving, BUT in a way that doesn't break old
code.

Uhm, I should stop being off topic now... sorry.


--
Eckhard

Ken Tilton

unread,
Apr 26, 2008, 6:00:15 AM4/26/08
to

EL wrote:
> Rob Warnock schrieb:
>
>> Don't forget lots of real datatypes other than just strings & numbers[1],
>> better speed even when "just" interpreting[2], and compilers to native
>> machine code that give "close enough to C" speed[3]!!
>
>
> Which is to say that Tcl's speed has considerably improved since 1992,
> and it's also bytecode-compiled now.

And you can just use it as a subroutine library from Common Lisp via
CFFI: speed and CL most of the time, Tcl as needed.
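That might look something like the sketch below - not anything Ken
posted, just one way to drive a Tcl interpreter through CFFI. The
library name/version ("libtcl8.6.so") and the wrapper TCL-EXPR are
assumptions; Tcl_CreateInterp, Tcl_Eval and Tcl_GetStringResult are
the real entry points in Tcl's C API.

```lisp
;; Load the Tcl shared library (path/version is an assumption; adjust
;; for your system) and bind the three C functions we need.
(cffi:define-foreign-library libtcl
  (:unix "libtcl8.6.so")
  (t (:default "libtcl")))
(cffi:use-foreign-library libtcl)

(cffi:defcfun ("Tcl_CreateInterp" tcl-create-interp) :pointer)
(cffi:defcfun ("Tcl_Eval" tcl-eval) :int
  (interp :pointer) (script :string))
(cffi:defcfun ("Tcl_GetStringResult" tcl-get-string-result) :string
  (interp :pointer))

;; Hypothetical convenience wrapper: evaluate a Tcl script and return
;; the interpreter's string result to Lisp.
(defun tcl-expr (script)
  (let ((interp (tcl-create-interp)))
    (tcl-eval interp script)
    (tcl-get-string-result interp)))
```

With that in place, (tcl-expr "expr {6 * 7}") hands the arithmetic to
Tcl and returns its result as a Lisp string.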

kt

0 new messages