
Lisp-2 or Lisp-1


Pascal Bourguignon

May 15, 2003, 9:44:32 PM

I was dubious about Lisp-2 at first, but finally I've noticed that in
human languages, there are a lot of instances that show that we're
wired for a Lisp-2 rather than a Lisp-1:

The fly flies. (FLIES FLY)
The flies fly. (FLY FLIES)


;-)
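
In Common Lisp terms, a minimal sketch (the symbols are invented for
the joke, and by CL convention the variable would wear earmuffs):

(defun flies (bug)               ; FLIES in the function namespace
  (format t "~a takes off~%" bug))
(defvar flies '(wally bally))    ; FLIES in the value namespace
(flies flies)                    ; head position reads the function cell,
                                 ; argument position reads the value cell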

--
__Pascal_Bourguignon__ http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.

Franz Kafka

May 15, 2003, 10:10:44 PM

Scheme has a few things that are nicer than Common Lisp:
continuations (useful for implementing language features),
cleaner semantics (easier to write functional code, and
more beautiful too).


Common Lisp has a few things that are nicer than Scheme:
CLOS (built-in object system),
defmacro/defstruct (missing from standard Scheme
for way too long),
more tools for building large systems (more built-in functions),
2 namespaces (no chance a variable name will conflict with
a function name).

Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
that combines all the benefits of Scheme and Common Lisp.

Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
introspection/reflection, code-walking)

&& make a kick-ass Lisp system.

Cheers for Paul Graham's Arc.


Kent M Pitman

May 15, 2003, 10:49:35 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
>
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
> introspection/reflection, code-walking)
>
> && make a kick-ass Lisp system.
>
> Cheers for Paul Graham's Arc.

First, the names Lisp1 and Lisp2 are from the paper RPG and I
wrote for X3J13 [1] when considering the namespace issue. The debate was
originally over Scheme-style or CL-style, and I felt I was losing the
debate because Scheme has too much "affection" going for it. I wanted
it to be clear that the only part of Scheme we were talking about was
the namespace part, so I concocted a family of language dialects
called Lisp1 which have a single namespace (and which include Scheme),
and another family that has dual namespaces (and which presumably
included CL). The idea was that people should be able to conceive of
Scheme with 2 namespaces and a CL with 1 namespace, and so by talking
about Lisp1 and Lisp2 rather than Scheme and CL, we were being neutral
as to what other language features the two languages under discussion
had. This brought balance back to the discussion. Anyway, so the
digit counts namespaces, and it was an error in the paper not to call
CL a Lisp4, since there are also tagbody/go and block/return
namespaces. You are, therefore, incorrect in assuming that Lisp3 has
no designation. To the extent that Lisp1 and Lisp2 have any meaning,
Lisp3 means the family of languages with 3 namespaces.
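
To make the Lisp4 observation concrete, a sketch (names invented) in
which a single symbol, X, lives in all four namespaces at once:

(defun x (x)               ; function namespace and variable namespace
  (block x                 ; block/return namespace
    (tagbody
     x                     ; tagbody/go namespace
       (when (plusp x)
         (decf x)
         (go x))
       (return-from x x))))

;; (x 3) => 0, with no ambiguity anywhere.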

But ok, so we know what you meant. Here are my thoughts for what they
are worth:

Everyone wants something different in a Lisp. So the more people
you involve, the more what you make will look like a big pile of things...
kind of like CL already does. :)

Nothing keeps you from making your own Lisp dialect, just as nothing
keeps you from starting your own political party [2], except the fact that
it's a lot of work and initially quite lonely. It can either succeed
or fail spectacularly.

Personally, I think there are enough dialects about and that it's better
to just use one and extend it. If you don't like CL or Scheme, then try
ISLISP or, as you say, work with PG on ARC before you just start your own
completely from scratch. It will not only be less lonely, but it will
also mean that resourcewise you are adding to the energies of others rather
than dividing things up still further.

[1] "Technical Issues of Separation in Function Cells and Value Cells"
http://www.nhplace.com/kent/Papers/Technical-Issues.html

[2] Parenthetically Speaking with Kent M. Pitman:
"More Than Just Words: Lambda The Ultimate Political Party"
http://www.nhplace.com/kent/PS/Lambda.html

Jim Bender

May 15, 2003, 11:12:08 PM
At last I understand what all those yellow-colored "Dummy's Guide to
[whatever]" books are really about. The only thing I am puzzled about is
whether this is from the "Dummy's Guide to Lisp and Scheme" or from the
"Dummy's Guide to Linguistics" ;)

"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
news:87of23l...@thalassa.informatimago.com...

Pascal Costanza

May 16, 2003, 5:24:26 AM
Franz Kafka wrote:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
>
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + Common Lisp + (support for
> introspection/reflection, code-walking)
>
> && make a kick-ass Lisp system.

Because it's extremely hard to do. In general, language features don't
combine very well.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Dorai Sitaram

May 16, 2003, 8:27:52 AM
In article <87of23l...@thalassa.informatimago.com>,

Pascal Bourguignon <sp...@thalassa.informatimago.com> wrote:
>
>I was dubious about Lisp-2 at first, but finally I've noticed that in
>human languages, there are a lot of instances that show that we're
>wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

Human-language verbs and nouns correspond to global
function names and global variables. Common Lisp
practice doesn't allow you to share names between
these. For Common Lisp, the right column above should
properly be

(FLIES *FLY*)
(FLY *FLIES*)

or

(FLIES +FLY+)
(FLY +FLIES+)

Pascal Bourguignon

May 16, 2003, 9:28:30 AM
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

Please, let me introduce you to Wally. Wally is a fly. The fly flies.
Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).

(defvar *Wally* (make-instance 'fly))
(defvar *Bally* (make-instance 'fly))
(let ((fly *Wally*)
      (flies (list *Wally* *Bally*)))
  (flies fly)
  (fly flies))

Bruce Lewis

May 16, 2003, 9:34:47 AM
Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

Here are a couple of hints if you want to start a CL/Scheme flame war:

1) Timing. There was just an extended discussion on lisp1 vs lisp2 in
c.l.l, so weariness of the topic will likely cause the flame war to
die out sooner than you intended.

2) Timing. Flame wars that start on Fridays tend to die out over the
weekend. Do your incendiary crosspost early in the week for best
results.

Paul Wallich

May 16, 2003, 9:42:15 AM

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)
>
>
> ;-)

More realistically, we're wired for a Lisp-N, where N is the number of
part-of-speech roles that can be used in a single sentence. For example
(excuse the bad attempt at urban slang): "Fly flies fly fly." It's the
parser technology combined with a desire for elegant syntax that makes 2
a reasonable limit.

paul

Franz Kafka

May 16, 2003, 9:42:50 AM

"Jim Bender" <j...@benderweb.net> wrote in message
news:cGYwa.1730$dE.536...@newssvr12.news.prodigy.com...

> At last I understand what all those yellow-colored "Dummy's Guide to
> [whatever]" books are really about. The only thing I am puzzled about
> is whether this is from the "Dummy's Guide to Lisp and Scheme"

David T's Common Lisp: A Gentile Introduction to Symbolic Computation avail.
for free on line.

But, don't expect it to teach you all of Lisp in 21 days. It'll prob.
take three to six months.

Then read Sonya E. Keene's Object-Oriented Programming in Common Lisp: A
Programmer's Guide to CLOS. Not avail. online.

After that you can glance at Peter Norvig's Lisp bible, Paradigms of
Artificial Intelligence Programming: Case Studies in Common Lisp, and
understand some of it.

In about 6 months, or less if you are a fast reader, you should understand
Common Lisp.

Reading The Schemer's Guide, from schemers.com, should teach
people who are in high school or in gifted programs how to use
Scheme.

That will take a month or two to get through iff you really want to
understand it.

& The Little Lisper/The Little Schemer and The Seasoned Schemer
should help people who are not Comp. Sci. majors understand Lisp/Scheme.


Kent M Pitman

May 16, 2003, 10:11:00 AM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> David T's Common Lisp: A Gentile Introduction to Symbolic Computation avail.
> for free on line.

Repeat after me...

* Common Lisp, unlike _some_ languages, accepts and fosters _multiple_
programming philosophies.

* You know those s-expressions in Common Lisp? They're _secular_ expressions.

* Touretzky's book is a "Gentle" introduction, not a "Gentile" introduction.

Thank you for your attention.

Joe Marshall

May 16, 2003, 10:21:32 AM
Bruce Lewis <brl...@yahoo.com> writes:

> Here are a couple of hints if you want to start a CL/Scheme flame war:
>
> 1) Timing. There was just an extended discussion on lisp1 vs lisp2 in
> c.l.l, so weariness of the topic will likely cause the flame war to
> die out sooner than you intended.
>
> 2) Timing. Flame wars that start on Fridays tend to die out over the
> weekend. Do your incendiary crosspost early in the week for best
> results.

3) Attitude. Assume that Common Lisp users are unaware of Scheme
and that they would prefer it if they were not so obviously
ignorant.

4) Attitude. Assume that use of Scheme is prima facie evidence of
superior reasoning power. Note that you yourself use Scheme.
Use fallacious modus tollens to draw conclusions about CL users.

5) Ad hominem reasoning can be used to extend the thread. Remember
that we are all unfriendly savages. Nazis too.

Coby Beck

May 16, 2003, 10:44:03 AM

"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
news:87fznf1...@thalassa.informatimago.com...

> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
> Please, let me introduce you to Wally. Wally is a fly. The fly flies.
> Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).
>
> (defvar *Wally* (make-instance 'fly))
> (defvar *Bally* (make-instance 'fly))
> (let ((fly *Wally*)
>       (flies (list *Wally* *Bally*)))
>   (flies fly)
>   (fly flies))
>

<SPLAT>

C-LUSER -67322 > flies
(*wally*)


Dorai Sitaram

May 16, 2003, 11:14:39 AM
In article <ba2tl4$1igv$1...@otis.netspace.net.au>,

Coby Beck <cb...@mercury.bc.ca> wrote:
>
>"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
>news:87fznf1...@thalassa.informatimago.com...
>> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>> Please, let me introduce you to Wally. Wally is a fly. The fly flies.
>> Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).

Please watch your attributions. I would never
introduce flies to people, unless they [1] were really
hungry.

--d

[1] Ambiguous anaphora retained, to show that lexical
variables don't model anaphora, which Pascal B
seems to think they do.

Matthias Blume

May 16, 2003, 11:16:12 AM
Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

So what? We are also "wired" for all sorts of misunderstandings,
ambiguities, cross-talk, etc. And these are just the difficulties that
*humans* have with natural language; computers are much, much worse
still. In other words, programming languages should *not*(!!) be
like natural languages.

(Note that this is not really an argument which directly applies to
the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
defends a particular programming language design because of how it
resembles natural language is seriously confused.)

Matthias

Eli Barzilay

May 16, 2003, 12:12:59 PM
Matthias Blume <fi...@me.else.where.org> writes:

> So what? We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc. And these are just the difficulties
> that *humans* have with natural language; computers are much, much
> worse still. In other words, programming languages should *not*(!!)
> be like natural languages.
>
> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

Sorry for the AOL-reply but... *Exactly*! Any kind of such
comparison between formal and natural languages leads to confusion and
problems. If anyone really wants to get better unification between
these two extremes (making them "wired" in the same way) they had better
be prepared to go the whole way... For example, you'd use statistical
parsers to understand your code, resulting in programs with
probabilistic outcomes ("I wrote a program that solves your problem,
but it has a few bugs which make it unreliable in bad weather") and
ambiguities -- (let ((let 1)) let) might give you 1, or it might
complain about a syntax error, or just give up and produce a "code too
confusing" error.

Also, operational semantics will need to consider such things as the
local cultures, slang, and general current knowledge since they can
all change the way that a program runs. And don't forget about many
other natural devices that will be available:

loop with a-variable from 1 to the-length-of-that-array-we-just-read
loop with a-different-variable from the-above-variable to the-same-length
increment counter by the-multiplication-of-these-two-loop-variables

The results will definitely be interesting, but I think I'll stick
with formal languages for hacking.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!

Kent M Pitman

May 16, 2003, 12:29:23 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Eli Barzilay <e...@barzilay.org> writes:

> Matthias Blume <fi...@me.else.where.org> writes:

(Not that these guys are reading on this newsgroup, but I'm not going
to cross-post anyway. Someone can tell them a "continuation" is available
on comp.lang.lisp. (And they thought we had no continuations...))

> > (Note that this is not really an argument which directly applies to
> > the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> > defends a particular programming language design because of how it
> > resembles natural language is seriously confused.)

I disagree.



> Sorry for the AOL-reply but... *Exactly*! Any kind of such

ANY kind? That seems a bit broad, to the point of uselessness. And if
narrowed, your remarks here seem to reduce to statements that are
largely false or irrelevant, depending on your point of view.

> comparison between formal and natural languages leads to confusion and
> problems.

I disagree.

Natural language is a cue to how we organize our brains.

I see no reason not to organize a language in a way that bears a
resemblance in shape to how we think we think.

I CERTAINLY see no reason not to organize a computer language, which is
after all a means of communication, in the way people innately desire
to communicate. Most or all human languages exploit context and namespace;
I see no reason for programming languages not to follow their lead
provided no ambiguity results. People have said many things about multiple
namespaces, but they have never said it results in ambiguity.

The simplest and most obvious example of basing programming languages on
human languages is that we try to make the nouns, verbs, prepositions,
conjunctions, etc. mean something like what they mean in human language.

> For example, you'd use statistical
> parsers to understand your code,

This is a bogus argument. You're offering one possible way of choosing
badly as an argument for saying that there exists no possible way of
choosing well. No one is suggesting taking ALL features of natural
language into programming languages. To do that is simply to eliminate
programming languages. But once one is talking about taking only some
features, I don't think you can credibly argue there are no features in
natural languages that ought to be deliberately mimicked by programming
languages.

> Also, operational semantics will need to consider such things as the
> local cultures, slang, and general current knowledge since they can
> all change the way that a program runs.

More of same.

> The results will definitely be interesting, but I think I'll stick
> with formal languages for hacking.

Use of multiple namespaces is not "unformal", even if it does come from
natural languages.

Eli Barzilay

May 16, 2003, 1:08:20 PM
Kent M Pitman <pit...@world.std.com> writes:

> [ replying to comp.lang.lisp only
> http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
>
> Eli Barzilay <e...@barzilay.org> writes:
>
> > Matthias Blume <fi...@me.else.where.org> writes:
>
> (Not that these guys are reading on this newsgroup,

You'd be surprised... *Posting* to *this* newsgroup is a different
issue, as well as choosing to reply to a cross-posted message to just
one of the groups (especially when you have that assumption...).


> > Sorry for the AOL-reply but... *Exactly*! Any kind of such
>
> ANY kind? That seems a bit broad, to the point of uselessness. And if
> narrowed, your remarks here seem to reduce to statements that are
> largely false or irrelevant, depending on your point of view.

Well, OK, "any" is too broad -- there are some shared concepts like
"communication" (but hey, I'm using natural language here so vagueness
is a tool I can use).


> > comparison between formal and natural languages leads to confusion
> > and problems.
>
> I disagree.
>
> Natural language is a cue to how we organize our brains.

I agree with that, but I still object to this making natural
languages a role model for formal languages. Ambiguity is the most
obvious example of a tool that we explicitly want to have in an NL, yet
in an FL, even if you want to talk about ambiguity, you should do it in
unambiguous terms.


> [...] Most or all human languages exploit context and namespace; I
> see no reason for programming languages not to follow their lead
> provided no ambiguity results.

So the fact that such things as the standard example of "time flies
like an arrow" are part of natural languages means that you would
like to have such features in a programming language?


> People have said many things about multiple namespaces, but they
> have never said it results in ambiguity.

Ambiguity was just an example of my argument -- which is *not* in
favor of or against the double namespace. My argument is against using NL
analogies in the discussion (and I try to stay on this meta level
since the argument itself leads to very well-known results in this
context).


> The simplest and most obvious example of basing programming
> languages on human languages is that we try to make the nouns,
> verbs, prepositions, conjunctions, etc. mean something like what
> they mean in human language.

I'm sorry, but I don't see it that way. I can certainly see that some
objects in computer languages might resemble objects in natural ones,
but this all comes down to describing what a machine should do. If I
saw a computer language that allowed me to get such descriptions across
better while not having such analogies of verbs and nouns, I would not
have problems using it. (But obviously that would not happen -- it is
the domain of making computers do things which has verb/noun/etc.-like
concepts.)


> [...] No one is suggesting taking ALL features of natural language
> into programming languages. [...]

Right -- which is what makes it a bad example to borrow from. I have
seen many arguments go by:

* Start with PL-feature X,
* Observe that feature X has an analogy in (unrelated) domain D,
* Observe that domain D has feature Y,
* Translate feature Y back into PL.

This, and some variants, are the sort of arguments which I object to
and want to avoid. The Lisp1/2 and verbs/nouns thing just happens to
be a popular instantiation. (Or, the fact that I might live in a
place with no theaters should not make me choose sides in the
continuation thread.)

BTW, there are natural languages with separate constructs for verbs and
nouns. If I used such a language, should I prefer Lisp1? Should I be
utterly confused by any arguments based on that feature of English?
Should I derail arguments by sticking my language in and leading
the discussion into a language discussion (i.e., c.l.l and Latin)?


> > The results will definitely be interesting, but I think I'll stick
> > with formal languages for hacking.
>
> Use of multiple namespaces is not "unformal", even if it does come
> from natural languages.

It's the arguments used which are unformal.


[I will try to not followup.]

Kent M Pitman

May 16, 2003, 1:29:50 PM
Eli Barzilay <e...@barzilay.org> writes:

> It's the arguments used which are unformal.

In my personal experience, for whatever that's worth, I've found that
more often than not, requiring formal arguments is used as a means of
excluding people who are not prepared to offer them. It's often no
different than having a public servant whom you try to ask a simple
question and who won't deal with it unless you fill out a standardized
form in triplicate.

There's a difference between requiring a sound argument and requiring
a formal one.

It's also the case that people's intuitions about what they want are
often well-founded even when they are unable to articulate a coherent
argument.

I personally do accept arguments that are unformal, both because it makes
me feel less like I dismiss people out of hand and also because I sometimes
learn things about the world that dismissing things on the basis of form
would not allow me to learn.

You're welcome to do otherwise.

My intuition is that the reason that people from the Scheme community feel
(incorrectly) like there's some ambiguity in a Lisp2 is that they do not
see it following the rules they are familiar with and so they feel it must
not be following rules at all. The problem is then compounded because they
are intent on believing that it's not a natural way to think, and so they
refuse to learn the rule, and then in their mind I think they start to
believe that their unwillingness to learn the rule is a proof that people
can't learn the rule. I certainly am willing to believe that people who are
determined not to learn something will not learn it. That's a general truth
about the world. Beyond that, though, there's overwhelming evidence that
people can disambiguate even ACTUAL ambiguities. I see no reason to believe
they will have any trouble "disambiguating" (if that's what they like to
call it) the unambiguous notation offered by Common Lisp. And given that
they can do this, I have no shame about having been among those championing
the continued inclusion of this natural and useful feature (multiple
namespaces) in the language.

Incidentally, Java has this feature and no one makes noise about it at all.
Not only may the same name have multiple casifications, and this
apparently causes no confusion, but in fact they have a separation of
function and value namespaces, which also causes no confusion. I think
the reason it causes confusion in our community is that a few people have
elected themselves to teach people the confusion, just as racism persists
because people elect themselves to teach hatred and fear rather than
tolerance. If those in the Scheme community taught simply that there was
a choice between a single namespace and multiple namespaces, and that Scheme
has made the choice one way for sound reasons and that CL has made the choice
the other way for equally sound reasons, the issue would die away. But
because many in the Scheme community insist on not only observing the
difference (which surely objectively exists) but also claiming it is a Wrong
decision (in some absolutist sense, as if given by some canonically
designated God) (which is surely a mere subjective judgment), the problem
persists. The problem is not a technical one, but a social one.
And it will not be fixed by technical means, since nothing is broken.
It will be fixed by social means, that of tolerance.

I can live with Scheme people programming somewhere in a single namespace
without going out of my way to criticize them. I raise the criticisms I
do ONLY in the context of defending myself from someone's active attack
that claims I am using a multiple namespace language out of ignorance,
poverty, wrongheadedness, or some other such thing. I am not. I am using
it out of informed preference.


be...@sonic.net

May 16, 2003, 3:26:41 PM
Pascal Bourguignon wrote:
>
> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

In human languages, there is a balance to be achieved.
A certain amount of imprecision and ambiguity is actually
desirable, even necessary, in human languages. There are
many things important to us which we could not discuss at
all without ambiguity and imprecision. As somebody famous
once said, "never express yourself more clearly than you
think."

It is not fruitful to generalize too much from what makes
a good human language to ideas of what makes a good computer
language. Or vice versa.

Bear

Kent M Pitman

May 16, 2003, 3:49:17 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

be...@sonic.net writes:

People are NOT generalizing about what makes a good language.
They are using observations about the brain to claim that the
definition of "simple" that is often used by Scheme advocates
("must have a textually shorter formal semantics") is not the
only possible definition of simple. Some languages are hard to
learn but easy to use; some are easy to learn but hard to use.
It's hard to make a language that is all things to all people.

Languages are implemented more than once (except that Scheme advocates
seem obsessed with the exercise of implementing their own Scheme;
thank whatever deity you have that we made Common Lisp complex enough
that everyone off the street doesn't attempt that wasted exercise),
but programs are written many times. I want the language implemented
for easy program-writing, not for easy language implementation. I
want human languages implemented for easy reading of War and Peace,
not for ease of teaching the language, nor for tripling my reading
pleasure by tripling the length of War and Peace...

If you take it as a given that the brain IS capable of executing
multiple rules at the same time with no loss of speed (and there is
ample evidence that it is, or else we wouldn't design all human
languages that way, since it would slow down our thinking), then all
you are left with is the question of whether to pretend we have a less
powerful processor available than we do.

It's funny to me how many people in the Scheme community profess a
strong desire for accommodating parallel processing, and yet how many
of those same people reject the possibility that the brain does
either parallelism or sufficiently fast multi-tasking that the
complexity of resolving context-based unambiguous notations is an
irrelevance. These same people assert boldly that I must not make
assumptions about the brain, but then themselves go and just as boldly
assert that they have an idea of what it is for something to be simple.
It's just... odd.

Bruce Lewis

May 16, 2003, 4:40:21 PM
Kent M Pitman <pit...@world.std.com> writes:

> If those in the Scheme community taught simply that there was a choice
> between a single namespace and multiple namespaces, and that Scheme
> has made the choice one way for sound reasons and that CL has made the
> choice the other way for equally sound reasons, the issue would die
> away. But because many in the Scheme community insist on not only
> observing the difference (which surely objectively exists) but also
> claiming it is a Wrong decision (in some absolutist sense, as if given
> by some canonically designated God) (which is surely a mere subjective
> judgment), the problem persists.

I only started following c.l.l again recently, so I was unaware that
folks here had quelled the voices claiming Lisp2 to be superior in some
absolutist sense. If the Scheme community has fallen behind in terms of
making sure everyone understands the valid reasons on both sides, then
I'm certainly happy to do my part bringing Schemers up to the level of
maturity you've achieved here. I certainly wouldn't want us to be the
sole cause of the problem.

Pascal Costanza

May 16, 2003, 4:54:35 PM

> In human languages, there is a balance to be achieved.
> A certain amount of imprecision and ambiguity is actually
> desirable, even necessary, in human languages. There are
> many things important to us which we could not discuss at
> all without ambiguity and imprecision. As somebody famous
> once said, "never express yourself more clearly than you
> think."
>
> It is not fruitful to generalize too much from what makes
> a good human language to ideas of what makes a good computer
> language. Or vice versa.

Polymorphism is all about _making_ code _deliberately_ ambiguous, so
that it has different semantics in different contexts!

Of course, programming languages are supposed to offer polymorphism in a
controlled way, but "a certain amount" is available in almost every
programming language.


Pascal

Pascal Costanza

May 16, 2003, 5:10:06 PM
I have thought about this whole issue a little, and at the moment I
think it could probably be possible to integrate Lisp-2 and Lisp-1 into
one system.

The idea would be to integrate this in the package system such that you
can choose between Lisp-2 and Lisp-1 semantics when defining a new
package.

Problems should only occur when interfacing between Lisp-2 and Lisp-1
packages.

- When a Lisp-2 package imports a symbol from a Lisp-1 package, it would
see the same definition for both value cell and function cell. (If the
value cell doesn't contain a closure, the function cell could be
(constantly (symbol-value sym)), or something along these lines.)

- When a Lisp-1 package imports a symbol from a Lisp-2 package, either
the value cell gets priority over the function cell, or vice versa. This
could perhaps be configured at package definition time.)

- When a Lisp-2 package modifies either the value cell or the function
cell of a Lisp-1 symbol, the other cell should be modified accordingly,
or this signals a warning/error, or?

Would this be a reasonable approach, or am I missing something very
important?
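
As a rough illustration of the cell-coupling idea only (DEFINE1 and
everything about it is invented here, not a real package-system hook):

(defmacro define1 (name value)
  ;; Lisp-1-style definition: keep value cell and function cell in sync.
  `(progn
     (defparameter ,name ,value)
     (setf (symbol-function ',name)
           (if (functionp ,name)
               ,name
               (constantly ,name)))
     ',name))

;; (define1 twice (lambda (x) (* 2 x)))
;; (twice 21)         => 42 ; via the function cell
;; (funcall twice 21) => 42 ; via the value cell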


Pascal

Shriram Krishnamurthi

May 16, 2003, 6:28:30 PM
Pascal Costanza <cost...@web.de> writes:

> Polymorphism is all about _making_ code _deliberately_ ambiguous, so
> that it has different semantics in different contexts!

You must be referring to subtype polymorphism, because this isn't true
of parametric polymorphism.
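
To sketch the distinction (class and function names invented here): a
CLOS generic function is subtype-polymorphic -- the same name does
different things depending on the class of its argument -- while a
parametric definition has one uniform meaning for any element type:

(defclass dog () ())
(defclass cat () ())
(defgeneric speak (animal))        ; subtype polymorphism: behavior
(defmethod speak ((a dog)) 'woof)  ; varies with the argument's class
(defmethod speak ((a cat)) 'meow)

(defun my-length (list)            ; parametric flavor: one definition,
  (if (null list)                  ; uniform for lists of any element
      0                            ; type; nothing context-dependent
      (1+ (my-length (rest list)))))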

[Note fwps.]

Shriram

pentaside asleep

May 16, 2003, 6:51:35 PM
Kent M Pitman <pit...@world.std.com> wrote in message news:<sfwissa...@shell01.TheWorld.com>...

> Incidentally, Java has this feature and no one makes noise about it at all.
> Not only may the same name have multiple casifications

Whatever the merits of separate namespaces, I'm not a fan of this
argument, because people don't expect that much from Java's syntax,
other than its not being too shocking. (Anyway, it's a static language,
and so the criticism until 1.5 is the need for casts.)

Bill Richter

May 16, 2003, 8:14:25 PM
> In other words, programming languages should *not*(!!) be like
> natural languages.

Right, Matthias. As Noam Chomsky says, natural languages are all
a zillion times more complicated than computer languages.

Kent M Pitman

May 16, 2003, 8:33:59 PM
Pascal Costanza <cost...@web.de> writes:

> I have thought about this whole issue a little, and at the moment I
> think it could probably be possible to integrate Lisp-2 and Lisp-1 into
> one system.
>
> The idea would be to integrate this in the package system such that you
> can choose between Lisp-2 and Lisp-1 semantics when defining a new
> package.

I've done serious work on this but ran out of time/resources in the
middle so my half-done project is languishing... It's more
complicated than I originally thought. That's not to say it's not
doable, but it's non-trivial.

> Problems should only occur when interfacing between Lisp-2 and Lisp-1
> packages.

Lisp1 and Lisp2-ness should be an attribute of identifiers, not of
symbols and packages. And certainly it should not be aggregated.
... well, I dunno about should not, but certainly _I_ wouldn't...
I can see where you're going here but I'm doubting I'm going to like
the solution much. You're basically just agreeing to disagree and solving
the problem at a package-to-package interface level, but that's in my mind
not that much different than what you can already do by linking a Lisp
and Scheme together and using an FFI to communicate. A real solution would
confront the problem of tighter integration rather than sweep that under
the rug. If you're going to sweep it under the rug, there are already good
solutions to that, like the ones we use to insulate Lisp from C or Java.

> - When a Lisp-2 package imports a symbol from a Lisp-1 package, it would
> see the same definition for both value cell and function cell. (If the
> value cell doesn't contain a closure, the function cell could be
> (constantly (symbol-value sym)), or something along these lines.)
>
> - When a Lisp-1 package imports a symbol from a Lisp-2 package, either
> the value cell gets priority over the function cell, or vice versa. This
> could perhaps be configured at package definiton time.
>
> - When a Lisp-2 package modifies either the value cell or the function
> cell of a Lisp-1 symbol, the other cell should be modified accordingly,
> or this signals a warning/error, or?
>
> Would this be a reasonable approach, or am I missing something very
> important?

This is not how I'd do it, but I don't have the energy today to
explain how I'd do it differently. The short form of the answer is,
though, that the words "symbol" and "import" would not occur anywhere
in my explanation of how to do it. These are, IMO, at the wrong level
of abstraction.

I also think it's a dreadful mistake to limit the nature of any serious
solution to merely CL and Scheme or merely Lisp1/Lisp2... I'd generalize
the result as much as possible once I was going to the work of doing it
at all.

Kent M Pitman

May 16, 2003, 8:54:11 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Bill Richter <ric...@artin.math.northwestern.edu> writes:

And as the late Prof. Bill Martin, a professor in the Lab for Computer
Science, some of whose many specialties were computational linguistics
and knowledge representation, said in a class I took from him [I'm
paraphrasing from a 20+ year old memory, but I think I've got the
basic sense of the remark right]: `People designed natural language in
order to be possible to learn.'

I took this statement of the seemingly obvious to be a form of
reassurance to those of us toying with getting computers to learn
language, like when I was working with Rubik's Cube and it helped me
to see that at least _someone_ had screwed up a cube and then later
solved it ... so that I would know it was worth persisting to find a
solution because there _was_ a solution waiting to be had, and it was
known to be tractable. (After you've gotten it sufficiently mixed up,
the fact that you could just 'invert what you've done' seems about as
promising as thinking that 'inverting what you've done' would work to
reassemble a sandcastle you've just kicked.)

But in the context I mention it here, it has the same meaning, just a
different spin: If people can understand a language that is a
"zillion" times more complicated than computer languages, then stop
telling me that CL is so much more complicated than Scheme that no one
will ever understand it. We have wetware that is field-tested for
much worse and is known to work just fine on that.

What _is_ a barrier to learning is putting fingers into your ears
and saying "I won't, I won't, I won't" or "I can't, I can't, I can't"
over and over.

Franz Kafka

May 16, 2003, 10:37:37 PM

>
> Languages are implemented more than once (except that Scheme advocates
> seem obsessed with the exercise of implementing their own Scheme;
> thank whatever deity you have that we made Common Lisp complex enough
> that everyone off the street doesn't attempt that wasted exercise),
> but programs are written many times.
>

You don't need to reimplement Common Lisp to get an understanding of how it
works--just implementing a subset of the language would be a better
educational experience than implementing the whole language, unless you want
to be a compiler writer.

I like Lisp because you can embed other languages such as Prolog--which has
been done by LispWorks and in numerous programming books.

When I was first learning Lisp--I wrote a simple assembly language in Lisp to
get a feel for the language.

It had something like:

(assemble
  '((move r1 5)
    (move r2 7)
    (add r1 r2)
    (sub r2 4)))

I used something like:

(cond
  ((eql (first opcode) 'move) (setf (second opcode) (third opcode)))
  ((eql (first opcode) 'add) (+ (second opcode) (third opcode)))
  ...
  ((eql (first opcode) 'sub) (- (second opcode) (third opcode))))

I just wrote this off the top of my head; it might be wrong. :)

I did not add jumps in it because I wanted just to get a feel for how Lisp
worked.

I had a symbol-table with move, add, sub, mul, div--and I did not
allow assignment--each line would print what happened

r1=5 r2=7 r1+r2=12 r2-4=3

This taught me a lot--implementing a full assembler would have prob. been a
waste of time--it helped me learn how to implement an assembler in C for a
school project--I at least knew the algorithm was correct.

What I am trying to say is that Kent is right: you don't need to implement a
whole language--implementing a part of a language or instruction set will
teach you a lot about how the lang./asm. works.
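
A minimal runnable version of that exercise might look like this (the
register handling, the default-to-zero behavior, and the output format
are my guesses, not the original code):

(defun assemble (program)
  (let ((regs (make-hash-table)))
    (dolist (op program)
      (destructuring-bind (opcode dst src) op
        ;; A source operand is either a register name or a literal.
        (let ((val (if (symbolp src) (gethash src regs 0) src)))
          (ecase opcode
            (move (setf (gethash dst regs) val))
            (add  (incf (gethash dst regs 0) val))
            (sub  (decf (gethash dst regs 0) val))))
        (format t "~(~a~) = ~a~%" dst (gethash dst regs))))))

;; (assemble '((move r1 5) (move r2 7) (add r1 r2) (sub r2 4)))
;; prints: r1 = 5, r2 = 7, r1 = 12, r2 = 3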

Eli Barzilay

May 16, 2003, 11:10:01 PM
Kent M Pitman <pit...@world.std.com> writes:

> [...] I can live with Scheme people programming somewhere in a
> single namespace without going out of my way to criticize them. I
> raise the criticisms I do ONLY in the context of defending myself
> from someone's active attack that claims I am using a multiple
> namespace language out of ignorance, poverty, wrongheadedness, or
> some other such thing. [...]

For the record, there was *no* such context here or in the other
subthread (your reply to Bear). In both cases Scheme was only
mentioned by yourself, and no criticism of Lisp-n for any value of
n was given.

Coby Beck

May 17, 2003, 2:48:07 AM

"Matthias Blume" <fi...@me.else.where.org> wrote in message
news:m24r3uj...@localhost.localdomain...

> Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:
>
> > I was dubious about Lisp-2 at first, but finally I've noticed that in
> > human languages, there are a lot of instances that show that we're
> > wired for a Lisp-2 rather than a Lisp-1:
> >
> > The fly flies. (FLIES FLY)
> > The flies fly. (FLY FLIES)
>
> So what? We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc. And these are just the difficulties that
> *humans* have with natural language; computers are much, much worse
> still. In other words, programming languages should *not*(!!) be
> like natural languages.

I don't think I agree with much of the above, really. Firstly, computer
languages, despite the name, should be designed for human understanding, so I
don't think the fact that it is hard for a computer (read: compiler writer)
should be a major factor in programming language design.

To your first point, and more off-topic, I don't know what you mean when you
say we are wired for misunderstanding. Don't we humans overcome these
difficulties for the most part? You express yourself very well in English.
Some truly
great works of literature have been expressed in many different natural
languages. Do you think you could design a better language? I don't mean
just fix a few ambiguous words, I mean start from scratch. What fundamental
things about natural language would you change?

I don't mean to suggest this is impossible, I'm really curious if you have
some concrete ideas for improvement.

> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

I disagree because again, a computer language is for a human programmer.

--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")


Raffael Cavallaro

May 20, 2003, 6:29:53 PM
"Coby Beck" <cb...@mercury.bc.ca> wrote in message news:<ba4lve$2euh$1...@otis.netspace.net.au>...

> Some truly
> great works of literature have been expressed in many different natural
> languages. Do you think you could design a better language? I don't mean
> just fix a few ambiguous words, I mean start from scratch. What fundamental
> things about natural language would you change?
>
> I don't mean to suggest this is impossible, I'm really curious if you have
> some concrete ideas for improvement.

It's even been argued, by the biological anthropologist Terry Deacon,
that our cognitive abilities have forced the form of natural
languages. In other words, since neurobiology changes fairly slowly
compared to language, natural languages have been selected for easy
learning and comprehension by *all* human beings, not just the best
and the brightest. What has survived is a range of natural languages
that take the same basic range of syntactical forms, i.e., those that
people can learn easily. Deacon's point is that the selective
bottleneck is childhood language acquisition. Any syntactic feature
too difficult for children to master will not survive as part of that
language into the next generation.

This co-evolution of language and human neurobiology has led to a
range of syntax that is easy for humans to learn, and to understand
relatively unambiguously. Stray outside this range, and you start
having difficulty mastering the syntax, and much greater chances of
misunderstanding and error.

This would suggest that computer languages should hew to the common
patterns and elements of natural languages in order to assure easier,
and clearer, comprehension by human programmers.

Viewed in this light, Larry Wall's views on natural language features
in Perl seem somewhat less idiosyncratic. I suspect that when the
history of computer languages comes to be written in a century, the
surviving languages will follow the natural language view much more
closely than the lisp view that everything can and should be rendered
as an s-expression. Under the covers, maybe, but not the syntax meant
to be read by human programmers.

I've always agreed with the position Coby is taking here - let the
compiler writers worry about how to make the language work. Especially
with ever increasing hardware resources, there's little point in
languages forcing human users to wrap their minds around unnatural
syntactic constructs just to make compiler writers' lives, or academic
proofs of program correctness, easier.

Raf

Thien-Thi Nguyen

May 20, 2003, 7:31:12 PM
raf...@mediaone.net (Raffael Cavallaro) writes:

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

yes, if you are only interested in teaching human programmers to program.

thi

Matthew Danish

May 20, 2003, 7:27:20 PM
On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> It's even been argued, by the biological anthropologist Terry Deacon,
> that our cognitive abilities have forced the form of natural
> languages. In other words, since neurobiology changes fairly slowly
> compared to language, natural languages have been selected for easy
> learning and comprehension by *all* human beings, not just the best
> and the brightest.

That still leaves a gigantic range of possibilities.

> This co-evolution of language and human neurobiology has led to a
> range of syntax that is easy for humans to learn, and to understand
> relatively unambiguously. Stray outside this range, and you start
> having difficulty mastering the syntax, and much greater chances of
> misunderstanding and error.

References would be nice, please.

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

``Please close the window when it is raining.''

Computer checks: raining? No. Then it moves on.

5 minutes later it rains. Is the window going to be closed? No.

Would a human know to close the window? Yes.

So should the `when' operator imply some kind of constant background loop? I
am really unclear on this.

> Viewed in this light, Larry Wall's views on natural language features
> in Perl seem somewhat less idiosyncratic. I suspect that when the
> history of computer languages comes to be written in a century, the
> surviving languages will follow the natural language view much more
> closely than the lisp view that everything can and should be rendered
> as an s-expression. Under the covers, maybe, but not the syntax meant
> to be read by human programmers.

This assumes that natural languages are inherently more readable. Natural
language is optimized for the task of communicating with other human beings,
who are aware of context and can remember points and communicate back.

Even something as simple as nestable block structure is not well supported by
any natural language I can think of. Sure you can get by without that, but I
thought those languages were part of the past?

> I've always agreed with the position Coby is taking here - let the
> compiler writers worry about how to make the language work. Especially
> with ever increasing hardware resources, there's little point in
> languages forcing human users to wrap their minds around unnatural
> syntactic constructs just to make compiler writers' lives, or academic
> proofs of program correctness, easier.

Here's a sample piece of code inspired by a natural language:

``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
my opinion''

Silly me, (2 + 2) * 4 is so much more readable, right?

Well mathematician's notation is, IMHO, a hodge-podge of heavily
context-optimized notepad sketching thrown together into a giant pile and
slowly fed to new people in bits and pieces. I know that many people consider
it to be an emblem of higher learning, but I think that it's really just a
confusion of the syntax with the underlying concepts. Plus it can get to be
a real pain to edit when dealing with larger expressions.

So, arguments that (* (+ 2 2) 4) is not ``natural'' or ``intuitive'' (oh no!)
don't really fly with me, since I don't think that any notation is natural or
intuitive. And people have used many other notations in the past, in other
places, even though it is tempting to think that the status quo is somehow the
best.

I don't know about you, but I like having easy-to-write macros, and
code-as-data. And the Lisp syntax is merely a reflection of the truth of the
underlying nested data structure. Any other syntax is just an attempt to hide
that, and for what cause? A primitive need to ``look like'' natural language
while not really being close to it?

Do human logicians fall back to writing their theorems entirely in natural
language because it is somehow ``more readable'' that way? As much as logic
books can be dense with symbols, they are incredibly more readable than the
equivalent written out in some natural language.

Note that the above discussion is orthogonal to the Lisp-n issue, which is a
semantic one. The relation to natural language there is the sharing of the
ability to handle separate contexts; a necessity to understand natural
language. I believe Pascal is arguing that humans are able to do it for
natural language and therefore also for other languages. Certainly,
mathematics is full of context-sensitive notation, and would be a pain without
it. (I can see it now: how many different alphabets would have to be exhausted
so that every theorem can have its own "greek" letter(s)?).

But it isn't that Lisp-n for n > 1 is better because it imitates natural
language. There is, after all, no proof that imitating natural language
results in better computer language. However, the fact that humans can
accommodate separate namespaces/contexts in natural languages does seem to
indicate that humans can accommodate separate namespaces/contexts in computer
languages too.

--
; Matthew Danish <mda...@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

Daniel Barlow

May 21, 2003, 8:10:13 AM
Matthew Danish <mda...@andrew.cmu.edu> writes:

> ``Please close the window when it is raining.''
>
> Computer checks: raining? No. Then it moves on.
>
> 5 minutes later it rains. Is the window going to be closed? No.

I don't think I've ever seen a window rain, but I'd imagine that once
the glass is hot enough to melt there's not a lot of window left to
close anyway.


-dan

--

http://www.cliki.net/ - Link farm for free CL-on-Unix resources

Grzegorz Chrupala

May 21, 2003, 11:00:48 AM
raf...@mediaone.net (Raffael Cavallaro) wrote in message news:<aeb7ff58.03052...@posting.google.com>...

> people can learn easily. Deacon's point is that the selective
> bottleneck is childhood language acquisition. Any syntactic feature
> too difficult for children to master will not survive as part of that
> language into the next generation.

Natural languages are only (relatively) easy to acquire in natural
settings (interacting with parents and peers), because humans seem to
have specialized wiring to deal with this. But anyway, this only works
until pubescence, more or less. Otherwise natural languages are pretty
hard to learn, as anyone who has tried learning a foreign language as an
adult can testify. They are rather more difficult than programming
languages, as far as I can tell from the experience of learning both
human and computer languages as an adult.

>
> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

Given that the purpose and the way you use human languages are vastly
different from programming languages, I doubt designing a
pseudo-natural syntax for a programming language would help to make it
clearer. I personally would much prefer a consistent, simple syntax
that I don't have to remember the quirks of to some sort of
pseudo-English (or pseudo-Polish ;)). I have also noted that when I
attempted to paraphrase some code I wrote (in Scheme, say) in order to
describe what it does, the resulting English prose is horribly
tortuous, wordy and far less clear than the original code. So I just
don't do it anymore if I don't have to.
All this makes me think that modelling a programming language syntax
after a natural language is, in general, a bad idea.

--
Grzegorz

Jochen Schmidt

May 21, 2003, 11:40:29 AM
On 21 May 2003 08:00:48 -0700, Grzegorz Chrupala <grze...@pithekos.net>
wrote:

> Given that the purpose and the way you use human languages are vastly
> different from programming languages, I doubt designing a
> pseudo-natural syntax for a programming language would help to make it
> clearer. I personally would much prefer a consistent, simple syntax
> that I don't have to remember the quirks of to some sort of
> pseudo-English (or pseudo-Polish ;)). I have also noted that when I
> attempted to paraphrase some code I wrote (in Scheme, say) in order to
> describe what it does, the resulting English prose is horribly
> tortuous, wordy and far less clear than the original code. So I just
> don't do it anymore if I don't have to.
> All this makes me think that modelling a programming language syntax
> after a natural language is, in general, a bad idea.

The discussion is *not* about modelling a programming language syntax after a
natural language, but about making use of the available wetware in people's
brains in order to write more expressive code.

I disagree that the purpose and the way of use of human languages are vastly
different from programming languages. The purpose is to communicate an idea
- to other humans _and_ to the computer.
The way you use it is by dialog or through whole "documents".

Programming languages are better suited to describe typical programming
ideas than plain human language, because they are designed and/or have grown
to do it better. This is not much different from the language a mechanic uses
to talk to his colleagues. Just because you learnt English does not make you
able to understand what they talk about. Other environments - like being
underwater - lead to other constraints, in which sub-languages evolve that
are obviously more efficient than plain spoken human language.

Since the brain is indeed able to cope with context pretty well, the idea
of making use of this facility is not a bad one.

ciao,
Jochen

Raffael Cavallaro

May 21, 2003, 6:36:46 PM
Matthew Danish <mda...@andrew.cmu.edu> wrote in message news:<2003052019...@mapcar.org>...

> On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> > It's even been argued, by the biological anthropologist Terry Deacon,
> > that our cognitive abilities have forced the form of natural
> > languages. In other words, since neurobiology changes fairly slowly
> > compared to language, natural languages have been selected for easy
> > learning and comprehension by *all* human beings, not just the best
> > and the brightest.
>
> That still leaves a gigantic range of possibilities.

Not really. The range of human grammars is actually quite limited. The
differences are largely superficial - e.g., different bindings for
different concepts, as it were, but still the same basic structures.
This whole issue is the basis for the now universally accepted view
that we have built-in neurological "wiring" for language acquisition.
This would not be possible if the range of grammars were not extremely
limited - the language "instinct" wouldn't work with a sufficiently
different grammar.


> References would be nice, please.

<http://www.amazon.com/exec/obidos/tg/detail/-/0393317544/104-1144123-1355147?vi=glance>

(if the above gets split it should be one line)

_The Symbolic Species: The Co-Evolution of Language and the Brain_ by
Terry Deacon.


> ``Please close the window when it is raining.''
>
> Computer checks: raining? No. Then it moves on.
>
> 5 minutes later it rains. Is the window going to be closed? No.
>
> Would a human know to close the window? Yes.
>
> So should the `when' operator imply some kind of constant background loop? I
> am really unclear on this.


In brief, yes. An "if" construct suggests a single conditional check.
A "when," or "whenever" construct suggests a continuous background
polling. This is how GUIs are written. Whether the actual program text
uses the word "when," the semantics of the program are clear: "When
the user presses this button, execute this block of code." I am merely
suggesting that the program text should parallel the existing
structures in natural languages, i.e., the program text should read:
"when(button1.pressed?) exec(block1)" which would set up a continuous
background loop. The programmer could stop this loop with code like:
"stop-checking(button1.pressed?)"

In other words, we are continually forced to jump through unnatural
mental hoops to make our ideas take the form of a mathematical
algorithm (since this is what computer languages were originally
designed to execute). This may work well for scientific calculations
(hence the continued popularity of Fortran), but it really sucks for
most other types of information processing.

> This assumes that natural languages are inherently more readable. Natural
> language is optimized for the task of communicating with other human beings,
> who are aware of context and can remember points and communicate back.

Then the compiler writers will need to include the ability to be aware of
relevant context (i.e, different compiler behavior with the same
program text depending on context), and remember points (a good deal
more introspection and maintenance of program state), and communicate
back (greatly improved, and *context sensitive* compiler warnings and
error reporting). This is going to happen. It's just a question of
when, and by whom, not whether.

Why? Because the "Software Crisis" will only be solved by enabling
power users to write their own applications. I'm convinced that the
real scarcity is not competent programmers, but domain expertise. Many
people can learn to code. Very few people have the domain expertise to
code the right thing. Acquiring domain expertise in many fields that
need a great deal of software is far more difficult than learning to
program competently. How many professional software developers have
the equivalent domain knowledge of a Ph.D. in molecular biology, or a
professional options trader, etc.? Wouldn't it make more sense to
develop compilers that were easier to work with, than to have coders
acquire a half-baked, partly broken level of domain expertise for each
new project they undertake?

> Even something as simple as nestable block structure is not well supported by
> any natural language I can think of. Sure you can get by without that, but I
> thought those languages were part of the past?

If you think that nestable block structure is necessary for the
communication of complex ideas, then you're thinking in assembler and
not in a natural language. The final compiler output may need to have
nested blocks, but that doesn't mean that the program text needs to be
expressed as nested blocks.

> Here's a sample piece of code inspired by a natural language:
>
> ``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
> my opinion''

Or "(two plus two) times 4." Your suggestion above is laughably
contrived.


> Do human logicians fall back to writing their theorems entirely in natural
> language because it is somehow ``more readable'' that way? As much as logic
> books can be dense with symbols, they are incredibly more readable than the
> equivalent written out in some natural language.


But most software is not needed by human logicians. It is needed by
human bankers, and human market traders, and human accountants, and
human molecular biologists, and they all communicate quite well in
natural language, modulo a sprinkling of domain specific notation.

> But it isn't that Lisp-n for n > 1 is better because it imitates natural
> language. There is, after all, no proof that imitating natural language
> results in better computer language.

Better for whom? For ordinary people, there is ample proof that
computer languages that more closely resemble natural language are
"better" - they simply don't use languages that aren't sufficiently
like natural languages at all. But lay people do use more
natural-language-like languages such as AppleScript, and Smalltalk.



> However, the fact that humans can
> accommodate separate namespaces/contexts in natural languages does seem to
> indicate that humans can accommodate separate namespaces/contexts in computer
> languages too.

Yes. WRT the thread topic, a lisp-n for n>1 (in fact, for unbounded n,
since new contexts can always arise) would be closer to natural
language.

Alexander Schmolck

unread,
May 21, 2003, 7:28:33 PM5/21/03
to
raf...@mediaone.net (Raffael Cavallaro) writes:
> Not really. The range of human grammars is actually quite limited. The
> differences are largely superficial - e.g., different bindings for
> different concepts, as it were, but still the same basic structures.
> This whole issue is the basis for the now universally accepted view
> that we have built-in neurological "wiring" for language acquisition.

Although this is rather OT here, outside Chomskian linguistics this view is
certainly not universally accepted.

'as

Grzegorz Chrupala

unread,
May 22, 2003, 5:31:06 AM5/22/03
to
Jochen Schmidt <j...@dataheaven.de> wrote in message news:<oprpi4hr...@news.btx.dtag.de>...

> I disagree that the purpose and the way of use of human languages are vastly
> different from those of programming languages. The purpose is to communicate
> an idea - to other humans _and_ to the computer.
> The way you use it is in dialog or through whole "documents".

Well, maybe not *vastly* different, but telling a computer what to do
and having a conversation with a human being are sufficiently
different that most analogies will be misleading.

>
> Programming languages are better suited to describing typical programming
> ideas than plain human language, because they were designed and/or have
> grown to do so better.

Agreed. And I happen to think that making programming languages
context dependent or ambiguous or syntactically similar to human
language would probably not make them any better suited to "describe
typical programming ideas".

>
> Since the brain is indeed able to cope with context pretty well, the idea
> of making use of this facility is not a bad one.

The brain is able to cope with *a lot*. The question is:
Is introducing context actually going to help humans learn and use CL?
If there is a cost associated with context-dependent processing, then
do its supposed benefits outweigh this cost?

--
Grzegorz

Raffael Cavallaro

unread,
May 22, 2003, 8:06:21 AM5/22/03
to
Alexander Schmolck <a.sch...@gmx.net> wrote in message news:<yfsy90z...@black132.ex.ac.uk>...

I think you misunderstand me. The Chomskian view is the extreme
position that *all* grammatical learning abilities are pre-wired. The
other extreme position is the classical "tabula rasa," or blank slate
position, i.e., that people are born with *no* cognitive instincts.

No linguist, indeed, almost no student of human cognition, now holds
the tabula rasa position, although it was widely held only a century
ago. This does not mean that all linguists hold the extreme Chomskian
position, and Deacon certainly does not.

Terry Deacon takes a more moderate position - specifically, that the
innate "language instinct" consists mostly of our inborn ability to
think symbolically, combined with some very well documented brain
specialization. Given the limited cognitive abilities of children
compared to adults, we get a range of human grammars limited by their
learnability by human children.

Raf

Tim Bradshaw

unread,
May 22, 2003, 8:32:21 AM5/22/03
to
* Raffael Cavallaro wrote:

> No linguist, indeed, almost no student of human cognition, now holds
> the tabula rasa position, although it was widely held only a century
> ago. This does not mean that all linguists hold the extreme Chomskian
> position, and Deacon certainly does not.

As I understand it there are some very good arguments against the
tabula rasa position. In particular you can look at the amount of
data a general grammar-learner needs to learn a grammar, and you find
that people get a small fraction of this. So either they have
some special wiring, or they do magic.

--tim

Burton Samograd

unread,
May 22, 2003, 10:02:00 AM5/22/03
to
Tim Bradshaw <t...@cley.com> writes:
> As I understand it there are some very good arguments against the
> tabula rasa position. In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this. So either they have
> some special wiring, or they do magic.

I just finished reading a very interesting book that covered this
subject called "Jungian Archetypes" (I forget the author's name
though). It's a very interesting book for geek types and was written
by a mathematician; it discusses the evolution of scientific and
mathematical thought over the centuries and how they led to clinical
psychology. The idea of "tabula rasa" is replaced by ingrained
archetypes which are carried in ourselves and the stories we are
exposed to (which make up part of the collective unconscious). It
also gives one of the best explanations of Gödel's theorem I've read
anywhere. It's some very interesting reading and a perfect geek
psychology book.

--
burton samograd
kru...@kruhft.dyndns.org
http://kruhftwerk.dyndns.org

Alexander Schmolck

unread,
May 22, 2003, 4:21:52 PM5/22/03
to
Tim Bradshaw <t...@cley.com> writes:

This is the so-called "poverty of stimulus argument" and both Chomsky (1959)
[1] and particularly Gold (1967) are often cited as having formally
demonstrated that human languages are not learnable without, as you write
above, "some special wiring".

Alas, perhaps not too surprisingly, there is some prominent disagreement
about the validity of the assumptions of the underlying learning model that
Gold's learning-theoretic treatment rests upon.

For example, Quartz and Sejnowski (1997) conclude [2]:

"Hence, the negative learnability results do not indicate anything about the
learnability of human language as much as they do about the insufficiency of
the particular learning model."

Although Sejnowski, Elman, McClelland, Rumelhart and other connectionists (who
have been challenging the established nativist position on language
acquisition since the late '80s) might be dead wrong, they are certainly not
stupid or marginal. Indeed many of them have significantly contributed to both
psychology and AI/pattern recognition, and barring some grave misunderstanding
on my part none of them seems to be particularly committed to the "universally
accepted view that we have built-in neurological 'wiring' for language
acquisition".

'as

[1] http://cogprints.ecs.soton.ac.uk/archive/00001148/00/chomsky.htm
[2] http://citeseer.nj.nec.com/quartz97neural.html

post scriptum for the linguistically inclined:

Apart from learning theory and neurological studies, another nativist line of
defense is the demonstration of so-called "universals" that hold across all
languages, many of which are deemed to be functionally arbitrary (and hence
neutral alternatives in a non-nativist framework). Although again there is
quite a bit of controversy about the validity and interpretation of much of
the data, I find the intellectual appeal of many of these arguments and
observations quite undeniable.

This is maybe not the best example, but pay attention to the referents:

Mary is eager to please.

vs.

Mary is easy to please.

John promises Bill to wash him.
John promises Bill to wash himself.

vs.

John persuades Bill to wash him.
John persuades Bill to wash himself.

A nativist would say that the contrasted sentences are structurally
equivalent, so how is the learner supposed to implicitly derive when 'himself'
refers to the subject and when to the object of the sentence? Children are
never taught explicitly, and yet never seem to make certain kinds of mistakes
one would expect on the basis of examples like these.

Jochen Schmidt

unread,
May 22, 2003, 4:48:43 PM5/22/03
to
On 22 May 2003 02:31:06 -0700, Grzegorz Chrupala <grze...@pithekos.net>
wrote:

> Jochen Schmidt <j...@dataheaven.de> wrote in message

> news:<oprpi4hr...@news.btx.dtag.de>...
>
>> I disagree that the purpose and the way of use of human languages are
>> vastly different from those of programming languages. The purpose is to
>> communicate an idea - to other humans _and_ to the computer.
>> The way you use it is in dialog or through whole "documents".
>
> Well, maybe not *vastly* different, but telling a computer what to do
> and having a conversation with a human being are sufficiently
> different that most analogies will be misleading.

Programming languages are not only meant for communicating with computers.
Programs get read more often by humans than by machines.
What makes programming languages special is that they can be "understood"
by computers in a straightforward way.

>> Programming languages are better suited to describing typical programming
>> ideas than plain human language, because they were designed and/or have
>> grown to do so better.
>
> Agreed. And I happen to think that making programming languages
> context dependent or ambiguous or syntactically similar to human
> language would probably not make them any better suited to "describe
> typical programming ideas".

The "typical programming ideas" are a very fluid and quick changing thing.
Depending on what you want to accomplish you need to adapt your language to
your domain to be efficient. What you perceive statically as "human
language"
here doesn't make any domain topic easier to talk about than the right
domain
language. When creating such domain languages do you really claim that one
should stay away from concepts mainly known from "human languages"? Why?

>> Since the brain is indeed able to cope with context pretty well, the idea
>> of making use of this facility is not a bad one.
>
> The brain is able to cope with *a lot*. The question is:
> Is introducing context actually going to help humans learn and use CL?
> If there is a cost associated with context-dependent processing, then
> do its supposed benefits outweigh this cost?

Concepts like context allow humans to express programs with means they
already understand in their wetware. We already paid the bill - the
facility is already installed and gets used and trained on a daily
basis...

ciao,
Jochen

Matthew Danish

unread,
May 22, 2003, 6:18:12 PM5/22/03
to
On Thu, May 22, 2003 at 09:21:52PM +0100, Alexander Schmolck wrote:
> This is maybe not the best example, but pay attention to the referents:
>
> Mary is eager to please.
>
> vs.
>
> Mary is easy to please.

I find 'eager' to be a more 'active' term than 'easy', something that
Mary is actively doing rather than a passive description.

> John promises Bill to wash him.
> John promises Bill to wash himself.
>
> vs.
>
> John persuades Bill to wash him.
> John persuades Bill to wash himself.
>
> A nativist would say that the contrasted sentences are structurally
> equivalent, so how is the learner supposed to implicitly derive when 'himself'
> refers to the subject and when to the object of the sentence? Children are
> never taught explicitly and yet never seem to make certain kinds of mistakes
> one would expect on the basis of similar such examples.

Because the difference lies deeper than structural. It lies in the
meaning of the verbs.

John promises Bill to wash him.

meaning, at some future time

John washes him.

But if it were to be "John washes John" then you would normally use
'himself', so the ambiguity is resolved by choosing the other person.

Similarly,

John promises Bill to wash himself.

to

John washes himself.

Because the verb "to promise" implies that John will do something.

Whereas the verb "to persuade" implies that Bill is going to do
something.

John persuades Bill to wash him.

means that

Bill washes him.

And 'him' is resolved similarly to before.

I'm sure that linguists have thought of this difference before, and
there are probably better examples. I doubt the nativists are that
naive.

There's always the learn-by-example mode too. I think that I learned a
lot of English just by being exposed to it in books. I never knew
anything formal about grammar until I learned Spanish, and I figured out
how to form sentences oftentimes by picking out remembered phrases.
(This was called the "feels-right" school of grammar by one English
teacher).

Raffael Cavallaro

unread,
May 22, 2003, 7:46:23 PM5/22/03
to
Tim Bradshaw <t...@cley.com> wrote in message news:<ey3ptmb...@cley.com>...


> As I understand it there are some very good arguments against the
> tabula rasa position. In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this. So either they have
> some special wiring, or they do magic.

Indeed. This argument is known as "The Poverty of the Input," i.e.,
children are not exposed to enough examples to generate all of the
grammatical rules that they learn.

This is one of several reasons that no serious student of human
cognition holds the strong tabula rasa position any more.

Raf

Alexander Schmolck

unread,
May 22, 2003, 8:34:38 PM5/22/03
to
I'm rather tired now, but I'll try to answer your points before I leave for a
couple of days. I make no claims to special expertise on the topic.

Matthew Danish <mda...@andrew.cmu.edu> writes:
> > John promises Bill to wash him.
> > John promises Bill to wash himself.
> >
> > vs.
> >
> > John persuades Bill to wash him.
> > John persuades Bill to wash himself.
> >
> > A nativist would say that the contrasted sentences are structurally
> > equivalent, so how is the learner supposed to implicitly derive when 'himself'
> > refers to the subject and when to the object of the sentence? Children are
> > never taught explicitly and yet never seem to make certain kinds of mistakes
> > one would expect on the basis of similar such examples.
>
> Because the difference lies deeper than structural. It lies in the
> meaning of the verbs.

Yes -- but then it is precisely the meaning of the verb (promise/persuade)
that you are trying to learn, and this is, I hope we will agree, not made
easier by the fact that the correct determination of referents is not
possible just by understanding the structure of, or indeed all the other
words in, the sentence.

>
> John promises Bill to wash him.
>
> meaning, at some future time
>
> John washes him.
>
> But if it were to be "John washes John" then you would normally use
> 'himself', so the ambiguity is resolved by choosing the other person.
>
> Similarly,
>
> John promises Bill to wash himself.
>
> to
>
> John washes himself.
>
> Because the verb "to promise" implies that John will do something.
>
> Whereas the verb "to persuade" implies that Bill is going to do
> something.
>
> John persuades Bill to wash him.
>
> means that
>
> Bill washes him.
>
> And 'him' is resolved similarly to before.
>
> I'm sure that linguists have thought of this difference before, and there
> are probably better examples. I doubt the nativists are that naive.

While I am willing to take blame for my selection of the examples (and any
misrepresentation of their use in nativist arguments) I'd like to point out
that the first example, if I am not mistaken, originates from Chomsky (the
source of the citation is Ken Wexler's MIT Encyclopedia of Cognitive Science
entry on the Poverty of the Stimulus Argument). Since both examples are not
of my own making and, with slight variations, occur not infrequently in the
literature, any naivety obvious from the examples alone is indeed shared by
prominent nativists.


> There's always the learn-by-example mode too. I think that I learned a lot
> of English just by being exposed to it in books. I never knew anything
> formal about grammar until I learned Spanish, and I figured out how to form
> sentences often times by picking out remembered phrases. (This was called
> the "feels-right" school of grammar by one English teacher).

I am not quite sure what to make of this paragraph. The nativist argument is
precisely that you can't learn *just* by example (as the training input you
receive alone is by far not rich enough to deduce the rules that generated
this input, whether these rules be explicit or not), thus the claim that there
is always a learn-by-example mode, too, seems rather bizarre to me in my tired
condition.

'as

Jeff Caldwell

unread,
May 23, 2003, 12:09:13 AM5/23/03
to
Raffael Cavallaro wrote:
...

> In other words, we are continually forced to jump through unnatural
> mental hoops to make our ideas take the form of a mathematical
> algorithm (since this is what computer languages were originally
> designed to execute). This may work well for scientific calculations
> (hence the continued popularity of Fortran), but it really sucks for
> most other types of information processing.

...

It appears you agree that different languages are appropriate for
different knowledge domains.

> ... the "Software Crisis" will only be solved by enabling


> power users to write their own applications.

This is a re-hash of the rise of the spreadsheet and the appearance of
PCs in the accounting department. The statement is true, has been
proven true, and the phenomenon will continue to evolve. (Spreadsheet
programs can be viewed as a different language with a different user
interface, reinforcing the prior point.)

> I'm convinced that the
> real scarcity is not competent programmers, but domain expertise....
>
> How many professional software developers have
> the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> professional options trader, etc.?

How many Ph.D.'s in molecular biology have the equivalent computer
science domain knowledge of a Ph.D. in computer science? I think you
are saying more that, as spreadsheets became available for accountants,
something else will become available for molecular biologists and a
broad group of other people, and less that everyone must learn to
hold Ph.D.'s in computer science as well as in their own domain.

The workload done with spreadsheets did in fact lessen the workload
placed upon corporate IT departments. Those departments were overloaded
far beyond their ability to respond when spreadsheets appeared, and
spreadsheets and PCs were a good thing for the user departments. This
also allowed the IT departments to begin focusing more upon projects
affecting the larger enterprise rather than locally optimizing specific
departments.

The real impact upon corporate IT departments came when standard
packages, such as SAP, became widespread. IT became responsible more for
implementation and less for development. To some extent, I think of
SAP-like applications as a meta-super-spreadsheet language used by IT.

> Wouldn't it make more sense to
> develop compilers that were easier to work with, than to have coders
> acquire a half-baked, partly broken level of domain expertise for each
> new project they undertake?

Yes but spreadsheets can go only so far. I discuss more about this below.

> If you think that nestable block structure is necessary for the
> communication of complex ideas, then you're thinking in assembler and
> not in a natural language.

Most books have a table of contents. Most books are structured at least
into chapters. Many chapters are divided into sections and most chapters
and sections are divided into paragraphs. Most paragraphs are divided
into sentences. These structures define the boundaries of contextual
structures.

> But most software is not needed by human logicians. It is needed by
> human bankers, and human market traders, and human accountants, and
> human molecular biologists, and they all communicate quite well in
> natural language, modulo a sprinkling of domain specific notation.
>

A programmer, or more likely a team consisting of project managers,
software engineers, quality assurance personnel, documentation
specialists, and programmers, will work with those with domain expertise
to design and develop a language appropriate to a range of domains.
Consider this the invention of the spreadsheet for that domain range.
Only then will the banker/trader/accountant/biologist be empowered to
use their spreadsheet equivalent.

Yes, the efforts may gain leverage from each other. Yes the domain
ranges may be large or may grow over time.

...


> Better for whom? For ordinary people, there is ample proof that
> computer languages that more closely resemble natural language are
> "better" - they simply don't use languages that aren't sufficiently
> like natural languages at all. But lay people do use more
> natural-language-like languages such as AppleScript, and Smalltalk.
>

I disagree that true natural language will produce the results you seem
to claim. The law is written in natural language, with domain-specific
extensions and idioms. Look at the practice of law, the number of
lawyers and judges, and the disputes over fine points of meaning. How
many lines in a nation's constitution have been questioned and
reinterpreted, asking about the founders' intent and other factors? Is
there unambiguous meaning to complex law? Are ambiguous software
specifications to be trusted in domains such as banking?

Beyond a certain simplistic level, anything stated by a banker about
software desired by the banker quickly exceeds the banker's domain
knowledge about systems and system behavior. Locking strategies,
replication mechanisms, performance bottlenecks, concurrent behaviors...
the banker knows nothing about these. The banker must rely upon the
default behaviors provided by the underlying software. When more than
that is required, people with domain knowledge about locking strategies
and replication mechanisms must become involved. These people always
will be needed, although a trend of many years has these people
concentrated more in system-domain companies and less in
application-domain companies.

The brain's basic wiring for language enabled communication between
humans, providing a competitive survival edge in a given natural
environment. Saying that that mechanism is the ideal way to specify
Ph.D.-level computer science thought about machine behavior does not
seem like a strong claim to me.

To say that bankers can express application behaviors best through
natural language is true only to the extent that what they express does
not exceed their domain knowledge and to the extent that what they
express in natural language is not subject to multiple interpretations
such as we find even in the shortest legal documents such as
constitutions.

A banker using natural language to express desired machine behaviors
quickly will find repetitive expressions, begin to find them tedious,
and begin to look for shorter methods of expressing common patterns.
Over a period of years, if the compilers adapt to the banker's desire
for these less-tedious means of expression, the banker will end up with
a domain-specific language for expressing desired machine behaviors.

Ph.D. computer scientists, and others with machine and system domain
knowledge, today can use languages such as Lisp to build domain-specific
languages for expressing desired machine behaviors. These people build
domain-specific languages up from languages built to express machine
behaviors. Bankers ultimately may use natural language to build
domain-specific languages for expressing desired machine behaviors. They
will have built their domain-specific language down from natural language.

One idea is that when building large, complex financial systems, a
banker can express all the proper machine behaviors in natural language
without a systems person on the team. Another idea is that when building
large, complex banking systems, a systems person can express all the
proper financial behaviors in any language without a banker on the team.
Both ideas seem equally untenable to me. Sufficiently reduce the domain
knowledge required, and a banker can write a spreadsheet application and
a programmer can balance a checkbook.

Raffael Cavallaro

unread,
May 23, 2003, 8:28:25 AM5/23/03
to
Jeff Caldwell <jd...@yahoo.com> wrote in message news:<J9hza.718$H84.3...@news1.news.adelphia.net>...

> How many Ph.D.'s in molecular biology have the equivalent computer
> science domain knowledge of a Ph.D. in computer science?

You miss my point. With better (read, more natural-language-like)
computer languages, Ph.D.s in molecular biology wouldn't *need* the
equivalent computer science domain knowledge of a Ph.D. in computer science.
Only compiler writers would need this level of knowledge. Everyone
else would leverage it by using a better designed computer language.

> I think you
> are saying more that as spreadsheets became available for accountants,
> something else will become available for molecular biologists and a
> broad group of other people, and saying less that everyone must learn to
> hold Ph.D.'s in computer science as well as in their own domain.

Yup, now you're arguing my point. We need a natural-language-like
computer language that is the next step beyond spreadsheets, as it
were.


> Most books have a table of contents. Most books are structured at least
> into chapters. Many chapters are divided into sections and most chapters
> and sections are divided into paragraphs. Most paragraphs are divided
> into sentences. These structures define the boundaries of contextual
> structures.

Exactly. These are natural language structures that could be
*compiled* into nested block structures. But no ordinary person lays
out paragraphs as nested blocks. Their nesting (or lack of nesting, as
not all sequential paragraphs correspond to nested blocks) is
determined by such cue phrases as "alternatively," (read, here now, I
present a different branch).
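
To make the idea concrete, here is a toy sketch in Lisp - every name in it
is invented, and a real system would need far more linguistic machinery -
of a macro that recovers branch structure from a flat sequence of forms,
using ALTERNATIVELY as the cue word:

(defmacro narrative (&rest forms)
  "Recover COND-style branching from flat FORMS: the first form of
each stretch is its test, and the cue word ALTERNATIVELY starts a
new branch."
  (let ((branches '()) (current '()))
    (dolist (form forms)
      (if (eq form 'alternatively)
          (progn (push (nreverse current) branches)
                 (setf current '()))
          (push form current)))
    (push (nreverse current) branches)
    `(cond ,@(nreverse branches))))

;; (narrative (raining-p) (close-window)
;;            alternatively t (open-window))
;; expands to (cond ((raining-p) (close-window)) (t (open-window))),
;; where RAINING-P, CLOSE-WINDOW and OPEN-WINDOW are hypothetical.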


> A programmer, or more likely a team consisting of project managers,
> software engineers, quality assurance personnel, documentation
> specialists, and programmers, will work with those with domain expertise
> to design and develop a language appropriate to a range of domains.
> Consider this the invention of the spreadsheet for that domain range.
> Only then will the banker/trader/accountant/biologist be empowered to
> use their spreadsheet equivalent.

Or a more general purpose language will be developed that allows
people from different domains to write their own software. This will
be much more useful, affordable, and flexible than calling in a team
of software engineers, QA personnel, documentation specialists, and
programmers, for each new domain to receive its limited-extent,
domain-specific language. What's the failure rate of large, complex software
projects these days? And you expect domain experts to play those sorts
of odds just to get a limited-use language?

Better for the people with the greatest computer science expertise to
write a compiler for a general purpose, natural-language-like computer
language.


> I disagree that true natural language will produce the results you seem
> to claim. The law is written in natural language, with domain-specific
> extensions and idioms. Look at the practice of law, the number of
> lawyers and judges, and the disputes over fine points of meaning. How
> many lines in a nation's consitution have been questioned and
> reinterpreted, asking about the founder's intent and other factors? Is
> there unambiguous meaning to complex law? Are ambiguous software
> specifications to be trusted in domains such as banking?


Did you know that in Europe, most business that requires gangs of
lawyers and binders full of contracts here in the US is transacted
with a two-paragraph letter of intent and a handshake? The broken
complexity of the US legal system is not a necessary feature of legal
systems, nor of legal language. It is a product of a guild working to
make its services indispensable (remember, Congress is composed mostly
of lawyers, so all the laws are written by guild members). Rather like
IT people and programmers working to make computer use and computer
programming harder than it needs to be in order to maintain the IT
priesthood, to which all users must supplicate.

US law, and Common Law traditions in general, are *intentionally*
ambiguous, since they rely on precedent (i.e., previous case decisions
hold as much importance in how a judge will rule as what is actually
written in the legal code). I am not claiming that intentionally
ambiguous language can magically be made unambiguous. Merely that
ordinary domain experts can express themselves unambiguously when it
is needed, especially with the help of decent compiler warnings and
error messages.

>
> Beyond a certain simplistic level, anything stated by a banker about
> software desired by the banker quickly exceeds the banker's domain
> knowledge about systems and system behavior. Locking strategies,
> replication mechanisms, performance bottlenecks, concurrent behaviors...
> the banker knows nothing about these. The banker must rely upon the
> default behaviors provided by the underlying software. When more than
> that is required, people with domain knowledge about locking strategies
> and replication mechanisms must become involved.

But at what level? I'm saying that they only need to be involved at
the compiler-writing level. The banker simply specifies that he's
dealing with a transaction, and the compiler generates all the
necessary locking strategies, etc., from that context, namely, that of
a transaction.
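
Sketched in Lisp terms - purely illustrative, with WITH-TRANSACTION,
ACQUIRE-LOCK, RELEASE-LOCK, MAKE-LOCK and *LEDGER-LOCK* all hypothetical
stand-ins for whatever the underlying system provides - a macro is what
supplies the systems knowledge:

(defvar *ledger-lock* (make-lock))   ; MAKE-LOCK is a stand-in, too

(defmacro with-transaction (&body body)
  ;; The domain expert writes only BODY; the locking discipline is
  ;; generated for them, and released even on a non-local exit.
  `(progn
     (acquire-lock *ledger-lock*)
     (unwind-protect (progn ,@body)
       (release-lock *ledger-lock*))))

;; The banker's code might then read:
;; (with-transaction
;;   (debit account-a 100)
;;   (credit account-b 100))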

Your only valid argument here is performance. But Moore's law will
take care of that for most cases (no premature optimization, please).
There will probably always exist domains where real programmers will
need to tune for performance, but this is much easier to do when the
specification is a *working program* written by the domain experts.


> The brain's basic wiring for language enabled communication between
> humans providing a competitive survival edge in a given natural
> environment. Saying that that mechanism is the ideal way to specify
> Ph.D.-level computer science thought about machine behavior does not
> seem like a strong claim to me.

It's not a claim I made. I claim that computer languages can be made
more like natural languages, and the result would broaden the range of
domain experts who could write functioning software systems for
themselves. Would these systems be optimized for best CPU/memory/mass
storage use? No, probably not. But in most cases, that wouldn't
matter. In those few cases where such performance issues did matter,
the domain expert could call in a real software engineer to tune the
already correct, but slow/memory pig/disk hog program for better
performance.

> To say that bankers can express application behaviors best through
> natural language is true only to the extent that what they express does
> not exceed their domain knowledge and to the extent that what they
> express in natural language is not subject to multiple interpretations
> such as we find even in the shortest legal documents such as
> constitutions.


This is a red herring. Bankers, and other non-computer-scientists, are
perfectly capable of specifying things unambiguously when they know
that it is necessary. This process would be aided by *useful* compiler
messages, specifying what is ambiguous, and possible interpretations,
allowing the user to specify a specific, unambiguous alternative, that
would then become the saved version.


> A banker using natural language to express desired machine behaviors
> quickly will find repetitive expressions, begin to find them tedious,
> and begin to look for shorter methods of expressing common patterns.

People already do this with spreadsheet formulas. There's no reason to
believe that they wouldn't generalize this to methods, and modules of
functionality which would be regularly re-used.

> Over a period of years, if the compilers adapt to the banker's desire
> for these less-tedious means of expression, the banker will end up with
> a domain-specific language for expressing desired machine behaviors.

This would be nice, but is a step beyond even what I am advocating.
Having an adaptive compiler would be nice, but let's get one that is
merely more natural-language-like first.


> One idea is that when building large, complex financial systems, a
> banker can express all the proper machine behaviors in natural language
> without a systems person on the team. Another idea is that when building
> large, complex banking systems, a systems person can express all the
> proper financial behaviors in any language without a banker on the team.
> Both ideas seem equally untenable to me. Sufficiently reduce the domain
> knowledge required, and a banker can write a spreadsheet application and
> a programmer can balance a checkbook.

But look at the historical trend that you yourself have acknowledged;
in the future, do you think that we'll be moving in the direction of
teams with fewer systems people (like a banker with a language that is
simpler, yet more flexible than current spreadsheets), or teams with
fewer domain experts (like a cube farm full of programmers trying to
implement a banking system with no bankers to guide them)?

I think it's clear that the former scenario is one that I'll see in my
lifetime, and that if I ever hear about the latter, I'll know it's time
to dump stock in that bank.

Karl A. Krueger

unread,
May 23, 2003, 10:04:29 AM5/23/03
to
Raffael Cavallaro <raf...@mediaone.net> wrote:
> Jeff Caldwell <jd...@yahoo.com> wrote in message news:<J9hza.718$H84.3...@news1.news.adelphia.net>...
>> How many Ph.D.'s in molecular biology have the equivalent computer
>> science domain knowledge of a Ph.D. in computer science?
>
> You miss my point. With better (read, more natural-language-like)
> computer languages, Ph.D.s in molecular biology wouldn't *need* the
> equivalent computer science domain knowledge of a Ph.D. in computer science.

Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
example being used of a domain-specific "programming language") are any
more akin to a natural language than are ordinary programming languages.

Spreadsheets don't have to be like -natural- languages to be easier for
accountants. They have to be more like the notation that evolved
specifically to handle accountancy, the domain-specific conlang as it
were: ledger books. And so they are. They visually resemble printed
ledgers, and easily support operations that make sense in a ledger, like
"sum this column" or "let these values here be 106.5% of those values
over there".

The analogue of spreadsheets in a given domain would be a programmable
system with support for that domain's specific notation and operations
-- as, say, computer algebra systems offer for mathematics. This would
only resemble natural language insofar as the domain lends itself to
same: accountants' columns of figures do not look much like English to
me.

--
Karl A. Krueger <kkru...@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped. s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews

Raffael Cavallaro

unread,
May 23, 2003, 4:35:00 PM5/23/03
to
"Karl A. Krueger" <kkru...@example.edu> wrote in message news:<bal9pd$4s4$1...@baldur.whoi.edu>...


> Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
> example being used of a domain-specific "programming language") are any
> more akin to a natural language than are ordinary programming languages.

Which is why they are a domain specific solution, not a general
purpose one.

This is the big picture problem. Software engineers keep crafting
either:

1. domain specific, user friendly solutions, like spreadsheets, or CAD
packages, or...
2. general purpose languages that only software engineers can really
use effectively.

What we need are general purpose computer languages that are also user
friendly. When it comes to languages, "user friendly" means
natural-language-like.

>
> Spreadsheets don't have to be like -natural- languages to be easier for
> accountants.

But not everyone who needs software is an accountant, or an architect
(CAD packages), etc.

> The analogue of spreadsheets in a given domain would be a programmable
> system with support for that domain's specific notation and operations
> -- as, say, computer algebra systems offer for mathematics. This would
> only resemble natural language insofar as the domain lends itself to
> same: accountants' columns of figures do not look much like English to
> me.

You're thinking in a domain-specific-solution way. This is bound to
fail, because each new domain will require its own unique, mutually
incompatible, domain-specific language. Unless your needs fall
precisely into that particular realm, and do not extend beyond it in
any way, you lose. Better to craft a general purpose,
natural-language-like computer language that all the specific domains
can use. As new application domains arise, a general purpose language
can be turned to those tasks, but domain-specific solutions are
unlikely to be flexible enough to be useful.

Matthew Danish

unread,
May 23, 2003, 4:54:07 PM5/23/03
to
On Fri, May 23, 2003 at 01:35:00PM -0700, Raffael Cavallaro wrote:
> When it comes to languages, "user friendly" means
> natural-language-like.

Where has this been proven? I don't think that this is the case at all.
What will end up happening is this: Joe User will type out a sentence
expecting it to have X behavior. Jill User will type out the same
sentence expecting it to have Y behavior. If X is not Y, then one of
them will be very surprised. And natural language leaves so much
ambiguity, normally, that this is bound to happen. And eliminating
ambiguity from natural language will just give you a stiff, difficult
language which is more akin to an overly verbose formal language than
anything a human might use for day-to-day conversation.

A true natural language interface requires, in my opinion, artificial
intelligence in order to be usable. Without that, it will be entirely
too frustrating for any user who thinks that they can pretend to be
speaking to another human being. And if they don't think that way, then
what is the point of being natural-language-like?

Logicians went through this over a century ago, when Frege published
`Begriffsschrift'. They arrived at precisely the opposite conclusion
to the one you have.

Jeff Caldwell

unread,
May 23, 2003, 11:25:43 PM5/23/03
to
Raffael Cavallaro wrote:
...

> We need a natural-language-like
> computer language that is the next step beyond spreadsheets, as it
> were.
...

> Having an adaptive compiler would be nice, but let's get one that is
> merely more natural-language-like first.

Please try to convince a musician that he or she would be better off
writing and reading music in English or Chinese rather than in musical
notation.

The accountants I know would be angry if you tried to force them to
construct their spreadsheets in English. Take a complex spreadsheet,
write out its specifications in unambiguous English, show the result to
an accountant and try to convince her that she should begin entering all
her spreadsheets your new way.

> But no ordinary person lays out paragraphs as nested blocks.

A paragraph is a nested block of language, but that was made clear in my
original post.

> Or a more general purpose language will be developed that allows
> people from different domains to write their own software. This will
> be much more useful, affordable, and flexible than calling in a team
> of software engineers, QA personnel, documentation specialists, and
> programmers,

Programmers need software engineers, QA personnel, and documentation
specialists, but bankers won't! When machines can parse bankers' natural
language, the banker's software will be well designed, bug-free, fully
documented, and comprehensive enough to run their entire enterprise, no
matter how large! With no QA! Or documentation! Or SE's! It will all
work! Because the compiler will be so smart! And the program will be the
documentation! Much better than those lousy programmers who need all
that extra support!


Karl A. Krueger

unread,
May 24, 2003, 2:19:22 AM5/24/03
to
Raffael Cavallaro <raf...@mediaone.net> wrote:
> You're thinking in a domain-specific-solution way. This is bound to
> fail, because each new domain will require its own unique, mutually
> incompatible, domain specific language. Unless your needs fall
> precisely into that particular realm, and do not extend beyond it in
> any way, you lose.

I am clearly confused. It seems to me, though, that every program is a
specificity, a selection of function for some purpose. It also seems to
me that programs that are to serve a particular domain must of necessity
incorporate domain-specific knowledge. They would not be very useful if
they did not.

The example of a spreadsheet reminds me of the warlord of Wu and his
question:

http://www.canonical.org/~kragen/tao-of-programming.html#book3

Kenny Tilton

unread,
May 24, 2003, 10:16:35 AM5/24/03
to

Jeff Caldwell wrote:
> Raffael Cavallaro wrote:
> ...
> > We need a natural-language-like
> > computer language that is the next step beyond spreadsheets, as it
> > were.
> ...
> > Having an adaptive compiler would be nice, but let's get one that is
> > merely more natural-language-like first.
>
> Please try to convince a musician that he or she would be better off
> writing and reading music in English or Chinese rather than in musical
> notation.

Why so negative, everyone? We already have one profession that has a
natural-language-like way to express requirements unambiguously -- the
legal profession. This is why contracts are so easy to write and read
and never end up in court... ok, never mind.

Now there's an idea. Go the other way, write contracts as programs. Run
a kazillion eventual futures thru them, make sure the outcomes are
acceptable.

--

kenny tilton
clinisys, inc
http://www.tilton-technology.com/
---------------------------------------------------------------
"Everything is a cell." -- Alan Kay

Marc Spitzer

unread,
May 24, 2003, 12:15:34 PM5/24/03
to
Kenny Tilton <kti...@nyc.rr.com> writes:

>
> Now there's an idea. Go the other way, write contracts as
> programs. Run a kazillion eventual futures thru them, make sure the
> outcomes are acceptable.

The lawyers would never ever go for it. Way too expensive; if it
worked, think of all the billed hours they would lose.

marc

Matthew Danish

unread,
May 24, 2003, 2:01:23 PM5/24/03
to
On Sat, May 24, 2003 at 03:25:43AM +0000, Jeff Caldwell wrote:
> When machines can parse bankers' natural language, the banker's
> software will be well designed, bug free, fully documented, and
> comprehensive enough to run their entire enterprise, no matter how
> large! With no QA! Or documentation! Or SE's!

Or bankers.

Eduardo Muñoz

unread,
May 24, 2003, 6:43:56 PM5/24/03
to

* Kenny Tilton <kti...@nyc.rr.com>

| Now there's an idea. Go the other way, write contracts as
| programs. Run a kazillion eventual futures thru them, make sure the
| outcomes are acceptable.

Already done (sort of), see:
http://article.gmane.org/gmane.comp.lang.lightweight/1148


--
Eduardo Muñoz | (prog () 10 (print "Hello world!")
http://213.97.131.125/ | 20 (go 10))

Tim Bradshaw

unread,
May 25, 2003, 7:47:20 AM5/25/03
to
* Raffael Cavallaro wrote:

> What we need are general purpose computer languages that are also user
> friendly. When it comes to languages, "user friendly" means
> natural-language-like.

It's interesting that in domains where precision in language is
required, languages which are more-or-less artificial are used, even
when they are not interpreted by machines: for instance, air traffic
control, legal terminology, and so on.

--tim

Kent M Pitman

unread,
May 25, 2003, 1:02:10 PM5/25/03
to
Tim Bradshaw <t...@cley.com> writes:

It makes sense though.

Natural language is designed in distributed fashion and so it's hard
to make sure that it satisfies any particular requirement of
consistency or lack of ambiguity. It is adapted dynamically by changing
bits and pieces in ways that suit individual speakers or sets of speakers,
without regard to the parts of the language, not used by those speakers,
that might be adversely affected.

It also seems unlikely that this phenomenon could lead to anything but chaos
if it were not governed at least to some extent by underlying rules that
limit the "possible languages" or provide incentives for converging on
certain features that work well for human listeners.

For example, I recall a linguistics class in which someone reported on a
paper about "infixing" in English. Almost all of English compounding uses
suffixes and prefixes, and words are almost never inserted in the middle of
another word ("infixing"). The only exceptions are "a whole 'nother" which
is probably the result of some internal confusion about whether "another"
is a single word or two words and the infixing of foul language in words
(as in "in- f*cking -credible" or "fan- f*cking -tastic"). Curiously,
though, study of these shows that even there, in the peak of anger, with
no grammar teachers looking over, people don't just arbitrarily inject
these words into words. They carefully place them before the stressed
syllable because they have intuitive senses of what works best and what
does not.

So when people using natural languages go and create several or even
dozens of meanings of the same word (according to context), especially
without a resultant "mass [webster.com entry #4, defintion 1b] public
[entry #1, def 1a] outcry [def 1b]", one is led to believe that this
is on the intuitively "approved list" of language building tools.

When people finally try to coordinate their actions, it makes sense that
the rules need to be tightened and _actual_ ambiguity goes out of the
system as much as possible.

The point, though, is that the potential for ambiguity can be removed
in any of several ways, but for some reason the first and only impulse
of certain people is to say that there is only one way. A language
could have just one definition for each word (and so 3 or 10 times
as many words if the language is as dense as English); one could have
separated namespaces as human languages do; or one could do like CLHS
and italicize/subscript words that are thought to be ambiguous to require
special meaning clarification. There are lots of ways of doing it.
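
A small illustration of the namespace option, in Common Lisp, where one
name denotes both a variable and a function at once, disambiguated purely
by syntactic position:

(let ((list '(1 2 3)))   ; LIST, the variable
  (list list list))      ; LIST, the function
;; => ((1 2 3) (1 2 3))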

I claim it's not a coincidence that many of the champions of a Lisp1
are also the ones claiming there's only one way to do things. That
is, the overarching assumption seems to me to be that you can have a single
right way to do things, and that you can get away without respecting
or tolerating others, and Lisp1 seems to be only one of many
manifestations of this. Lisp2 people not only tolerate a second
namespace, but also are not troubled that Lisp1 people are off happily
programming in Lisp1. We don't say that it's unnatural for them to do
as they're doing because we know there are multiple ways of doing things
and ours is just one way. However, we routinely have to tolerate the
other camp nosing into our world and telling us that it's unnatural,
because the missing skill is, I think, not one of a specific linguistic
pattern, but more generally one of just plain tolerance.

Erann Gat

unread,
May 25, 2003, 5:57:42 PM5/25/03
to
In article <sfwu1bj...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> we routinely have to tolerate the
> other camp [Lisp-1 advocates] nosing into our world
> and telling us that it's unnatural

I've been reading c.l.l. for a very long time, and my impression is that
the complaints about "the other camp nosing into our world" are much more
common than the actual event. Since you say this is a routine occurrence,
would you mind citing a few recent examples?

E.

Kent M Pitman

unread,
May 25, 2003, 10:23:26 PM5/25/03
to
g...@jpl.nasa.gov (Erann Gat) writes:

> I've been reading c.l.l. for a very long time, and my impression is that
> the complaints about "the other camp nosing into our world" are much more
> common than the actual event. Since you say this is a routine occurrence,
> would you mind citing a few recent examples?

Fights over lisp1/lisp2 are common although this is a long-settled issue.
They usually come from people surprised that lisp2 is possible, and I
perceive that such confusion must come from Scheme since a great many
other common languages routinely tolerate multiple namespaces without
comment.

By contrast, I've never been directed to a single message on comp.lang.lisp
agitating for it to be a lisp2.

Raffael Cavallaro

unread,
May 26, 2003, 12:15:57 AM5/26/03
to
Matthew Danish <mda...@andrew.cmu.edu> wrote in message news:<20030523205...@lain.cheme.cmu.edu>...

> On Fri, May 23, 2003 at 01:35:00PM -0700, Raffael Cavallaro wrote:
> > When it comes to languages, "user friendly" means
> > natural-language-like.
>
> Where has this been proven? I don't think that this is the case at all.

Jochen Schmidt has your answer:

"Concepts like context allow humans to express programs with means
they
already understood in their wetware. We already paid the bill - the
facility is already installed and gets used and trained on a daily
base..."

In other words, user friendly means leveraging facilities the user
already has. Human users already have an extraordinarily complex
natural language facility, evolved over tens of thousands (if not
hundreds of thousands) of years. It is foolish not to leverage it in
designing a general purpose computer language.

Raffael Cavallaro

unread,
May 26, 2003, 12:20:23 AM5/26/03
to
Tim Bradshaw <t...@cley.com> wrote in message news:<ey3smr3...@cley.com>...

> It's interesting that in domains where precision in language is
> required, languages which are more-or-less artificial are used, even
> when they are not interpreted by machines.

You're confusing a specialized written notation with an artificial
language. Listen to a conversation between two accountants - they do
*not* speak an artificial language to each other, just a natural
language with some domain-specific terminology.

Michael Park

unread,
May 26, 2003, 1:51:51 AM5/26/03
to
Kent M Pitman <pit...@world.std.com> wrote:

> Fights over lisp1/lisp2 are common although this is a long-settled issue.


Saeed al-Shahhhaf <sa...@information.gov.iq> wrote

> We chased them... and they ran away...

I've been following this discussion very closely, but I haven't seen a
single valid argument in favor of Lisp2. Maybe I'm thick or something.
The only arguments I've seen were in the "it's not as bad as everyone
thinks", "you can get used to it if you really try" and "he made me do
it" categories.

P.S. This is a troll. Thanks for noticing.

Jeff Caldwell

unread,
May 26, 2003, 11:18:56 AM5/26/03
to
Raffael,

Perhaps if I saw an example of what you are driving at, I could begin to
understand it. Would you please translate the following Lisp code into
the kind of natural language program you are describing? I realize we
don't have the compiler you desire but perhaps one day the natural
language program you post here could be the first one compiled by that
compiler.

I often listen to conversations between programmers. My observation is
that the ideas expressed in code are initiated and organized by the
conversations but are not the same as the ideas expressed in the
conversations. Two different tools for two different purposes. I would
hate to have to use Lisp to talk with my co-workers and I don't yet see
how it would be good for me to specify programs in natural language. I
do agree there is a place for what you describe but I don't think it
will replace, for example, Lisp.

This code is from c.l.l. Fred Gilham posted it on 2003-05-09 and said it
came from Christian Jullien in 2000.

(defconstant Y-combinator
  (lambda (f)
    (funcall (lambda (g) (funcall g g))
             (lambda (x) (funcall f (lambda () (funcall x x)))))))

(defun yfib (n)
  (let ((fib (funcall Y-combinator
                      (lambda (fg)
                        (lambda (n)
                          (cond ((= n 1) 1)
                                ((= n 2) 1)
                                (t (+ (funcall (funcall fg) (- n 1))
                                      (funcall (funcall fg) (- n 2))))))))))
    (funcall fib n)))

Paolo Amoroso

May 26, 2003, 12:02:00 PM
[followup posted to comp.lang.lisp only]

On 25 May 2003 22:51:51 -0700, dont_...@whoever.com (Michael Park)
wrote:

> Kent M Pitman <pit...@world.std.com> wrote:
>
> > Fights over lisp1/lisp2 are common although this is a long-settled issue.

[...]


> I've been following this discussion very closely, but I haven't seen a
> single valid argument in favor of Lisp2. Maybe I'm thick or something.

In the early 1980s, a number of Lisp folks started gathering together to
design a new Lisp dialect, which was later known as ANSI Common Lisp.

One of the most important things all those guys did was agree on a
voting-based process for deciding which features would be included in the
language, and which would not. By using that process, they decided that
the new language would be a Lisp-2 (more correctly, a Lisp-n)
dialect.

That's it. It's really that simple.

Everybody who doesn't like the outcome of this standardization process, or
who doesn't accept any kind of process for deciding which features should
go in a language, is welcome not to use ANSI Common Lisp, and maybe
design/use his favorite Lisp-1 dialect.


> P.S. This is a troll. Thanks for noticing.

You may want to listen to "New Trolls" music:

http://www.aldebarannet.com


Paolo
--
Paolo Amoroso <amo...@mclink.it>

Donald Fisk

May 26, 2003, 1:54:38 PM
Raffael Cavallaro wrote:

> Why? Because the "Software Crisis" will only be solved by enabling
> power users to write their own applications. I'm convinced that the
> real scarcity is not competent programmers, but domain expertise. Many
> people can learn to code. Very few people have the domain expertise to
> code the right thing. Acquiring domain expertise in many fields that
> need a great deal of software is far more difficult than learning to
> program competently. How many professional software developers have
> the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> professional options trader, etc. Wouldn't it make more sense to
> develop compilers that were easier to work with, than to have coders
> acquire a half baked, partly broken level of domain expertise for each
> new project they undertake?

No. The easier a programming language is to use, the lower
the quality of its average programmers (cf Dijkstra's comments
on Cobol and Basic, both optimized for inexperienced programmers).
Programming languages should be optimized for good programmers, and
if that means making things difficult for the rest, so be it.

You sometimes need to call in specialist experts. If I ever need
to do genetic engineering, I'll hire an expert in that area rather
than expect it to have been dumbed down to a level that I can
understand without years of study. Similarly, I wouldn't
represent myself in a court of law, or conduct a survey of a
house I intend to buy.

Finally, there is the issue of whether computer programs are,
in general, expressible in anything even remotely resembling
English. I doubt it, unless you mean something like Cobol.
In many other fields, the inadequacy of English to express
instructions precisely has been recognized, and that's why we have
formal notations for mathematics, music, choreography and knitting.

> But most software is not needed by human logicians. It is needed by
> human bankers, and human market traders, and human accountants, and
> human molecular biologists, and they all communicate quite well in
> natural language, modulo a sprinkling of domain specific notation.

Yes, but they have brains, culture and an education. Computers
are high-speed idiots with no common sense, and there's no prospect
of that changing any time soon. Of course, it would be nice if
computers /were/ made more intelligent. Perhaps some psychologists
could just write down how to do this in plain English, and feed it
into a computer.

:ugah179
--
"I'm outta here. Python people are much nicer."
-- Erik Naggum (out of context)

Adrian Kubala

May 26, 2003, 2:19:30 PM
On 25 May 2003, Kent M Pitman wrote:
> So when people using natural languages go and create several or even
> dozens of meanings of the same word (according to context), especially
> without a resultant "mass [webster.com entry #4, definition 1b] public
> [entry #1, def 1a] outcry [def 1b]", one is led to believe that this
> is on the intuitively "approved list" of language building tools.

I'm one of those scheme users who finds lisp2 confusing, but I think the
source of my (and others') confusion is misrepresented here, so I'll try
and clarify it.

Separate namespaces for /names/, such as one finds in Java, where methods
and fields are different, is not confusing. What I find confusing is
seperate namespaces for /values/. I find it non-intuitive, that if I pass
one kind of value in a variable, I have to use it differently than if I
passed another kind of value.

The correct analogy in English would be illustrated by the following:

Flying is fun. I wish I could do *it*.
as opposed to
That camera is nice. I wish I had *it*.

In English, pronouns must take a noun-ified version of a verb, like the
gerund, so we have to add a "do" when we want to use it in a verb-like
way. You might call this closer to lisp2 than lisp1, though it's clearly
quite different from either. I suppose it's like Elisp, where you pass the
symbol name of the function as a value and then funcall it.

Peter Seibel

May 26, 2003, 2:59:56 PM
Adrian Kubala <adr...@sixfingeredman.net> writes:

> On 25 May 2003, Kent M Pitman wrote:
> > So when people using natural languages go and create several or even
> > dozens of meanings of the same word (according to context), especially
> > without a resultant "mass [webster.com entry #4, definition 1b] public
> > [entry #1, def 1a] outcry [def 1b]", one is led to believe that this
> > is on the intuitively "approved list" of language building tools.
>
> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.
>
> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> separate namespaces for /values/. I find it non-intuitive that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

Do you find it non-intuitive that we have to use the value of x
differently in the following two functions?

(defun foo (x) (car x))

(defun foo (x) (+ x 10))

If not, then why is this so different?

(defun foo (x) (funcall x))

Just a bit of food for thought.

-Peter

--
Peter Seibel pe...@javamonkey.com

Lisp is the red pill. -- John Fraser, comp.lang.lisp

Matthew Danish

May 26, 2003, 5:03:42 PM
On Mon, May 26, 2003 at 01:19:30PM -0500, Adrian Kubala wrote:
> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.
>
> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> separate namespaces for /values/. I find it non-intuitive that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

This confusion smells of what I have long suspected is the root of
Scheme users' confusion with Lisp-2.

In Scheme, when a list is evaluated, every element of the list is first
evaluated and then the value of the first element is applied with the
values of the rest of the elements.

The key to eliminating the confusion is to understand that in Common
Lisp this is not the case. In Common Lisp, when a list is evaluated,
the first element is not evaluated, but taken to be a special operator
name, macro name, function name or lambda expression.

If it is a function name or lambda expression, the rest of the elements
are evaluated in left-to-right order and the function designated by the
first element is applied to them.

The argument to the FUNCTION special operator is also expected to be a
function name or lambda expression.

Thus there is nothing `different' about function values. It is just
that the first element of a list to be evaluated is not treated the same
as it is in Scheme.

Examples:

(let ((f 1))
  (flet ((f (x) x))
    (f f)))

The first element of (f f) is considered to be a function designator and
is looked up in the function namespace. The second element of (f f) is
evaluated in the `usual' manner and looked up in the lexical variable
namespace.

(defun f (g)
  (funcall g))

When F is called with a function value, FUNCALL is looked up in the
function namespace, and G is looked up in the lexical variable
namespace, thus allowing you to call functions which are stored in
variables.

((lambda (x) x) 1)

Lambda expressions are also permitted.
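
A further minimal sketch of the two namespaces at work (SIZE is a made-up
name):

(defun size (x) (length x))   ; SIZE in the function namespace
(let ((size 10))              ; SIZE in the lexical variable namespace
  (size (make-list size)))    ; => 10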

Chapter 3.1.2.1.2 of the Hyperspec has all the details.


[1] Actually, it may be a special operator name, macro name, function
name, or lambda expression.

Harald Hanche-Olsen

May 26, 2003, 5:35:32 PM
+ Adrian Kubala <adr...@sixfingeredman.net>:

| I'm one of those scheme users who finds lisp2 confusing, but I think the
| source of my (and others') confusion is misrepresented here, so I'll try
| and clarify it.
|
| Separate namespaces for /names/, such as one finds in Java, where
| methods and fields are different, is not confusing. What I find
| confusing is separate namespaces for /values/. I find it
| non-intuitive, that if I pass one kind of value in a variable, I
| have to use it differently than if I passed another kind of value.

I agree that it can be confusing if you think of them as values. The
proper use of metaphors can go a long way to relieve confusion: Think
instead of verbs versus nouns.

| The correct analogy in English would be illustrated by the following:
|
| Flying is fun. I wish I could do *it*.
| as opposed to
| That camera is nice. I wish I had *it*.

I think a better analogy is this:

Time flies like an arrow.
Fruit flies like a banana.

In the first sentence "flies" is a verb. In the second, it is a
noun. The example is confusing because the sentence structure is
superficially the same in both sentences. Normally, in English, word
order and sentence structure will disambiguate (what a wonderful
phrase!) such words easily. The same thing happens in lisp, where the
typical "sentence" looks like (verb noun noun ...).

In English any noun can be verbed. This cannot be done in Lisp
(storing a non-function in a function slot), but instead verbs can be
nouned, i.e., a function can be stored in a value slot. It is in
these exceptional circumstances that you need a special mechanism
(funcall or apply) in order to use the value as a function, instead of
just passing it around like any other value.
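
A minimal sketch of a verb being nouned and then verbed again:

(let ((double (lambda (x) (* 2 x))))  ; a function stored in a value slot
  (funcall double 21))                ; => 42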

BTW, I don't recommend that you adopt this terminology of verbs and
nouns when talking about Lisp programs - it will just confuse the heck
out of fellow programmers. But it can be a useful way to talk to
yourself until you have internalized the way the concepts work.

This is like other unorthodox aspects of Lisp in that it seems quite
confusing in theory, but is much less so in practice. You typically
have to come up with some pretty contrived example in order to manage
to get really confused by it.

--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- Debating gives most of us much more psychological satisfaction
than thinking does: but it deprives us of whatever chance there is
of getting closer to the truth. -- C.P. Snow

Coby Beck

May 26, 2003, 7:22:59 PM

"Jeff Caldwell" <jd...@yahoo.com> wrote in message
news:AfqAa.2729$H84.1...@news1.news.adelphia.net...

> Raffael,
>
> Perhaps if I saw an example of what you are driving at, I could begin to
> understand it. Would you please translate the following Lisp code into
> the kind of natural language program you are describing? I realize we
> don't have the compiler you desire but perhaps one day the natural
> language program you post here could be the first one compiled by that
> compiler.

LOL! Such a friendly tone, you really had me going! Then I see your
code...

Just to chime in with a high level overview: I think it is, as always, a
question of balance. We need specialized language and syntactic structures
to communicate with a computer, for sure, but we should use to their full
advantage all the instinctual or genetic abilities that allow us to
communicate with each other. Don't forget this whole thing started with the
still-present subject line "Lisp-2 or Lisp-1". I don't think anyone claims
CL is modelled on natural language, but I and others claim separate
namespaces are good because they reflect a property of natural language.

> This code is from c.l.l. Fred Gilham posted it on 2003-05-09 and said it
> came from Christian Jullien in 2000.
>
> (defconstant Y-combinator
>   (lambda (f)
>     (funcall (lambda (g) (funcall g g))
>              (lambda (x) (funcall f (lambda () (funcall x x)))))))
>
> (defun yfib (n)
>   (let ((fib (funcall Y-combinator
>                       (lambda (fg)
>                         (lambda (n)
>                           (cond ((= n 1) 1)
>                                 ((= n 2) 1)
>                                 (t (+ (funcall (funcall fg) (- n 1))
>                                       (funcall (funcall fg) (- n 2))))))))))
>     (funcall fib n)))
>

Though I believe the above is mostly rhetorical, I will take a stab:

define function fib on n
  if n is 1 or 0
  then return 1
  otherwise return fib of (n - 2) + fib of (n - 1).

which can be nicely abbreviated as:
(defun fib (n)
  (if (or (= 0 n) (= 1 n))
      1
      (+ (fib (1- n)) (fib (- n 2)))))

If you really want it in the form you present above, well, write it
yourself. Most programming tasks in The Real World are only about getting
something done, not specifically how to do it.

So what's my point? Such a simple and purely mathematical example does not
tell us much about where we need to go from here.

A better sample might be:

Make an interface to the customer table that allows updates only on public
details and read privileges on all attributes. Use all of the default
color and widget schemes. Make it accessible to members of the Data Entry
group. Link to main menu.

Want something more succinct? Very easy, I won't even bother to rewrite it.

Why can't this be part of a proper program? This is the kind of "crank it
out" code that is reinvented thousands of times per day that can and will be
replaced by higher and higher level constructs. These are the constructs
that will benefit from natural language properties like disambiguation from
context.

If you need more control, just drop down to a lower level of code and embed
it. Just like you can embed assembler in some Lisps. Language design is
all a question of balance.

--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")


Coby Beck

May 26, 2003, 7:27:06 PM

"Donald Fisk" <hibou000...@enterprise.net> wrote in message
news:3ED254DE...@enterprise.net...

> Raffael Cavallaro wrote:
>
> > Why? Because the "Software Crisis" will only be solved by enabling
> > power users to write their own applications. I'm convinced that the
> > real scarcity is not competent programmers, but domain expertise. Many
> > people can learn to code. Very few people have the domain expertise to
> > code the right thing. Acquiring domain expertise in many fields that
> > need a great deal of software is far more difficult than learning to
> > program competently. How many professional software developers have
> > the equivalent domain knowledge of a Ph.D. in molecular biology, or a
> > professional options trader, etc. Wouldn't it make more sense to
> > develop compilers that were easier to work with, than to have coders
> > acquire a half baked, partly broken level of domain expertise for each
> > new project they undertake?
>
> No. The easier a programming language is to use, the lower
> the quality of its average programmers (cf Dijkstra's comments
> on Cobol and Basic, both optimized for inexperienced programmers).
> Programming languages should be optimized for good programmers, and
> if that means making things difficult for the rest, so be it.

I don't see the issue as easy versus hard at all. I believe natural
language principles are a good thing to keep in our sights, not for ease of
use but for the increased density of information in the text we feed
to the compiler.

Kent M Pitman

May 26, 2003, 8:24:45 PM
Adrian Kubala <adr...@sixfingeredman.net> writes:

> I'm one of those scheme users who finds lisp2 confusing, but I think the
> source of my (and others') confusion is misrepresented here, so I'll try
> and clarify it.

This is ok. And I'm happy to discuss this on such terms. But what
I'm getting at is that this is something that confuses newcomers who
are initially pointed in the wrong direction. That's common with any
of a number of well-formed concepts. _Deep_ problems in a language
(or any domain) are better illustrated by the inability of an expert
to ever quite get things right, because of an architectural mismatch
that must be continually overcome, even in the steady state.

> Separate namespaces for /names/, such as one finds in Java, where methods
> and fields are different, is not confusing. What I find confusing is
> separate namespaces for /values/. I find it non-intuitive that if I pass
> one kind of value in a variable, I have to use it differently than if I
> passed another kind of value.

I think this is a point of view thing. I think your real problem is
not the separate namespace, which in your own terms is not used differently
at all, but the fact that function calling is not directly accessible on a
value using only primitive syntax.

> The correct analogy in English would be illustrated by the following:
>
> Flying is fun. I wish I could do *it*.
> as opposed to
> That camera is nice. I wish I had *it*.
>

Sure. Or sometimes the infinitive, too. For example,
"I want that camera."
"I want to fly."

> In English, pronouns must take a noun-ified version of a verb, like the
> gerund, so we have to add a "do" when we want to use it in a verb-like
> way.

Yes.

"That's something I want." (the camera)
"That's something I want do do." (flying / to fly)

> You might call this closer to lisp2 than lisp1, though it's clearly
> quite different from either.

The crisper case is probably:

"That fly is flying."

and

"I want that." (the fly)
"I want to do that." (the act of flying)

I'd certainly agree that here it's just an issue of naming. That is,
the notion of the unadorned "that" is acting at the syntax level to
refer to the previous noun, while the phrase "to do that" is a
discriminator that says to search among references in the function
namespace. And you could make the case that 'want' is a simple higher
order function operating on either of these, independent of their notation.
(I'm not sure I'd believe that, but this example would push that direction.)

There's also a whole big question of what is a 'reference' (syntactic
entity) and what's a 'value' (domain entity) in English since the two are
blurred; a pronoun can be used to disambiguate using information that will
not be known until domain resolution time (runtime). But it doesn't matter
to us, because those are not the parts of natural language we have borrowed.

What we have observed and borrowed is not the blurring of compile time and
runtime information, but the fact that the person writing the text is well
capable of managing context. There can really be no doubt of this fact.

In T (Yale Scheme), we discussed my idea that the whole language should have
been designed as special forms, but I don't recall if we implemented it.
That is, the idea that (f x) should really be shorthand for
(CALL (VAR f) (VAR x))
and that _all_ forms should be compound forms whose car told you how to do
the internal language dispatch. Understood in this way, (f x) is just a
shorthand that acknowledges a primitive kind of function calling and provides
a data flow path in which all arguments are evaluated equivalently. If
both Scheme and CL had a CALL special form, they could still have multiple
namespaces while having a compatible CALL form. It would simply be the
case that (f x) in Scheme meant (CALL (VAR f) ...) and in CL it meant
(CALL (FN f) ...). You would see then that FUNCALL's job is not what you
think it is. FUNCALL is just a primitive operation that takes variable
'f' instead of constant 'f'. That is, (FUNCALL f x) would mean
(CALL (VAR f) ...) instead of (FUNCALL (FUNCTION f) ...). This is not
unlike the way Java does x.m(a) to mean use method "m" from x's method set
on object a, while it uses x.(m)(a) to mean use method
[value of m] from x's method set
on object a. The less common operation (computing a method name) is given
a special notation (extra parens), the analog of FUNCALL, to remind the
reader that something funny is going on. In the MOO programming language,
this happens as well. In MOO, f.x means "get slot x of the value of f" but
f.(x) means "get from the value of f the slot whose name is the value of x".
The extra parens tell you something funny is going on. One might argue that
Java should have done f."x"(x) for the normal case and then f.x(x) for the
variable case, and that it would be more "regular". One might even argue that
Java should have made methods be objects, and make "open" be a method name
that evaluates to an open message so that window.open(x) really meant
window.#<openmsg>(...) and then window.x(x) would not require a special
syntax. But they didn't. And, incidentally, Steele [who designed Scheme]
was right in there on the design and could have said something.

FUNCALL is not treating the value differently. The real case is that the
Lisp notation provides only one way to call a function: to have it live
in the function namespace. Not because a value type is treated differently,
but because the syntax type (f x) in CL always means (CALL (FUNCTION f) x)
and (FUNCTION x) quotes the name X. There is no FUNCTION* operation that
takes X unquoted, other than SYMBOL-FUNCTION [which only accesses the global
environment] nor is there access to the primitive operation 'CALL'. Instead,
we have APPLY and FUNCALL [which we couldn't implement from user code because
we don't have those operations].
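
A rough sketch only -- necessarily bootstrapped from FUNCALL, since the
primitive CALL is not exposed -- of what such imaginary operators could
look like as user macros:

(defmacro call (function-form &rest arg-forms)
  `(funcall ,function-form ,@arg-forms))
(defmacro var (name) name)              ; reference in the variable namespace
(defmacro fn (name) `(function ,name))  ; reference in the function namespace

;; (f x) in CL would then abbreviate (call (fn f) (var x)),
;; and (funcall f x) would abbreviate (call (var f) (var x)).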

I'm sorry this is somewhat jumbled but I don't have time to do better today.
Hopefully there's something in here that will help you get the issues in
a new light.

> I suppose it's like Elisp, where you pass the
> symbol name of the function as a value and then funcall it.

No, this is different. That's a case that is really about values.
FUNCALL is not doing coercion. You could just as well do
(funcall #'f x)
as
(funcall 'f x)
in elisp. Funcall supplies uniform treatment of any representation of a
function and calls that function.

The 'discriminatory behavior' (pardon the pun) you're complaining about is
not in the place you think it is.

If you really want to get nitpicky, Scheme, IMO, is just as ill-founded as
CL. It has only one primitive way to call a function and that's to put it
at the start of a list. There is no "operator" for calling a function, and
that makes it hard to have this conversation. CL commits the same sin.
Likewise, there is no operator for accessing the value of variables in either
langauge--only the shorthand. As such, people get caught up discussing
favorite shorthands for underlying things that have no manifest notation,
and I think this makes the argument remarkably more difficult than it should
be because each person makes up their own fabric of internals to substitute
for these missing terms. If there's one thing we've learned about symbolic
computing, it's that it helps to have names for things. And "the place after
the open paren" is a poor name. Likewise, the term "a variable position" is
a poor name because each language defines this position arbitrarily, and
each language's users forget that things which are contingent on variable
position are therefore also arbitrary as a dependent consequence. That's why
I like to have this discussion in terms of imaginary operators CALL and VAR,
and with the (f x) understood to just be macro syntax that goes away in the
underlying semantics.

Adrian Kubala

May 26, 2003, 11:33:14 PM
On 26 May 2003, Kent M Pitman wrote:
> I think this is a point of view thing. I think your real problem is not
> the separate namespace, which in your own terms is not used differently
> at all, but the fact that function calling is not directly accessible on
> a value using only primitive syntax.

Actually, after more thought I realized that I too was confused about why
I was confused, so let me try again. The confusing thing about lisp2 is
when you have to decide which namespace you mean to use. Java doesn't have
this problem because you don't have a choice which to use. Scheme, because
it's just one namespace. Whereas in CL, you can have the function value in
the function namespace or the variable namespace, so you have to know how
to use it accordingly.
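
A minimal sketch of both placements at once:

(flet ((f () :function-namespace))            ; F in the function namespace
  (let ((f (lambda () :variable-namespace)))  ; F in the variable namespace
    (values (f) (funcall f))))
;; => :FUNCTION-NAMESPACE, :VARIABLE-NAMESPACE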

In that light, I can see that if I used CL regularly, I would get used to
it pretty quickly, so now my only argument against lisp2 is that I'm not
sure I agree with the arguments /for/ it.

> It would simply be the case that (f x) in Scheme meant (CALL (VAR f)
> ...) and in CL it meant (CALL (FN f) ...).

And of course CL then has to have "FN" and "VAR" versions of lots of forms
where Scheme only has "VAR" ones -- which is the thing I find confusing
and distasteful. If I understand correctly what you mean by "VAR".

> As such, people get caught up discussing favorite shorthands for
> underlying things that have no manifest notation, and I think this makes
> the argument remarkably more difficult than it should be because each
> person makes up their own fabric of internals to substitute for these
> missing terms.

Amen to that.

Adrian Kubala

May 27, 2003, 12:03:15 AM
On 26 May 2003, Harald Hanche-Olsen wrote:
> | Flying is fun. I wish I could do *it*.
> | as opposed to
> | That camera is nice. I wish I had *it*.
>
> I think a better analogy is this:
> Time flies like an arrow.
> Fruit flies like a banana.
>
> In the first sentence "flies" is a verb. In the second, it is a
> noun.

I was trying to say that this "better analogy" is not. You see the
differences between Lisp1 and Lisp2 when writing a higher order function,
and I believe that "pronouns" correspond better to the parameters in
higher order functions than do "words".

So it's:
; Flying -- do it.
(let ((it #'fly))
  (funcall it))
; The fly -- swat it.
(let ((it fly))
  (swat it))

So yes, English is more like lisp2 in this small respect -- though I'm not
convinced this makes it somehow more intuitive.

Ray Blaak

May 27, 2003, 2:11:00 AM
Kent M Pitman <pit...@world.std.com> writes:
> This is not unlike the way Java does x.m(a) to mean use method "m" from x's
> method set on object a, while it uses x.(m)(a) to mean use method [value of
> m] from x's method set on object a.

Are you sure? Is this new?

In JDK 1.4 this is a syntax error.

Java has no method values, unless this is something new in 1.5, but I know of
only generics, enums, static imports, and a few other bits as new in 1.5.

Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
rAYb...@STRIPCAPStelus.net The Rhythm has my soul.

pentaside asleep

May 27, 2003, 3:28:19 AM
Paolo Amoroso <amo...@mclink.it> wrote in message news:<6jnSPj2uvm56ND...@4ax.com>...

> Everybody who doesn't like the outcome of this standardization process, or
> who doesn't accept any kind of process for deciding which features should
> go in a language, is welcome not to use ANSI Common Lisp, and maybe
> design/use his favorite Lisp-1 dialect.

To an outside observer, it appears that if flamewars concerning how
Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
named comp.lang.lisp. No doubt this has come up before, but a quick
search turns up little.

This does not seem to be a matter of lisp-1 supporters staying on
c.l.scheme, since as I understand, Scheme was one of the first lisps
with lexical scoping, influencing CL. Looks like there is a group of
people who are interested in discussing a lisp that builds on CL's
power, but with a modified syntax.

Pascal Costanza

May 27, 2003, 6:10:56 AM
Matthew Danish wrote:

> In Scheme, when a list is evaluated, every element of the list is first
> evaluated and then the value of the first element is applied with the
> values of the rest of the elements.

BTW, how does Scheme handle macros? Are they an exception to this rule,
or do they somehow fit in this model?

Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Jens Axel Søgaard

May 27, 2003, 7:47:03 AM
Pascal Costanza wrote:
> Matthew Danish wrote:
>
>> In Scheme, when a list is evaluated, every element of the list is first
>> evaluated and then the value of the first element is applied with the
>> values of the rest of the elements.
>
>
> BTW, how does Scheme handle macros? Are they an exception to this rule,
> or do they somehow fit in this model?

I have this simple mental model of the evaluation of (f e1 e2)

If f is bound to a macro transformer named transform-f:

(eval (f e1 e2)) = (eval (transform-f e1 e2))

If f is bound to a function:

(eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))

In other words, to evaluate a form (f e1 e2), rewrite
it using macro transformers until the first element is
bound to a function, then apply that function to the
evaluated arguments.

It isn't entirely correct, but it works for me.
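
A minimal sketch of the model in action (MY-IF is a made-up macro):

(define-syntax my-if
  (syntax-rules ()
    ((my-if c t e) (cond (c t) (else e)))))

(my-if #t 1 2)
; rewrites to (cond (#t 1) (else 2)); COND's head is core syntax,
; so rewriting stops and evaluation yields 1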


For the more rigorous, I have found the relevant pieces
of the R5RS:

The macro definition facility consists of two parts:

* A set of expressions used to establish that certain identifiers
are macro keywords, associate them with macro transformers, and
control the scope within which a macro is defined, and

* a pattern language for specifying macro transformers.

The syntactic keyword of a macro may shadow variable bindings, and
local variable bindings may shadow keyword bindings.

And here is the description for one of the binding forms of

let-syntax <bindings> <body> syntax
Syntax: <Bindings> should have the form

((<keyword> <transformer spec>) ...)

Each <keyword> is an identifier, each <transformer spec> is an
instance of syntax-rules, ...

Semantics: The <body> is expanded in the syntactic environment
obtained by extending the syntactic environment of the let-syntax
expression with macros whose keywords are the <keyword>s, bound to the
specified transformers. Each binding of a <keyword> has <body> as its
region.

One way to make a <transformer spec> is to use syntax-rules.
You can specify the rewriting as a pair of a pattern and
a template.

I'll cut straight to the semantics:

Semantics: An instance of syntax-rules produces a new macro
transformer by specifying a sequence of hygienic rewrite rules. A use
of a macro whose keyword is associated with a transformer specified by
syntax-rules is matched against the patterns contained in the <syntax
rule>s, beginning with the leftmost <syntax rule>. When a match is
found, the macro use is transcribed hygienically according to the
template.
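
A tiny example of let-syntax with syntax-rules (UNLESS* is a made-up
keyword):

(let-syntax ((unless* (syntax-rules ()
                        ((unless* test expr) (if test #f expr)))))
  (unless* #f 'ran))
; => ran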

Note that the de facto standard syntax-case system, as opposed to the R5RS
syntax-rules, is more flexible (and allows non-hygienic rules too :-) ).

An excellent introduction is:

* Dybvig. "Writing Hygenic Macros in Scheme with Syntax-Case".
<http://www.cs.utah.edu/plt/publications/macromod.pdf>

Another interesting read is:

* Matthew Flatt.
"Composable and Compilable Macros: You Want it When?".
International Conference on Functional Programming (ICFP'2002).
2002.
<http://www.cs.utah.edu/plt/publications/macromod.pdf>

A comprehensive bibliography on the research of macros is found at:

<http://library.readscheme.org/page3.html>

--
Jens Axel Søgaard


Kent M Pitman

May 27, 2003, 8:27:25 AM
pentasid...@yahoo.com (pentaside asleep) writes:

The world moves ahead only by agreeing not to make holy wars out of the
arbitrary.

Were it the case that this was the only minorly controversial area in CL,
it might be worth discussing. However, it's not. There are a truckload
of stupid little issues that come up from time to time that were decided
in an arbitrary way that suit some people and not others.

We could spend time talking about why strings are not null-terminated;
about why the language syntax is case-translating or about why the
internal canonical case for symbols and pathnames is uppercase; about
why there is WHEN and UNLESS instead of just IF and COND or about why
there is WHEN and UNLESS and COND instead of just IF or about why
there is WHEN and UNLESS and IF instead of just COND; about why we
have GO or why we don't have CALL/CC; about why missing multiple values
default or about why additional multiple values are ignored; about why
we allow left to right order of evaluation instead of leaving it
unspecified; about why CLOS subclassing produces a total order instead
of merely a partial order in underspecified situations; about why there
are hash tables for EQUALP but not for STRING-EQUAL nor =; about why there
are cartesian complexes but not polars; about why (COMPLEXP real) returns
true instead of false; about why we don't require IEEE arithmetic
or tail call elimination; about why SPECIAL variables are managed using
DECLARE instead of their own set of binding forms; and on and on. Not
everything in CL is to everyone's liking.

But all we can really do is tell you why those things were decided, we
cannot as a community go back and revisit each and every one of those
decisions. To revisit only one is silly. A language that was just like
CL but different in exactly one way would needlessly divide the community.
A language that was a lot different would just be another language, and
might appeal to people who liked a different design theory than CL has.

Telling someone to design their own language is NOT trying to put them off.
It is addressing the practical fact that every language has, at its core,
a political process [1] and a set of adherents who like its politics. CL was
designed as an industrial strength language [2] for the purpose of people who
didn't want to see the language diddled to death, and its body of users is
best described (IMO) as people who care more about "having a way to do
things today" than "fighting forever about what the truly right way is"
because they know in their hearts there never is any truly right way, and
that arguing is just a way of putting off an arbitrary decision anyway.
So matters are settled because it's more important to this community to
settle them than to bicker.

I cite in the real world the decisions to use 110V or 220V outlets as an
example of something relatively arbitrary where it was more important in a
given region to "just decide" and then build a lot of devices compatible
with the wall sockets than to have the "best" value and to hold off the
deployment of electricity everywhere until there was worldwide agreement.
I cite again the decision to go with VHS for video cassettes over the
(claimed to be more technically superior) BetaMax format. What video store
owners REALLY wanted was to have a single inventory that would satisfy all
customers, not to have several unrelated stores for differing communities.

This is NOT to say that CL is the best language that could result from such
a process, but it is "good enough" and has done well. The utility of having
the whole language turned upside down to satisfy some selfish (and that's
what it is) need of some person who wants to do one or two things differently
but otherwise likes the language is just not sufficient to justify doing it.

ON THE OTHER HAND, if that same person wants to go raise their own banner
and do their own work to make a language to whatever specs they like, I've
got no problem because then they are paying their dues by understanding and
taking on the true cost to the community of the major upheaval that results
from divergence. Starting a new language is not easy, nor should it be.
Languages bring people together. You can't have each person have a custom
language without either a common core and a system for organizing extensions
[in which case it's not a different language] or else without losing the
advantages of having brought people together.

Whether or not such discussion is welcome on comp.lang.lisp is another
matter. As I understand Usenet guidelines, ARC's a new language that should
use this list until it gets enough traffic to need to diverge. That is,
we shouldn't grow new newsgroups in anticipation of traffic that is not there.

But the spin on the message I see above is that we are being elitist in not
wanting to reconsider old decisions. We are not. We are simply (re)applying
the design guidelines that brought us the language at all.

CL resulted historically from people's (and DARPA's) lack of desire to see
myriad little almost-the-same dialects changing all the time because it kept
large-sized projects from getting built.

Incidentally, I want to add my feeling that this whole conversational
problem is a result of the natural but peculiar consequence of the
particular place we are in history. We are only far enough out from
the early design of Lisp (and other things) that we can still keep it
all in our head what would be involved to go back and do things over
again, and so it "feels possible". And we are confronted with people
who feel "left out" by not having gotten to do many of the things they
realistically feel they could have done. People often feel similarly
when confronted with the history of other professions. I mean, how cool
would it have been to get to be Newton (at least, the grade school
version of him that you learn before you realize he had to know some
math to do what he did) and to sit there and see an apple fall and say
"Wow. There must be gravity." We feel like any of us could have done
the same, and we're annoyed that this person or that person got to do
it. Once history has advanced far enough that there are hundreds of
years of computer science, especially given its rate of growth, I
suspect that the desire to start over will feel more like the choice
of an artist or a luddite than the choice of a serious engineer. I
don't know if that's good or bad. But I do know that we won't _get_
300 years out if we restart every decade because someone feels left
out and wants to tweak some matter of mere syntax.

Geez, I'd change almost everything about the Java language if I were
allowed to, and yet I think it's solid enough to be quite a stable
base for developing long-lived programs. I see it as an assembly
language, not as a high-level language, but that's a detail that
doesn't affect its design, just how I would personally use it. And
while I'm quite sure I could make some good suggestions about how to
change Java for the better, the true value of Java is not unlike the
value of CL -- it's stability. It would do the community no favors to
see my suggestions implemented at the expense of community stability.
If someone wanted to make something that was not Java but was like it,
I'd have fun helping, because that was a different language and an
appropriate place to play, absent the negative effects on a user base.
I'm suggesting no less to others in the CL community who want a place
to play: Either learn how to use CL's extensive facilities for
modifying your programming environment to your liking, or make your
own language... But don't ask everyone else to take on enormous
instability and risk just to see your own petty pet peeve soothed.

Lisp1/Lisp2 is not just an issue the CL community did not deal with.
We spawned a committee specifically to review the matter and we as a
community voted we wanted to keep it as a Lisp2. [3]

Moreover, the ISO working group originally set out to change things in Lisp
it didn't like, and the Lisp1/Lisp2 issue again came up. Once again the
issue resulted in Lisp2 semantics.

Scheme didn't start out by trying to change Lisp. It made its own
community and attracted its own users. But it didn't attract
everyone. By culling the Lisp1 advocates, I think it's plain that
Scheme left a higher density of Lisp2 advocates in the CL camp.
You're welcome to make another dialect that does the same, and to hope
for a similar shift of user base. But if you succeed wildly, don't be
surprised if a side-effect is that the CL camp will become even more
rigidly Lisp2. You're welcome to change your own users, but you shouldn't
expect to change someone else's users.

And sometimes, you should notice that the language you're using isn't the
one you want to be using.

References:

[1] "More Than Just Words / Lambda, the Ultimate Political Party" (1994)
http://www.nhplace.com/kent/PS/Lambda.html

[2] X3J13 Charter
http://www.nhplace.com/kent/CL/x3j13-86-020.html

[3] Technical Issues of Separation in Function Cells and Value Cells
http://www.nhplace.com/kent/Papers/Technical-Issues.html

Pascal Costanza

May 27, 2003, 8:32:06 AM
Jens Axel Søgaard wrote:
> Pascal Costanza wrote:
>
>> Matthew Danish wrote:
>>
>>> In Scheme, when a list is evaluated, every element of the list is first
>>> evaluated and then the value of the first element is applied with the
>>> values of the rest of the elements.
>>
>> BTW, how does Scheme handle macros? Are they an exception to this
>> rule, or do they somehow fit in this model?
>
> I have this simple mental model of the evaluation of (f e1 e2)
>
> If f is bound to a macro transformer named transform-f:
>
> (eval (f e1 e2)) = (eval (transform-f e1 e2))
>
> If f is bound to a function:
>
> (eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))
>
> In other words, to evaluate a form (f e1 e2) rewrite
> it using macrotransformers until the first element is
> bound to a function, then apply that function to the
> evaluated arguments.
>
> It isn't entirely correct but it's works for me.

So this means that the first position of an s-expression is also treated
specially in Scheme, before any evaluation of subexpressions takes place,
right? The model that all subexpressions are treated the same is only an
approximation of what really goes on in Scheme. Or am I missing
something here?

Kent M Pitman

May 27, 2003, 8:39:01 AM
Adrian Kubala <adr...@sixfingeredman.net> writes:

> Actually, after more thought I realized that I too was confused about why
> I was confused, so let me try again.

I appreciate your going through the exercise. I think you're not the only
one who has confronted this, and I think people often find it politically
expedient to push their own agenda into these gaps in understanding, rather
than to fairly assess the underlying issues as it seems like you're trying
to do.

> The confusing thing about lisp2 is
> when you have to decide which namespace you mean to use. Java doesn't have
> this problem because you don't have a choice which to use. Scheme, because
> it's just one namespace. Whereas in CL, you can have the function value in
> the function namespace or the variable namespace, so you have to know how
> to use it accordingly.

Yes, I agree that this takes a little getting used to. But I think it will
eventually feel quite natural once you see what's really going on. The trick
is not to start down the blind alley in the first place, because the
consequent misunderstandings that result from having mischaracterized the
difference are the real problem; and those misunderstandings are not so much
a property of the language, but more a property of people rushing too fast to
"fix" the problem in their mind without asking someone who knows what's going
on.

> In that light, I can see that if I used CL regularly, I would get used to
> it pretty quickly, so now my only argument against lisp2 is that I'm not
> sure I agree with the arguments /for/ it.

That's a fair assessment. The issues are detailed, btw, in
http://www.nhplace.com/kent/Papers/Technical-Issues.html
in case you haven't read them. This document is the essential
technical content of a slightly longer document that Dick Gabriel and
I wrote to X3J13 discussing the matter. The document has a bit of MPD
(multiple personality disorder) to it because Gabriel and I had
opposite opinions on the matter. If you read it carefully, you'll see it's
more like a dialog between people of opposing points of view and not just
an explanation of a single point of view. Like any debate, people often
come away thinking it was a paper supporting their own point of view because
they read only the parts they like and they ignore the "other half". :)

> > It would simply be the case that (f x) in Scheme meant (CALL (VAR f)
> > ...) and in CL it meant (CALL (FN f) ...).
>
> And of course CL then has to have "FN" and "VAR" versions of lots of forms
> where Scheme only has "VAR" ones -- which is the thing I find confusing
> and distasteful. If I understand correctly what you mean by "VAR".

Yes. I've worked this out in a lot more detail than I showed here but
never completed it. I had an idea as a result of this discussion for how
to share the partial work I've done. Ping me in a few weeks if you've not
seen followup on this tiny subthread and still care for elaboration.



> > As such, people get caught up discussing favorite shorthands for
> > underlying things that have no manifest notation, and I think this makes
> > the argument remarkably more difficult than it should be because each
> > person makes up their own fabric of internals to substitute for these
> > missing terms.
>
> Amen to that.

Hopefully, if I published at least the notations and partial attempts I've
used, it would help to address this a little. Maybe the communities could
evolve, over time, a common interchange notation...

Kent M Pitman

May 27, 2003, 8:41:10 AM
Ray Blaak <rAYb...@STRIPCAPStelus.net> writes:

> Kent M Pitman <pit...@world.std.com> writes:
> > This is not unlike the way Java does x.m(a) to mean use method "m" from x's
> > method set on object a, while it uses x.(m)(a) to mean use method [value of
> > m] from x's method set on object a.
>
> Are you sure? Is this new?
>
> In JDK 1.4 this is a syntax error.
>
> Java has no method values, unless this is something new in 1.5, but I know of
> only generics, enums, static imports, and a few other bits as new in 1.5.

Sorry. MOO does this and I hadn't used Java's laborious reflective
layer in long enough that my brain helpfully covered over the pain
with the more friendly memory of MOO syntax. [I think Java _could_
do this, of course. But oh well.] Thanks for flagging this so I could
correct myself; sorry for the misinformation.

Jens Axel Søgaard

May 27, 2003, 9:24:32 AM
Pascal Costanza wrote:
> Jens Axel Søgaard wrote:
>> Pascal Costanza wrote:
>>> Matthew Danish wrote:

>>>> In Scheme, when a list is evaluated, every element of the list is first
>>>> evaluated and then the value of the first element is applied with the
>>>> values of the rest of the elements.

>>> BTW, how does Scheme handle macros? Are they an exception to this
>>> rule, or do they somehow fit in this model?
>>
>>
>> I have this simple mental model of the evaluation of (f e1 e2)
>>
>> If f is bound to a macro transformer named transform-f:
>>
>> (eval (f e1 e2)) = (eval (transform-f e1 e2))
>>
>> If f is bound to a function:
>>
>> (eval (f e1 e2)) = (apply (eval f) (eval e1) (eval e2))

Oops [ears becoming red]. I have a bug.

(eval (f e1 e2)) = (apply (eval f)
                          (list (eval e1) (eval e2)))

> So this means that the first position of an s-expression is also treated
> specially in Scheme, before any evaluation of subexpressions take place,
> right? The model that all subexpressions are treated the same is only an
> approximation of what really goes on in Scheme. Or am I missing
> something here?

I think Matthew Danish is thinking about function application,
which was kind of implicit in that part of the thread.

I read his prose as:

(eval (f e1 e2)) = (apply (eval f) (list (eval e1) (eval e2)))

It is clear that this rule alone is not enough to take care of,
for example, if-expressions like

(if #t 42 (/ 1 0)).


Let me try to state the first rule slightly differently:

(eval (syntax (f e1 e2)))
  = (if (name-bound-to-transformer? (syntax f))
        (eval (apply (eval f) (list (syntax e1) (syntax e2))))
        (apply (eval f) (list (eval e1) (eval e2))))

where (syntax e) is a value representing the syntax of
the expression e.

Again, this is just how I think, which means it's
not necessarily correct. William Clinger is the person
to ask, if you want a definite answer.


--
Jens Axel Søgaard

Pascal Costanza

May 27, 2003, 10:48:33 AM
Kent M Pitman wrote:

> The issues are detailed, btw, in
> http://www.nhplace.com/kent/Papers/Technical-Issues.html
> in case you haven't read them. This document is the essential
> technical content of a slightly longer document that Dick Gabriel and
> I wrote to X3J13 discussing the matter. The document has a bit of MPD
> (multiple personality disorder) to it because Gabriel and I had
> opposite opinions on the matter. If you read it carefully, you'll see it's
> more like a dialog between people of opposing points of view and not just
> an explanation of a single point of view.

So which of you was in favor of Lisp-1 and which of Lisp-2? ;)

Kent M Pitman

May 27, 2003, 11:34:36 AM
Pascal Costanza <cost...@web.de> writes:

> Kent M Pitman wrote:
>
> > The issues are detailed, btw, in
> > http://www.nhplace.com/kent/Papers/Technical-Issues.html in case
> > you haven't read them. This document is the essential
> > technical content of a slightly longer document that Dick Gabriel and
> > I wrote to X3J13 discussing the matter. The document has a bit of MPD
> > (multiple personality disorder) to it because Gabriel and I had
> > opposite opinions on the matter. If you read it carefully, you'll see it's
> > more like a dialog between people of opposing points of view and not just
> > an explanation of a single point of view.
>
> So who of you was in favor of Lisp-1 and who was in favor of Lisp-2? ;)

RPG wrote the Lisp1 side. Was that not clear? :)

Michael Sullivan

May 27, 2003, 11:38:41 AM
Raffael Cavallaro <raf...@mediaone.net> wrote:

> What we need are general purpose computer languages that are also user
> friendly. When it comes to languages, "user friendly" means
> natural-language-like.

I agree with the first sentence. I'm not sure I agree with the second.
In the very, very long term, it's probably true. But to make a
natural-language-like general purpose programming language that isn't
just dumbed down for non-programmers, but actually helps good
programmers more than it hurts them, is probably a strong-AI problem. I
don't think this is a really bad thing to work towards, but I think you
need to be careful of putting the cart before the horse. You need a
really strong base of natural language processing before something
that's a real improvement over existing computer languages can be
possible.

COBOL is what you get when you try to do this with *zero* natural language
expertise and a willingness to dumb things down to the nth degree.

Applescript would be the latest attempt, and it would actually be a
decent general purpose language if it weren't for some artificial
implementation limitations. In some respects, we can consider
applescript to be remarkably successful. Lots of people use it who
don't think of themselves as programmers. But it's still a reasonably
good language for programmers (as long as they aren't running into the
stupid implementation restrictions -- I'm talking solely about the
language design here). No, it doesn't compare to lisp, but, with a
decent standard library, a few key non-architectural improvements (like
native hashing, stack space limited only by available RAM, etc.) it
would be much more pleasurable to work in than many far more popular
languages.

But does this mean that AS represents a win for your philosophy? I
don't think so. AS popularity among "non-programmers" is attributable
primarily (IMO) to the fact that it *is* the scripting language of the
mac, and to the support built into popular applications on that
platform. If QuarkXpress and a few other programs had not come out
early with very good apple event scripting support, then applescript
would have gone nowhere fast, and probably no one at all would be using
it today.

If anything, applescript's *syntax* gets a lot of complaints from people
new to using it, even if they have never used any "real" programming
languages. It tempts you into believing that it is english, when it is
really its own bizarre form of legalese. Things that you would expect
to mean one thing, often mean something completely different.

Once you get over that hump, and begin to understand it as a computer
language, it's really not that much different from other computer
languages, but it *is* more verbose.

I contend that if apple had chosen scheme or caml, or csh, or pretty
much any other language as its extension language, and the same support
had been built, and the same runtime characteristics (or better) were
exhibited, that whatever this language was would have had the same kind
of adoption that applescript has now. Look at Visual Basic. It's
atrocious. Applescript makes it look like old fortran. It's the kind
of language that inspired Dijkstra's famous comment about brain damage.
But it's arguably the most used computer language in the world, and by
far the most used by people who would not identify as programmers. Why?
Because it's the standard scripting interface to the monopoly OS.

Power users want to automate, and they will wrap their heads around
anything which will let them do this with a minimum of architecture
building. They will not learn lisp or C++ in order to write a whole
structure, when they can plug away at existing applications and get
60-70% of what they need. Give them a decent scripting environment to
go with those applications and suddenly they can get 95%+ of what they
need. Why reinvent all those wheels for that last 5%?

So power users will always gravitate toward whatever the scripting
language is of the applications they use? Give them lisp as that
scripting language, and they will learn lisp. Give them TECO, and they
will learn TECO, give them unlambda, and well... Okay, maybe not.

But the point is -- power users get into programming because of good
libraries, and hooks into existing software. They do *NOT* need an
easier language. They are already using languages that are *far less
easy* than most of the lisp and ML family. The fact that they are also
less powerful is just another sad truth about the computing landscape.

I'll state here because it should affect your reading of this -- *I* am
among your target audience. Maybe not the exemplar because I love math
and CS, and studied those in college and still keep up to some extent.
But as a professional -- I am primarily a domain expert who scripts and
programs to save me and my company time. I don't have the time or
money to build large applications from the ground up.

I am in a large community of similar folks who use applescript every
day. I certainly can't speak for them all, but I speak for a lot of
them when I say that trying to make a computer language be more like a
natural language was one of the *mistakes* of applescript, and we will
be better off without attempting this until we are *much* further along
into natural language processing than we currently are.

Libraries and hooks. Libraries and hooks. Libraries and hooks. That's
what power users need.

> You're thinking in a domain-specific-solution way. This is bound to
> fail, because each new domain will require its own unique, mutually
> incompatible, domain specific language.

No, it requires its own unique domain specific libraries -- and it
requires domain specific software that is relatively standard and has
great scripting hooks. Once you have that, power users will do the
rest.

The problem is that writing the whole software project for a domain is
hard, no matter how good your language is. And it's hard in a software
architecture sense, as well as a domain expertise sense. Most domain
experts don't have time to deal with such a project in its entirety
unless it is their primary function. Economically, that only happens
when they are working for a company whose core product is that software.
And when domain experts take on the project anyway because nothing does
anything close to what they need -- they don't do the software
architecture part as well as a real software expert unless they are
lucky enough to be both. I still believe this would be true even if a
true AI-complete style natural-programming language existed.

The real problem today is that software producers won't generally
assign domain experts who are not also software experts to do the
fundamental design, and much fundamental design remains about
constricting the capabilities of users rather than extending them.
Existing technology is plenty powerful enough to empower
scripting-friendly users. If only it were deployed in powerful,
standard, and well-documented ways in every domain that has standard
software, there would be no domain-expertise crisis in the software
industry.

> Unless your needs fall
> precisely into that particular realm, and do not extend beyond it in
> any way, you lose. Better to craft a general purpose,
> natural-language-like computer language, that all the specific domains
> can use. As new application domains arise, a general purpose language
> can be turned to those tasks, but domain specific solutions are
> unlikely to be flexible enough to be useful.

The problem is that any true general purpose language without a lot of
domain specific technology is insufficient to get people who aren't
software architects building huge applications akin to CAD programs or
general ledger accounting systems. At least not until your compiler
becomes an AI software engineer.

OTOH, down here in the real world of 2003, if your domain specific
solutions are as extensible as possible, then it will often be possible
to adapt solutions appropriate for one domain to related domains, saving
much work. Witness all the ways that power users use spreadsheets to do
things that have little or nothing to do with accountancy.

The key here is -- when building domain specific solutions, don't ever
assume that they will never be used for anything outside the domain.
When you can provide extensibility at very little cost, *DO IT*.

Michael

Wade Humeniuk

unread,
May 27, 2003, 11:52:38 AM5/27/03
to

"Michael Park" <dont_...@whoever.com> wrote in message
news:ff20888b.03052...@posting.google.com...

As Kent has said, the CL committee did not make the decision on purely
technical grounds. The important point is that by choosing a Lisp-N,
this allows users of CL to use CL as a Lisp-1, IF THEY WANT TO. All
they have to do is apply a little self discipline and a few macros, and
you have a Lisp-1. If they had chosen a Lisp-1, then those people who
wanted to do Lisp-N (I am among them) would have great difficulty
in easily changing the language from Lisp-1 to Lisp-N. The
choice of Lisp-N increased the freedom of expression of the
programmer.
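
To make that concrete, here is a minimal sketch of the "little self
discipline plus a few macros" route. The DEFINE macro below is
hypothetical, not part of ANSI CL; it binds the value cell and, when
the value is a function, the function cell too, so the same name works
in both operator and argument position:

;; Hypothetical sketch -- DEFINE is not part of ANSI CL.
(defmacro define (name value)
  "Lisp-1-style definition: set NAME's value cell and, if VALUE
is a function, its function cell as well."
  `(progn
     (defparameter ,name ,value)
     (when (functionp ,name)
       (setf (symbol-function ',name) ,name))
     ',name))

(define double (lambda (x) (* x 2)))

(double 3)                ; => 6, through the function cell
(mapcar double '(1 2 3))  ; => (2 4 6), through the value cell

With that convention you rarely need #' or FUNCALL, which is most of
the Lisp-1 feel.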

Wade

Just the other day I coded something full of Lisp-2ishness -- the same
symbols name grammar rules in operator position and double as plain
data (plist keys) elsewhere, with no collision.

Just a snippet:

(defpseudo uri-reference
  (:documentation "URI-reference = [ absoluteURI | relativeURI ] [ \"#\" fragment ]")
  (and (~optional~ (or (absoluteuri) (relativeuri)))
       (~optional~ (and (~match~ #\#) (fragment)))))

(defun parse-uri (uri-string)
  (let (uri-properties)
    (pseudo-parse ((copy-seq uri-string)
                   (uri-reference scheme authority net-path abs-path
                    opaque-part path-segments rel-path query fragment
                    host port absoluteuri relativeuri)
                   :collector (lambda (pseudo value)
                                (setf (getf uri-properties pseudo) value))
                   :return
                   (if (equal (getf uri-properties 'uri-reference) uri-string)
                       (apply #'make-instance 'uri
                              'string uri-string uri-properties)
                       (error 'uri-parsing-error
                              :uri-string uri-string
                              :parser-info uri-properties)))
      (uri-reference))))
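
For flavor, usage looks something like this (the example URI is made
up, and the machinery behind defpseudo/pseudo-parse is not shown):

(parse-uri "http://example.com/over/there#nose")
;; => a URI instance built from the collected plist, or a
;;    URI-PARSING-ERROR condition if the string doesn't fully match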

Michael Park

unread,
May 27, 2003, 12:52:54 PM5/27/03
to
I'm sorry, but I think your logic is faulty. This is like saying "We
voted for Bush, now you can not criticize him or his party. We need to
stay united. You are welcome to create your own country, however". As
a matter of fact, it's worse. Since Lisp <> ANSI CL, what you wrote is
logically akin to demanding that GOP should not be criticized on the
whole American continent.

P.S. Who voted for the guys who voted for Lisp2?

Kent M Pitman <pit...@world.std.com> wrote in message news:<sfwbrxo...@shell01.TheWorld.com>...

Daniel Barlow

unread,
May 27, 2003, 2:23:26 PM5/27/03
to

[ comp.lang.scheme removed from newsgroups line, because I don't read
that group so this message is unlikely to be relevant to it ]

dont_...@whoever.com (Michael Park) writes:

> I'm sorry, but I think your logic is faulty. This is like saying "We
> voted for Bush, now you can not criticize him or his party. We need to
> stay united. You are welcome to create your own country, however". As
> a matter of fact, it's worse. Since Lisp <> ANSI CL, what you wrote is
> logically akin to demanding that GOP should not be criticized on the
> whole American continent.

Nonsense. You are free to criticise who and what you want to. Kent
is simply explaining why the comp.lang.lisp community is on the whole
not interested in your criticism.

It's generally considered rude on Usenet to post questions to a
newsgroup that have been repeatedly answered in the past. Once upon a
time, many newsgroups had FAQ lists so that people could see what had
gone before. These days that's less often the case than it used to
be, but on the other hand we have Google so that you can see _exactly_
what's gone before. And Lisp-1 vs Lisp-2 is definitely one of those
things.

Perhaps if you had _constructive_ criticism you'd receive a less
hostile reaction. What outcome would you like to see happen, and what
do you intend to offer to facilitate it?


-dan

--

http://www.cliki.net/ - Link farm for free CL-on-Unix resources

Paolo Amoroso

unread,
May 27, 2003, 2:27:44 PM5/27/03
to
On 27 May 2003 00:28:19 -0700, pentasid...@yahoo.com (pentaside
asleep) wrote:

> Paolo Amoroso <amo...@mclink.it> wrote in message news:<6jnSPj2uvm56ND...@4ax.com>...
> > Everybody who doesn't like the outcome of this standardization process, or
> > who doesn't accept any kind of process for deciding which features should
> > go in a language, is welcome not to use ANSI Common Lisp, and maybe
> > design/use his favorite Lisp-1 dialect.
>
> To an outside observer, it appears that if flamewars concerning how
> Lisp "should be" are unwanted, perhaps this newsgroup shouldn't be
> named comp.lang.lisp. No doubt this has come up before, but a quick
> search turns up little.

[...]


> with lexical scoping, influencing CL. Looks like there is a group of
> people who are interested in discussing a lisp that builds on CL's
> power, but with a modified syntax.

I wouldn't personally mind if that group of people discussed a new Lisp-1
dialect in comp.lang.lisp--and I didn't imply otherwise in my article.

But that group of people shouldn't be surprised by the fact that a number
of users have settled, for whatever reason, for a Lisp-2 dialect such as
ANSI Common Lisp, and are productively using that language to solve actual
problems. It's just a different set of goals, tradeoffs, priorities,
and--why not?--preferences.


Paolo
--
Paolo Amoroso <amo...@mclink.it>

Erann Gat

unread,
May 27, 2003, 2:16:20 PM5/27/03
to

It's actually even worse than that. Kent opposes even people who "support
the administration" but think that the community needs to pull together
for any sort of collective change. It's more like, "We wrote the
Constitution [the ANSI standard], and now you cannot pass any more laws.
If you think something needs to be done (like, say, repair a bridge)
you're welcome to go do it yourself, but don't bother us with it."

E.

In article <ff20888b.03052...@posting.google.com>,

Kent M Pitman

unread,
May 27, 2003, 3:18:04 PM5/27/03
to
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

dont_...@whoever.com (Michael Park) writes:

> I'm sorry, but I think your logic is faulty.

If your message and its selection of recipients did not look so much
like flamebait, I might have responded in more detail. But as it is,
I think I'll pass.

This is especially egregious because the subject line of this thread
is so bogus to start with. The "ANN" label appears to give it a
special "look here for an important announcement" quality. Abusing
that to escalate a private spat to an unprepared community that has
not been following this discussion seems quite inappropriate.

> P.S. Who voted for the guys who voted for Lisp2?

The ANSI voting processes are open to anyone in the world to read
about, and membership is in fact open to anyone in the world. (The
only thing non-US people cannot vote on are matters of US voting
position for the US International Representative.) This is all no
doubt adequately documented at www.ansi.org or www.itic.org. It was
possible for you to have participated, but I guess you "didnt_bother".
ANSI members are normally companies, but (as long as not employed by
other companies already represented) individuals can participate and
several did.

It was possible for you to have responded to the relatively well-advertised
public review. In addition to the normal ANSI notification lists, notice of
the public review also went to comp.lang.lisp, comp.lang.scheme, comp.ai,
comp.object, comp.lang.clos, and comp.std.misc:

http://www.google.com/groups?selm=19qij5INN8q4%40early-bird.think.com

Also, after ANSI CL became available, no one was forced to use it.
People do not use it under duress--almost the opposite. The committee
made decisions it thought the community could live with, and so far
the community has basically agreed. When the language came up for
reconfirmation in 1999, it was reconfirmed "as is".

It's not like the Scheme community or any other language community enjoys
any greater degree of consensus. Ultimately, every language makes a bunch
of decisions and then sits back and sees if anyone wants it.

If there's been commercial opposition to CL, you can be quite sure it's
not over anything as silly as the namespace issue.

Kent M Pitman

unread,
May 27, 2003, 3:33:08 PM5/27/03
to
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

g...@jpl.nasa.gov (Erann Gat) writes:

> It's actually even worse than that. Kent opposes even [...]

Erann,

The Scheme community is entitled to its own sensibilities, and as a
rule I try not to engage in public debates there.

Cross-posting selects the union of people who care about two topics,
not the intersection. I won't debate it out of context there.

I would appreciate it if you would confine your (mis)characterizations
of my position to a forum where I have elected to be a public persona.
I'm not going to go there to fix the mess you guys have made. I guess
that means my name will remain undefended. With luck, conscience will
strike and one or both of you will retract your remarks to that venue,
optionally also inviting anyone over there that cares about CL politics to
come here to debate it.

As it is, I think a unified debate will just bring a lot of uninformed
people to the table and cause a lot of people from each community
with no real stake in the other's community to whine uselessly with
little hope of positive outcome.

Kent M Pitman

unread,
May 27, 2003, 3:38:11 PM5/27/03
to
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

g...@jpl.nasa.gov (Erann Gat) writes:

> It's actually even worse than that. Kent opposes even people who "support
> the administration" but think that the community needs to pull together
> for any sort of collective change. It's more like, "We wrote the
> Constitution [the ANSI standard], and now you cannot pass any more laws.
> If you think something needs to be done (like, say, repair a bridge)
> you're welcome to go do it yourself, but don't bother us with it."

I'd prefer to say for myself what I oppose and what I don't.

I do think, though, that it's improper to propose a change in the language
without one or more implementations having implemented it. In general, that
was a standard feature of the X3J13 process. We tried not to make untested
changes. That's what's meant by "codifying existing practice" in the X3J13
Charter. http://www.nhplace.com/kent/CL/x3j13-86-020.html

The idea was that standards are not places for experimentation.
Experimentation can go wrong, and standards are too hard to back out of.
Experiments should be run in implementations. When multiple implementations
have a way of doing something, then there's motivation for a standard so that
everyone doesn't use gratuitously different ways of doing the same thing.
THEN a standard is needed.

And even then, standards don't have to start over from scratch. They can
be layered atop. I have plans for doing precisely that, which I am
actively working on this week ... when not defending my good name.

Sigh.

Erann Gat

unread,
May 27, 2003, 3:40:34 PM5/27/03
to
In article <sfw4r3g...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Cross-posting selects the union of people who care about two topics,
> not the intersection. I won't debate it out of context there.

Sorry, I wasn't paying attention and so didn't notice that I was cross-posting.

> I would appreciate it if you would confine your (mis)characterizations
> of my position to a forum that where I have elected to be a public persona.

I don't believe I was mischaracterizing your position, and if you like I
will go back and look up specific quotes from you to back up my view, but
this is an orthogonal issue to the cross-posting. What would you like me
to do about that at this point?

E.
