Lisp-2 or Lisp-1


Pascal Bourguignon

May 15, 2003, 9:44:32 PM

I was dubious about Lisp-2 at first, but finally I've noticed that in
human languages, there are a lot of instances that show that we're
wired for a Lisp-2 rather than a Lisp-1:

The fly flies. (FLIES FLY)
The flies fly. (FLY FLIES)


;-)

--
__Pascal_Bourguignon__ http://www.informatimago.com/
----------------------------------------------------------------------
Do not adjust your mind, there is a fault in reality.

Franz Kafka

May 15, 2003, 10:10:44 PM

Scheme has a few things that are nicer than Common Lisp:
continuations (useful for implementing language features)
cleaner semantics (easier to write functional code, and
more beautiful too)


Common Lisp has a few things that are nicer than Scheme:
CLOS (built-in object system)
defmacro/defstruct (missing from standard Scheme
for way too long)
More tools for building large systems (more built-in functions)
Two namespaces (no chance a variable name will conflict with
a function name; see the sketch below)
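
For example, a tiny sketch of that last point; in a Lisp-2 the
variable LIST and the function LIST coexist without conflict (the
function name DOUBLED is made up for illustration):

(defun doubled (list)   ; LIST the variable...
  (list list list))     ; ...passed twice to LIST the function

;; (doubled '(1 2)) => ((1 2) (1 2))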

Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
that combines all the benefits of Scheme and Common Lisp.

Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
Unification (from Prolog) + APL (array stuff) + CommonLisp + (support for
introspection/reflection, code-walking)

&& make a kick ass Lisp system.

Cheers for Paul Graham's Arc.


Kent M Pitman

May 15, 2003, 10:49:35 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
>
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + CommonLisp + (support for
> introspection/reflection, code-walking)
>
> && make a kick ass Lisp system.
>
> Cheers for Paul Graham's Arc.

First, the names Lisp1 and Lisp2 are from the paper RPG and I
wrote for X3J13 [1] when considering the namespace issue. The debate was
originally over Scheme-style or CL-style, and I felt I was losing the
debate because Scheme has too much "affection" going for it. I wanted
it to be clear that the only part of Scheme we were talking about was
the namespace part, so I concocted a family of language dialects
called Lisp1 which have a single namespace (and which include Scheme),
and another family that has dual namespaces (and which presumably
included CL). The idea was that people should be able to conceive of
Scheme with 2 namespaces and a CL with 1 namespace, and so by talking
about Lisp1 and Lisp2 rather than Scheme and CL, we were being neutral
as to what other language features the two languages under discussion
had. This brought balance back to the discussion. Anyway, so the
digit counts namespaces, and it was an error in the paper not to call
CL a Lisp4, since there are also tagbody/go and block/return
namespaces. You are, therefore, incorrect in assuming that Lisp3 has
no designation. To the extent that Lisp1 and Lisp2 have any meaning,
Lisp3 means the family of languages with 3 namespaces.
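
A minimal sketch of four such namespaces in CL sharing one name (the
name FLY and the function are made up for illustration):

(defun four-namespaces ()
  (let ((fly 1))                   ; FLY names a variable
    (flet ((fly (n) (* n 10)))     ; FLY names a function
      (block fly                   ; FLY names a block
        (tagbody
         fly                       ; FLY names a go tag
          (return-from fly (fly fly)))))))

;; (four-namespaces) => 10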

But ok, so we know what you meant. Here are my thoughts for what they
are worth:

Everyone wants something different in a Lisp. So the more people
you involve, the more what you make will look like a big pile of things...
kind of like CL already does. :)

Nothing keeps you from making your own Lisp dialect, just as nothing
keeps you from starting your own political party [2], except the fact that
it's a lot of work and initially quite lonely. It can either succeed
or fail spectacularly.

Personally, I think there are enough dialects about and that it's better
to just use one and extend it. If you don't like CL or Scheme, then try
ISLISP or, as you say, work with PG on ARC before you just start your own
completely from scratch. It will not only be less lonely, but it will
also mean that resourcewise you are adding to the energies of others rather
than dividing things up still further.

[1] "Technical Issues of Separation in Function Cells and Value Cells"
http://www.nhplace.com/kent/Papers/Technical-Issues.html

[2] Parenthetically Speaking with Kent M. Pitman:
"More Than Just Words: Lambda The Ultimate Political Party"
http://www.nhplace.com/kent/PS/Lambda.html

Jim Bender

May 15, 2003, 11:12:08 PM
At last I understand what all those yellow-colored "Dummy's Guide to
[whatever]" books are really about. The only thing I am puzzled about
is whether this is from the "Dummy's Guide to Lisp and Scheme" or from
the "Dummy's Guide to Linguistics" ;)

"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
news:87of23l...@thalassa.informatimago.com...

Pascal Costanza

May 16, 2003, 5:24:26 AM
Franz Kafka wrote:

> Here's an idea: why don't we take the benefits of Scheme (Lisp-1)
> and Common Lisp (Lisp-2) and build a Lisp-3 (not named yet)
> that combines all the benefits of Scheme and Common Lisp.
>
> Lisp-3 = Continuations + (Clean Function Calling Syntax) + Backtracking &
> Unification (from Prolog) + APL (array stuff) + CommonLisp + (support for
> introspection/reflection, code-walking)
>
> && make a kick ass Lisp system.

Because it's extremely hard to do. In general, language features don't
combine very well.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Dorai Sitaram

May 16, 2003, 8:27:52 AM
In article <87of23l...@thalassa.informatimago.com>,

Pascal Bourguignon <sp...@thalassa.informatimago.com> wrote:
>
>I was dubious about Lisp-2 at first, but finally I've noticed that in
>human languages, there are a lot of instances that show that we're
>wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

Human-language verbs and nouns correspond to global
function names and global variables. Common Lisp
practice doesn't allow you to share names between
these. For Common Lisp, the right column above should
properly be

(FLIES *FLY*)
(FLY *FLIES*)

or

(FLIES +FLY+)
(FLY +FLIES+)

Pascal Bourguignon

May 16, 2003, 9:28:30 AM
ds...@goldshoe.gte.com (Dorai Sitaram) writes:

Please, let me introduce you to Wally. Wally is a fly. The fly flies.
Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).

(defvar *Wally* (make-instance 'fly))
(defvar *Bally* (make-instance 'fly))
(let ((fly *Wally*)
      (flies (list *Wally* *Bally*)))
  (flies fly)
  (fly flies))

Bruce Lewis

May 16, 2003, 9:34:47 AM
Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

Here are a couple of hints if you want to start a CL/Scheme flame war:

1) Timing. There was just an extended discussion on lisp1 vs lisp2 in
c.l.l, so weariness of the topic will likely cause the flame war to
die out sooner than you intended.

2) Timing. Flame wars that start on Fridays tend to die out over the
weekend. Do your incendiary crosspost early in the week for best
results.

Paul Wallich

May 16, 2003, 9:42:15 AM

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)
>
>
> ;-)

More realistically, we're wired for a Lisp-N, where N is the number of
part-of-speech roles that can be used in a single sentence. For example
(excuse the bad attempt at urban slang): "Fly flies fly fly." It's the
parser technology combined with a desire for elegant syntax that makes 2
a reasonable limit.

paul

Franz Kafka

May 16, 2003, 9:42:50 AM

"Jim Bender" <j...@benderweb.net> wrote in message
news:cGYwa.1730$dE.536...@newssvr12.news.prodigy.com...

> At last I understand what all those yellow-colored "Dummy's Guide to
> [whatever]" books are really about. The only thing I am puzzled about
> is whether this is from the "Dummy's Guide to Lisp and Scheme"

David T's Common Lisp: A Gentile Introduction to Symbolic Computation,
avail. for free online.

But don't expect it to teach you all of Lisp in 21 days. It'll prob.
take three to six months.

Then read Sonya E. Keene's Object-Oriented Programming in Common Lisp:
A Programmer's Guide to CLOS, not avail. online.

After that you can glance at Peter Norvig's Lisp bible, Paradigms of
Artificial Intelligence Programming: Case Studies in Common Lisp, and
understand some of it.

In about six months, or less if you are a fast reader, you should
understand Common Lisp.

Reading The Schemer's Guide, from schemers.com, should teach
people who are in high school or in gifted programs how to use
Scheme.

That will take a month or two to get through iff you really want to
understand it.

& The Little Lisper/The Little Schemer and The Seasoned Schemer
should help people who are not Comp. Sci. majors understand Lisp/Scheme.


Kent M Pitman

May 16, 2003, 10:11:00 AM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com> writes:

> David T's Common Lisp: A Gentile Introduction to Symbolic Computation,
> avail. for free online.

Repeat after me...

* Common Lisp, unlike _some_ languages, accepts and fosters _multiple_
programming philosophies.

* You know those s-expressions in Common Lisp? They're _secular_ expressions.

* Touretzky's book is a "Gentle" introduction, not a "Gentile" introduction.

Thank you for your attention.

Joe Marshall

May 16, 2003, 10:21:32 AM
Bruce Lewis <brl...@yahoo.com> writes:

> Here are a couple of hints if you want to start a CL/Scheme flame war:
>
> 1) Timing. There was just an extended discussion on lisp1 vs lisp2 in
> c.l.l, so weariness of the topic will likely cause the flame war to
> die out sooner than you intended.
>
> 2) Timing. Flame wars that start on Fridays tend to die out over the
> weekend. Do your incendiary crosspost early in the week for best
> results.

3) Attitude. Assume that Common Lisp users are unaware of Scheme
and that they would prefer it if they were not so obviously
ignorant.

4) Attitude. Assume that use of Scheme is prima facie evidence of
superior reasoning power. Note that you yourself use Scheme.
Use fallacious modus tollens to draw conclusions about CL users.

5) Ad hominem reasoning can be used to extend the thread. Remember
that we are all unfriendly savages. Nazis, too.

Coby Beck

May 16, 2003, 10:44:03 AM

"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
news:87fznf1...@thalassa.informatimago.com...

> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
> Please, let me introduce you to Wally. Wally is a fly. The fly flies.
> Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).
>
> (defvar *Wally* (make-instance 'fly))
> (defvar *Bally* (make-instance 'fly))
> (let ((fly *Wally*)
>       (flies (list *Wally* *Bally*)))
>   (flies fly)
>   (fly flies))
>

<SPLAT>

C-LUSER -67322 > flies
(*wally*)


Dorai Sitaram

May 16, 2003, 11:14:39 AM
In article <ba2tl4$1igv$1...@otis.netspace.net.au>,

Coby Beck <cb...@mercury.bc.ca> wrote:
>
>"Pascal Bourguignon" <sp...@thalassa.informatimago.com> wrote in message
>news:87fznf1...@thalassa.informatimago.com...
>> ds...@goldshoe.gte.com (Dorai Sitaram) writes:
>> Please, let me introduce you to Wally. Wally is a fly. The fly flies.
>> Do you know Bally? Bally's a fly too. The flies fly. (Bally and Wally).

Please watch your attributions. I would never
introduce flies to people, unless they [1] were really
hungry.

--d

[1] Ambiguous anaphora retained, to show that lexical
variables don't model anaphora, which Pascal B
seems to think they do.

Matthias Blume

May 16, 2003, 11:16:12 AM
Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:

> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

So what? We are also "wired" for all sorts of misunderstandings,
ambiguities, cross-talk, etc. And these are just the difficulties that
*humans* have with natural language; computers are much, much worse
still. In other words, programming languages should *not*(!!) be
like natural languages.

(Note that this is not really an argument which directly applies to
the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
defends a particular programming language design because of how it
resembles natural language is seriously confused.)

Matthias

Eli Barzilay

May 16, 2003, 12:12:59 PM
Matthias Blume <fi...@me.else.where.org> writes:

> So what? We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc. And these are just the difficulties
> that *humans* have with natural language; computers are much, much
> worse still. In other words, programming languages should *not*(!!)
> be like natural languages.
>
> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

Sorry for the AOL-reply but... *Exactly*! Any kind of such
comparison between formal and natural languages leads to confusion and
problems. If anyone really wants to get better unification between
these two extremes (making them "wired" in the same way) they had better
be prepared to go the whole way... For example, you'd use statistical
parsers to understand your code, resulting in programs with
probabilistic outcomes ("I wrote a program that solves your problem,
but it has a few bugs which make it unreliable in bad weather") and
ambiguities -- (let ((let 1)) let) might give you 1, or it might
complain about a syntax error, or just give up and produce a "code too
confusing" error.

Also, operational semantics will need to consider such things as the
local cultures, slang, and general current knowledge since they can
all change the way that a program runs. And don't forget about many
other natural devices that will be available:

loop with a-variable from 1 to the-length-of-that-array-we-just-read
loop with a-different-variable from the-above-variable to the-same-length
increment counter by the-multiplication-of-these-two-loop-variables

The results will definitely be interesting, but I think I'll stick
with formal languages for hacking.

--
((lambda (x) (x x)) (lambda (x) (x x))) Eli Barzilay:
http://www.barzilay.org/ Maze is Life!

Kent M Pitman

May 16, 2003, 12:29:23 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Eli Barzilay <e...@barzilay.org> writes:

> Matthias Blume <fi...@me.else.where.org> writes:

(Not that these guys are reading on this newsgroup, but I'm not going
to cross-post anyway. Someone can tell them a "continuation" is available
on comp.lang.lisp. (And they thought we had no continuations...))

> > (Note that this is not really an argument which directly applies to
> > the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> > defends a particular programming language design because of how it
> > resembles natural language is seriously confused.)

I disagree.



> Sorry for the AOL-reply but... *Exactly*! Any kind of such

ANY kind? That seems a bit broad, to the point of useless. And if
narrowed, your remarks here seem to reduce to statements that are
largely false or irrelevant, depending on your point of view.

> comparison between formal and natural languages leads to confusion and
> problems.

I disagree.

Natural language is a cue to how we organize our brains.

I see no reason not to organize a language in a way that bears a shape
resemblance to how we think we think.

I CERTAINLY see no reason not to organize a computer language, which is
after all a means of communication, in the way people innately desire
to communicate. Most or all human languages exploit context and namespace;
I see no reason for programming languages not to follow their lead
provided no ambiguity results. People have said many things about multiple
namespaces, but they have never said it results in ambiguity.

The simplest and most obvious example of basing programming languages on
human languages is that we try to make the nouns, verbs, prepositions,
conjunctions, etc. mean something like what they mean in human language.

> For example, you'd use statistical
> parsers to understand your code,

This is a bogus argument. You're offering one possible way of choosing
badly as an argument for saying that there exists no possible way of
choosing well. No one is suggesting taking ALL features of natural
language into programming languages. To do that is simply to eliminate
programming languages. But once one is talking about taking only some
features, I don't think you can credibly argue there are no features in
natural languages that ought to be deliberately mimicked by programming
languages.

> Also, operational semantics will need to consider such things as the
> local cultures, slang, and general current knowledge since they can
> all change the way that a program runs.

More of same.

> The results will definitely be interesting, but I think I'll stick
> with formal languages for hacking.

Use of multiple namespaces is not "informal", even if it does come from
natural languages.

Eli Barzilay

May 16, 2003, 1:08:20 PM
Kent M Pitman <pit...@world.std.com> writes:

> [ replying to comp.lang.lisp only
> http://www.nhplace.com/kent/PFAQ/cross-posting.html ]
>
> Eli Barzilay <e...@barzilay.org> writes:
>
> > Matthias Blume <fi...@me.else.where.org> writes:
>
> (Not that these guys are reading on this newsgroup,

You'd be surprised... *Posting* to *this* newsgroup is a different
issue, as well as choosing to reply to a cross-posted message to just
one of the groups (especially when you have that assumption...).


> > Sorry for the AOL-reply but... *Exactly*! Any kind of such
>
> ANY kind? That seems a bit broad, to the point of useless. And if
> narrowed, your remarks here seem to reduce to statements that are
> largely false or irrelevant, depending on your point of view.

Well, OK, "any" is too broad -- there are some shared concepts like
"communication" (but hey, I'm using natural language here so vagueness
is a tool I can use).


> > comparison between formal and natural languages leads to confusion
> > and problems.
>
> I disagree.
>
> Natural language is a cue to how we organize our brains.

I agree with that, but I still object to this fact making natural
languages a role model for formal languages. Ambiguity is the most
obvious example of a tool that we explicitly want to have in a NL, yet
in a FL, even if you want to talk about ambiguity, you should do it in
unambiguous terms.


> [...] Most or all human languages exploit context and namespace; I
> see no reason for programming languages not to follow their lead
> provided no ambiguity results.

So the fact that such things as the standard example of "time flies
like an arrow" are part of natural languages means that you would
like to have such features in a programming language?


> People have said many things about multiple namespaces, but they
> have never said it results in ambiguity.

Ambiguity was just an example of my argument -- which is *not* in
favor of or against the double namespace. My argument is against using NL
analogies in the discussion (and I try to keep on this meta level
since the argument itself leads to very well-known results in this
context).


> The simplest and most obvious example of basing programming
> languages on human languages is that we try to make the nouns,
> verbs, prepositions, conjunctions, etc. mean something like what
> they mean in human language.

I'm sorry, but I don't see it that way. I can certainly see that some
objects in computer languages might resemble objects in natural ones,
but this all comes down to describing what a machine should do. If I
see a computer language that allows me to get such descriptions
better while not having such analogies of verbs and nouns, I would not
have problems using it. (But obviously that would not happen -- it is
the domain of making computers do things which have verb/noun/etc-like
concepts.)


> [...] No one is suggesting taking ALL features of natural language
> into programming langauges. [...]

Right -- which is what makes it a bad example to borrow from. I have
seen many arguments go by:

* Start with PL-feature X,
* Observe that feature X has an analogy in (unrelated) domain D,
* Observe that domain D has feature Y,
* Translate feature Y back into PL.

This, and some variants, are the sort of arguments which I object to
and want to avoid. The Lisp1/2 and verbs/nouns thing just happens to
be a popular instantiation. (Or, the fact that I might live in a
place with no theaters should not make me choose sides in the
continuation thread.)

BTW, there are natural languages with separate constructs for verbs and
nouns. If I use such a language, should I prefer Lisp1? Should I be
utterly confused by any arguments based on that feature of English?
Should I derail arguments by sticking my language in and leading
the discussion into a language discussion (i.e., cll and Latin)?


> > The results will definitely be interesting, but I think I'll stick
> > with formal languages for hacking.
>
> Use of multiple namespaces is not "unformal", even if it does come
> from natural languages.

It's the arguments used which are informal.


[I will try to not followup.]

Kent M Pitman

May 16, 2003, 1:29:50 PM
Eli Barzilay <e...@barzilay.org> writes:

> It's the arguments used which are informal.

In my personal experience, for whatever that's worth, I've found that
more often than not, requiring formal arguments is used as a means of
excluding people who are not prepared to offer them. It's often no
different than having a public servant who you try to ask a simple
question and they won't deal with it unless you fill out a standardized
form in triplicate.

There's a difference between requiring a sound argument and requiring
a formal one.

It's also the case that peoples' intuitions about what they want are
often well-founded even when they are unable to articulate a coherent
argument.

I personally do accept arguments that are informal, both because it makes
me feel less like I dismiss people out of hand and also because I sometimes
learn things about the world that dismissing things on the basis of form
would not allow me to learn.

You're welcome to do otherwise.

My intuition is that the reason that people from the Scheme community feel
(incorrectly) like there's some ambiguity in a Lisp2 is that they do not
see it following the rules they are familiar with and so they feel it must
not be following rules at all. The problem is then compounded because they
are intent on believing that it's not a natural way to think, and so they
refuse to learn the rule, and then in their mind I think they start to
believe that their unwillingness to learn the rule is a proof that people
can't learn the rule. I certainly am willing to believe that people who are
determined not to learn something will not learn it. That's a general truth
about the world. Beyond that, though, there's overwhelming evidence that
people can disambiguate even ACTUAL ambiguities. I see no reason to believe
they will have any trouble "disambiguating" (if that's what they like to
call it) the unambiguous notation offered by Common Lisp. And given that
they can do this, I have no shame about having been among those championing
the continued inclusion of this natural and useful feature (multiple
namespaces) in the language.

Incidentally, Java has this feature and no one makes noise about it at all.
Not only may the same name have multiple casifications, and this
apparently causes no confusion, but in fact they have a separation of
function and value namespaces, which also causes no confusion. I think
the reason it causes confusion in our community is that a few people have
elected themselves to teach people the confusion, just as racism persists
because people elect themselves to teach hatred and fear rather than
tolerance. If those in the Scheme community taught simply that there was
a choice between a single namespace and multiple namespaces, and that Scheme
has made the choice one way for sound reasons and that CL has made the choice
the other way for equally sound reasons, the issue would die away. But
because many in the Scheme community insist on not only observing the
difference (which surely objectively exists) but also claiming it is a Wrong
decision (in some absolutist sense, as if given by some canonically
designated God) (which is surely a mere subjective judgment), the problem
persists. The problem is not a technical one, but a social one.
And it will not be fixed by technical means, since nothing is broken.
It will be fixed by social means, that of tolerance.

I can live with Scheme people programming somewhere in a single namespace
without going out of my way to criticize them. I raise the criticisms I
do ONLY in the context of defending myself from someone's active attack
that claims I am using a multiple namespace language out of ignorance,
poverty, wrongheadedness, or some other such thing. I am not. I am using
it out of informed preference.


be...@sonic.net

May 16, 2003, 3:26:41 PM
Pascal Bourguignon wrote:
>
> I was dubious about Lisp-2 at first, but finally I've noticed that in
> human languages, there are a lot of instances that show that we're
> wired for a Lisp-2 rather than a Lisp-1:
>
> The fly flies. (FLIES FLY)
> The flies fly. (FLY FLIES)

In human languages, there is a balance to be achieved.
A certain amount of imprecision and ambiguity is actually
desirable, even necessary, in human languages. There are
many things important to us which we could not discuss at
all without ambiguity and imprecision. As somebody famous
once said, "never express yourself more clearly than you
think."

It is not fruitful to generalize too much from what makes
a good human language to ideas of what makes a good computer
language. Or vice versa.

Bear

Kent M Pitman

May 16, 2003, 3:49:17 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

be...@sonic.net writes:

People are NOT generalizing about what makes a good language.
They are using observations about the brain to claim that the
definition of "simple" that is often used by Scheme advocates
("must have a textually shorter formal semantics") is not the
only possible definition of simple. Some languages are hard to
learn but easy to use; some are easy to learn but hard to use.
It's hard to make a language that is all things to all people.

Languages are implemented more than once (except that Scheme advocates
seem obsessed with the exercise of implementing their own Scheme;
thank whatever deity you have that we made Common Lisp complex enough
that everyone off the street doesn't attempt that wasted exercise),
but programs are written many times. I want the language implemented
for easy program-writing, not for easy language implementation. I
want human languages implemented for easy reading of War and Peace,
not for ease of teaching the language and for tripling my reading
pleasure by tripling the length of War and Peace...

I take it as a given that the brain IS capable of executing multiple
rules at the same time with no loss of speed (there is ample evidence
that it is, or else we wouldn't design all human languages that way,
since it would slow down our thinking). Given that, all you are left
with is the question of whether to pretend we have a less powerful
processor available than we do.

It's funny to me how many people in the Scheme community profess a
strong desire for accommodating parallel processing, and yet how many
of those same people reject the possibility that the brain does
either parallelism or sufficiently fast multi-tasking that the
complexity of resolving context-based unambiguous notations is an
irrelevance. These same people assert boldly that I must not make
assumptions about the brain, but then themselves go and just as boldly
assert that they have an idea of what it is for something to be simple.
It's just... odd.

Bruce Lewis

May 16, 2003, 4:40:21 PM
Kent M Pitman <pit...@world.std.com> writes:

> If those in the Scheme community taught simply that there was a choice
> between a single namespace and multiple namespaces, and that Scheme
> has made the choice one way for sound reasons and that CL has made the
> choice the other way for equally sound reasons, the issue would die
> away. But because many in the Scheme community insist on not only
> observing the difference (which surely objectively exists) but also
> claiming it is a Wrong decision (in some absolutist sense, as if given
> by some canonically designated God) (which is surely a mere subjective
> judgment), the problem persists.

I only started following c.l.l again recently, so I was unaware that
folks here had quelled the voices claiming Lisp2 to be superior in some
absolutist sense. If the Scheme community has fallen behind in terms of
making sure everyone understands the valid reasons on both sides, then
I'm certainly happy to do my part bringing Schemers up to the level of
maturity you've achieved here. I certainly wouldn't want us to be the
sole cause of the problem.

Pascal Costanza

May 16, 2003, 4:54:35 PM

> In human languages, there is a balance to be achieved.
> A certain amount of imprecision and ambiguity is actually
> desirable, even necessary, in human languages. There are
> many things important to us which we could not discuss at
> all without ambiguity and imprecision. As somebody famous
> once said, "never express yourself more clearly than you
> think."
>
> It is not fruitful to generalize too much from what makes
> a good human language to ideas of what makes a good computer
> language. Or vice versa.

Polymorphism is all about _making_ code _deliberately_ ambiguous, so
that it has different semantics in different contexts!

Of course, programming languages are supposed to offer polymorphism in a
controlled way, but "a certain amount" is available in almost every
programming language.
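
For instance, a minimal CLOS sketch (the class and method names are
made up for illustration); the same call means different things
depending on the argument's class:

(defclass fly () ())
(defclass person () ())

(defgeneric move (thing)
  (:documentation "One name, context-dependent behavior."))

(defmethod move ((f fly))    "buzzes around")
(defmethod move ((p person)) "walks")

;; (move (make-instance 'fly))    => "buzzes around"
;; (move (make-instance 'person)) => "walks"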


Pascal

Pascal Costanza

May 16, 2003, 5:10:06 PM
I have thought about this whole issue a little, and at the moment I
think it is probably possible to integrate Lisp-2 and Lisp-1 into
one system.

The idea would be to integrate this in the package system such that you
can choose between Lisp-2 and Lisp-1 semantics when defining a new
package.

Problems should only occur when interfacing between Lisp-2 and Lisp-1
packages.

- When a Lisp-2 package imports a symbol from a Lisp-1 package, it would
see the same definition for both value cell and function cell. (If the
value cell doesn't contain a closure, the function cell could be
(constantly (symbol-value sym)), or something along these lines; see
the sketch after this list.)

- When a Lisp-1 package imports a symbol from a Lisp-2 package, either
the value cell gets priority over the function cell, or vice versa. This
could perhaps be configured at package definition time.

- When a Lisp-2 package modifies either the value cell or the function
cell of a Lisp-1 symbol, the other cell should be modified accordingly,
or this should signal a warning/error. Or?
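
A minimal sketch of the first rule above, using only standard cell
accessors (the helper name is invented for illustration):

(defun mirror-into-function-cell (sym)
  ;; Give SYM a function cell that tracks its value cell, the way a
  ;; Lisp-2 package might see a symbol imported from a Lisp-1 package.
  (let ((val (symbol-value sym)))
    (setf (symbol-function sym)
          (if (functionp val)
              val
              (constantly val)))))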

Would this be a reasonable approach, or am I missing something very
important?


Pascal

Shriram Krishnamurthi

May 16, 2003, 6:28:30 PM
Pascal Costanza <cost...@web.de> writes:

> Polymorphism is all about _making_ code _deliberately_ ambiguous, so
> that it has different semantics in different contexts!

You must be referring to subtype polymorphism, because this isn't true
of parametric polymorphism.

[Note fwps.]

Shriram

pentaside asleep

May 16, 2003, 6:51:35 PM
Kent M Pitman <pit...@world.std.com> wrote in message news:<sfwissa...@shell01.TheWorld.com>...

> Incidentally, Java has this feature and no one makes noise about it at all.
> Not only may the same name have multiple casifications

Whatever the merits of separate namespaces, I'm not a fan of this
argument, because people don't expect that much from Java's syntax,
other than not being too shocking. (Anyway, it's a static language,
and so the criticism until 1.5 is the need for casts.)

Bill Richter

May 16, 2003, 8:14:25 PM
> In other words, programming languages should *not*(!!) be like
> natural languages.

Right, Matthias. As Noam Chomsky says, natural languages are all
a zillion times more complicated than computer languages.

Kent M Pitman

May 16, 2003, 8:33:59 PM
Pascal Costanza <cost...@web.de> writes:

> I have thought about this whole issue a little, and at the moment I
> think it could probably be possible to integrate Lisp-2 and Lisp-1 into
> one system.
>
> The idea would be to integrate this in the package system such that you
> can choose between Lisp-2 and Lisp-1 semantics when defining a new
> package.

I've done serious work on this but ran out of time/resources in the
middle so my half-done project is languishing... It's more
complicated than I originally thought. That's not to say it's not
doable, but it's non-trivial.

> Problems should only occur when interfacing between Lisp-2 and Lisp-1
> packages.

Lisp1 and Lisp2-ness should be an attribute of identifiers, not of
symbols and packages. And certainly it should not be aggregated.
... well, I dunno about should not, but certainly _I_ wouldn't...
I can see where you're going here but I'm doubting I'm going to like
the solution much. You're basically just agreeing to disagree and solving
the problem at a package-to-package interface level, but that's in my mind
not that much different than what you can already do by linking a Lisp
and Scheme together and using an FFI to communicate. A real solution would
confront the problem of tighter integration rather than sweep that under
the rug. If you're going to sweep it under the rug, there are already good
solutions to that, like the ones we use to insulate Lisp from C or Java.

> - When a Lisp-2 package imports a symbol from a Lisp-1 package, it would
> see the same definition for both value cell and function cell. (If the
> value cell doesn't contain a closure, the function cell could be
> (constantly (symbol-value sym)), or something along these lines.)
>
> - When a Lisp-1 package imports a symbol from a Lisp-2 package, either
> the value cell gets priority over the function cell, or vice versa. This
> could perhaps be configured at package definiton time.
>
> - When a Lisp-2 package modifies either the value cell or the function
> cell of a Lisp-1 symbol, the other cell should be modified accordingly,
> or this signals a warning/error, or?
>
> Would this be a reasonable approach, or am I missing something very
> important?

This is not how I'd do it, but I don't have the energy today to
explain how I'd do it differently. The short form of the answer is,
though, that the words "symbol" and "import" would not occur anywhere
in my explanation of how to do it. These are, IMO, at the wrong level
of abstraction.

I also think it's a dreadful mistake to limit the nature of any serious
solution to merely CL and Scheme or merely Lisp1/Lisp2... I'd generalize
the result as much as possible once I was going to the work of doing it
at all.

Kent M Pitman

May 16, 2003, 8:54:11 PM
[ replying to comp.lang.lisp only
http://www.nhplace.com/kent/PFAQ/cross-posting.html ]

Bill Richter <ric...@artin.math.northwestern.edu> writes:

And as the late Prof. Bill Martin, a professor in the Lab for Computer
Science, some of whose many specialties were computational linguistics
and knowledge representation, said in a class I took from him [I'm
paraphrasing from a 20+ year old memory, but I think I've got the
basic sense of the remark right]: `People designed natural language in
order to be possible to learn.'

I took this statement of the seemingly obvious to be a form of
reassurance to those of us toying with getting computers to learn
language, like when I was working with Rubik's Cube and it helped me
to see that at least _someone_ had screwed up a cube and then later
solved it ... so that I would know it was worth persisting to find a
solution because there _was_ a solution waiting to be had, and it was
known to be tractable. (After you've gotten it sufficiently mixed up,
the fact that you could just 'invert what you've done' seems about as
promising as thinking that 'inverting what you've done' would work to
reassemble a sandcastle you've just kicked.)

But in the context I mention it here, it has the same meaning, just a
different spin: If people can understand a language that is a
"zillion" times more complicated than computer languages, then stop
telling me that CL is so much more complicated than Scheme that no one
will ever understand it. We have wetware that is field-tested for
much worse and is known to work just fine on that.

What _is_ a barrier to learning is putting fingers into your ears
and saying "I won't, I won't, I won't" or "I can't, I can't, I can't"
over and over.

Franz Kafka

May 16, 2003, 10:37:37 PM

>
> Languages are implemented more than once (except that Scheme advocates
> seem obsessed with the exercise of implementing their own Scheme;
> thank whatever deity you have that we made Common Lisp complex enough
> that everyone off the street doesn't attempt that wasted exercise),
> but programs are written many times.
>

You don't need to reimplement Common Lisp to get an understanding of how it
works--just implementing a subset of the language would be a better
educational experience than implementing the whole language, unless you want
to be a compiler writer.

I like Lisp because you can embed other languages such as Prolog--which has
been done by LispWorks and in numerous programming books.

When I was first learning Lisp, I wrote a simple assembly language in
Lisp to get a feel for the language.

It had something like:

(assemble
 '((move r1 5)
   (move r2 7)
   (add r1 r2)
   (sub r2 4)))

I used something like:

(cond
  ((eql (first opcode) 'move) (setf (second opcode) (third opcode)))
  ((eql (first opcode) 'add) (+ (second opcode) (third opcode)))
  ...
  ((eql (first opcode) 'sub) (- (second opcode) (third opcode))))

I just wrote this off the top of my head; it might be wrong. :)

I did not add jumps in it because I wanted just to get a feel for how Lisp
worked.

I had a symbol-table with move, add, sub, mul, div--and I did not
allow assignment--each line would print what happened:

r1=5 r2=7 r1+r2=12 r2-4=3

This taught me a lot--implementing a full assembler would prob. have
been a waste of time--it helped me learn how to implement an assembler
in C for a school project--at least I knew the algorithm was correct.

What I am trying to say is that Kent is right: you don't need to implement a
whole language--implementing a part of a language or instruction set will
teach you a lot about how the lang./asm. works.
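
For readers who want to try the idea, here is a small runnable sketch
along the same lines; the register table and printed trace are guesses
at the behavior described above, not the original code:

(defun assemble (program)
  ;; Interpret a tiny register program, printing each destination
  ;; register as it changes.
  (let ((regs (make-hash-table)))
    (flet ((val (x) (if (symbolp x) (gethash x regs 0) x)))
      (dolist (op program regs)
        (destructuring-bind (opcode dst src) op
          (ecase opcode
            (move (setf (gethash dst regs) (val src)))
            (add  (incf (gethash dst regs 0) (val src)))
            (sub  (decf (gethash dst regs 0) (val src))))
          (format t "~&~A = ~A" dst (gethash dst regs)))))))

;; (assemble '((move r1 5) (move r2 7) (add r1 r2) (sub r2 4)))
;; prints R1 = 5, R2 = 7, R1 = 12, R2 = 3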

Eli Barzilay

May 16, 2003, 11:10:01 PM
Kent M Pitman <pit...@world.std.com> writes:

> [...] I can live with Scheme people programming somewhere in a
> single namespace without going out of my way to criticize them. I
> raise the criticisms I do ONLY in the context of defending myself
> from someone's active attack that claims I am using a multiple
> namespace language out of ignorance, poverty, wrongheadedness, or
> some other such thing. [...]

For the record, there was *no* such context here or in the other
subthread (your reply to bear). In both cases Scheme was only
mentioned by yourself, and no criticism of Lisp-n for any value of
n was given.

Coby Beck

May 17, 2003, 2:48:07 AM

"Matthias Blume" <fi...@me.else.where.org> wrote in message
news:m24r3uj...@localhost.localdomain...

> Pascal Bourguignon <sp...@thalassa.informatimago.com> writes:
>
> > I was dubious about Lisp-2 at first, but finally I've noticed that in
> > human languages, there are a lot of instances that show that we're
> > wired for a Lisp-2 rather than a Lisp-1:
> >
> > The fly flies. (FLIES FLY)
> > The flies fly. (FLY FLIES)
>
> So what? We are also "wired" for all sorts of misunderstandings,
> ambiguities, cross-talk, etc. And these are just the difficultis that
> *humans* have with natural language; computers are much, much worse
> still. In other words, programming languages should *not*(!!) be
> like natural languages.

I don't think I agree with much of the above, really. Firstly, computer
languages, despite the name, should be designed for human understanding, so I
don't think the fact that it is hard for a computer (read: compiler writer)
should be a major factor in programming language design.

To your first point, and more off-topic, I don't know what you mean when you
say we are wired for misunderstanding. Don't we humans overcome these
difficulties for the most part? You express yourself very well in English.
Some truly
great works of literature have been expressed in many different natural
languages. Do you think you could design a better language? I don't mean
just fix a few ambiguous words, I mean start from scratch. What fundamental
things about natural language would you change?

I don't mean to suggest this is impossible, I'm really curious if you have
some concrete ideas for improvement.

> (Note that this is not really an argument which directly applies to
> the Lisp-1 vs. Lisp-2 debate. All I'm saying is that anyone who
> defends a particular programming language design because of how it
> resembles natural language is seriously confused.)

I disagree because again, a computer language is for a human programmer.

--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")


Raffael Cavallaro

May 20, 2003, 6:29:53 PM
"Coby Beck" <cb...@mercury.bc.ca> wrote in message news:<ba4lve$2euh$1...@otis.netspace.net.au>...

> Some truly
> great works of literature have been expressed in many different natural
> languages. Do you think you could design a better language? I don't mean
> just fix a few ambiguous words, I mean start from scratch. What fundamental
> things about natural language would you change?
>
> I don't mean to suggest this is impossible, I'm really curious if you have
> some concrete ideas for improvement.

It's even been argued, by the biological anthropologist Terry Deacon,
that our cognitive abilities have forced the form of natural
languages. In other words, since neurobiology changes fairly slowly
compared to language, natural languages have been selected for easy
learning and comprehension by *all* human beings, not just the best
and the brightest. What has survived is a range of natural languages
that take the same basic range of syntactical forms, i.e., those that
people can learn easily. Deacon's point is that the selective
bottleneck is childhood language acquisition. Any syntactic feature
too difficult for children to master will not survive as part of that
language into the next generation.

This co-evolution of language and human neurobiology has led to a
range of syntax that is easy for humans to learn, and to understand
relatively unambiguously. Stray outside this range, and you start
having difficulty mastering the syntax, and much greater chances of
misunderstanding and error.

This would suggest that computer languages should hew to the common
patterns and elements of natural languages in order to assure easier,
and clearer, comprehension by human programmers.

Viewed in this light, Larry Wall's views on natural language features
in Perl seem somewhat less idiosyncratic. I suspect that when the
history of computer languages comes to be written in a century, the
surviving languages will follow the natural language view much more
closely than the lisp view that everything can and should be rendered
as an s-expression. Under the covers, maybe, but not the syntax meant
to be read by human programmers.

I've always agreed with the position Coby is taking here - let the
compiler writers worry about how to make the language work. Especially
with ever increasing hardware resources, there's little point in
languages forcing human users to wrap their minds around unnatural
syntactic constructs just to make compiler writers' lives, or academic
proofs of program correctness, easier.

Raf

Thien-Thi Nguyen

May 20, 2003, 7:31:12 PM
raf...@mediaone.net (Raffael Cavallaro) writes:

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

yes, if you are only interested in teaching human programmers to program.

thi

Matthew Danish

May 20, 2003, 7:27:20 PM
On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> It's even been argued, by the biological anthropologist Terry Deacon,
> that our cognitive abilities have forced the form of natural
> languages. In other words, since neurobiology changes fairly slowly
> compared to language, natural languages have been selected for easy
> learning and comprehension by *all* human beings, not just the best
> and the brightest.

That still leaves a gigantic range of possibilities.

> This co-evolution of language and human neurobiology has led to a
> range of syntax that is easy for humans to learn, and to understand
> relatively unambiguously. Stray outside this range, and you start
> having difficulty mastering the syntax, and much greater chances of
> misunderstanding and error.

References would be nice, please.

> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

``Please close the window when it is raining.''

Computer checks: raining? No. Then it moves on.

5 minutes later it rains. Is the window going to be closed? No.

Would a human know to close the window? Yes.

So should the `when' operator imply some kind of constant background loop? I
am really unclear on this.

> Viewed in this light, Larry Wall's views on natural language features
> in Perl seem somewhat less idiosyncratic. I suspect that when the
> history of computer languages comes to be written in a century, the
> surviving languages will follow the natural language view much more
> closely than the lisp view that everything can and should be rendered
> as an s-expression. Under the covers, maybe, but not the syntax meant
> to be read by human programmers.

This assumes that natural languages are inherently more readable. Natural
language is optimized for the task of communicating with other human beings,
who are aware of context and can remember points and communicate back.

Even something as simple as nestable block structure is not well supported by
any natural language I can think of. Sure you can get by without that, but I
thought those languages were part of the past?

> I've always agreed with the position Coby is taking here - let the
> comiler writers worry about how to make the language work. Especially
> with ever increasing hardware resources, there's little point in
> languages forcing human users to wrap their minds around unnatural
> syntactic constructs just to make compiler writers' lives, or academic
> proofs of program correctness, easier.

Here's a sample piece of code inspired by a natural language:

``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
my opinion''

Silly me, (2 + 2) * 4 is so much more readable, right?

Well mathematician's notation is, IMHO, a hodge-podge of heavily
context-optimized notepad sketching thrown together into a giant pile and
slowly fed to new people in bits and pieces. I know that many people consider
it to be an emblem of higher learning, but I think that it's really just a
confusion of the syntax with the underlying concepts. Plus it can get to be
a real pain to edit when dealing with larger expressions.

So, arguments that (* (+ 2 2) 4) is not ``natural'' or ``intuitive'' (oh no!)
don't really fly with me, since I don't think that any notation is natural or
intuitive. And people have used many other notations in the past, in other
places, even though it is tempting to think that the status quo is somehow the
best.

I don't know about you, but I like having easy-to-write macros, and
code-as-data. And the Lisp syntax is merely a reflection of the truth of the
underlying nested data structure. Any other syntax is just an attempt to hide
that, and for what cause? A primitive need to ``look like'' natural language
while not really being close to it?

Do human logicians fall back to writing their theorems entirely in natural
language because it is somehow ``more readable'' that way? As much as logic
books can be dense with symbols, they are incredibly more readable than the
equivalent written out in some natural language.

Note that the above discussion is orthogonal to the Lisp-n issue, which is a
semantic one. The relation to natural language there is the sharing of the
ability to handle separate contexts; a necessity to understand natural
language. I believe Pascal is arguing that humans are able to do it for
natural language and therefore also for other languages. Certainly,
mathematics is full of context-sensitive notation, and would be a pain without
it. (I can see it now: how many different alphabets would have to be exhausted
so that every theorem can have its own "greek" letter(s)?).

But it isn't that Lisp-n for n > 1 is better because it imitates natural
language. There is, after all, no proof that imitating natural language
results in better computer language. However, the fact that humans can
accomodate separate namespaces/contexts in natural languages does seem to
indicate that humans can accomodate separate namespaces/contexts in computer
languages too.

--
; Matthew Danish <mda...@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

Daniel Barlow

May 21, 2003, 8:10:13 AM
Matthew Danish <mda...@andrew.cmu.edu> writes:

> ``Please close the window when it is raining.''
>
> Computer checks: raining? No. Then it moves on.
>
> 5 minutes later it rains. Is the window going to be closed? No.

I don't think I've ever seen a window rain, but I'd imagine that once
the glass is hot enough to melt there's not a lot of window left to
close anyway


-dan

--

http://www.cliki.net/ - Link farm for free CL-on-Unix resources

Grzegorz Chrupala

May 21, 2003, 11:00:48 AM
raf...@mediaone.net (Raffael Cavallaro) wrote in message news:<aeb7ff58.03052...@posting.google.com>...

> people can learn easily. Deacon's point is that the selective
> bottleneck is childhood language acquisition. Any syntactic feature
> too difficult for children to master will not survive as part of that
> language into the next generation.

Natural languages are only (relatively) easy to acquire in natural
settings (interacting with parents and peers), because humans seem to
have specialized wiring to deal with this. But anyway this only works
until puberty, more or less. Otherwise natural languages are pretty
hard to learn, as anyone who has tried learning a foreign language as
an adult can testify. They are rather more difficult than programming
languages, as far as I can tell from the experience of learning both
human and computer languages as an adult.

>
> This would suggest that computer languages should hew to the common
> patterns and elements of natural languages in order to assure easier,
> and clearer, comprehension by human programmers.

Given that the purpose and the way you use human languages is vastly
different from programming languages, I doubt designing a
pseudo-natural-syntax for a programming language would help to make it
clearer. I personally would much prefer a consistent, simple syntax
that I don't have to remember the quirks of to some sort of
pseudo-English (or pseudo-Polish ;)). I have also noted that when I
attempted to paraphrase some code I wrote (in Scheme, say) in order to
describe what it does, the resulting English prose is horribly
tortuous, wordy and far less clear than the original code. So I just
don't do it anymore if I don't have to.
All this makes me think that modelling a programming language syntax
after a natural language is, in general, a bad idea.

--
Grzegorz

Jochen Schmidt

May 21, 2003, 11:40:29 AM
On 21 May 2003 08:00:48 -0700, Grzegorz Chrupala <grze...@pithekos.net>
wrote:

> Given that the purpose and the way you use human languages is vastly
> different from programming languages, I doubt designing a
> pseudo-natural-syntax for a programming language would help to make it
> clearer. I personally would much prefer a consistent, simple syntax
> that I don't have to remember the quirks of to some sort of
> pseudo-English (or pseudo-Polish ;)). I have also noted than when I
> attempted to paraphrase some code I wrote (in Scheme, say) in order to
> describe what it does, the resulting English prose is horribly
> tortuous, wordy and far less clear than the original code. So I just
> don't do it anymore if I don't have to.
> All this makes me think that modelling a programming language syntax
> after a natural language is, in general, a bad idea.

The discussion is *not* about modelling a programming language syntax after
a natural language, but about making use of the available wetware in
people's brains in order to write more expressive code.

I disagree that the purpose and the way of use of human languages is vastly
different from programming languages. The purpose is to communicate an idea,
to other humans _and_ to the computer. The way you use it is by dialog or
through whole "documents".

Programming languages are better suited to describing typical programming
ideas than plain human language, because they are designed and/or grown to
do it better. This is not much different from the language a mechanic uses
to talk to his colleagues. Just because you learned English does not mean
you can understand what they say. Other environments, like being
underwater, lead to other constraints, in which sub-languages evolve that
are obviously more efficient than plain spoken human language.

Since the brain is indeed able to cope with context pretty well, the idea
of making use of this facility is not a bad one.

ciao,
Jochen

Raffael Cavallaro

May 21, 2003, 6:36:46 PM
Matthew Danish <mda...@andrew.cmu.edu> wrote in message news:<2003052019...@mapcar.org>...

> On Tue, May 20, 2003 at 03:29:53PM -0700, Raffael Cavallaro wrote:
> > It's even been argued, by the biological anthropologist Terry Deacon,
> > that our cognitive abilities have forced the form of natural
> > languages. In other words, since neurobiology changes fairly slowly
> > compared to language, natural languages have been selected for easy
> > learning and comprehension by *all* human beings, not just the best
> > and the brightest.
>
> That still leaves a gigantic range of possibilities.

Not really. The range of human grammars is actually quite limited. The
differences are largely superficial - e.g., different bindings for
different concepts, as it were, but still the same basic structures.
This whole issue is the basis for the now universally accepted view
that we have built in neurological "wiring" for language acquisition.
This would not be possible if the range of grammars were not extremely
limited - the language "instinct" wouldn't work with a sufficiently
different grammar.


> References would be nice, please.

<http://www.amazon.com/exec/obidos/tg/detail/-/0393317544/104-1144123-1355147?vi=glance>

(if the above gets split it should be one line)

_The Symbolic Species: The Co-Evolution of Language and the Brain_ by
Terry Deacon.


> ``Please close the window when it is raining.''
>
> Computer checks: raining? No. Then it moves on.
>
> 5 minutes later it rains. Is the window going to be closed? No.
>
> Would a human know to close the window? Yes.
>
> So should the `when' operator imply some kind of constant background loop? I
> am really unclear on this.


In brief, yes. An "if" construct suggests a single conditional check.
A "when," or "whenever," construct suggests continuous background
polling. This is how GUIs are written. Whether or not the actual
program text uses the word "when," the semantics of the program are
clear: "When the user presses this button, execute this block of
code." I am merely suggesting that the program text should parallel
the existing structures in natural languages, i.e., the program text
should read: "when(button1.pressed?) exec(block1)", which would set up
a continuous background loop. The programmer could stop this loop with
code like: "stop-checking(button1.pressed?)"

In other words, we are continually forced to jump through unnatural
mental hoops to make our ideas take the form of a mathematical
algorithm (since this is what computer languages were originally
designed to execute). This may work well for scientific calculations
(hence the continued popularity of Fortran), but it really sucks for
most other types of information processing.

> This assumes that natural languages are inherently more readable. Natural
> language is optimized for the task of communicating with other human beings,
> who are aware of context and can remember points and communicate back.

Then the compiler writers will need to include the ability to be aware
of relevant context (i.e., different compiler behavior with the same
program text depending on context), to remember points (a good deal
more introspection and maintenance of program state), and to
communicate back (greatly improved, and *context sensitive*, compiler
warnings and error reporting). This is going to happen. It's just a
question of when, and by whom, not whether.

Why? Because the "Software Crisis" will only be solved by enabling
power users to write their own applications. I'm convinced that the
real scarcity is not competent programmers, but domain expertise. Many
people can learn to code. Very few people have the domain expertise to
code the right thing. Acquiring domain expertise in many fields that
need a great deal of software is far more difficult than learning to
program competently. How many professional software developers have
the equivalent domain knowledge of a Ph.D. in molecular biology, or a
professional options trader, etc. Wouldn't it make more sense to
develop compilers that were easier to work with, than to have coders
acquire a half baked, partly broken level of domain expertise for each
new project they undertake?

> Even something as simple as nestable block structure is not well supported by
> any natural language I can think of. Sure you can get by without that, but I
> thought those languages were part of the past?

If you think that nestable block structure is necessary for the
communication of complex ideas then you're thinking in assembler and
not in a natural language. The final compiler output may need to have
nested blocks, but that doesn't mean that the program text needs to be
expressed as nested blocks.

> Here's a sample piece of code inspired by a natural language:
>
> ``2 parameter and 2 parameter add number parameter and 4 parameter multiply, in
> my opinion''

Or "(two plus two) times 4." Your suggestion above is laughably
contrived.


> Do human logicians fall back to writing their theorems entirely in natural
> language because it is somehow ``more readable'' that way? As much as logic
> books can be dense with symbols, they are incredibly more readable than the
> equivalent written out in some natural language.


But most software is not needed by human logicians. It is needed by
human bankers, and human market traders, and human accountants, and
human molecular biologists, and they all communicate quite well in
natural language, modulo a sprinkling of domain specific notation.

> But it isn't that Lisp-n for n > 1 is better because it imitates natural
> language. There is, after all, no proof that imitating natural language
> results in better computer language.

Better for whom? For ordinary people, there is ample proof that
computer languages that more closely resemble natural language are
"better": ordinary people simply don't use languages that aren't
sufficiently like natural languages at all. But lay people do use more
natural-language-like languages such as AppleScript and Smalltalk.



> However, the fact that humans can
> accomodate separate namespaces/contexts in natural languages does seem to
> indicate that humans can accomodate separate namespaces/contexts in computer
> languages too.

Yes. WRT the thread topic, a Lisp-n for n>1 (in fact, for unbounded n,
since new contexts can always arise) would be closer to natural
language.
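
To see why more contexts need not confuse readers, note that Common
Lisp's existing two namespaces already let one symbol play two roles at
once, resolved purely by position - just as a word is resolved by its
position in a sentence. A trivial illustrative sketch (LIST-LENGTHS is
a made-up name):

  (defun list-lengths (list)         ; LIST here is a variable...
    (mapcar #'length list))          ; ...while LIST the function sits
                                     ; untouched in its own namespace
  (list-lengths (list "ab" "abc"))   ; => (2 3)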

Alexander Schmolck

unread,
May 21, 2003, 7:28:33 PM5/21/03
to
raf...@mediaone.net (Raffael Cavallaro) writes:
> Not really. The range of human grammars is actually quite limited. The
> differences are largely superficial - e.g., different bindings for
> different concepts, as it were, but still the same basic structures.
> This whole issue is the basis for the now universally accepted view
> that we have built-in neurological "wiring" for language acquisition.

Although this is rather OT here, outside Chomskian linguistics this view is
certainly not universally accepted.

'as

Grzegorz Chrupala

unread,
May 22, 2003, 5:31:06 AM5/22/03
to
Jochen Schmidt <j...@dataheaven.de> wrote in message news:<oprpi4hr...@news.btx.dtag.de>...

> I disagree that the purpose and the way of use of human languages is vastly
> different from programming languages. The purpose is to communicate an idea
> - to other humans _and_ to the computer.
> The way you use it is by dialog or through whole "documents".

Well, maybe not *vastly* different, but telling a computer what to do
and having a conversation with a human being are sufficiently
different that most analogies will be misleading.

>
> Programming languages are better suited to describe typical programming
> ideas
> than plain human language, because they are designed and/grown to do
> better.

Agreed. And I happen to think that making programming languages
context dependent or ambiguous or syntactically similar to human
language would probably not make them any better suited to "describe
typical programming ideas".

>
> Since the brain is indeed able to cope with context pretty well there the
> idea
> to make use of this facility is not a bad one.

The brain is able to cope with *a lot*. The question is:
Is introducing context actually going to help humans learn and use CL?
If there is a cost associated with context-dependent processing, then
do its supposed benefits outweigh this cost?

--
Grzegorz

Raffael Cavallaro

unread,
May 22, 2003, 8:06:21 AM5/22/03
to
Alexander Schmolck <a.sch...@gmx.net> wrote in message news:<yfsy90z...@black132.ex.ac.uk>...

I think you misunderstand me. The Chomskian view is the extreme
position that *all* grammatical learning abilities are pre-wired. The
other extreme position is the classical "tabula rasa," or blank slate
position, i.e., that people are born with *no* cognitive instincts.

No linguist, indeed, almost no student of human cognition, now holds
the tabula rasa position, although it was widely held only a century
ago. This does not mean that all linguists hold the extreme Chomskian
position, and Deacon certainly does not.

Terry Deacon takes a more moderate position - specifically, that the
innate "language instinct" consists mostly of our inborn ability to
think symbolically, combined with some very well documented brain
specialization. Given the limited cognitive abilities of children
compared to adults, we get a range of human grammars limited by their
learnability by human children.

Raf

Tim Bradshaw

unread,
May 22, 2003, 8:32:21 AM5/22/03
to
* Raffael Cavallaro wrote:

> No linguist, indeed, almost no student of human cognition, now holds
> the tabula rasa position, although it was widely held only a century
> ago. This does not mean that all linguists hold the extreme Chomskian
> position, and Deacon certainly does not.

As I understand it there are some very good arguments against the
tabula rasa position. In particular you can look at the amount of
data a general grammar-learner needs to learn a grammar, and you find
that people get a small fraction of this. So either they have
some special wiring, or they do magic.

--tim

Burton Samograd

unread,
May 22, 2003, 10:02:00 AM5/22/03
to
Tim Bradshaw <t...@cley.com> writes:
> As I understand it there are some very good arguments against the
> tabula rasa position. In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this. So either they have
> some special wiring, or they do magic.

I just finished reading a very interesting book that covers this
subject, called "Jungian Archetypes" (I forget the author's name
though). It's a great book for geek types: written by a mathematician,
it discusses the evolution of scientific and mathematical thought over
the centuries and how it led to clinical psychology. The idea of
"tabula rasa" is replaced by ingrained archetypes which are carried in
ourselves and in the stories we are exposed to (which make up part of
the collective unconscious). It also gives one of the best
explanations of Gödel's theorem I've read anywhere - a perfect geek
psychology book.

--
burton samograd
kru...@kruhft.dyndns.org
http://kruhftwerk.dyndns.org

Alexander Schmolck

unread,
May 22, 2003, 4:21:52 PM5/22/03
to
Tim Bradshaw <t...@cley.com> writes:

This is the so-called "poverty of stimulus argument" and both Chomsky (1959)
[1] and particularly Gold (1967) are often cited as having formally
demonstrated that human languages are not learnable without, as you write
above, "some special wiring".

Alas, and maybe not too surprisingly, there is some prominent
disagreement on the validity of the assumptions of the underlying
learning model that Gold's learning-theoretical treatment rests upon.

For example, Quartz and Sejnowksi (1997) conclude [2]:

"Hence, the negative learnability results do not indicate anything about the
learnability of human language as much as they do about the insufficiency of
the particular learning model."

Although Sejnowski, Elman, McClelland, Rumelhart and other
connectionists (who have been challenging the established nativist
position on language acquisition since the late '80s) might be dead
wrong, they are certainly not stupid or marginal. Indeed many of them
have contributed significantly to both psychology and AI/pattern
recognition, and barring some grave misunderstanding on my part none
of them seems to be particularly committed to the "universally
accepted view that we have built-in neurological "wiring" for language
acquisition".

'as

[1] http://cogprints.ecs.soton.ac.uk/archive/00001148/00/chomsky.htm
[2] http://citeseer.nj.nec.com/quartz97neural.html

post scriptum for the linguistically inclined:

Apart from learning theory and neurological studies, another nativist
line of defense is the demonstration of so-called "universals" that
hold across all languages, many of which are deemed to be functionally
arbitrary (and hence neutral alternatives in a non-nativist
framework). Although again there is quite a bit of controversy about
the validity and interpretation of much of the data, I find the
intellectual appeal of many of these arguments and observations quite
undeniable.

This is maybe not the best example, but pay attention to the referents:

Mary is eager to please.

vs.

Mary is easy to please.

John promises Bill to wash him.
John promises Bill to wash himself.

vs.

John persuades Bill to wash him.
John persuades Bill to wash himself.

A nativist would say that the contrasted sentences are structurally
equivalent, so how is the learner supposed to implicitly derive when
'himself' refers to the subject and when to the object of the
sentence? Children are never taught this explicitly, and yet they
never seem to make certain kinds of mistakes one would expect on the
basis of examples like these.

Jochen Schmidt

unread,
May 22, 2003, 4:48:43 PM5/22/03
to
On 22 May 2003 02:31:06 -0700, Grzegorz Chrupala <grze...@pithekos.net>
wrote:

> Jochen Schmidt <j...@dataheaven.de> wrote in message

> news:<oprpi4hr...@news.btx.dtag.de>...
>
>> I disagree that the purpose and the way of use of human languages is
>> vastly
>> different from programming languages. The purpose is to communicate an
>> idea
>> - to other humans _and_ to the computer.
>> The way you use it is by dialog or through whole "documents".
>
> Well, maybe not *vastly* different, but telling a computer what to do
> and having a conversation with a human being are sufficiently
> different that most analogies will be misleading.

Programming languages are not only meant to communicate to computers.
Programs get read more often by humans than by machines.
What makes Programming languages special is that they can be "understood"
by computers in a straightforward way.

>> Programming languages are better suited to describe typical programming
>> ideas
>> than plain human language, because they are designed and/grown to do
>> better.
>
> Agreed. And I happen to think that making programming languages
> context dependent or ambiguous or syntactically similar to human
> language would probably not make them any better suited to "describe
> typical programming ideas".

The "typical programming ideas" are a very fluid and quick changing thing.
Depending on what you want to accomplish you need to adapt your language to
your domain to be efficient. What you perceive statically as "human
language"
here doesn't make any domain topic easier to talk about than the right
domain
language. When creating such domain languages do you really claim that one
should stay away from concepts mainly known from "human languages"? Why?

>> Since the brain is indeed able to cope with context pretty well there
>> the idea
>> to make use of this facility is not a bad one.
>
> The brain is able to cope with *a lot*. The question is:
> Is introducing context actually going to help humans learn and use CL?
> If there is a cost associated with context-dependent processing, then
> do its supposed benefits outweigh this cost?

Concepts like context allow humans to express programs with means they
already understand in their wetware. We already paid the bill - the
facility is already installed, and it gets used and trained on a daily
basis...

ciao,
Jochen

Matthew Danish

unread,
May 22, 2003, 6:18:12 PM5/22/03
to
On Thu, May 22, 2003 at 09:21:52PM +0100, Alexander Schmolck wrote:
> This is maybe not the best example, but pay attention to the referents:
>
> Mary is eager to please.
>
> vs.
>
> Mary is easy to please.

I find 'eager' to be a more 'active' term than 'easy', something that
Mary is actively doing rather than a passive description.

> John promises Bill to wash him.
> John promises Bill to wash himself.
>
> vs.
>
> John persuades Bill to wash him.
> John persuades Bill to wash himself.
>
> A nativist would say that the contrasted sentences are structurally
> equivalent, so how is the learner supposed to implicitly derive when
> 'himself' refers to the subject and when to the object of the
> sentence? Children are never taught this explicitly, and yet they
> never seem to make certain kinds of mistakes one would expect on the
> basis of examples like these.

Because the difference lies deeper than structural. It lies in the
meaning of the verbs.

John promises Bill to wash him.

meaning, at some future time

John washes him.

But if it were to be "John washes John" then you would normally use
'himself', so the ambiguity is resolved by choosing the other person.

Similarly,

John promises Bill to wash himself.

to

John washes himself.

Because the verb "to promise" implies that John will do something.

Whereas the verb "to persuade" implies that Bill is going to do
something.

John persuades Bill to wash him.

means that

Bill washes him.

And 'him' is resolved similarly to before.

I'm sure that linguists have thought of this difference before, and
there are probably better examples. I doubt the nativists are that
naive.

There's always the learn-by-example mode too. I think that I learned a
lot of English just by being exposed to it in books. I never knew
anything formal about grammar until I learned Spanish, and I often
figured out how to form sentences by picking out remembered phrases.
(This was called the "feels-right" school of grammar by one English
teacher.)

Raffael Cavallaro

unread,
May 22, 2003, 7:46:23 PM5/22/03
to
Tim Bradshaw <t...@cley.com> wrote in message news:<ey3ptmb...@cley.com>...


> As I understand it there are some very good arguments against the
> tabula rasa position. In particular you can look at the amount of
> data a general grammar-learner needs to learn a grammar, and you find
> that people get a small fraction of this. So either they have
> some special wiring, or they do magic.

Indeed. This argument is known as "The Poverty of the Input," i.e.,
children are not exposed to enough examples to generate all of the
grammatical rules that they learn.

This is one of several reasons that no serious student of human
cognition holds the strong tabula rasa position any more.

Raf

Alexander Schmolck

unread,
May 22, 2003, 8:34:38 PM5/22/03
to
I'm rather tired now, but I'll try to answer your points before I leave for a
couple of days. I make no claims to special expertise on the topic.

Matthew Danish <mda...@andrew.cmu.edu> writes:
> > John promises Bill to wash him.
> > John promises Bill to wash himself.
> >
> > vs.
> >
> > John persuades Bill to wash him.
> > John persuades Bill to wash himself.
> >
> > A nativist would say that the contrasted sentences are structurally
> > equivalent, so how is the learner supposed to implicitly derive when
> > 'himself' refers to the subject and when to the object of the
> > sentence? Children are never taught this explicitly, and yet they
> > never seem to make certain kinds of mistakes one would expect on the
> > basis of examples like these.
>
> Because the difference lies deeper than structural. It lies in the
> meaning of the verbs.

Yes -- but then it is precisely the meaning of the verb
(promise/persuade) that you are trying to learn, and this is, I hope
we will agree, not made easier by the fact that the correct
determination of referents is not possible by just understanding the
structure of, or indeed all the other words in, the sentence.

>
> John promises Bill to wash him.
>
> meaning, at some future time
>
> John washes him.
>
> But if it were to be "John washes John" then you would normally use
> 'himself', so the ambiguity is resolved by choosing the other person.
>
> Similarly,
>
> John promises Bill to wash himself.
>
> to
>
> John washes himself.
>
> Because the verb "to promise" implies that John will do something.
>
> Whereas the verb "to persuade" implies that Bill is going to do
> something.
>
> John persuades Bill to wash him.
>
> means that
>
> Bill washes him.
>
> And 'him' is resolved similarly to before.
>
> I'm sure that linguists have thought of this difference before, and there
> are probably better examples. I doubt the nativists are that naive.

While I am willing to take the blame for my selection of the examples
(and any misrepresentation of their use in nativist arguments), I'd
like to point out that the first example, if I am not mistaken,
originates from Chomsky (the source of the citation is Ken Wexler's
MIT Encyclopedia of Cognitive Science entry on the Poverty of Stimulus
Argument). Since both examples are not of my own making and, with
slight variations, occur not infrequently in the literature, any
naivety evident in the examples alone is indeed shared by prominent
nativists.


> There's always the learn-by-example mode too. I think that I learned a lot
> of English just by being exposed to it in books. I never knew anything
> formal about grammar until I learned Spanish, and I figured out how to form
> sentences often times by picking out remembered phrases. (This was called
> the "feels-right" school of grammar by one English teacher).

I am not quite sure what to make of this paragraph. The nativist
argument is precisely that you can't learn *just* by example (as the
training input you receive alone is by far not rich enough to deduce
the rules that generated it, whether those rules are explicit or not),
thus the claim that there is always a learn-by-example mode, too,
seems rather bizarre to me in my tired condition.

'as

Jeff Caldwell

unread,
May 23, 2003, 12:09:13 AM5/23/03
to
Raffael Cavallaro wrote:
...

> In other words, we are continually forced to jump through unnatural
> mental hoops to make our ideas take the form of a mathematical
> algorithm (since this is what computer languages were originally
> designed to execute). This may work well for scientific calculations
> (hence the continued popularity of Fortran), but it really sucks for
> most other types of information processing.

...

It appears you agree that different languages are appropriate for
different knowledge domains.

> ... the "Software Crisis" will only be solved by enabling


> power users to write their own applications.

This is a re-hash of the rise of the spreadsheet and the appearance of
PCs in the accounting department. The statement is true, has been
proven true, and the phenomenon will continue to evolve. (Spreadsheet
programs can be viewed as a different language with a different user
interface, reinforcing the prior point.)

> I'm convinced that the
> real scarcity is not competent programmers, but domain expertise....


> How many professional software developers have
> the equivalent domain knowledge of a Ph.D. in molecular biology, or of
> a professional options trader, etc.?

How many Ph.D.'s in molecular biology have the equivalent computer
science domain knowledge of a Ph.D. in computer science? I think you
are saying more that as spreadsheets became available for accountants,
something else will become available for molecular biologists and a
broad group of other people, and saying less that everyone must learn to
hold Ph.D.'s in computer science as well as in their own domain.

The work done with spreadsheets did in fact lessen the workload
placed upon corporate IT departments. Those departments were overloaded
far beyond their ability to respond when spreadsheets appeared, and
spreadsheets and PCs were a good thing for the user departments. This
also allowed the IT departments to begin focusing more upon projects
affecting the larger enterprise rather than locally optimizing specific
departments.

The real impact upon corporate IT departments came when standard
packages, such as SAP, became widespread. IT became responsible more for
implementation and less for development. To some extent, I think of
SAP-like applications as a meta-super-spreadsheet language used by IT.

> Wouldn't it make more sense to
> develop compilers that were easier to work with, than to have coders
> acquire a half baked, partly broken level of domain expertise for each
> new project they undertake?

Yes, but spreadsheets can go only so far. I discuss this more below.

> If you think that nestable block structure is necessary for the
> communication of complex ideas then you're thinking in assember and
> not in a natural language.

Most books have a table of contents. Most books are structured at least
into chapters. Many chapters are divided into sections and most chapters
and sections are divided into paragraphs. Most paragraphs are divided
into sentences. These structures define the boundaries of contextual
structures.

> But most software is not needed by human logicians. It is needed by
> human bankers, and human market traders, and human accountants, and
> human molecular biologists, and they all communicate quite well in
> natural language, modulo a sprinkling of domain specific notation.
>

A programmer, or more likely a team consisting of project managers,
software engineers, quality assurance personnel, documentation
specialists, and programmers, will work with those with domain expertise
to design and develop a language appropriate to a range of domains.
Consider this the invention of the spreadsheet for that domain range.
Only then will the banker/trader/accountant/biologist be empowered to
use their spreadsheet equivalent.

Yes, the efforts may gain leverage from each other. Yes the domain
ranges may be large or may grow over time.

...


> Better for whom? For ordinary people, there is ample proof that
> computer languages that more closely resemble natural language are
> "better" - they simply don't use languages that aren't sufficiently
> like natural languages at all. But lay people do use more
> natural-language-like languages such as AppleScript, and Smalltalk.
>

I disagree that true natural language will produce the results you
seem to claim. The law is written in natural language, with
domain-specific extensions and idioms. Look at the practice of law,
the number of lawyers and judges, and the disputes over fine points of
meaning. How many lines in a nation's constitution have been
questioned and reinterpreted, asking about the founders' intent and
other factors? Is there unambiguous meaning in complex law? Are
ambiguous software specifications to be trusted in domains such as
banking?

Beyond a certain simplistic level, anything stated by a banker about
software desired by the banker quickly exceeds the banker's domain
knowledge about systems and system behavior. Locking strategies,
replication mechanisms, performance bottlenecks, concurrent behaviors...
the banker knows nothing about these. The banker must rely upon the
default behaviors provided by the underlying software. When more than
that is required, people with domain knowledge about locking strategies
and replication mechanisms must become involved. Such people will
always be needed, although a long-running trend has concentrated them
more in system-domain companies and less in application-domain
companies.

The brain's basic wiring for language enabled communication between
humans, providing a competitive survival edge in a given natural
environment. Saying that that mechanism is the ideal way to specify
Ph.D.-level computer science thought about machine behavior does not
seem like a strong claim to me.

To say that bankers can express application behaviors best through
natural language is true only to the extent that what they express does
not exceed their domain knowledge and to the extent that what they
express in natural language is not subject to multiple interpretations
such as we find even in the shortest legal documents such as
constitutions.

A banker using natural language to express desired machine behaviors
quickly will find repetitive expressions, begin to find them tedious,
and begin to look for shorter methods of expressing common patterns.
Over a period of years, if the compilers adapt to the banker's desire
for these less-tedious means of expression, the banker will end up with
a domain-specific language for expressing desired machine behaviors.

Ph.D. computer scientists, and others with machine and system domain
knowledge, today can use languages such as Lisp to build domain-specific
languages for expressing desired machine behaviors. These people build
domain-specific languages up from languages built to express machine
behaviors. Bankers ultimately may use natural language to build
domain-specific languages for expressing desired machine behaviors. They
will have built their domain-specific language down from natural language.
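
The "built up from Lisp" half of that is routine today. Here is a toy
sketch of what such a bottom-up domain language looks like -
DEFINE-FEE-RULE, BALANCE, and CHARGE are hypothetical names of my own
invention, not any real banking library:

  (defmacro define-fee-rule (name (account) condition fee)
    "Define NAME as a rule that charges FEE when CONDITION holds."
    `(defun ,name (,account)
       (when ,condition
         (charge ,account ,fee))))

  ;; The domain expert then writes in vocabulary, not plumbing:
  (define-fee-rule overdraft-fee (acct)
    (minusp (balance acct))
    35)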

One idea is that when building large, complex financial systems, a
banker can express all the proper machine behaviors in natural language
without a systems person on the team. Another idea is that when building
large, complex banking systems, a systems person can express all the
proper financial behaviors in any language without a banker on the team.
Both ideas seem equally untenable to me. Sufficiently reduce the domain
knowledge required, and a banker can write a spreadsheet application and
a programmer can balance a checkbook.

Raffael Cavallaro

unread,
May 23, 2003, 8:28:25 AM5/23/03
to
Jeff Caldwell <jd...@yahoo.com> wrote in message news:<J9hza.718$H84.3...@news1.news.adelphia.net>...

> How many Ph.D.'s in molecular biology have the equivalent computer
> science domain knowledge of a Ph.D. in computer science?

You miss my point. With better (read: more natural-language-like)
computer languages, Ph.D.s in molecular biology wouldn't *need* the
equivalent computer science domain knowledge of a Ph.D. in computer
science. Only compiler writers would need this level of knowledge.
Everyone else would leverage it by using a better-designed computer
language.

> I think you
> are saying more that as spreadsheets became available for accountants,
> something else will become available for molecular biologists and a
> broad group of other people, and saying less that everyone must learn to
> hold Ph.D.'s in computer science as well as in their own domain.

Yup, now you're arguing my point. We need a natural-language-like
computer language that is the next step beyond spreadsheets, as it
were.


> Most books have a table of contents. Most books are structured at least
> into chapters. Many chapters are divided into sections and most chapters
> and sections are divided into paragraphs. Most paragraphs are divided
> into sentences. These structures define the boundaries of contextual
> structures.

Exactly. These are natural language structures that could be
*compiled* into nested block structures. But no ordinary person lays
out paragraphs as nested blocks. Their nesting (or lack of nesting, as
not all sequential paragraphs correspond to nested blocks) is
determined by cue phrases such as "alternatively," (read: here, now, I
present a different branch).
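
As a sketch of the mapping I have in mind, a cue phrase would compile
to the same branch a Lisp programmer writes by hand today (CHARGE-FEE
and PAY-INTEREST are invented for the example):

  ;; "If the balance is negative, charge a fee.
  ;;  Alternatively, pay interest."
  (cond ((minusp balance) (charge-fee))    ; "If ..."
        (t (pay-interest)))                ; "Alternatively, ..."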


> A programmer, or more likely a team consisting of project managers,
> software engineers, quality assurance personnel, documentation
> specialists, and programmers, will work with those with domain expertise
> to design and develop a language appropriate to a range of domains.
> Consider this the invention of the spreadsheet for that domain range.
> Only then will the banker/trader/accountant/biologist be empowered to
> use their spreadsheet equivalent.

Or a more general-purpose language will be developed that allows
people from different domains to write their own software. This will
be much more useful, affordable, and flexible than calling in a team
of software engineers, QA personnel, documentation specialists, and
programmers for each new domain to receive its own limited-extent,
domain-specific language. What's the failure rate of large, complex
software projects these days? And you expect domain experts to play
those sorts of odds just to get a limited-use language?

Better for the people with the greatest computer science expertise to
write a compiler for a general purpose, natural-language-like computer
language.


> I disagree that true natural language will produce the results you seem
> to claim. The law is written in natural language, with domain-specific
> extensions and idioms. Look at the practice of law, the number of
> lawyers and judges, and the disputes over fine points of meaning. How
> many lines in a nation's consitution have been questioned and
> reinterpreted, asking about the founder's intent and other factors? Is
> there unambiguous meaning to complex law? Are ambiguous software
> specifications to be trusted in domains such as banking?


Did you know that in Europe, most business that here in the US
requires gangs of lawyers and binders full of contracts is transacted
with a two-paragraph letter of intent and a handshake? The broken
complexity of the US legal system is not a necessary feature of legal
systems, nor of legal language. It is the product of a guild working
to make its services indispensable (remember, Congress is composed
mostly of lawyers, so all the laws are written by guild members).
Rather like IT people and programmers working to make computer use and
computer programming harder than they need to be in order to maintain
the IT priesthood, to which all users must supplicate.

US law, and Common Law traditions in general, are *intentionally*
ambiguous, since they rely on precedent (i.e., previous case decisions
hold as much importance in how a judge will rule as what is actually
written in the legal code). I am not claiming that intentionally
ambiguous language can magically be made unambiguous. Merely that
ordinary domain experts can express themselves unambiguously when it
is needed, especially with the help of decent compiler warnings and
error messages.

>
> Beyond a certain simplistic level, anything stated by a banker about
> software desired by the banker quickly exceeds the banker's domain
> knowledge about systems and system behavior. Locking strategies,
> replication mechanisms, performance bottlenecks, concurrent behaviors...
> the banker knows nothing about these. The banker must rely upon the
> default behaviors provided by the underlying software. When more than
> that is required, people with domain knowledge about locking strategies
> and replication mechanisms must become involved.

But at what level? I'm saying that they only need to be involved at
the compiler-writing level. The banker simply specifies that he's
dealing with a transaction, and the compiler generates all the
necessary locking strategies, etc., from that context - namely, that
of a transaction.
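
In Lisp terms, the expansion I have in mind looks something like the
following hypothetical macro - WITH-TRANSACTION, ACQUIRE-LOCKS,
RELEASE-LOCKS, ROLLBACK, DEBIT, and CREDIT are all invented names:

  (defmacro with-transaction ((&rest accounts) &body body)
    "Run BODY with ACCOUNTS locked; roll back and re-signal on error."
    (let ((accts (gensym "ACCOUNTS")))
      `(let ((,accts (list ,@accounts)))
         (acquire-locks ,accts)         ; locking strategy chosen by
         (unwind-protect                ; the system, not the banker
              (handler-case (progn ,@body)
                (error (e) (rollback ,accts) (error e)))
           (release-locks ,accts)))))

  ;; The banker writes only:
  ;;   (with-transaction (payer payee)
  ;;     (debit payer 100)
  ;;     (credit payee 100))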

Your only valid argument here is performance. But Moore's law will
take care of that in most cases (no premature optimization, please).
There will probably always be domains where real programmers will need
to tune for performance, but this is much easier to do when the
specification is a *working program* written by the domain experts.


> The brain's basic wiring for language enabled communication between
> humans providing a competitive survival edge in a given natural
> environment. Saying that that mechanism is the ideal way to specify
> Ph.D.-level computer science thought about machine behavior does not
> seem like a strong claim to me.

It's not a claim I made. I claim that computer languages can be made
more like natural languages, and that the result would broaden the
range of domain experts who could write functioning software systems
for themselves. Would these systems be optimized for the best
CPU/memory/mass-storage use? No, probably not. But in most cases, that
wouldn't matter. In those few cases where such performance issues did
matter, the domain expert could call in a real software engineer to
tune the already correct, but slow, memory-hogging, disk-hogging
program for better performance.

> To say that bankers can express application behaviors best through
> natural language is true only to the extent that what they express does
> not exceed their domain knowledge and to the extent that what they
> express in natural language is not subject to multiple interpretations
> such as we find even in the shortest legal documents such as
> constitutions.


This is a red herring. Bankers, and other non-computer-scientists, are
perfectly capable of specifying things unambiguously when they know
that it is necessary. This process would be aided by *useful* compiler
messages, specifying what is ambiguous, and possible interpretations,
allowing the user to specify a specific, unambiguous alternative, that
would then become the saved version.


> A banker using natural language to express desired machine behaviors
> quickly will find repetitive expressions, begin to find them tedious,
> and begin to look for shorter methods of expressing common patterns.

People already do this with spreadsheet formulas. There's no reason to
believe that they wouldn't generalize this to methods, and modules of
functionality which would be regularly re-used.

> Over a period of years, if the compilers adapt to the banker's desire
> for these less-tedious means of expression, the banker will end up with
> a domain-specific language for expressing desired machine behaviors.

This would be nice, but it is a step beyond even what I am advocating.
An adaptive compiler would be great, but let's get one that is merely
more natural-language-like first.


> One idea is that when building large, complex financial systems, a
> banker can express all the proper machine behaviors in natural language
> without a systems person on the team. Another idea is that when building
> large, complex banking systems, a systems person can express all the
> proper financial behaviors in any language without a banker on the team.
> Both ideas seem equally untenable to me. Sufficiently reduce the domain
> knowledge required, and a banker can write a spreadsheet application and
> a programmer can balance a checkbook.

But look at the historical trend that you yourself have acknowledged;
in the future, do you think that we'll be moving in the direction of
teams with fewer systems people (like a banker with a language that is
simpler, yet more flexible than current spreadsheets), or teams with
fewer domain experts (like a cube farm full of programmers trying to
implement a banking system with no bankers to guide them)?

I think it's clear that the former scenario is one that I'll see in my
lifetime, and that if I ever hear about the latter, I'll know it's
time to dump stock in that bank.

Karl A. Krueger

unread,
May 23, 2003, 10:04:29 AM5/23/03
to
Raffael Cavallaro <raf...@mediaone.net> wrote:
> Jeff Caldwell <jd...@yahoo.com> wrote in message news:<J9hza.718$H84.3...@news1.news.adelphia.net>...
>> How many Ph.D.'s in molecular biology have the equivalent computer
>> science domain knowledge of a Ph.D. in computer science?
>
> You miss my point. With better (read, more natural-language-like)
> computer languages, Ph.D.s in molecular biology wouldn't *need* the
> equivalent computer science domain knowledge of a Ph.D. in computer science.

Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
example being used of a domain-specific "programming language") are any
more akin to a natural language than are ordinary programming languages.

Spreadsheets don't have to be like -natural- languages to be easier for
accountants. They have to be more like the notation that evolved
specifically to handle accountancy, the domain-specific conlang as it
were: ledger books. And so they are. They visually resemble printed
ledgers, and easily support operations that make sense in a ledger, like
"sum this column" or "let these values here be 106.5% of those values
over there".

The analogue of spreadsheets in a given domain would be a programmable
system with support for that domain's specific notation and operations
-- as, say, computer algebra systems offer for mathematics. This would
only resemble natural language insofar as the domain lends itself to
same: accountants' columns of figures do not look much like English to
me.

--
Karl A. Krueger <kkru...@example.edu>
Woods Hole Oceanographic Institution
Email address is spamtrapped. s/example/whoi/
"Outlook not so good." -- Magic 8-Ball Software Reviews

Raffael Cavallaro

unread,
May 23, 2003, 4:35:00 PM5/23/03
to
"Karl A. Krueger" <kkru...@example.edu> wrote in message news:<bal9pd$4s4$1...@baldur.whoi.edu>...


> Pardon my cluelessness, but it doesn't seem to me that spreadsheets (the
> example being used of a domain-specific "programming language") are any
> more akin to a natural language than are ordinary programming languages.

Which is why they are a domain specific solution, not a general
purpose one.

This is the big picture problem. Software engineers keep crafting
either:

1. domain specific, user friendly solutions, like spreadsheets, or CAD
packages, or...
2. general purpose languages that only software engineers can really
use effectively.

What we need are general purpose computer languages that are also user
friendly. When it comes to languages, "user friendly" means
natural-language-like.

>
> Spreadsheets don't have to be like -natural- languages to be easier for
> accountants.

But not everyone who needs software is an accountant, or an architect
(CAD packages), etc.

> The analogue of spreadsheets in a given domain would be a programmable
> system with support for that domain's specific notation and operations
> -- as, say, computer algebra systems offer for mathematics. This would
> only resemble natural language insofar as the domain lends itself to
> same: accountants' columns of figures do not look much like English to
> me.

You're thinking in a domain-specific-solution way. This is bound to
fail, because each new domain will require its own unique, mutually
incompatible, domain-specific language. Unless your needs fall
precisely into that particular realm, and do not extend beyond it in
any way, you lose. Better to craft a general-purpose,
natural-language-like computer language that all the specific domains
can use. As new application domains arise, a general-purpose language
can be turned to those tasks, but domain-specific solutions are
unlikely to be flexible enough to be useful.

Matthew Danish

unread,
May 23, 2003, 4:54:07 PM5/23/03
to
On Fri, May 23, 2003 at 01:35:00PM -0700, Raffael Cavallaro wrote:
> When it comes to languages, "user friendly" means
> natural-language-like.

Where has this been proven? I don't think that this is the case at all.
What will end up happening is this: Joe User will type out a sentence
expecting it to have X behavior. Jill User will type out the same
sentence expecting it to have Y behavior. If X is not Y, then one of
them will be very surprised. And natural language leaves so much
ambiguity, normally, that this is bound to happen. And eliminating
ambiguity from natural language will just give you a stiff, difficult
language which is more akin to an overly verbose formal language than
anything a human might use for day-to-day conversation.

A true natural language interface requires, in my opinion, artificial
intelligence in order to be usable. Without that, it will be entirely
too frustrating for any user who thinks that they can pretend to be
speaking to another human being. And if they don't think that way, then
what is the point of being natural-language-like?

Logicians went through this over a century ago, when Frege published
`Begriffsschrift'. They arrived at precisely the opposite conclusion
from the one you have.

Jeff Caldwell

unread,
May 23, 2003, 11:25:43 PM5/23/03
to
Raffael Cavallaro wrote:
...

> We need a natural-language-like
> computer language that is the next step beyond spreadsheets, as it
> were.
...

> Having an adaptive compiler would be nice, but lets get one that is
> merely more natural-language-like first.

Please try to convince a musician that he or she would be better off
writing and reading music in English or Chinese rather than in musical
notation.

The accountants I know would be angry if you tried to force them to
construct their spreadsheets in English. Take a complex spreadsheet,
write out its specifications in unambiguous English, show the result to
an accountant and try to convince her that she should begin entering all
her spreadsheets your new way.

> But no ordinary person lays out paragraphs as nested blocks.

A paragraph is a nested block of language, as was made clear in my
original post.

> Or a more general purpose language will be developed that allows
> people from different domains to write their own software. This will
> be much more useful, affordable, and flexible than calling in a team
> of software engineers, QA personnel, documentation specialists, and
> programmers,

Programmers need software engineers, QA personnel, and documentation
specialists, but bankers won't! When machines can parse bankers'
natural language, the bankers' software will be well designed,
bug-free, fully documented, and comprehensive enough to run their
entire enterprise, no matter how large! With no QA! Or documentation!
Or SEs! It will all work! Because the compiler will be so smart! And
the program will be the documentation! Much better than those lousy
programmers who need all that extra support!


Karl A. Krueger

unread,
May 24, 2003, 2:19:22 AM5/24/03
to
Raffael Cavallaro <raf...@mediaone.net> wrote:
> You're thinking in a domain-specific-solution way. This is bound to
> fail, because each new domain will require its own unique, mutually
> incompatible, domain specific language. Unless your needs fall
> precisely into that particular realm, and do not extend beyond it in
> any way, you lose.

I am clearly confused. It seems to me, though, that every program is a
specificity, a selection of function for some purpose. It also seems to
me that programs that are to serve a particular domain must of necessity
incorporate domain-specific knowledge. They would not be very useful if
they did not.

The example of a spreadsheet reminds me of the warlord of Wu and his
question:

http://www.canonical.org/~kragen/tao-of-programming.html#book3

Kenny Tilton

unread,
May 24, 2003, 10:16:35 AM5/24/03