
Many ways to say it


Nick Szabo

Jul 20, 1994, 11:40:14 PM
Many computer language proponents have argued for minimalist
languages, i.e. languages with few primitive components and
simple syntax, as being the simplest to learn and understand,
and therefore the most efficient to use. Proponents of the
successful grass roots computer language Perl, on the other
hand, have as their slogan, "There Is More Than One Way to Do
It". They argue that people routinely learn natural languages
with far larger vocabulary and more syntactic complexity than
any programming languages. A greater variety of primitives
and syntactic forms increases the flexibility of the language.
Since there are several ways to write most intended semantic
forms, the programmer can use a specific form that is more
convenient and succinct in a given context. Furthermore, the programmer
is more likely to recall or figure out at least one of the several
possible ways to communicate what he wants, whereas if there is
only one way to say it, and it is forgotten or can't be discovered,
the programmer is out of luck.
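[The "more than one way" point can be made concrete with a small sketch. Python is used here purely for illustration -- the thread's natural examples would be Perl -- and nothing in it comes from the original posts. Four surface forms, one meaning, each convenient in a different context:]

```python
# Four ways to say "uppercase every word in a list" -- the same
# semantics, different surface forms. The explicit loop is easiest
# to extend, the comprehension is succinct, map() composes with
# other higher-order functions, and the join/split round trip
# treats the text as a whole.
words = ["there", "is", "more", "than", "one", "way"]

out1 = []
for w in words:                        # explicit loop
    out1.append(w.upper())

out2 = [w.upper() for w in words]      # comprehension
out3 = list(map(str.upper, words))     # higher-order function
out4 = " ".join(words).upper().split() # whole-string round trip

assert out1 == out2 == out3 == out4
```

A programmer who has forgotten three of these forms can still get the job done with the fourth, which is exactly the redundancy being argued for.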

Computer languages have exacting semantics, but natural
languages are usually imprecise: two statements with
at first glance the same meaning might have important, subtle
differences. On the other hand, for practical daily
use many subtle differences will be lost. Nevertheless,
the fact that there are many ways to say practically
the same thing might be a useful feature of a natural language,
for the same reasons Perl proponents argue for redundancy:
succinctness with a given context, and the ease of remembering how
to communicate what is intended. Do natural language linguists
have good evidence for or against this point of view?

Support for both sides of the argument comes from
the emerging theory of machine learning. In these techniques,
algorithms search the space of a simple language, such as the
language of common algebraic expressions. These languages are
completely defined both syntactically (by a context free grammar)
and semantically (by the operators and carrier set of the algebra).
Search spaces with redundancy -- for example, algebras with properties
like commutativity and associativity that allow them to be simplified
-- tend to be "smoother", less "spiky", so that search algorithms
have a greater likelihood of "hill-climbing" to solutions that
match the intended semantics. Thus, in many of these simple
models, languages with redundancy tend to facilitate learning of the
proper way to communicate a given meaning. Is this also
true for more complex computer languages, and for the very
complex, "fuzzier", open-ended natural languages?
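[The "smoother search space" argument can be illustrated with a toy enumeration. This is a sketch in Python, not taken from any of the models the post refers to: it simply counts how many syntactic "spellings" each semantic value has in a small algebraic language, since a meaning with many spellings is proportionally easier for a blind search to hit.]

```python
from collections import Counter
from itertools import permutations

# Enumerate every fully parenthesized expression over the variables
# a, b, c using + and *, and count how many distinct syntactic
# spellings evaluate to each semantic value at a fixed test point.
# Redundancy from commutativity and associativity means a meaning
# like a+b+c has many spellings, so random search is far more likely
# to stumble onto *some* spelling of it than onto a meaning with few.
POINT = {"a": 2, "b": 3, "c": 5}

def exprs(vs):
    """All binary-tree expressions over the ordered variables vs."""
    if len(vs) == 1:
        yield vs[0]
        return
    for i in range(1, len(vs)):
        for left in exprs(vs[:i]):
            for right in exprs(vs[i:]):
                yield f"({left}+{right})"
                yield f"({left}*{right})"

spellings = Counter(
    eval(e, {}, POINT)
    for order in permutations("abc")
    for e in exprs(list(order))
)
# Of the 48 expressions, 12 spell a+b+c (value 10 at the test point),
# while only 4 spell (a+b)*c (value 25): the fully symmetric meaning
# is three times easier to find.
```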


--
Nick Szabo sz...@netcom.com
Big Brother is parsing you.

Raul Deluth Miller

Jul 21, 1994, 9:47:42 AM
Nick Szabo:
. Many computer language proponents have argued for minimalist
. languages, ie languages with few primitive components and simple
. syntax, as being the simplest to learn and understand, and
. therefore the most efficient to use. Proponents of the successful
. grass roots computer language Perl, on the other hand, have as
. their slogan, "There Is More Than One Way to Do It".

There is more than one way to do it in just about any language.

What makes perl nice is integration with a large variety of i/o
facilities and environmental queries. What makes perl not nice is
covered in the documentation [e.g. you might not want to use perl to
implement a high speed i/o state machine, nor would you necessarily
want to use perl for numerical work].

Raul D. Miller n =: p*q NB. prime p, q, e
<rock...@nova.umd.edu> NB. public e, n, y
y =: n&|&(*&x)^:e 1
x -: n&|&(*&y)^:d 1 NB. 1 < (d*e) +.&<: (p,q)

Larry Wall

Jul 21, 1994, 6:46:48 PM
In article <szaboCt...@netcom.com> sz...@netcom.com (Nick Szabo) writes:
: Thus, in many of these simple
: models, languages with redundancy tend to facilitate learning of the
: proper way to communicate a given meaning. Is this also
: true for more complex computer languages, and for the very
: complex, "fuzzier", open-ended natural languages?

That's an excellent question.

Whether "diagonal" languages like Perl facilitate learning really depends
on what you mean by learning. If you mean, "How easy is it to learn all
the components of a language?", then a smaller, orthogonal language will
certainly be easier to learn. However, if you mean, "How easy is it to
learn to apply this language to the problem space?", then it's much less
clear whether the reductionist approach helps or hurts you.

It really depends on the structure of the problem space. Driving in a
city that is laid out on a rectilinear grid, you will certainly spend
most of your time with your nose facing one of four points on the
compass. Solutions tend to have two components: "Go north on Sepulveda
till you get to Nordhoff, then go west on Nordhoff till you pass what's
left of the Northridge Fashion Center." Orthogonality is fine in this
limited problem space.

On the other hand, when I'm at the park, I generally walk in a straight
line from the picnic basket to the restroom, subject to the constraints
of the landscape. The only thing meaningful to me is getting where I'm
going as efficiently as possible. In this situation, any arbitrary
coordinate system is just that, arbitrary. We may as well select the
coordinate system that represents my walk as the unit vector from (0,0)
to (0,1). Your preordained coordinate system is relatively useless to me.

Now, you may argue that real life is not just like a park, and you'd be
right. But unfortunately for our reductionists, it doesn't err on the
side of orthogonality. Real life is much more fractal than a park is.
The landscape imposes even more constraints, and at all scales.
To stick with the transportation metaphor, how should I decide the optimal
path from Hoover Dam to Stonehenge? The shortest distance between
two points in a fractal landscape is almost never a straight line, or
even a small number of straight lines. And even at the largest scale,
jet aircraft don't often fly routes with large right angles in them.

"Stonehenge? No problem--just go to Paris and hang a left. Can't miss it."

I would argue that real life is a fractal problem, and that the
languages that best deal with real life are not orthogonal languages,
but the languages that connect the major destinations directly, while
still making it possible to get anywhere, somehow or other, if only
by conveying you to a more localized form of transportation.

Learning the "language" of transportation is not easy. That's why we have
travel agents, who are specialists in "hill climbing" algorithms. They're
the programmers in our metaphor. Travel agents love to do diagonal
programming, which they call non-stop flights, and door-to-door service.

Now, to drag this back to the realm of computer science, 90% of the
programs I write are mostly doing text processing. You tell me whether
my language should specialize in text processing. Yeah, sure, a good
minimalist can decompose the concept of strings into the concepts of
arrays and integers. In fact, that's just what C does. Computers love
it. People who think like computers love it. But is that what most
people really want most of the time? I suspect you know my answer.
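[The two decompositions can be sketched side by side. Illustrative Python standing in for both sides -- the 1994 contrast was C versus Perl -- with the same task written first the way a "strings are arrays of integers" language forces you to, then with text as a first-class concept:]

```python
# Count the words ending in "ing", two ways.
text = "parsing and walking and hill climbing"

# C-style: the string as an array scanned by index arithmetic.
count_c_style = 0
i = 0
n = len(text)
while i < n:
    j = i
    while j < n and text[j] != " ":   # find the end of this word
        j += 1
    if j - i >= 3 and text[j-3:j] == "ing":
        count_c_style += 1
    i = j + 1                          # skip the separating space

# Text as text: one line.
count_direct = sum(w.endswith("ing") for w in text.split())

assert count_c_style == count_direct == 3
```

Both are correct; the question in the thread is which one most people want to write most of the time.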

Orthogonality is for the birds. And even the birds don't want it.

Larry Wall
lw...@netlabs.com

Guy Isabel

Jul 21, 1994, 11:35:15 PM
In article <1994Jul21.2...@netlabs.com> lw...@netlabs.com (Larry Wall) writes:

>Whether "diagonal" languages like Perl facilitate learning really depends
>on what you mean by learning. If you mean, "How easy is it to learn all
>the components of a language?", then a smaller, orthogonal language will
>certainly be easier to learn. However, if you mean, "How easy is it to
>learn to apply this language to the problem space?", then it's much less
>clear whether the reductionist approach helps or hurts you.

[Much very cogent material deleted.]

Larry, I wish you would write a book along those lines :) There are other
folks out there who are interested in discussing programming languages from
such a philosophical standpoint.

If it's just me, I'll seek remedial therapy.

Guy Isabel <gis...@pharma1.pharma.mcgill.ca>

William Chang in Marr Lab

Jul 25, 1994, 4:12:52 PM
In article <szaboCt...@netcom.com> sz...@netcom.com (Nick Szabo) writes:
> Thus, in many of these simple
> models, languages with redundancy tend to facilitate learning of the
> proper way to communicate a given meaning. Is this also
> true for more complex computer languages, and for the very
> complex, "fuzzier", open-ended natural languages?

I think an orthogonal language is good for compilation, but a rich, expressive
language is better for interpretation. One must describe to an interpreter
not only what is to be done, but exactly how to do it _efficiently_. As for
communicating meaning, the language of mathematics is based on formal set
theory and logic, and is fairly simple and orthogonal. However, one doesn't really
"interpret" what is written, only verify its correctness. Interpretation and
visualization come later. A real natural language is of course interpreted.
-- Bill Chang (wch...@cshl.org)
