
please tell me the design faults of CL & Scheme


Pierre R. Mai

Mar 4, 2001, 7:26:19 AM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:

> (warning: this post might spark a language war; if you hate language wars,
> set the "ignore")

If we really want to have this kind of discussion, we need to keep it
separate for each of the languages involved. Otherwise a language war
is bound to ensue. Let Scheme users comment on faults in Scheme, and
CL users comment on faults in CL.

Regs, Pierre.

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Tim Bradshaw

Mar 4, 2001, 9:53:22 AM
This article is a bit like the assassination of Archduke Ferdinand -- a
seemingly minor event which will in fact result in an inevitable
cascade of consequences ending in four years of futile and
inconclusive war and the death of a generation on both sides. Even
now the CL fleet is on its way to Scapa Flow, there to wait in the
cold and wind for the final showdown which will never come.

--tim

Julian Morrison

Mar 4, 2001, 11:27:27 AM
"Kent M Pitman" <pit...@world.std.com> wrote:

>> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken
>> as designed".
>
> why? in what context do you ask the question? what use do you plan to
> make of the information? what do you mean by broken?

Personal interest - specifically for some possible point in the future if
I reimplement a lisp-alike, I don't want to make any of the same mistakes.

Broken in this context means:

- inconsistencies in the spec such that conforming implementations can
fail to interoperate. especially subtle ones I might make again myself.

- misfeatures that make implementation unnecessarily hard, or
resource-wasteful, or unnecessarily force the runtime to be more complex

- security holes, things with unintended consequences

- design slips that make the language less generally useful than could
have been possible, or make certain uses impractical or impossible

- ugly special-cased hacks that break out of the cleanness of the design

- design anachronisms, language features unhelpfully tied to dated tech
limitations

etc etc. That sort of thing.

>"i'm designing my own language, what should i do differently".

That's more or less the context, without any implied promise of imminent
action ;-)

Julian Morrison

Mar 4, 2001, 11:30:01 AM
"Pierre R. Mai" <pm...@acm.org> wrote:

> If we really want to have this kind of discussion, we need to keep it
> separate for each of the languages involved. Otherwise a language war
> is bound to ensue. Let Scheme users comment on faults in Scheme, and CL
> users comment on faults in CL.

Disagreed - the very reasons someone has picked one language over the
other may be valid info about that other language, in this context.

Frode Vatvedt Fjeld

Mar 4, 2001, 12:03:21 PM
Kent M Pitman <pit...@world.std.com> writes:

> Frode Vatvedt Fjeld <fro...@acm.org> writes:
>
> > I think &rest parameters should have been accessed by some
> > multiple-value mechanism, not an implicitly consed list.
>
> This is an interesting choice of response. I don't really find CL
> broken in the sense that it's designed as an "industrial strength"
> language and it serves well for that purpose. But there are
> certainly some details of &rest lists that could be designed better
> in other ways if it were done again.

I certainly agree with you that the &rest issue doesn't ruin CL's
status as an industrial strength language. But the current &rest
design is broken in that--as I'm writing a compiler based on CL syntax
and semantics--it's the first (possibly only) part where I'm presented
with a choice: Be compatible with CL, _or_ implement it well. Before
approaching &rest, I never felt constrained by staying compatible with
CL.

My proposal for a better &rest would be that &rest named a local
function or macro of zero arguments that returns each of the
rest-arguments as multiple values. Then the old semantics is just an
m-v-list away, only now the consing would be explicit.

You'd also need a control structure that lets you iterate over
multiple-values. (Is there such a thing in CL currently?)

The whole point of having multiple values after all is the fact that
the slots used to pass values into and out of functions have a very
clearly defined, limited extent, and so it is just stupid to cons them
on the heap. Meaning, of course you _can_ cons them on the heap if
that is for some reason useful, but the current &rest design
_requires_ you to heap-cons, for no good reason.
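If I read the proposal right, it might look something like the
following. (Hypothetical syntax, NOT valid Common Lisp -- it is a
sketch of the proposal above, in which &rest binds a local
zero-argument function rather than a list.)

```lisp
;; Sketch of the proposed semantics: ARGS names a local function
;; that returns the rest-arguments as multiple values.
(defun sum (&rest args)
  (multiple-value-call #'+ (args)))

;; The old list semantics would then be an explicit
;; MULTIPLE-VALUE-LIST away, with the consing visible:
(defun old-style-rest (&rest args)
  (multiple-value-list (args)))
```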

--
Frode Vatvedt Fjeld

Boris Smilga

Mar 4, 2001, 2:05:43 PM

Well, more sort of "missing feature" than "misdesign". In [Report]
Scheme, there is such a thing as promises, but no predicate to test
whether an object is a promise. The reason is probably that the RnRS
authors had in mind implementations that represent promises as some
other object type (most likely procedures), but at times the
inability to write (promise? obj) annoys me.

Another one would be a standard line-oriented reader or, at least,
some predicate eol-object? which returns true if and only if its
argument is a character or sequence of characters that signifies an
end of line. Without that, given that Unices, MacOS and Windows all
have different end-of-line conventions, text processing becomes
somewhat harder to port between platforms. Then again, eol-object?
would not be kosher, because in Windows (ah, those Windows again) an
end-of-line is not one character but a two-character sequence, so
eol-object? would hardly be compatible with read-char and peek-char.
Something like (read-line _port_) / (read-line) would be more proper.
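Such a reader can be written portably on top of read-char and
peek-char; a rough sketch (not part of R5RS) that accepts LF, CR, and
CRLF line endings:

```scheme
;; A portable READ-LINE sketch: reads up to (and consuming) the next
;; line terminator, returning the line as a string, or the eof object
;; if the port is already at end of file.
(define (read-line port)
  (let loop ((chars '()))
    (let ((c (read-char port)))
      (cond ((eof-object? c)
             (if (null? chars) c (list->string (reverse chars))))
            ((char=? c #\newline)
             (list->string (reverse chars)))
            ((char=? c #\return)
             ;; swallow the #\newline of a CRLF pair, if present
             (if (and (char? (peek-char port))
                      (char=? (peek-char port) #\newline))
                 (read-char port))
             (list->string (reverse chars)))
            (else (loop (cons c chars)))))))
```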

What else? Perhaps, a separate numeric type for natural numbers (i.e.
non-negative integers)? Not sure about that.

-BSm

Joe English

Mar 4, 2001, 1:28:03 PM
Julian Morrison wrote:
>
>Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
>designed".

This should be pretty easy to figure out: just come up
with (A) a list of Common Lisp features that aren't in Scheme,
and (B) a list of Scheme features that aren't in Common Lisp.
Schemers will cite (A) as stuff that's Broken as Designed
in Lisp, and Lispers will cite (B) as stuff that's Broken
as Designed in Scheme :-)

But seriously, IMHO 'dynamic-wind' and 'call-with-values'
in R5RS Scheme are broken. The ideas behind them are good,
but R5RS should have left 'call-with-current-continuation'
alone and added 'call-with-winding-continuation' instead
to support dynamic-wind. For multiple values, Ashley &
Dybvig's 'mv-call' and 'mv-let' primitives would have been
a much better choice than 'call-with-values'.


--Joe English

jeng...@flightlab.com

David Fox

Mar 5, 2001, 7:52:02 AM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:

> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed".

One problem I have with Common Lisp is the separate name spaces for
functions and variables.
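A small example of what the separation means in practice:

```lisp
;; In CL (a Lisp2), a variable named LIST does not shadow the
;; function LIST, so this works:
(defun f (list)
  (list list list))   ; (f 3) => (3 3)

;; In a Lisp1 such as Scheme, the corresponding definition breaks,
;; because the parameter shadows the procedure:
;; (define (f list) (list list list))  ; error: 3 is not applicable
```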

Ole Myren Rohne

Mar 5, 2001, 8:56:24 AM
ds...@cogsci.ucsd.edu (David Fox) writes:

One problem I have with Scheme is the unified name space for
functions and variables.

Sorry, I just couldn't resist;-)

Kent M Pitman

Mar 5, 2001, 9:42:33 AM
[ comp.lang.scheme removed.
http://world.std.com/~pitman/pfaq/cross-posting.html ]

ds...@cogsci.ucsd.edu (David Fox) writes:

We looked at this issue during the design of ANSI CL and decided there
were some strong reasons for our approach. I doubt that you can
adequately defend the single namespace approach as a "design flaw".

Btw, this is a great time for me to put out a plug for the paper Dick
Gabriel and I wrote on this at the time of the ANSI CL stuff. The
original discussion at ANSI was longer because it included both
technical and non-technical issues. We distilled it down to just the
technical stuff for inclusion in the first edition of the journal
"Lisp and Symbolic Computation". Anyway, you can read the distilled
version on the web now at

http://world.std.com/~pitman/Papers/Technical-Issues.html

NOTE WELL: If you look closely, this paper reads a little like a
debate. Gabriel and I wrote it because we disagreed on the answer,
and it goes back and forth like a dialog in places, suggesting one
thing and then immediately countering it. If you find such places,
that's probably interleaved paragraphs of him talking and me talking.
But I learned long ago of debates that people following them always
come out thinking their hero won. So I've talked to people on both
sides of the issue who believe this is finally the conclusive paper
supporting their position, whichever position they have. Personally,
and perhaps because I'm on that side of things, I think the
*technical* arguments argue for multiple namespaces because there is
an efficiency issue that is tough to get around in a single namespace
Lisp, ESPECIALLY one that religiously eschews declarations to help the
compiler in places where automated proof techniques are going to slow
things down a lot. But I think at minimum a fair reading of this will
tell you that there is no substantial technical reason to believe a
multi-namespace Lisp is flawed, and that this is largely an issue of
style.

I also think, although I think the paper doesn't get into it, that
people's brains plainly handle multiple namespaces and contexts naturally
because it comes up all the time in natural language, and that it's a
shame for a computer language not to take advantage of wetware we already
have for things. Claims of simplicity are often, and without justification,
measured against mathematical notions of an empty processor or a simple
processor that you could build. But since programming languages are designed
for people, I think simplicity should be measured against our best guess
as to what processor PEOPLE have, and that leads to wholly different
conclusions.

Will Deakin

Mar 5, 2001, 9:57:07 AM
Tim wrote:

> ...Even now the CL fleet is on its way to Scapa Flow, there to wait
> in the cold and wind for the final showdown which will never come.

Will there be a repeat of the tragic incident resulting in a man
in a night gown clutching a parrot and demanding a one-way
rail ticket to Leeds? Will the Norman keep be shelled again?
Remember Scarborough!

;)will


brl...@my-deja.com

Mar 5, 2001, 9:59:46 AM
This is an unusual question. I'm curious as to why you ask. Are you
choosing between these two languages for a particular project?

Paul Dietz

Mar 5, 2001, 10:15:36 AM
Frode Vatvedt Fjeld wrote:

> Meaning, of course you _can_ cons them on the heap if
> that is for some reason useful, but the current &rest design
> _requires_ you to heap-cons, for no good reason.

Unless the &rest parameter is defined DYNAMIC-EXTENT.

If performance is a problem at some particular call
then add the declaration. This seems perfectly consistent
with CL philosophy.

Paul

Marco Antoniotti

Mar 5, 2001, 11:05:45 AM

There are no design faults in Scheme. The standard is way too short
to contain any :)

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
Like DNA, such a language [Lisp] does not go out of style.
Paul Graham, ANSI Common Lisp

Raffael Cavallaro

Mar 5, 2001, 1:01:31 PM
In article <sfwofvge0...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

>But since programming languages are designed
>for people, I think simplicity should be measured against our best guess
>as to what processor PEOPLE have, and that leads to wholly different
>conclusions.

Just to play devil's advocate, isn't this Larry Wall's argument for the
complexity and TMTOWTDI of Perl? I guess the question then becomes what
is the right balance between consistent abstraction and the complexity
and inconsistency introduced by multiple contexts.

Ralph

--

Raffael Cavallaro, Ph.D.
raf...@mediaone.net

Ray Blaak

Mar 5, 2001, 1:04:52 PM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed".

For scheme: Lack of standardized namespaces. Many (most?) implementations
supply some sort of module/unit support, but porting between them is a pain.

Without decent static namespaces, programming "in the large" is not properly
doable.

This is such a basic necessity that it needs to be in the standard language
itself.

Also, lack of a standard defstruct/define-record. Such a fundamental data
structure is absolutely essential, and should be in the standard. That you can
roll your own with the macro system alleviates this somewhat, but since this
is so basic, one shouldn't need to invent this all the time.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@infomatch.com The Rhythm has my soul.

Kent M Pitman

Mar 5, 2001, 1:15:33 PM
Paul Dietz <di...@stc.comm.mot.com> writes:

Well, there are still some small glitches in the design.

The user can't rely on them being dynamic-extent in practice. That's
left to implementations, and perhaps rightly so.

But there's a small semantic issue to do with the implementation
also having the right to share structure with the given list when
you do

(apply #'foo x)

The function FOO might get the actual list X (sharing structure)
or might not. This has some weird consequences that I think I would
nail down better if I were doing a redesign, but that in practice you
can just kind of tiptoe around when you first get bitten by the
implications... or even beforehand if you think about it in time and
program defensively.
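The hazard can be made concrete with a sketch like this (modifying a
rest list that may share structure is exactly the sort of thing to
program defensively against):

```lisp
;; Whether the caller's list is clobbered depends on whether the
;; implementation shares structure between APPLY's last argument
;; and the &rest list.
(defun foo (&rest args)
  (setf (car args) 'clobbered)
  args)

(let ((x (list 1 2 3)))
  (apply #'foo x)
  x)   ; => (1 2 3) or (CLOBBERED 2 3), implementation-dependent
```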

Kent M Pitman

Mar 5, 2001, 1:29:19 PM
Raffael Cavallaro <raf...@mediaone.net> writes:

I'm not sure we're disagreeing. I would count the bookkeeping required to
keep track of notations like Perl or Teco and to be sure you're correctly
composing things as part of what the human processor does and must be
measured against.

I do think there is a balance to be struck, and that the solution isn't at
one end of the spectrum or the other. Probably little factoids like the
number of short term memory slots and other such things create the parameters
that dictate where the "middle" is on such a spectrum.

Indeed, one of the criticisms that is made against multiple namespaces is
that it increases the complexity of the formal semantics. I don't do formal
semantics stuff myself, so I can't say. However, people I trust have assured
me that adding an infinite number of namespaces would be a trivial addition.
However, I think that would also increase program complexity because of the
mental bookkeeping, etc. That's why I don't think the formal semantics
is predictive. I think the middle ground of "just a few namespaces"
is most appropriate to how people think, regardless of what the formal
semantics says. The more the formal semantics leads me into spaces that
I think don't use the brain well, the more I dislike it as a guiding force
in language design.

David Fox

Mar 5, 2001, 7:14:03 PM
brl...@my-deja.com writes:

> This is an unusual question. I'm curious as to why you ask. Are you
> choosing between these two languages for a particular project?

I think it's a great question; I'd like to see it answered for lots
and lots of languages.

Bruce Hoult

Mar 5, 2001, 8:43:09 PM
In article <sfwofvge0...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> http://world.std.com/~pitman/Papers/Technical-Issues.html

Thanks for this.

(Re the instructions at the top, there *are* quite a few typos, starting
with "proved on[e] of the most important" in the second para and the
same typo in the very next sentence.)


The strongest arguments I see there are:

1) Lisp2 conveys extra type information, namely that you can call what
is in the function cell *knowing* that it is a function -- you don't
have to check first.

2) macros vs namespaces: "There are two ways to look at the arguments
regarding macros and namespaces. The first is that a single namespace is
of fundamental importance, and therefore macros are problematic. The
second is that macros are fundamental, and therefore a single namespace
is problematic."


I believe that a lexically-scoped Lisp1 that has a) type declarations,
and b) hygienic macros avoids both problems.

I think that 1) is pretty obvious. Two namespaces is a pretty weak type
system -- why not go further and have different namespaces for scalars,
arrays, hashes, labels, globs and God-only-knows-what-else. You can
introduce special symbols such as $, @, % to distinguish them in
ambiguous contexts. Well, we know what *that* language is called :-)

Even if the function cell is known not to be data, what if it's empty?
Don't you have to check for that? Or are symbols in CL bound to some
sort of error function by default?


Re 2): <quote>

(DEFMACRO MAKE-FOO (THINGS) `(LIST 'FOO ,THINGS))

Here FOO is quoted, THINGS is taken from the parameter list for the
Macro, but LIST is free. The writer of this macro definition is almost
certainly assuming either that LIST is locally bound in the calling
environment and is trying to refer to that locally bound name or that
list is to be treated as constant and that the author of the code will
not locally bind LIST. In practice, the latter assumption is almost
always made.

If the consumer of the above macro definition writes

(DEFUN FOO (LIST) (MAKE-FOO (CAR LIST)))

in Lisp1, there will probably be a bug in the code.

</quote>

If the free use of LIST in the macro is defined by the language to refer
to the lexical binding of LIST at the point where the macro is *defined*
then there is no problem. It will continue to refer (presumably) to the
global function that creates a list from its arguments. The (CAR LIST)
in the use of the macro will refer to the argument of FOO.

If the writer of the macro (*not* the user of the macro) intends the use
of LIST to refer to the binding at the point of use of the macro then
they can indicate this using a suitable "hygiene-breaking" notation.
This is something that should be done only rarely -- better in this case
to make LIST another explicit argument of the macro.
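R5RS syntax-rules macros behave as described here: a free reference in
the template refers to the binding at the macro's definition site, not
the use site.

```scheme
;; The free reference to LIST in the template resolves in the
;; environment where MAKE-FOO is defined (the global LIST procedure).
(define-syntax make-foo
  (syntax-rules ()
    ((_ things) (list 'foo things))))

;; A parameter named LIST at the use site does not capture it:
(define (f list)
  (make-foo (car list)))

(f '(1 2 3))   ; => (foo 1)
```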


Of course none of this is new today and this *is* an old paper, but
since it is being re-presented today to justify Lisp2 perhaps some note
should be made of the advances made (e.g. by Dylan, but also recently by
Scheme) since the paper was written?

As a historical explanation of why things were done the way they were
twenty years ago it is of course great.

> I also think, although I think the paper doesn't get into it, that
> people's brains plainly handle multiple namespaces and contexts naturally

Well, perl certainly seems to prove that. I just like to write perl
code with as many different uses of the same name as possible. Such as
...

next b if $b{$b} = <b>;

Yum :-)

-- Bruce

Christopher Stacy

Mar 5, 2001, 10:54:26 PM
>>>>> On Sun, 04 Mar 2001 09:51:51 +0000, Julian Morrison ("Julian") writes:
Julian> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as designed".

Scheme uses parentheses, which confuse people into asking questions like yours.
(However, that might have been a subconscious intentional design choice!)

Janis Dzerins

Mar 6, 2001, 3:54:08 AM
Kent M Pitman <pit...@world.std.com> writes:

> http://world.std.com/~pitman/Papers/Technical-Issues.html
>
> NOTE WELL: If you look closely, this paper reads a little like a
> debate. Gabriel and I wrote it because we disagreed on the answer,
> and it goes back and forth like a dialog in places, suggesting one
> thing and then immediately countering it. If you find such places,
> that's probably interleaved paragraphs of him talking and me talking.

XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

--
Janis Dzerins

If million people say a stupid thing it's still a stupid thing.

David Rush

Mar 6, 2001, 4:04:55 AM
Ray Blaak <bl...@infomatch.com> writes:
> "Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > designed".

> Also, lack of a standard defstruct/define-record. Such a fundamental data
> structure is absolutely essential, and should be in the standard. That you can
> roll your own with the macro system alleviates this somewhat, but since this
> is so basic, one shouldn't need to invent this all the time.

Fixed. See SRFI-9 <http://srfi.schemers.org>
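For reference, an SRFI-9 record definition looks like this:

```scheme
;; SRFI-9: define a record type with a constructor, a type
;; predicate, and per-field accessors and modifiers.
(define-record-type point
  (make-point x y)            ; constructor
  point?                      ; predicate
  (x point-x set-point-x!)
  (y point-y set-point-y!))

(define p (make-point 1 2))
(point-x p)   ; => 1
```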

Extra credit: Tell what is the *real* broken part behind this.

david rush
--
The beginning of wisdom for a [software engineer] is to recognize the
difference between getting a program to work, and getting it right.
-- M A Jackson, 1975

Kent M Pitman

Mar 6, 2001, 6:19:30 AM
Janis Dzerins <jo...@latnet.lv> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > http://world.std.com/~pitman/Papers/Technical-Issues.html
> >
> > NOTE WELL: If you look closely, this paper reads a little like a
> > debate. Gabriel and I wrote it because we disagreed on the answer,
> > and it goes back and forth like a dialog in places, suggesting one
> > thing and then immediately countering it. If you find such places,
> > that's probably interleaved paragraphs of him talking and me talking.
>
> XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

At a distance, btw. 3000 miles. FWIW.

Kent M Pitman

Mar 6, 2001, 6:24:48 AM
Erik Naggum <er...@naggum.net> writes:

> * Bruce Hoult <br...@hoult.org>
> > Even if the function cell is known not to be data, what if it's empty?
> > Don't you have to check for that? Or are symbols in CL bound to some
> > sort of error function by default?
>

> You mean, unbound? What the implementation does with an unbound function
> slot in a symbol is not specified in the standard. One smart way is to
> make the internal representation of "unbound" be a function that signals
> the appropriate error. That would make function calls faster, and you
> could not have optimized away the check for boundness if you asked for
> the functional value, anyway. Note that the user of this code would
> never know how you represented the unbound value unless he peeked under
> the hood, say by inspecting a symbol.

Exactly. This is the efficiency issue I mentioned, which cannot be
duplicated in a Lisp1 without either massive theorem proving (takes lots
of time) or declarations (which Scheme, for example, won't do, it seems
to me at least partially because the same minimalist mindset that drives
them to want to be a Lisp1 also drives them to want to be declaration-free).
Consequently, unless you are happy with just having programs execute
machine level garbage, there are certain function calls which are inherently
faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
CL and Scheme designers believe) that functions are called more often than
they are defined. A Lisp2 can take advantage of this to check once at
definition time, but a Lisp1 cannot take advantage because it can't
(due to the halting problem) check the data flow into every (f x) to be
sure that f contained a valid machine-runnable function.

Bruce Hoult

Mar 6, 2001, 6:58:55 AM
In article <sfw4rx7...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

I see at least two serious problems with this argument:

1) you appear to be assuming that "Lisp1" is identically equal to
"Scheme", when that's not the case at all. Well, I know you *invented*
the term "Lisp1", but I understand that you defined it in terms of the
number of namespaces and not "actually, when I say Lisp1 I *really* mean
Scheme but don't want to say so".

Other Lisp1's, such as Dylan, do in fact have declarations which enable
the compiler, just as in a Lisp2, to put any necessary type checks at
the point of assignment of the function instead of the point of use.


2) if the ability to move type checks from the point of use to the point
of definition is in fact so important then why do it only for *function*
values? Why not do it for integers, floats, chars, strings, arrays,
lists? Perhaps each symbol should have a slot for a possible integer
binding, a slot for a possible float binding, a slot for a possible char
binding, a slot for a possible string binding, a slot for a possible
array binding, and a slot for a possible pair binding?

If you do, for example, (CAR X) then the CAR operator will go direct to
the slot in the symbol X that has been reserved for pairs. No type
check is necessary. It is undefined what happens if that slot is
unbound, but perhaps a smart implementation will put a pair there which
refers to itself, or maybe which contains an illegal hardware address to
cause a controlled fault?


But what about user-defined types, such as classes? There are an
infinite number of those possible. You can't reserve a slot in every
symbol for each one.


At some point doesn't it just become easier to break down and use type
declarations and symbols that can be bound to only one value at any
given time?

Or is the benefit from not having to type check function calls *so* much
greater than the benefit from not having to type check integer addition
or CAR/CDR that two namespaces (and not type declarations) is the
optimum answer? I wouldn't have thought so.

-- Bruce

Kent M Pitman

Mar 6, 2001, 8:56:33 AM
Bruce Hoult <br...@hoult.org> writes:

Absolutely. This thread initiated discussing Scheme and namespaces though.
Just as a Lisp1 calls for hygienic macros, when a Lisp2 doesn't, it also
calls for declarations.

> 2) if the ability to move type checks from the point of use to the point
> of definition is in fact so important then why do it only for *function*
> values?

Because when you illegally reference a pointer, the worst you get is
generally a pointer into a non-existent page. When you jump to garbage
thinking it's machine executable data, the worst case can be much worse:
it could be an integer whose bit configuration coincidentally says
"delete all my files".

> Why not do it for integers, floats, chars, strings, arrays,
> lists?

Not a bad plan, but not as essential, in the sense of image integrity.

> ...


> At some point doesn't it just become easier to break down and use type
> declarations and symbols that can be bound to only one value at any
> given time?

No. Because the decision to use only one namespace is expressionally
limiting. I simply would not want to use only one namespace for
expressional reasons. I'm only using the technical argument to reinforce
that this is a sound choice.



> Or is the benefit from not having to type check function calls *so* much
> greater than the benefit from not having to type check integer addition
> or CAR/CDR that two namespaces (and not type declarations) is the
> optimum answer? I wouldn't have thought so.

I personally think so. Perhaps this is just an opinion. I haven't
coded machine code in a long time, so it's possible that the
equivalent "danger" has been created in other areas since then, but
function calling in my day used to be special (danger-wise) in the way
I'm describing, in a way ordinary data is not.

Harvey J. Stein

Mar 6, 2001, 9:25:01 AM
Ray Blaak <bl...@infomatch.com> writes:

> For scheme: Lack of standardized namespaces. Many (most?)
> implementations supply some sort of module/unit support, but
> porting between them is a pain.
>
> Without decent static namespaces, programming "in the large" is not
> properly doable.

I guess that depends on what you mean by "properly" doable. There are
other languages which are used for programming "in the large" that
don't have namespaces. It's nice to have namespaces & it's a little
ugly to work around not having them. But in Scheme & Lisp working
around something is not nearly as hard as it is in languages like C &
C++.

In that a design fault is the failure to meet a design goal, I find it
hard to say that the lack of namespaces is a design fault in Scheme.
After all, programming "in the large" wasn't one of the design goals
for Scheme.

Scheme was intended to be simple and concise. If all the "design
faults" people have mentioned were incorporated into the language,
then it wouldn't be simple & concise any longer. It would no longer
meet its design criteria & *then* would have a design fault.

--
Harvey Stein
Bloomberg LP
hjs...@bfr.co.il

Tim Bradshaw

Mar 6, 2001, 9:19:38 AM
* Bruce Hoult wrote:

> 2) if the ability to move type checks from the point of use to the point
> of definition is in fact so important then why do it only for *function*
> values? Why not do it for integers, floats, chars, strings, arrays,
> lists? Perhaps each symbol should have a slot for a possible integer
> binding, a slot for a possible float binding, a slot for a possible char
> binding, a slot for a possible string binding, a slot for a possible
> array binding, and a slot for a possible pair binding?

I think the fact that the language considers function call so
important that it has a special syntax to support it might be a clue
here. It looks like you need one.

--tim

Marco Antoniotti

Mar 6, 2001, 9:58:29 AM

David Rush <ku...@bellsouth.net> writes:

> Ray Blaak <bl...@infomatch.com> writes:
> > "Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> > > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > > designed".
>
> > Also, lack of a standard defstruct/define-record. Such a fundamental data
> > structure is absolutely essential, and should be in the standard. That you can
> > roll your own with the macro system alleviates this somewhat, but since this
> > is so basic, one shouldn't need to invent this all the time.
>
> Fixed. See SRFI-9 <http://srfi.schemers.org>
>
> Extra credit: Tell what is the *real* broken part behind this.

That it took more than 10 years to fix (17 since CLtL1), that it is
not in R^nRS, and that a staggering amount of resources was thrown at
this just to reinvent a wheel?

brl...@my-deja.com

Mar 6, 2001, 10:14:58 AM
ds...@cogsci.ucsd.edu (David Fox) writes:

It makes sense to ask for lots of languages individually. But when
asked of two languages at once, it sounds like a comparison. I don't
think it's a fair comparison for these two languages. Scheme, designed
primarily for teaching purposes, is more likely to leave out a feature
rather than include it with design flaws.

I define design flaws as problems that require backwards-incompatible
changes to fix. You wouldn't want examples in a textbook to stop
working because of such a change.

CL, used for large projects, doesn't have the luxury of waiting until
the best possible design has been created before implementing
something, so it's likely to include more design flaws. This doesn't
mean that you can choose Scheme for your project and build the missing
features yourself, and then expect the resulting system to have fewer
design flaws than if you started with CL.

(I don't actually use CL myself. Those who know what they're talking
about feel free to step in.)

Anyway, about design flaws. I think Scheme was generally designed well,
except that some procedure names were chosen badly, such that they will
likely never be changed.

The Scheme designers started to translate Lisp into something more like
English, but didn't finish the job, e.g. consp --> pair?, but they left
cons, car and cdr untouched. Probably they had used them so long they
thought they *were* English words.

They should have used pair, first and rest. If they were too
uncomfortable about "rest" for non-list pairs (technically correct, but
not the word one would normally use), and consciously kept cons/car/cdr,
then they should have kept pair? as cons?, following the uniformity of
other constructors / type predicates.

Conversion functions, e.g. number->string, are named consistently with
imperative style. Wouldn't an expression like this...

(list->string (reverse (string->list word)))

...read a lot better like this...?

(string<-list (reverse (list<-string word)))

Tim Bradshaw

unread,
Mar 6, 2001, 9:38:15 AM3/6/01
to
* Harvey J Stein wrote:

> Scheme was intended to be simple and concise. If all the "design
> faults" people have mentioned were incorporated into the language,
> then it wouldn't be simple & concise any longer. It would no longer
> meet its design criteria & *then* would have a design fault.

I think that the CL attitude would be that simple and concise *is* a
design fault in a language design, because it leads to complex and
verbose programs. At least both scheme and CL are doing better than
C++ which succeeds neither in being a simple and concise language nor
in allowing simple and concise programs!

--tim

Duane Rettig

unread,
Mar 6, 2001, 12:30:53 PM3/6/01
to

This is really more of a response to Bruce Hoult than to Kent Pitman,
but since Kent started a tentative argument in the direction I wanted
to go anyway, I am answering his article.

Up to this point, the arguments between Lisp1 and Lisp2 have either
been religious or aesthetic. I'd like to introduce an "implementational"
argument, that is, that the number of namespaces should closely follow
what the underlying hardware best implements. In the case of code vs
data, _all_ modern computer hardware of any significance establishes a
clear distinction between code and data spaces, even though that
distinction could be blurred a little because the spaces tend to overlap
in practical situations. However, anyone who has had to deal with
cache-flushing mechanisms whenever establishing or moving a code vector
will see this distinction first-hand.

Kent M Pitman <pit...@world.std.com> writes:

> Bruce Hoult <br...@hoult.org> writes:
>
> > 2) if the ability to move type checks from the point of use to the point
> > of definition is in fact so important then why do it only for *function*
> > values?
>
> Because when you illegally reference a pointer, the worst you get is
> generally a pointer into a non-existent page. When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

A cogent argument, but I actually think it's more of an efficiency
argument than a safety argument. It's true that one architecture's
"garbage" is another architecture's machine instruction. Nowadays,
even newer versions of the "same" architecture will relegate a deprecated
bit pattern to an "emulation trap", so that the machine treats the code
as garbage (somewhat) but a trap handler will simulate an execution of
the instruction anyway. Taking this emulation a step further, any data
at all could be made to _look_ like instructions, with the proper
emulator (it doesn't even have to look like the same architecture as
the one doing the emulation). Any such emulation could possibly result
in the "delete all my files" coincidence. But the most efficient way
to do so :-) is through native code, as much as possible, where the
actual level of native-ness depends on your design and portability
requirements.

The way this all ties in with the Lisp1/Lisp2 argument is that if you
implement your lisp at a native-down-to-the-hardware level, you can
take advantage of codespace vectoring to perform your functionality
checks, as I believe Erik and Kent have discussed already, but if you
must treat your code as potential data, even though it is in a functional
position, then you must either make checks at runtime or elide them by
checking at compile-time. This reduces dynamicity. And since CL has
a way to transition from data to code (i.e. via funcall) it loses
nothing in practice.

> > Why not do it for integers, floats, chars, strings, arrays,
> > lists?
>
> Not a bad plan, but not as essential, in the sense of image integrity.

Along the same lines as my efficiency argument above: I submit that
Ansi C does this very thing, more and more as time progresses. Each
architecture has a Standard Calling convention, where register uses are
assigned. Many of the RISC architectures define a set of N integer
registers and a set of N floating point registers that will be used
to pass the first N arguments between functions. So, for example,
if a C function is defined as

int foo (int a, int b, float c, double d, int e);

then the arguments might be passed in gr1, gr2, fr3, fr4, and gr5,
respectively (gr => general or integer register, fr => float register).

The advantage of passing in this manner is one of efficiency. The
floating point units tend to be separate, and a move and/or conversion
to an integer register tends to add to the instruction and cycle count.
Passing a float argument in a float register is the "natural" thing to
do.

The disadvantage of this kind of passing is one of normalization (the
lack thereof); both caller and callee must agree on where the arguments
will be, or the results could be disastrous. For example, in the above
declaration, if the caller of foo placed the third argument into gr3
instead of fr3, then the argument seen would be garbage.

Performing a hand-wave, I conclude that the reasons for using the first
style vs the second style have to do with dynamism. The first style eschews
dynamism and the second style allows it. CL defines the second style
for its calling convention, and this allows maximum dynamism. As we lisp
vendors have had to provide foreign calling capabilities, such capabilities
inherently tend to force such resulting code to be static in nature, to
the extent that it is made efficient.

> > At some point doesn't it just become easier to break down and use type
> > declarations and symbols that can be bound to only one value at any
> > given time?
>
> No. Because the decision to use only one namespace is expressionally
> limiting. I simply would not want to use only one namespace for
> expressional reasons. I'm only using the technical argument to reinforce
> that this is a sound choice.

Perhaps when it comes down to it, the technical argument becomes the
only one. If one sticks only with arguments of Turing completeness, one
could argue that a Turing machine is just as good as a CL (better, in fact,
because it is simpler). Note that in rejecting this previous statement as
ridiculous, we all use the obvious efficiency argument to disprove the
statement, even if only subconsciously.

Perhaps the best way to answer Mr. Hoult's question is to invite him to
continue on with the thought process and to flesh out his design, to see
if he can come to a point where such multiple-bindings-per-type style
is really easier or not...

> > Or is the benefit from not having to type check function calls *so* much
> > greater than the benefit from not having to type check integer addition
> > or CAR/CDR that two namespaces (and not type declarations) is the
> > optimum answer? I wouldn't have thought so.
>
> I personally think so. Perhaps this is just an opinion. I haven't
> coded machine code in a long time, so it's possible that the
> equivalent "danger" has been created in other areas since then, but
> function calling in my day used to be special (danger-wise) in the way
> I'm describing, in a way ordinary data is not.

Your instincts are good, as I have mentioned above with the float-vs-int
parameter passing. However, the code-vs-data has always been much more
distinctive in the Von Neumann model, and will probably always be the
most clear dividing line between namespaces.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Lieven Marchand

unread,
Mar 6, 2001, 11:50:54 AM3/6/01
to
brl...@my-deja.com writes:

> The Scheme designers started to translate Lisp into something more like
> English, but didn't finish the job, e.g. consp --> pair?, but they left
> cons, car and cdr untouched. Probably they had used them so long they
> thought they *were* English words.
>
> They should have used pair, first and rest. If they were too
> uncomfortable about "rest" for non-list pairs (technically correct, but
> not the word one would normally use), and consciously kept cons/car/cdr,
> then they should have kept pair? as cons?, following the uniformity of
> other constructors / type predicates.
>

CL has both car/cdr and first/rest and while they are equivalent in
effect, there is an intended difference to the reader. The first pair
of functions is meant to be used for an abstraction called a cons,
that is an implementation of the mathematical concept of cartesian
product of the set of lisp object with itself. The second pair is
meant to be used for the abstraction "list". The fact that lists are
implemented with conses is a historical coincidence, not a necessity.

--
Lieven Marchand <m...@wyrd.be>
Glaðr ok reifr skyli gumna hverr, unz sinn bíðr bana.

Joe Marshall

unread,
Mar 6, 2001, 2:02:19 PM3/6/01
to
brl...@my-deja.com writes:

> The Scheme designers started to translate Lisp into something more like
> English, but didn't finish the job, e.g. consp --> pair?, but they left
> cons, car and cdr untouched. Probably they had used them so long they
> thought they *were* English words.
>
> They should have used pair, first and rest. If they were too
> uncomfortable about "rest" for non-list pairs (technically correct, but
> not the word one would normally use), and consciously kept cons/car/cdr,
> then they should have kept pair? as cons?, following the uniformity of
> other constructors / type predicates.

There are two types here: cons-cells and lists (which happen to be
implemented using cons cells as the backbone). So CAR and FIRST,
although implemented the same, are logically different. Likewise with
CDR and REST.

If you prefer writing CONS? (or CONSP) to writing PAIR?, it is easy
enough to fix.

> Conversion functions, e.g. number->string, are named consistent with
> imperative style. Wouldn't an expression like this...
>
> (list->string (reverse (string->list word)))
>
> ...read a lot better like this...?
>
> (string<-list (reverse (list<-string word)))

The latter version echoes the dataflow, but I don't think the former
is necessarily imperative. STRING->LIST is a function mapping from
elements in the domain of strings to elements in the domain of lists.
The function name echoes the usual english usage of `from A to B'.


-----= Posted via Newsfeeds.Com, Uncensored Usenet News =-----
http://www.newsfeeds.com - The #1 Newsgroup Service in the World!
-----== Over 80,000 Newsgroups - 16 Different Servers! =-----

Ray Blaak

unread,
Mar 6, 2001, 2:29:51 PM3/6/01
to
hjs...@bfr.co.il (Harvey J. Stein) writes:
> Ray Blaak <bl...@infomatch.com> writes:
> > Without decent static namespaces, programming "in the large" is not
> > properly doable.
>
> I guess that depends on what you mean by "properly" doable.[...]

>
> In that a design fault is the failure to meet a design goal, I find it
> hard to say that the lack of namespaces is a design fault in Scheme.
> After all, programming "in the large" wasn't one of the design goals
> for Scheme.

Fair enough. What I like about Scheme, though, is how its core design is
fundamentally extensible in almost any direction. It is a great language for
experimentation.

The lack of standardized namespaces is the one missing core feature (well,
aside from optional type declarations) needed to be able to do anything in
Scheme.

Also, namespace support is present in many Scheme implementations, so even if
not an original design goal, it certainly is considered a useful feature. As
such, it would benefit from being standardized.

Marco Antoniotti

unread,
Mar 6, 2001, 2:49:07 PM3/6/01
to

Ray Blaak <bl...@infomatch.com> writes:

...

> Fair enough. What I like about Scheme, though, is how its core design is
> fundamentally extensible in almost any direction. It is a great language for
> experimentation.

What I like about CL, though, is how its core design is fundamentally
extensible in almost any direction. It is a great language for
experimentation.

Given that CL gives you enormous advantages over Scheme in terms of
extensibility, why don't you switch? (Assuming you haven't turned to the
Dark Side yet :) )

> Also, namespace support is present in many Scheme implementations, so even if
> not an original design goal, it certainly is considered a useful feature. As
> such, it would benefit from being standardized.

Packages are in CL. They have been there since 1984. What else do you
need?

Joe Marshall

unread,
Mar 6, 2001, 3:20:39 PM3/6/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Exactly. This is the efficiency issue I mentioned, which cannot be
> duplicated in a Lisp1 without either massive theorem proving (takes lots
> of time) or declarations (which Scheme, for example, won't do, it seems
> to me at least partially because the same minimalist mindset that drives
> them to want to be a Lisp1 also drives them to want to be declaration-free).
> Consequently, unless you are happy with just having programs execute
> machine level garbage, there are certain function calls which are inherently
> faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
> CL and Scheme designers believe) that functions are called more often than
> they are defined. A Lisp2 can take advantage of this to check once at
> definition time, but a Lisp1 cannot take advantage because it can't
> (due to the halting problem) check the data flow into every (f x) to be
> sure that f contained a valid machine-runnable function.

This is a red herring.

The issue of whether a particular address contains executable code
and whether it would be legal to load that address into the program
counter is an issue of linker protocol. Lisp hackers tend to forget
about linking because lisp links things on the fly and makes it easy
to run a partially linked image.

Having a particular `slot' in a symbol to hold the function value is
an implementation detail. There is no necessity for such a slot to
actually exist, but rather that such a slot *appear* to exist for the
intents and purposes of SYMBOL-FUNCTION and for free-references to the
function in code. What matters is that when a piece of code calls
function FOO, it either invokes the most recent piece of code
associated with FOO, or invokes the error handler for an `unbound
function'.

One way to implement this is to have a cell in every symbol that can
contain the `function' definition for that symbol. You could `link'
by having the compiler cause all function calls to push the symbol
naming the target function and jump to the linker. The linker would
then look in the function cell of the symbol, and if it finds a
function, jump to the entry point, otherwise jump to the `unbound
function' handler. You could call your linker `FUNCALL'.

Another way to implement this is to inline the linker functionality at
the call point. The compiler would `open code' funcall by inserting
the instructions to fetch the contents of the function cell, test to
ensure it is a function, and either jump to the entry point or to the
error handler.

You could go a step further. Arrange for the function cell to
*always* have a valid entry point, so that the `open coded funcall'
wouldn't have to check the validity. The default entry point would be
the error handler.

But why stop there?

You could arrange for the compiler to go one step further: rather
than open coding a funcall, it could simply place a jump or call
template in the code itself. In essence, there is no longer one
function cell, but a set of function cells --- one at each call
point. The code that implements SYMBOL-FUNCTION would be much more
complicated, of course. (Note, too, that some architectures may
not be amenable to this since it requires patching code on the fly).

Take it further: do arity checking at link time. Only link to those
functions when the number of arguments is correct.

And further: arrange for multiple function entry points. Link to the
appropriate one based upon arity (for optional and rest arguments).
Special case to allow unboxed floats.

Why does this require a separate function and value space? It
doesn't. The same techniques will work in a single namespace lisp,
and the resulting code will run as quickly (why would a jump
instruction care what the source code looks like?) The difference
occurs in the ease of implementation. In a two-namespace lisp, the
more complicated you make the linker protocol, the more complicated
SYMBOL-FUNCTION and (SETF SYMBOL-FUNCTION) have to be. In a
one-namespace lisp, this complexity will extend to SETQ and
special-variable binding as well.

There is no need for dataflow analysis or declarations.

If you would like more detail on how this works in practice, email me.

Joe Marshall

unread,
Mar 6, 2001, 3:27:30 PM3/6/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Because when you illegally reference a pointer, the worst you get is
> generally a pointer into a non-existent page. When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

The worst you can get when you dereference an illegal pointer is
destruction of your hardware. I remember a nasty garbage collector bug
that caused a stray write to the screen controller in a PC. Certain values
loaded into the controller could cause physical damage to the screen.
In this case, the screen was completely trashed.

Another case I remember involved landing the heads of a disk drive off
the platter, then attempting a seek to an inner cylinder.

Come to think of it, I don't think I've *ever* heard of a stray jump
causing all files to be deleted.

Kent M Pitman

unread,
Mar 6, 2001, 5:27:10 PM3/6/01
to
Joe Marshall <j...@content-integrity.com> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > Exactly. This is the efficiency issue I mentioned, which cannot
> > be duplicated in a Lisp1 without either massive theorem proving
> > (takes lots of time) or declarations (which Scheme, for example,
> > won't do, it seems to me at least partially because the same
> > minimalist mindset that drives them to want to be a Lisp1 also
> > drives them to want to be declaration-free).
> >
> > Consequently, unless you are happy with just having programs
> > execute machine level garbage, there are certain function calls
> > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > you believe (as I believe both CL and Scheme designers believe)
> > that functions are called more often than they are defined. A
> > Lisp2 can take advantage of this to check once at definition time,
> > but a Lisp1 cannot take advantage because it can't (due to the
> > halting problem) check the data flow into every (f x) to be sure
> > that f contained a valid machine-runnable function.
>
> This is a red herring.

Well, I don't agree.



> The issue of whether a particular address contains executable code
> and whether it would be legal to load that address into the program
> counter is an issue of linker protocol. Lisp hackers tend to forget
> about linking because lisp links things on the fly and makes it easy
> to run a partially linked image.

And modern programmers tend to assume the only hardware Lisp was designed
for is the stuff you can buy right now. On the PDP10, for example, you
could load the contents of any address into memory and execute it.
And you could just JRST or JSP to any memory location. The linker
was not involved.

Surely it is the case that there are operating systems that protect you
better, and maybe increasingly this is how operating systems are designed.
But CL is not designed merely to accommodate a specific memory architecture
or operating system.

Joe Marshall

unread,
Mar 6, 2001, 6:14:50 PM3/6/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > Kent M Pitman <pit...@world.std.com> writes:
> >
> > > Exactly. This is the efficiency issue I mentioned, which cannot
> > > be duplicated in a Lisp1 without either massive theorem proving
> > > (takes lots of time) or declarations (which Scheme, for example,
> > > won't do, it seems to me at least partially because the same
> > > minimalist mindset that drives them to want to be a Lisp1 also
> > > drives them to want to be declaration-free).
> > >
> > > Consequently, unless you are happy with just having programs
> > > execute machine level garbage, there are certain function calls
> > > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > > you believe (as I believe both CL and Scheme designers believe)
> > > that functions are called more often than they are defined. A
> > > Lisp2 can take advantage of this to check once at definition time,
> > > but a Lisp1 cannot take advantage because it can't (due to the
> > > halting problem) check the data flow into every (f x) to be sure
> > > that f contained a valid machine-runnable function.
> >
> > This is a red herring.
>
> Well, I don't agree.

I understand that you do, but I have outlined a mechanism that is used
in practice and appears to refute your claim.

> > The issue of whether a particular address contains executable code
> > and whether it would be legal to load that address into the program
> > counter is an issue of linker protocol. Lisp hackers tend to forget
> > about linking because lisp links things on the fly and makes it easy
> > to run a partially linked image.
>
> And modern programmers tend to assume the only hardware Lisp was designed
> for is the stuff you can buy right now.

I am assuming modern hardware.

> On the PDP10, for example, you could load the contents of any
> address into memory and execute it. And you could just JRST or JSP
> to any memory location. The linker was not involved.

I am speaking of the `linker' as the abstract `thing that resolves
jump targets', not LD or whatever the OS provides.

> Surely it is the case that there are operating systems that protect you
> better, and maybe increasingly this is how operating systems are designed.
> But CL is not designed merely to accommodate a specific memory architecture
> or operating system.

Actually, the more modern operating systems are *more* amenable to
this technique, not less (because of DLLs and shared libraries). Any
OS that allows dynamic loading of code has enough power to link in the
way I described.

This technique doesn't require anything unusual or too implementation
dependent, just a bit of cleverness.

Kent M Pitman

unread,
Mar 6, 2001, 7:11:39 PM3/6/01
to
Joe Marshall <j...@content-integrity.com> writes:

> I understand that you do, but I have outlined a mechanism that is used
> in practice and appears to refute your claim.

Then what I'm saying is that you might have a fixnum pointer whose
backing store held an instruction which was a syntactically valid
instruction to execute. It could, for example, be seen as a system
call. And yet you could do (setq x that-fixnum) and if you could just
funcall to it without checking it for pointerness (as in the PDP10
bibop scheme, where checking meant consulting an external table), then
you'd end up jumping to garbage and executing it. (We used to do this
stuff intentionally in Maclisp. But if you do it by accident, it's
scary. Now, a loader, either whole-image loader or a dynamic loader,
might protect you. But it might not. That's my only point.)

> > > The issue of whether a particular address contains executable code
> > > and whether it would be legal to load that address into the program
> > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > about linking because lisp links things on the fly and makes it easy
> > > to run a partially linked image.
> >
> > And modern programmers tend to assume the only hardware Lisp was designed
> > for is the stuff you can buy right now.
>
> I am assuming modern hardware.

"current" hardware. My point is that hardware continues to change and not
all changes are monotonically in a given direction. You cannot quantify over
existing operating systems and assume you have quantified over the target
platforms for CL.

It would have been possible to pick a set of plausible architectures and
work over only those, and that would have led to a much different language.
I think more short-sighted but the trade-off might be "more useful". I'm
not taking a position on that. Dylan is an example of a language that I
think I remember making some very specific tactical assumptions about the
architecture (e.g., for numbers and character codes, maybe other things
too, like files).

Joe Marshall

unread,
Mar 6, 2001, 7:40:53 PM3/6/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > I understand that you do, but I have outlined a mechanism that is used
> > in practice and appears to refute your claim.
>
> Then what I'm saying is that you might have a fixnum pointer whose
> backing store held an instruction which was a syntactically valid
> instruction to execute. It could, for example, be seen as a system
> call. And yet you could do (setq x that-fixnum) and if you could just
> funcall to it without checking it for pointerness (as in the PDP10
> bibop scheme, where checking meant consulting an external table), then
> you'd end up jumping to garbage and executing it. (We used to do this
> stuff intentionally in Maclisp. But if you do it by accident, it's
> scary. Now, a loader, either whole-image loader or a dynamic loader,
> might protect you. But it might not. That's my only point.)

Yes, I understand this point. I'm arguing that you don't need to have
separate function and value namespaces in the source language in order
to efficiently deal with functions at compile, link, or run time.
Assume, for the moment, that your code has an expression (foo 'bar)
where FOO is free. At link time, you check to see if FOO is bound to
a function. If it is, you arrange for the code to be linked to FOO,
either directly (by modifying the code itself) or indirectly (by
modifying a jump table or uuo link, or even having a special `function
cell' associated with the symbol FOO).

Now supposing at some later time, someone does (setq foo 42). You
arrange to invalidate the links to the function FOO. You can do this
via hash tables, weak pointers in the symbol, groveling through all of
memory, or replacing the trampoline in the `function cell'. Now every
place that used to call FOO directly ends up calling an error handler
trampoline, instead.

But all of this is *implementation* detail. It can be done regardless
of whether your source language has a separate namespace for functions
and variables or not.

It is certainly the case that the *implementation* can (and ought to)
treat functions and values as different things.

> > > > The issue of whether a particular address contains executable code
> > > > and whether it would be legal to load that address into the program
> > > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > > about linking because lisp links things on the fly and makes it easy
> > > > to run a partially linked image.
> > >
> > > And modern programmers tend to assume the only hardware Lisp was designed
> > > for is the stuff you can buy right now.
> >
> > I am assuming modern hardware.
>
> "current" hardware.

Something like a MIPS, Alpha or Pentium, for example.

> My point is that hardware continues to change and not
> all changes are monotonically in a given direction. You cannot quantify over
> existing operating systems and assume you have quantified over the target
> platforms for CL.

No, but there are some generalizations I can make. For instance, if
the OS disallows dynamic loading of code, you can't use UUO links. On
the other hand, you couldn't incrementally compile, either.

> It would have been possible to pick a set of plausible architectures and
> work over only those, and that would have led to a much different language.
> I think more short-sighted but the trade-off might be "more useful". I'm
> not taking a position on that. Dylan is an example of a language that I
> think I remember making some very specific tactical assumptions about the
> architecture (e.g., for numbers and character codes, maybe other things
> too, like files).

I'm not suggesting that Common Lisp adopt a single namespace, or that
a single namespace is `better' than two namespaces. I'm asserting
that a single namespace can be implemented with no less efficiency
than a dual namespace, and that such an implementation does not
require declarations or complex dataflow analysis. This is because
the mechanism of linking does not depend on what happens at the
syntactic level.

Bruce Hoult

unread,
Mar 6, 2001, 7:49:39 PM3/6/01
to
In article <ey3r90b...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

> I think the fact that the language considers function call so
> important that it has a special syntax to support it might be a clue
> here. It looks like you need one.

I can't think offhand of any language that *doesn't* have special syntax
to support function calls, so that's hardly a distinguishing feature of
Common Lisp.

-- Bruce

Michael Parker

unread,
Mar 6, 2001, 8:41:06 PM3/6/01
to

Forth.

Bruce Hoult

unread,
Mar 6, 2001, 10:20:25 PM3/6/01
to
In article <31929212...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Bruce Hoult <br...@hoult.org>
> > I can't think offhand of any language that *doesn't* have special
> > syntax to support function calls, so that's hardly a distinguishing
> > feature of Common Lisp.
>
> Scheme works very, very hard not to distinguish a function call from
> any other variable reference. And vice versa. At least give them
> credit for having achieved that, even though it is a fundamentally
> silly thing to want to do.

How is that? If you see something at the start of a non-quoted list
then you know it must be a reference to a function (or possibly, an
error).

That's just as special as, say, putting the reference to the function
outside (in front of) the argument list.

-- Bruce