please tell me the design faults of CL & Scheme


Pierre R. Mai

Mar 4, 2001, 7:26:19 AM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:

> (warning: this post might spark a language war; if you hate language wars,
> set the "ignore")

If we really want to have this kind of discussion, we need to keep it
separate for each of the languages involved. Otherwise a language war
is bound to ensue. Let Scheme users comment on faults in Scheme, and
CL users comment on faults in CL.

Regs, Pierre.

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Tim Bradshaw

Mar 4, 2001, 9:53:22 AM
This article is a bit like the assassination of Archduke Ferdinand -- a
seemingly minor event which will in fact result in an inevitable
cascade of consequences ending in four years of futile and
inconclusive war and the death of a generation on both sides. Even
now the CL fleet is on its way to Scapa Flow, there to wait in the
cold and wind for the final showdown which will never come.

--tim

Julian Morrison

Mar 4, 2001, 11:27:27 AM
"Kent M Pitman" <pit...@world.std.com> wrote:

>> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken
>> as designed".
>
> why? in what context do you ask the question? what use do you plan to
> make of the information? what do you mean by broken?

Personal interest - specifically for some possible point in the future if
I reimplement a lisp-alike, I don't want to make any of the same mistakes.

Broken in this context means:

- inconsistencies in the spec such that conforming implementations can
fail to interoperate, especially subtle ones I might make again myself.

- misfeatures that make implementation unnecessarily hard, or
resource-wasteful, or unnecessarily force the runtime to be more complex

- security holes, things with unintended consequences

- design slips that make the language less generally useful than could
have been possible, or make certain uses impractical or impossible

- ugly special-cased hacks that break out of the cleanness of the design

- design anachronisms, language features unhelpfully tied to dated tech
limitations

etc etc. That sort of thing.

>"i'm designing my own language, what should i do differently".

That's more or less the context, without any implied promise of imminent
action ;-)

Julian Morrison

Mar 4, 2001, 11:30:01 AM
"Pierre R. Mai" <pm...@acm.org> wrote:

> If we really want to have this kind of discussion, we need to keep it
> separate for each of the languages involved. Otherwise a language war
> is bound to ensue. Let Scheme users comment on faults in Scheme, and CL
> users comment on faults in CL.

Disagreed - the very reasons someone has picked one language over the
other may be valid info about that other language, in this context.

Frode Vatvedt Fjeld

Mar 4, 2001, 12:03:21 PM
Kent M Pitman <pit...@world.std.com> writes:

> Frode Vatvedt Fjeld <fro...@acm.org> writes:
>
> > I think &rest parameters should have been accessed by some
> > multiple-value mechanism, not an implicitly consed list.
>
> This is an interesting choice of response. I don't really find CL
> broken in the sense that it's designed as an "industrial strength"
> language and it serves well for that purpose. But there are
> certainly some details of &rest lists that could be designed better
> in other ways if it were done again.

I certainly agree with you that the &rest issue doesn't ruin CL's
status as an industrial strength language. But the current &rest
design is broken in that--as I'm writing a compiler based on CL syntax
and semantics--it's the first (possibly only) part where I'm presented
with a choice: Be compatible with CL, _or_ implement it well. Before
approaching &rest, I never felt constrained by staying compatible with
CL.

My proposal for a better &rest would be that &rest named a local
function or macro of zero arguments that returns each of the
rest-arguments as multiple values. Then the old semantics is just an
m-v-list away, only now the consing would be explicit.

You'd also need a control structure that lets you iterate over
multiple-values. (Is there such a thing in CL currently?)

The whole point of having multiple values after all is the fact that
the slots used to pass values into and out of functions have a very
clearly defined, limited extent, and so it is just stupid to cons them
on the heap. Meaning, of course you _can_ cons them on the heap if
that is for some reason useful, but the current &rest design
_requires_ you to heap-cons, for no good reason.
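
A minimal sketch of how such an interface could be emulated in portable
CL today (the macro name DEFINE-WITH-REST-FN and the local function MORE
are hypothetical; portable CL still conses the list underneath, so this
mimics only the interface, not the intended efficiency):

  ;; Expose the rest arguments through a local zero-argument function
  ;; that returns them as multiple values.
  (defmacro define-with-rest-fn (name fixed rest-fn &body body)
    (let ((rest-list (gensym "REST")))
      `(defun ,name (,@fixed &rest ,rest-list)
         (flet ((,rest-fn () (values-list ,rest-list)))
           ,@body))))

  (define-with-rest-fn add-all (x) more
    ;; The old list semantics is "just an m-v-list away", and the
    ;; consing is now explicit:
    (+ x (reduce #'+ (multiple-value-list (more)))))

  (add-all 1 2 3)   ; => 6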

--
Frode Vatvedt Fjeld

Boris Smilga

Mar 4, 2001, 2:05:43 PM

Well, more sort of "missing feature" than "misdesign". In [Report]
Scheme, there is such a thing as promises, but no predicate to test
whether an object is a promise. The reason for that is probably that the
RnRS authors had in mind implementations that represent promises as some
other object type (most likely, procedures), but sometimes being unable
to write (promise? obj) annoys me.

Another one would be a standard line-oriented reader or, at least,
some predicate eol-object? which returns true if and only if its
argument is a character or sequence of characters that signifies an
end of line. Without that, given that Unices, MacOS and Windows all
have different end-of-line conventions, text processing becomes
somewhat harder to port between platforms. Then again, eol-object?
would not be kosher, because on Windows (ah, those Windows again) an
end-of-line is not one character but a two-character sequence, so
eol-object? would hardly be compatible with read-char and peek-char.
Something like (read-line _port_) / (read-line) would look more proper.

What else? Perhaps, a separate numeric type for natural numbers (i.e.
non-negative integers)? Not sure about that.

-BSm

Joe English

Mar 4, 2001, 1:28:03 PM
Julian Morrison wrote:
>
>Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
>designed".

This should be pretty easy to figure out: just come up
with (A) a list of Common Lisp features that aren't in Scheme,
and (B) a list of Scheme features that aren't in Common Lisp.
Schemers will cite (A) as stuff that's Broken as Designed
in Lisp, and Lispers will cite (B) as stuff that's Broken
as Designed in Scheme :-)

But seriously, IMHO 'dynamic-wind' and 'call-with-values'
in R5RS Scheme are broken. The ideas behind them are good,
but R5RS should have left 'call-with-current-continuation'
alone and added 'call-with-winding-continuation' instead
to support dynamic-wind. For multiple values, Ashley &
Dybvig's 'mv-call' and 'mv-let' primitives would have been
a much better choice than 'call-with-values'.


--Joe English

jeng...@flightlab.com

David Fox

Mar 5, 2001, 7:52:02 AM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:

> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed".

One problem I have with Common Lisp is the separate name spaces for
functions and variables.
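
A small example of what that separation looks like in practice (a
minimal sketch, not from the original post):

  (mapcar #'1+ '(1 2 3))    ; #' selects the function namespace => (2 3 4)

  (let ((f #'1+))
    (funcall f 41))          ; => 42, whereas plain (f 41) would be an error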

Ole Myren Rohne

Mar 5, 2001, 8:56:24 AM
ds...@cogsci.ucsd.edu (David Fox) writes:

One problem I have with Scheme is the unified name space for
functions and variables.

Sorry, I just couldn't resist;-)

Kent M Pitman

Mar 5, 2001, 9:42:33 AM
[ comp.lang.scheme removed.
http://world.std.com/~pitman/pfaq/cross-posting.html ]

ds...@cogsci.ucsd.edu (David Fox) writes:

We looked at this issue during the design of ANSI CL and decided there
were some strong reasons for our approach. I doubt that you can
adequately defend the single namespace approach as a "design flaw".

Btw, this is a great time for me to put out a plug for the paper Dick
Gabriel and I wrote on this at the time of the ANSI CL stuff. The
original discussion at ANSI was longer because it included both
technical and non-technical issues. We distilled it down to just the
technical stuff for inclusion in the first edition of the journal
"Lisp and Symbolic Computation". Anyway, you can read the distilled
version on the web now at

http://world.std.com/~pitman/Papers/Technical-Issues.html

NOTE WELL: If you look closely, this paper reads a little like a
debate. Gabriel and I wrote it because we disagreed on the answer,
and it goes back and forth like a dialog in places, suggesting one
thing and then immediately countering it. If you find such places,
that's probably interleaved paragraphs of him talking and me talking.
But I learned long ago that people following a debate always come
out thinking their hero won. So I've talked to people on both
sides of the issue who believe this is finally the conclusive paper
supporting their position, whichever position they have. Personally,
and perhaps because I'm on that side of things, I think the
*technical* arguments argue for multiple namespaces because there is
an efficiency issue that is tough to get around in a single namespace
Lisp, ESPECIALLY one that religiously eschews declarations to help the
compiler in places where automated proof techniques are going to slow
things down a lot. But I think at minimum a fair reading of this will
tell you that there is no substantial technical reason to believe a
multi-namespace Lisp is flawed, and that this is largely an issue of
style.

I also think, although I think the paper doesn't get into it, that
people's brains plainly handle multiple namespaces and contexts naturally
because it comes up all the time in natural language, and that it's a
shame for a computer language not to take advantage of wetware we already
have for things. Claims of simplicity are often, and without justification,
measured against mathematical notions of an empty processor or a simple
processor that you could build. But since programming languages are designed
for people, I think simplicity should be measured against our best guess
as to what processor PEOPLE have, and that leads to wholly different
conclusions.
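
For a concrete sense of the two namespaces in CL, a variable binding of
LIST does not shadow the function LIST (a minimal sketch):

  (let ((list 10))
    (list list 20))   ; => (10 20); the variable and the function coexist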

Will Deakin

Mar 5, 2001, 9:57:07 AM
Tim wrote:

> ...Even now the CL fleet is on its way to Scapa Flow, there to wait
> in the cold and wind for the final showdown which will never come.

Will there be a repeat of the tragic incident resulting in a man
in a night gown clutching a parrot and demanding a one-way
rail ticket to Leeds? Will the Norman keep be shelled again?
Remember Scarborough!

;)will


brl...@my-deja.com

Mar 5, 2001, 9:59:46 AM
This is an unusual question. I'm curious as to why you ask. Are you
choosing between these two languages for a particular project?

Paul Dietz

Mar 5, 2001, 10:15:36 AM
Frode Vatvedt Fjeld wrote:

> Meaning, of course you _can_ cons them on the heap if
> that is for some reason useful, but the current &rest design
> _requires_ you to heap-cons, for no good reason.

Unless the &rest parameter is defined DYNAMIC-EXTENT.

If performance is a problem at some particular call
then add the declaration. This seems perfectly consistent
with CL philosophy.

Paul
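
A minimal sketch of the declaration in question; it permits (but does
not require) the implementation to stack-allocate the &rest list:

  (defun sum (&rest numbers)
    (declare (dynamic-extent numbers))
    (reduce #'+ numbers))

  (sum 1 2 3)   ; => 6, with the rest list eligible for stack allocation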

Marco Antoniotti

Mar 5, 2001, 11:05:45 AM

There are no design faults in Scheme. The standard is way too short
to contain any :)

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
Like DNA, such a language [Lisp] does not go out of style.
Paul Graham, ANSI Common Lisp

Raffael Cavallaro

Mar 5, 2001, 1:01:31 PM
In article <sfwofvge0...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

>But since programming languages are designed
>for people, I think simplicity should be measured against our best guess
>as to what processor PEOPLE have, and that leads to wholly different
>conclusions.

Just to play devil's advocate, isn't this Larry Wall's argument for the
complexity and TMTOWTDI of Perl? I guess the question then becomes what
is the right balance between consistent abstraction and the complexity
and inconsistency introduced by multiple contexts.

Ralph

--

Raffael Cavallaro, Ph.D.
raf...@mediaone.net

Ray Blaak

Mar 5, 2001, 1:04:52 PM
"Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed".

For scheme: Lack of standardized namespaces. Many (most?) implementations
supply some sort of module/unit support, but porting between them is a pain.

Without decent static namespaces, programming "in the large" is not properly
doable.

This is such a basic necessity that it needs to be in the standard language
itself.

Also, lack of a standard defstruct/define-record. Such a fundamental data
structure is absolutely essential, and should be in the standard. That you can
roll your own with the macro system alleviates this somewhat, but since this
is so basic, one shouldn't need to invent this all the time.

--
Cheers, The Rhythm is around me,
The Rhythm has control.
Ray Blaak The Rhythm is inside me,
bl...@infomatch.com The Rhythm has my soul.

Kent M Pitman

Mar 5, 2001, 1:15:33 PM
Paul Dietz <di...@stc.comm.mot.com> writes:

Well, there are still some small glitches in the design.

The user can't rely on them being dynamic-extent in practice. That's
left to implementations, and perhaps rightly so.

But there's a small semantic issue to do with the implementation
also having the right to share structure with the given list when
you do

(apply #'foo x)

The function FOO might get the actual list X (sharing structure)
or might not. This has some weird consequences that I think I would
nail down better if I were doing a redesign, but that in practice you
can just kind of tiptoe around when you first get bitten by the
implications... or even beforehand if you think about it in time and
program defensively.
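
A hedged illustration of that sharing issue; whether the rest list here
shares structure with X is implementation-dependent, which is exactly
the point:

  (defun tack-on-zero (&rest args)
    (nconc args (list 0)))   ; destructively extends its rest list

  (let ((x (list 1 2 3)))
    (apply #'tack-on-zero x)
    x)                       ; => (1 2 3) or (1 2 3 0), depending on sharing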

Kent M Pitman

Mar 5, 2001, 1:29:19 PM
Raffael Cavallaro <raf...@mediaone.net> writes:

I'm not sure we're disagreeing. I would count the bookkeeping required to
keep track of notations like Perl or Teco and to be sure you're correctly
composing things as part of what the human processor does and must be
measured against.

I do think there is a balance to be struck, and that the solution isn't at
one end of the spectrum or the other. Probably little factoids like the
number of short term memory slots and other such things create the parameters
that dictate where the "middle" is on such a spectrum.

Indeed, one of the criticisms that is made against multiple namespaces is
that it increases the complexity of the formal semantics. I don't do formal
semantics stuff myself, so I can't say. However, people I trust have assured
me that adding an infinite number of namespaces would be a trivial addition.
However, I think that would also increase program complexity because of the
mental bookkeeping, etc. That's why I don't think the formal semantics
is predictive. I think the middle ground of "just a few namespaces"
is most appropriate to how people think, regardless of what the formal
semantics says. The more the formal semantics leads me into spaces that
I think don't use the brain well, the more I dislike it as a guiding force
in language design.

David Fox

Mar 5, 2001, 7:14:03 PM
brl...@my-deja.com writes:

> This is an unusual question. I'm curious as to why you ask. Are you
> choosing between these two languages for a particular project?

I think it's a great question; I'd like to see it answered for lots and
lots of languages.

Bruce Hoult

Mar 5, 2001, 8:43:09 PM
In article <sfwofvge0...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> http://world.std.com/~pitman/Papers/Technical-Issues.html

Thanks for this.

(Re the instructions at the top, there *are* quite a few typos, starting
with "proved on[e] of the most important" in the second para and the
same typo in the very next sentence.)


The strongest arguments I see there are:

1) Lisp2 conveys extra type information, namely that you can call what
is in the function cell *knowing* that it is a function -- you don't
have to check first.

2) macros vs namespaces: "There are two ways to look at the arguments
regarding macros and namespaces. The first is that a single namespace is
of fundamental importance, and therefore macros are problematic. The
second is that macros are fundamental, and therefore a single namespace
is problematic."


I believe that a lexically-scoped Lisp1 that has a) type declarations,
and b) hygienic macros avoids both problems.

I think that 1) is pretty obvious. Two namespaces is a pretty weak type
system -- why not go further and have different namespaces for scalars,
arrays, hashes, labels, globs and God-only-knows-what-else. You can
introduce special symbols such as $, @, % to distinguish them in
ambiguous contexts. Well, we know what *that* language is called :-)

Even if the function cell is known not to be data, what if it's empty?
Don't you have to check for that? Or are symbols in CL bound to some
sort of error function by default?


Re 2): <quote>

(DEFMACRO MAKE-FOO (THINGS) `(LIST 'FOO ,THINGS))

Here FOO is quoted, THINGS is taken from the parameter list for the
Macro, but LIST is free. The writer of this macro definition is almost
certainly assuming either that LIST is locally bound in the calling
environment and is trying to refer to that locally bound name or that
list is to be treated as constant and that the author of the code will
not locally bind LIST. In practice, the latter assumption is almost
always made.

If the consumer of the above macro definition writes

(DEFUN FOO (LIST) (MAKE-FOO (CAR LIST)))

in Lisp1, there will probably be a bug in the code.

</quote>

If the free use of LIST in the macro is defined by the language to refer
to the lexical binding of LIST at the point where the macro is *defined*
then there is no problem. It will continue to refer (presumably) to the
global function that creates a list from its arguments. The (CAR LIST)
in the use of the macro will refer to the argument of FOO.

If the writer of the macro (*not* the user of the macro) intends the use
of LIST to refer to the binding at the point of use of the macro then
they can indicate this using a suitable "hygiene-breaking" notation.
This is something that should be done only rarely -- better in this case
to make LIST another explicit argument of the macro.
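
For contrast, the quoted pair is unproblematic in CL (a Lisp2), since
the parameter LIST lives in the variable namespace while the LIST in the
macro expansion is a function reference (a minimal sketch):

  (defmacro make-foo (things) `(list 'foo ,things))
  (defun foo (list) (make-foo (car list)))

  (foo '(1 2 3))   ; => (FOO 1)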


Of course none of this is new today and this *is* an old paper, but
since it is being re-presented today to justify Lisp2 perhaps some note
should be made of the advances made (e.g. by Dylan, but also recently by
Scheme) since the paper was written?

As a historical explanation of why things were done the way they were
twenty years ago it is of course great.

> I also think, although I think the paper doesn't get into it, that
> people's brains plainly handle multiple namespaces and contexts naturally

Well, perl certainly seems to prove that. I just like to write perl
code with as many different uses of the same name as possible. Such as
...

next b if $b{$b} = <b>;

Yum :-)

-- Bruce

Christopher Stacy

Mar 5, 2001, 10:54:26 PM
>>>>> On Sun, 04 Mar 2001 09:51:51 +0000, Julian Morrison ("Julian") writes:
Julian> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as designed".

Scheme uses parentheses, which confuses people into asking questions like yours.
(However, that might have been a subconscious intentional design choice!)

Janis Dzerins

Mar 6, 2001, 3:54:08 AM
Kent M Pitman <pit...@world.std.com> writes:

> http://world.std.com/~pitman/Papers/Technical-Issues.html
>
> NOTE WELL: If you look closely, this paper reads a little like a
> debate. Gabriel and I wrote it because we disagreed on the answer,
> and it goes back and forth like a dialog in places, suggesting one
> thing and then immediately countering it. If you find such places,
> that's probably interleaved paragraphs of him talking and me talking.

XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

--
Janis Dzerins

If a million people say a stupid thing, it's still a stupid thing.

David Rush

Mar 6, 2001, 4:04:55 AM
Ray Blaak <bl...@infomatch.com> writes:
> "Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > designed".

> Also, lack of a standard defstruct/define-record. Such a fundamental data
> structure is absolutely essential, and should be in the standard. That you can
> roll your own with the macro system alleviates this somewhat, but since this
> is so basic, one shouldn't need to invent this all the time.

Fixed. See SRFI-9 <http://srfi.schemers.org>

Extra credit: Tell what is the *real* broken part behind this.

david rush
--
The beginning of wisdom for a [software engineer] is to recognize the
difference between getting a program to work, and getting it right.
-- M A Jackson, 1975

Kent M Pitman

Mar 6, 2001, 6:19:30 AM
Janis Dzerins <jo...@latnet.lv> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > http://world.std.com/~pitman/Papers/Technical-Issues.html
> >
> > NOTE WELL: If you look closely, this paper reads a little like a
> > debate. Gabriel and I wrote it because we disagreed on the answer,
> > and it goes back and forth like a dialog in places, suggesting one
> > thing and then immediately countering it. If you find such places,
> > that's probably interleaved paragraphs of him talking and me talking.
>
> XPW: eXtreme paper-writing! (ok, just a subset -- pair paper-writing.)

At a distance, btw. 3000 miles. FWIW.

Kent M Pitman

Mar 6, 2001, 6:24:48 AM
Erik Naggum <er...@naggum.net> writes:

> * Bruce Hoult <br...@hoult.org>
> > Even if the function cell is known not to be data, what if it's empty?
> > Don't you have to check for that? Or are symbols in CL bound to some
> > sort of error function by default?
>

> You mean, unbound? What the implementation does with an unbound function
> slot in a symbol is not specified in the standard. One smart way is to
> make the internal representation of "unbound" be a function that signals
> the appropriate error. That would make function calls faster, and you
> could not have optimized away the check for boundness if you asked for
> the functional value, anyway. Note that the user of this code would
> never know how you represented the unbound value unless he peeked under
> the hood, say by inspecting a symbol.

Exactly. This is the efficiency issue I mentioned, which cannot be
duplicated in a Lisp1 without either massive theorem proving (takes lots
of time) or declarations (which Scheme, for example, won't do, it seems
to me at least partially because the same minimalist mindset that drives
them to want to be a Lisp1 also drives them to want to be declaration-free).
Consequently, unless you are happy with just having programs execute
machine level garbage, there are certain function calls which are inherently
faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
CL and Scheme designers believe) that functions are called more often than
they are defined. A Lisp2 can take advantage of this to check once at
definition time, but a Lisp1 cannot take advantage because it can't
(due to the halting problem) check the data flow into every (f x) to be
sure that f contained a valid machine-runnable function.
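
The user-visible half of this is well defined in CL regardless of the
representation trick: calling a name with no function binding signals
UNDEFINED-FUNCTION. A small sketch:

  (handler-case (funcall 'no-such-function 1 2)
    (undefined-function (c) (cell-error-name c)))   ; => NO-SUCH-FUNCTION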

Bruce Hoult

Mar 6, 2001, 6:58:55 AM
In article <sfw4rx7...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

I see at least two serious problems with this argument:

1) you appear to be assuming that "Lisp1" is identically equal to
"Scheme", when that's not the case at all. Well, I know you *invented*
the term "Lisp1", but I understand that you defined it in terms of the
number of namespaces and not "actually, when I say Lisp1 I *really* mean
Scheme but don't want to say so".

Other Lisp1's, such as Dylan, do in fact have declarations which enable
the compiler, just as in a Lisp2, to put any necessary type checks at
the point of assignment of the function instead of the point of use.


2) if the ability to move type checks from the point of use to the point
of definition is in fact so important then why do it only for *function*
values? Why not do it for integers, floats, chars, strings, arrays,
lists? Perhaps each symbol should have a slot for a possible integer
binding, a slot for a possible float binding, a slot for a possible char
binding, a slot for a possible string binding, a slot for a possible
array binding, and a slot for a possible pair binding?

If you do, for example, (CAR X) then the CAR operator will go direct to
the slot in the symbol X that has been reserved for pairs. No type
check is necessary. It is undefined what happens if that slot is
unbound, but perhaps a smart implementation will put a pair there which
refers to itself, or maybe which contains an illegal hardware address to
cause a controlled fault?


But what about user-defined types, such as classes? There are an
infinite number of those possible. You can't reserve a slot in every
symbol for each one.


At some point doesn't it just become easier to break down and use type
declarations and symbols that can be bound to only one value at any
given time?

Or is the benefit from not having to type check function calls *so* much
greater than the benefit from not having to type check integer addition
or CAR/CDR that two namespaces (and not type declarations) is the
optimum answer? I wouldn't have thought so.

-- Bruce

Kent M Pitman

Mar 6, 2001, 8:56:33 AM
Bruce Hoult <br...@hoult.org> writes:

Absolutely. This thread started out discussing Scheme and namespaces, though.
Just as a Lisp1 calls for hygienic macros where a Lisp2 doesn't, it also
calls for declarations.

> 2) if the ability to move type checks from the point of use to the point
> of definition is in fact so important then why do it only for *function*
> values?

Because when you illegally reference a pointer, the worst you get is
generally a pointer into a non-existent page. When you jump to garbage
thinking it's machine executable data, the worst case can be much worse:
it could be an integer whose bit configuration coincidentally says
"delete all my files".

> Why not do it for integers, floats, chars, strings, arrays,
> lists?

Not a bad plan, but not as essential, in the sense of image integrity.

> ...


> At some point doesn't it just become easier to break down and use type
> declarations and symbols that can be bound to only one value at any
> given time?

No. Because the decision to use only one namespace is expressionally
limiting. I simply would not want to use only one namespace for
expressional reasons. I'm only using the technical argument to reinforce
that this is a sound choice.



> Or is the benefit from not having to type check function calls *so* much
> greater than the benefit from not having to type check integer addition
> or CAR/CDR that two namespaces (and not type declarations) is the
> optimum answer? I wouldn't have thought so.

I personally think so. Perhaps this is just an opinion. I haven't
coded machine code in a long time, so it's possible that the
equivalent "danger" has been created in other areas since then, but
function calling in my day used to be special (danger-wise) in the way
I'm describing, in a way ordinary data is not.

Harvey J. Stein

Mar 6, 2001, 9:25:01 AM
Ray Blaak <bl...@infomatch.com> writes:

> For scheme: Lack of standardized namespaces. Many (most?)
> implementations supply some sort of module/unit support, but
> porting between them is a pain.
>
> Without decent static namespaces, programming "in the large" is not
> properly doable.

I guess that depends on what you mean by "properly" doable. There are
other languages which are used for programming "in the large" that
don't have namespaces. It's nice to have namespaces & it's a little
ugly to work around not having them. But in Scheme & Lisp working
around something is not nearly as hard as it is in languages like C &
C++.

In that a design fault is the failure to meet a design goal, I find it
hard to say that the lack of namespaces is a design fault in Scheme.
After all, programming "in the large" wasn't one of the design goals
for Scheme.

Scheme was intended to be simple and concise. If all the "design
faults" people have mentioned were incorporated into the language,
then it wouldn't be simple & concise any longer. It would no longer
meet its design criteria & *then* would have a design fault.

--
Harvey Stein
Bloomberg LP
hjs...@bfr.co.il

Tim Bradshaw

Mar 6, 2001, 9:19:38 AM
* Bruce Hoult wrote:

> 2) if the ability to move type checks from the point of use to the point
> of definition is in fact so important then why do it only for *function*
> values? Why not do it for integers, floats, chars, strings, arrays,
> lists? Perhaps each symbol should have a slot for a possible integer
> binding, a slot for a pssible float binding, a slot for a possible char
> binding, a slot for a possible string binding, a slot for a possible
> array binding, and a slot for a possible pair binding?

I think the fact that the language considers function call so
important that it has a special syntax to support it might be a clue
here. It looks like you need one.

--tim

Marco Antoniotti

Mar 6, 2001, 9:58:29 AM

David Rush <ku...@bellsouth.net> writes:

> Ray Blaak <bl...@infomatch.com> writes:
> > "Julian Morrison" <jul...@extropy.demon.co.uk> writes:
> > > Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> > > designed".
>
> > Also, lack of a standard defstruct/define-record. Such a fundamental data
> > structure is absolutely essential, and should be in the standard. That you can
> > roll your own with the macro system alleviates this somewhat, but since this
> > is so basic, one shouldn't need to invent this all the time.
>
> Fixed. See SRFI-9 <http://srfi.schemers.org>
>
> Extra credit: Tell what is the *real* broken part behind this.

That it took more than 10 years to fix (17 since CLtL1), that it is not in
R^nRS, and that a staggering amount of resources was thrown at this just
to reinvent a wheel?

brl...@my-deja.com

Mar 6, 2001, 10:14:58 AM
ds...@cogsci.ucsd.edu (David Fox) writes:

It makes sense to ask for lots of languages individually. But when
asked of two languages at once, it sounds like a comparison. I don't
think it's a fair comparison for these two languages. Scheme, designed
primarily for teaching purposes, is more likely to leave out a feature
than to include it with design flaws.

I define design flaws as problems that require backwards-incompatible
changes to fix. You wouldn't want examples in a textbook to stop
working because of such a change.

CL, used for large projects, doesn't have the luxury of waiting until
the best possible design has been created before implementing
something, so it's likely to include more design flaws. This doesn't
mean that you can choose Scheme for your project and build the missing
features yourself, and then expect the resulting system to have fewer
design flaws than if you started with CL.

(I don't actually use CL myself. Those who know what they're talking
about feel free to step in.)

Anyway, about design flaws. I think Scheme was generally designed well,
except some procedure names were chosen badly, such that they will likely
never be changed.

The Scheme designers started to translate Lisp into something more like
English, but didn't finish the job, e.g. consp --> pair?, but they left
cons, car and cdr untouched. Probably they had used them so long they
thought they *were* English words.

They should have used pair, first and rest. If they were too
uncomfortable about "rest" for non-list pairs (technically correct, but
not the word one would normally use), and consciously kept cons/car/cdr,
then they should have kept pair? as cons?, following the uniformity of
other constructors / type predicates.

Conversion functions, e.g. number->string, are named consistent with
imperative style. Wouldn't an expression like this...

(list->string (reverse (string->list word)))

...read a lot better like this...?

(string<-list (reverse (list<-string word)))

Tim Bradshaw

Mar 6, 2001, 9:38:15 AM
* Harvey J Stein wrote:

> Scheme was intended to be simple and concise. If all the "design
> faults" people have mentioned were incorporated into the language,
> then it wouldn't be simple & concise any longer. It would no longer
> meet its design criteria & *then* would have a design fault.

I think that the CL attitude would be that simple and concise *is* a
design fault in a language design, because it leads to complex and
verbose programs. At least both Scheme and CL are doing better than
C++ which succeeds neither in being a simple and concise language nor
in allowing simple and concise programs!

--tim

Duane Rettig

Mar 6, 2001, 12:30:53 PM

This is really more of a response to Bruce Hoult than to Kent Pitman,
but since Kent started a tentative argument in the direction I wanted
to go anyway, I am answering his article.

Up to this point, the arguments between Lisp1 and Lisp2 have either
been religious or aesthetic. I'd like to introduce an "implementational"
argument, that is, that the number of namespaces should closely follow
what the underlying hardware best implements. In the case of code vs
data, _all_ modern computer hardware of any significance establish a
clear distinction between code and data spaces, even though that
distinction could be blurred a little because the spaces tend to overlap
in practical situations. However, anyone who has had to deal with
cache-flushing mechanisms whenever establishing or moving a code vector
will see first-hand this distinction.

Kent M Pitman <pit...@world.std.com> writes:

> Bruce Hoult <br...@hoult.org> writes:
>
> > 2) if the ability to move type checks from the point of use to the point
> > of definition is in fact so important then why do it only for *function*
> > values?
>
> Because when you illegally reference a pointer, the worst you get is
> generally a pointer into a non-existent page. When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

A cogent argument, but I actually think it's more of an efficiency
argument than a safety argument. It's true that one architecture's
"garbage" is another architecture's machine instruction. Nowadays,
even newer versions of the "same" arhitecture will relegate a deprecated
bit pattern to an "emulation trap", so that the machine treats the code
as garbage (somewhat) but a trap handler will simulate an execution of
the instruction anyway. Taking this emulation a step further, any data
at all could be made to _look_ like instructions, with the proper
emulator (it doesn't even have to look like the same architecture as
the one doing the emulation). Any such emulation could possibly result
in the "delete all my files" coincidence. But the most efficient way
to do so :-) is through native code, as much as possible, where the
actual level of native-ness depends on your design and portability
requirements.

The way this all ties in with the Lisp1/Lisp2 argument is that if you
implement your lisp at a native-down-to-the-hardware level, you can
take advantage of codespace vectoring to perform your functionality
checks, as I believe Erik and Kent have discussed already, but if you
must treat your code as potential data, even though it is in a functional
position, then you must either make checks at runtime or elide them by
checking at compile-time. This reduces dynamicity. And since CL has
a way to transition from data to code (i.e. via funcall) it loses
nothing in practice.

> > Why not do it for integers, floats, chars, strings, arrays,
> > lists?
>
> Not a bad plan, but not as essential, in the sense of image integrity.

Along the same lines as my efficiency argument above: I submit that
Ansi C does this very thing, more and more as time progresses. Each
architecture has a Standard Calling convention, where register uses are
assigned. Many of the RISC architectures define a set of N integer
registers and a set of N floating point registers that will be used
to pass the first N arguments between functions. So, for example,
if a C function is defined as

int foo (int a, int b, float c, double d, int e);

then the arguments might be passed in gr1, gr2, fr3, fr4, and gr5,
respectively (gr => general or integer register, fr => float register).

The advantage of passing in this manner is one of efficiency. The
floating point units tend to be separate, and a move and/or conversion
to an integer register tends to add to the instruction and cycle count.
Passing a float argument in a float register is the "natural" thing to
do.

The disadvantage of this kind of passing is one of normalization (the
lack thereof); both caller and callee must agree on where the arguments
will be, or the results could be disastrous. For example, in the above
declaration, if the caller of foo placed the third argument into gr3
instead of fr3, then the argument seen would be garbage.

Performing a hand-wave, I conclude that the reasons for using the first
style vs the second style have to do with dynamism. The first style eschews
dynamism and the second style allows it. CL defines the second style
for its calling convention, and this allows maximum dynamism. As we lisp
vendors have had to provide foreign calling capabilities, such capabilities
inherently tend to force such resulting code to be static in nature, to
the extent that it is made efficient.

> > At some point doesn't it just become easier to break down and use type
> > declarations and symbols that can be bound to only one value at any
> > given time?
>
> No. Because the decision to use only one namespace is expressionally
> limiting. I simply would not want to use only one namespace for
> expressional reasons. I'm only using the technical argument to reinforce
> that this is a sound choice.

Perhaps when it comes down to it, the technical argument becomes the
only one. If one sticks only with arguments of Turing completeness, one
could argue that a Turing machine is just as good as a CL (better, in fact,
because it is simpler). Note that in rejecting this previous statement as
ridiculous, we all use the obvious efficiency argument to disprove the
statement, even if only subconsciously.

Perhaps the best way to answer Mr. Hoult's question is to invite him to
continue on with the thought process and to flesh out his design, to see
if he can come to a point where such multiple-bindings-per-type style
is really easier or not...

> > Or is the benefit from not having to type check functin calls *so* much
> > greater than the benefit from not having to type check integer addition
> > or CAR/CDR that two namespaces (and not type declarations) is the
> > optimum answer? I wouldn't have thought so.
>
> I personally think so. Perhaps this is just an opinion. I haven't
> coded machine code in a long time, so it's possible that the
> equivalent "danger" has been created in other areas since then, but
> function calling in my day used to be special (danger-wise) in the way
> I'm describing, in a way ordinary data is not.

Your instincts are good, as I have mentioned above with the float-vs-int
parameter passing. However, the code-vs-data has always been much more
distinctive in the Von Neumann model, and will probably always be the
most clear dividing line between namespaces.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Lieven Marchand

Mar 6, 2001, 11:50:54 AM
brl...@my-deja.com writes:

> The Scheme designers started to translate Lisp into something more like
> English, but didn't finish the job, e.g. consp --> pair?, but they left
> cons, car and cdr untouched. Probably they had used them so long they
> thought they *were* English words.
>
> They should have used pair, first and rest. If they were too
> uncomfortable about "rest" for non-list pairs (technically correct, but
> not the word one would normally use), and consciously kept cons/car/cdr,
> then they should have kept pair? as cons?, following the uniformity of
> other constructors / type predicates.
>

CL has both car/cdr and first/rest and while they are equivalent in
effect, there is an intended difference to the reader. The first pair
of functions is meant to be used for an abstraction called a cons,
that is an implementation of the mathematical concept of cartesian
product of the set of lisp object with itself. The second pair is
meant to be used for the abstraction "list". The fact that lists are
implemented with conses is a historical coincidence, not a necessity.
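
In code, the two pairs are interchangeable in effect and differ only in
the abstraction they signal:

  (car '(a b c))     ; => A        viewed as a cons cell
  (first '(a b c))   ; => A        viewed as a list
  (cdr '(a b c))     ; => (B C)
  (rest '(a b c))    ; => (B C)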

--
Lieven Marchand <m...@wyrd.be>
Glad and cheerful should every man be, until his death comes.

Joe Marshall

Mar 6, 2001, 2:02:19 PM
brl...@my-deja.com writes:

> The Scheme designers started to translate Lisp into something more like
> English, but didn't finish the job, e.g. consp --> pair?, but they left
> cons, car and cdr untouched. Probably they had used them so long they
> thought they *were* English words.
>
> They should have used pair, first and rest. If they were too
> uncomfortable about "rest" for non-list pairs (technically correct, but
> not the word one would normally use), and consciously kept cons/car/cdr,
> then they should have kept pair? as cons?, following the uniformity of
> other constructors / type predicates.

There are two types here: cons-cells and lists (which happen to be
implemented using cons cells as the backbone). So CAR and FIRST,
although implemented the same, are logically different. Likewise with
CDR and REST.

If you prefer writing CONS? (or CONSP) to writing PAIR?, it is easy
enough to fix.

> Conversion functions, e.g. number->string, are named consistent with
> imperative style. Wouldn't an expression like this...
>
> (list->string (reverse (string->list word)))
>
> ...read a lot better like this...?
>
> (string<-list (reverse (list<-string word)))

The latter version echoes the dataflow, but I don't think the former
is necessarily imperative. STRING->LIST is a function mapping from
elements in the domain of strings to elements in the domain of lists.
The function name echoes the usual English usage of `from A to B'.



Ray Blaak

Mar 6, 2001, 2:29:51 PM
hjs...@bfr.co.il (Harvey J. Stein) writes:
> Ray Blaak <bl...@infomatch.com> writes:
> > Without decent static namespaces, programming "in the large" is not
> > properly doable.
>
> I guess that depends on what you mean by "properly" doable.[...]

>
> In that a design fault is the failure to meet a design goal, I find it
> hard to say that the lack of namespaces is a design fault in Scheme.
> After all, programming "in the large" wasn't one of the design goals
> for Scheme.

Fair enough. What I like about Scheme, though, is how its core design is
fundamentally extensible in almost any direction. It is a great language for
experimentation.

The lack of standardized namespaces is the one missing core feature (well,
aside from optional type declarations) needed to be able to do anything in
Scheme.

Also, namespace support is present in many Scheme implementations, so even if
not an original design goal, it certainly is considered a useful feature. As
such, it would benefit from being standardized.

Marco Antoniotti

Mar 6, 2001, 2:49:07 PM

Ray Blaak <bl...@infomatch.com> writes:

...

> Fair enough. What I like about Scheme, though, is how its core design is
> fundamentally extensible in almost any direction. It is a great language for
> experimentation.

What I like about CL, though, is how its core design is fundamentally
extensible in almost any direction. It is a great language for
experimentation.

Given that CL gives you enormous advantages over Scheme in terms of
extensibility, why don't you switch? (Assuming you haven't turned to the
Dark Side yet :) )

> Also, namespace support is present in many Scheme implementations, so even if
> not an original design goal, it certainly is considered a useful feature. As
> such, it would benefit from being standardized.

Packages are in CL. They have been there since 1984. What else do you
need?
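
A minimal sketch of the package mechanism being referred to (the names
here are made up for illustration):

  (defpackage :my-utils
    (:use :common-lisp)
    (:export #:flatten))

  (in-package :my-utils)

  (defun flatten (tree)
    (cond ((null tree) '())
          ((atom tree) (list tree))
          (t (append (flatten (car tree)) (flatten (cdr tree))))))

  ;; From another package: (my-utils:flatten '(1 (2 (3)))) => (1 2 3)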

Joe Marshall

Mar 6, 2001, 3:20:39 PM
Kent M Pitman <pit...@world.std.com> writes:

> Exactly. This is the efficiency issue I mentioned, which cannot be
> duplicated in a Lisp1 without either massive theorem proving (takes lots
> of time) or declarations (which Scheme, for example, won't do, it seems
> to me at least partially because the same minimalist mindset that drives
> them to want to be a Lisp1 also drives them to want to be declaration-free).
> Consequently, unless you are happy with just having programs execute
> machine level garbage, there are certain function calls which are inherently
> faster in a Lisp2 than in a Lisp1, assuming you believe (as I believe both
> CL and Scheme designers believe) that functions are called more often than
> they are defined. A Lisp2 can take advantage of this to check once at
> definition time, but a Lisp1 cannot take advantage because it can't
> (due to the halting problem) check the data flow into every (f x) to be
> sure that f contained a valid machine-runnable function.

This is a red herring.

The issue of whether a particular address contains executable code
and whether it would be legal to load that address into the program
counter is an issue of linker protocol. Lisp hackers tend to forget
about linking because lisp links things on the fly and makes it easy
to run a partially linked image.

Having a particular `slot' in a symbol to hold the function value is
an implementation detail. There is no necessity for such a slot to
actually exist, but rather that such a slot *appear* to exist for the
intents and purposes of SYMBOL-FUNCTION and for free-references to the
function in code. What matters is that when a piece of code calls
function FOO, it either invokes the most recent piece of code
associated with FOO, or invokes the error handler for an `unbound
function'.

One way to implement this is to have a cell in every symbol that can
contain the `function' definition for that symbol. You could `link'
by having the compiler cause all function calls to push the symbol
naming the target function and jump to the linker. The linker would
then look in the function cell of the symbol, and if it finds a
function, jump to the entry point, otherwise jump to the `unbound
function' handler. You could call your linker `FUNCALL'.

Another way to implement this is to inline the linker functionality at
the call point. The compiler would `open code' funcall by inserting
the instructions to fetch the contents of the function cell, test to
ensure it is a function, and either jump to the entry point or to the
error handler.

You could go a step further. Arrange for the function cell to
*always* have a valid entry point, so that the `open coded funcall'
wouldn't have to check the validity. The default entry point would be
the error handler.

But why stop there?

You could arrange for the compiler to go one step further: rather
than open coding a funcall, it could simply place a jump or call
template in the code itself. In essence, there is no longer one
function cell, but a set of function cells --- one at each call
point. The code that implements SYMBOL-FUNCTION would be much more
complicated, of course. (Note, too, that some architectures may
not be amenable to this since it requires patching code on the fly).

Take it further: do arity checking at link time. Only link to those
functions when the number of arguments is correct.

And further: arrange for multiple function entry points. Link to the
appropriate one based upon arity (for optional and rest arguments).
Special case to allow unboxed floats.

Why does this require a separate function and value space? It
doesn't. The same techniques will work in a single namespace lisp,
and the resulting code will run as quickly (why would a jump
instruction care what the source code looks like?) The difference
occurs in the ease of implementation. In a two-namespace lisp, the
more complicated you make the linker protocol, the more complicated
SYMBOL-FUNCTION and (SETF SYMBOL-FUNCTION) have to be. In a
one-namespace lisp, this complexity will extend to SETQ and
special-variable binding as well.

There is no need for dataflow analysis or declarations.

If you would like more detail on how this works in practice, email me.
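
The user-visible protocol all of the above has to preserve is small;
everything else is implementation latitude. A minimal sketch:

  (defun greet () 'hello)
  (symbol-function 'greet)                           ; => #<FUNCTION GREET>
  (setf (symbol-function 'greet) (lambda () 'goodbye))
  (greet)                                            ; => GOODBYE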

Joe Marshall

Mar 6, 2001, 3:27:30 PM
Kent M Pitman <pit...@world.std.com> writes:

> Because when you illegally reference a pointer, the worst you get is
> generally a pointer into a non-existent page. When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

The worst you can get when you dereference an illegal pointer is
destruction of your hardware. I remember a nasty garbage collector bug
that caused a stray write to the screen controller in a PC. Certain values
loaded into the controller could cause physical damage to the screen.
In this case, the screen was completely trashed.

Another case I remember involved landing the heads of a disk drive off
the platter, then attempting a seek to an inner cylinder.

Come to think of it, I don't think I've *ever* heard of a stray jump
causing all files to be deleted.

Kent M Pitman

Mar 6, 2001, 5:27:10 PM
Joe Marshall <j...@content-integrity.com> writes:

> Kent M Pitman <pit...@world.std.com> writes:
>
> > Exactly. This is the efficiency issue I mentioned, which cannot
> > be duplicated in a Lisp1 without either massive theorem proving
> > (takes lots of time) or declarations (which Scheme, for example,
> > won't do, it seems to me at least partially because the same
> > minimalist mindset that drives them to want to be a Lisp1 also
> > drives them to want to be declaration-free).
> >
> > Consequently, unless you are happy with just having programs
> > execute machine level garbage, there are certain function calls
> > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > you believe (as I believe both CL and Scheme designers believe)
> > that functions are called more often than they are defined. A
> > Lisp2 can take advantage of this to check once at definition time,
> > but a Lisp1 cannot take advantage because it can't (due to the
> > halting problem) check the data flow into every (f x) to be sure
> > that f contained a valid machine-runnable function.
>
> This is a red herring.

Well, I don't agree.



> The issue of whether a particular address contains executable code
> and whether it would be legal to load that address into the program
> counter is an issue of linker protocol. Lisp hackers tend to forget
> about linking because lisp links things on the fly and makes it easy
> to run a partially linked image.

And modern programmers tend to assume the only hardware Lisp was designed
for is the stuff you can buy right now. On the PDP10, for example, you
could load the contents of any address into memory and execute it.
And you could just JRST or JSP to any memory location. The linker
was not involved.

Surely it is the case that there are operating systems that protect you
better, and maybe increasingly this is how operating systems are designed.
But CL is not designed merely to accommodate a specific memory architecture
or operating system.

Joe Marshall

Mar 6, 2001, 6:14:50 PM
Kent M Pitman <pit...@world.std.com> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > Kent M Pitman <pit...@world.std.com> writes:
> >
> > > Exactly. This is the efficiency issue I mentioned, which cannot
> > > be duplicated in a Lisp1 without either massive theorem proving
> > > (takes lots of time) or declarations (which Scheme, for example,
> > > won't do, it seems to me at least partially because the same
> > > minimalist mindset that drives them to want to be a Lisp1 also
> > > drives them to want to be declaration-free).
> > >
> > > Consequently, unless you are happy with just having programs
> > > execute machine level garbage, there are certain function calls
> > > which are inherently faster in a Lisp2 than in a Lisp1, assuming
> > > you believe (as I believe both CL and Scheme designers believe)
> > > that functions are called more often than they are defined. A
> > > Lisp2 can take advantage of this to check once at definition time,
> > > but a Lisp1 cannot take advantage because it can't (due to the
> > > halting problem) check the data flow into every (f x) to be sure
> > > that f contained a valid machine-runnable function.
> >
> > This is a red herring.
>
> Well, I don't agree.

I understand that you do, but I have outlined a mechanism that is used
in practice and appears to refute your claim.

> > The issue of whether a particular address contains executable code
> > and whether it would be legal to load that address into the program
> > counter is an issue of linker protocol. Lisp hackers tend to forget
> > about linking because lisp links things on the fly and makes it easy
> > to run a partially linked image.
>
> And modern programmers tend to assume the only hardware Lisp was designed
> for is the stuff you can buy right now.

I am assuming modern hardware.

> On the PDP10, for example, you could load the contents of any
> address into memory and execute it. And you could just JRST or JSP
> to any memory location. The linker was not involved.

I am speaking of the `linker' as the abstract `thing that resolves
jump targets', not LD or whatever the OS provides.

> Surely it is the case that there are operating systems that protect you
> better, and maybe increasingly this is how operating systems are designed.
> But CL is not designed merely to accommodate a specific memory architecture
> or operating system.

Actually, the more modern operating systems are *more* amenable to
this technique, not less (because of DLLs and shared libraries). Any
OS that allows dynamic loading of code has enough power to link in the
way I described.

This technique doesn't require anything unusual or too implementation
dependent, just a bit of cleverness.

Kent M Pitman

Mar 6, 2001, 7:11:39 PM
Joe Marshall <j...@content-integrity.com> writes:

> I understand that you do, but I have outlined a mechanism that is used
> in practice and appears to refute your claim.

Then what I'm saying is that you might have a fixnum pointer whose
backing store held an instruction which was a syntactically valid
instruction to execute. It could, for example, be seen as a system
call. And yet you could do (setq x that-fixnum) and if you could just
funcall to it without checking it for pointerness (as in the PDP10
bibop scheme, where checking meant consulting an external table), then
you'd end up jumping to garbage and executing it. (We used to do this
stuff intentionally in Maclisp. But if you do it by accident, it's
scary. Now, a loader, either whole-image loader or a dynamic loader,
might protect you. But it might not. That's my only point.)

> > > The issue of whether a particular address contains executable code
> > > and whether it would be legal to load that address into the program
> > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > about linking because lisp links things on the fly and makes it easy
> > > to run a partially linked image.
> >
> > And modern programmers tend to assume the only hardware Lisp was designed
> > for is the stuff you can buy right now.
>
> I am assuming modern hardware.

"current" hardware. My point is that hardware continues to change and not
all changes are monotonically in a given direction. You cannot quantify over
existing operating systems and assume you have quantified over the target
platforms for CL.

It would have been possible to pick a set of plausible architectures and
work over only those, and that would have led to a much different language.
I think more short-sighted but the trade-off might be "more useful". I'm
not taking a position on that. Dylan is an example of a language that I
think I remember making some very specific tactical assumptions about the
architecture (e.g., for numbers and character codes, maybe other things
too, like files).

Joe Marshall

unread,
Mar 6, 2001, 7:40:53 PM3/6/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > I understand that you do, but I have outlined a mechanism that is used
> > in practice and appears to refute your claim.
>
> Then what I'm saying is that you might have a fixnum pointer whose
> backing store held an instruction which was a syntactically valid
> instruction to execute. It could, for example, be seen as a system
> call. And yet you could do (setq x that-fixnum) and if you could just
> funcall to it without checking it for pointerness (as in the PDP10
> bibop scheme, where checking meant consulting an external table), then
> you'd end up jumping to garbage and executing it. (We used to do this
> stuff intentionally in Maclisp. But if you do it by accident, it's
> scary. Now, a loader, either whole-image loader or a dynamic loader,
> might protect you. But it might not. That's my only point.)

Yes, I understand this point. I'm arguing that you don't need to have
separate function and value namespaces in the source language in order
to efficiently deal with functions at compile, link, or run time.
Assume, for the moment, that your code has an expression (foo 'bar)
where FOO is free. At link time, you check to see if FOO is bound to
a function. If it is, you arrange for the code to be linked to FOO,
either directly (by modifying the code itself) or indirectly (by
modifying a jump table or uuo link, or even having a special `function
cell' associated with the symbol FOO).

Now supposing at some later time, someone does (setq foo 42). You
arrange to invalidate the links to the function FOO. You can do this
via hash tables, weak pointers in the symbol, groveling through all of
memory, or replacing the trampoline in the `function cell'. Now every
place that used to call FOO directly ends up calling an error handler
trampoline, instead.

But all of this is *implementation* detail. It can be done regardless
of whether your source language has a separate namespace for functions
and variables or not.

It is certainly the case that the *implementation* can (and ought to)
treat functions and values as different things.
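
To make the shape of this concrete, here is a toy Common Lisp rendering
of the assignment-time check, with an explicit cell structure standing
in for the link table; FCELL, FCELL-CALL and FCELL-SET are invented
names for the sketch, not anyone's actual implementation:

(defstruct fcell
  ;; VALUE is what the source-level binding holds; CODE is what call
  ;; sites jump through, and always points at something runnable.
  (value nil)
  (code (lambda (&rest args)
          (declare (ignore args))
          (error "Called an unbound cell."))
        :type function))

(defun fcell-call (cell &rest args)
  ;; Call sites never test the value; they just go through CODE.
  (apply (fcell-code cell) args))

(defun fcell-set (cell new-value)
  ;; The single check happens here, at assignment time.  A function is
  ;; installed directly; anything else installs an error trampoline.
  (setf (fcell-value cell) new-value
        (fcell-code cell)
        (if (functionp new-value)
            new-value
            (lambda (&rest args)
              (declare (ignore args))
              (error "~S is not a function." new-value))))
  new-value)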

> > > > The issue of whether a particular address contains executable code
> > > > and whether it would be legal to load that address into the program
> > > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > > about linking because lisp links things on the fly and makes it easy
> > > > to run a partially linked image.
> > >
> > > And modern programmers tend to assume the only hardware Lisp was designed
> > > for is the stuff you can buy right now.
> >
> > I am assuming modern hardware.
>
> "current" hardware.

Something like a MIPS, Alpha or Pentium, for example.

> My point is that hardware continues to change and not
> all changes are monotonically in a given direction. You cannot quantify over
> existing operating systems and assume you have quantified over the target
> platforms for CL.

No, but there are some generalizations I can make. For instance, if
the OS disallows dynamic loading of code, you can't use UUO links. On
the other hand, you couldn't incrementally compile, either.

> It would have been possible to pick a set of plausible architectures and
> work over only those, and that would have led to a much different language.
> I think more short-sighted but the trade-off might be "more useful". I'm
> not taking a position on that. Dylan is an example of a language that I
> think I remember making some very specific tactical assumptions about the
> architecture (e.g., for numbers and character codes, maybe other things
> too, like files).

I'm not suggesting that Common Lisp adopt a single namespace, or that
a single namespace is `better' than two namespaces. I'm asserting
that a single namespace can be implemented with no less efficiency
than a dual namespace, and that such an implementation does not
require declarations or complex dataflow analysis. This is because
the mechanism of linking does not depend on what happens at the
syntactic level.

Bruce Hoult

unread,
Mar 6, 2001, 7:49:39 PM3/6/01
to
In article <ey3r90b...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

> I think the fact that the language considers function call so
> important that it has a special syntax to support it might be a clue
> here. It looks like you need one.

I can't think offhand of any language that *doesn't* have special syntax
to support function calls, so that's hardly a distinguishing feature of
Common Lisp.

-- Bruce

Michael Parker

unread,
Mar 6, 2001, 8:41:06 PM3/6/01
to

Forth.

Bruce Hoult

unread,
Mar 6, 2001, 10:20:25 PM3/6/01
to
In article <31929212...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Bruce Hoult <br...@hoult.org>


> > I can't think offhand of any language that *doesn't* have special syntax
> > to support function calls, so that's hardly a distinguishing feature of
> > Common Lisp.
>

> Scheme works very, very hard not to distinguish a function call from
> any other variable reference. And vice versa. At least give them credit
> for having achieved that, even though it is a fundamentally silly thing
> to want to do.

How is that? If you see something at the start of a non-quoted list
then you know it must be a reference to a function (or possibly, an
error).

That's just as special as, say, putting the reference to the function
outside (in front of) the argument list.

-- Bruce

Rob Warnock

unread,
Mar 6, 2001, 10:59:21 PM3/6/01
to
Bruce Hoult <br...@hoult.org> wrote:
+---------------

| Erik Naggum <er...@naggum.net> wrote:
| > Scheme works very, very hard not to distinguish a function call from
| > any other variable reference. And vice versa. At least give them
| > credit for having achieved that, even though it is a fundamentally
| > silly thing to want to do.
|
| How is that? If you see something at the start of a non-quoted list
| then you know it must be a reference to a function (or possibly, an error).
+---------------

I think what Erik might be referring to is that Scheme insists that the
evaluator use *THE EXACT SAME* evaluation rules on the function position
as on the argument positions. That is, the evaluator basically does this:

(let ((evaled-args (mapcar #'eval exp)))
  (apply (car evaled-args) (cdr evaled-args)))

[Except the "mapcar" is *not* required to execute left-to-right or
right-to-left or any other fixed order -- only *some* serializable order.]

That lets Scheme get away with writing stuff like this, where the function
position can be an arbitrary expression:

> (define x 13)
> ((if (odd? x) + *) 2 3)
5
>

instead of as in CL:

> (defvar x 13)
X
> (funcall (if (oddp x) #'+ #'*) 2 3)
5
>

[In CL, of course, the Scheme style is an error: ]

> ((if (oddp x) #'+ #'*) 2 3)

*** - EVAL: (IF (ODDP X) #'+ #'*) is not a function name
1. Break>

Now do Scheme programmers ever *use* that generality? Actually, very
seldom. I've used it maybe a couple of times, total, in several years of
Scheme hacking. I probably wouldn't even miss it much if it were gone.
(You'd still have "apply", and you can trivially define "funcall" in
terms of "apply".)


-Rob

-----
Rob Warnock, 31-2-510 rp...@sgi.com
SGI Network Engineering <URL:http://reality.sgi.com/rpw3/>
1600 Amphitheatre Pkwy. Phone: 650-933-1673
Mountain View, CA 94043 PP-ASEL-IA

David Rush

unread,
Mar 7, 2001, 12:25:13 AM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
> How do I get real symbols in Scheme? How do I get two namespaces? How
> do I get a programmable reader that the compiler will also obey? How do
> I get packages?

These are all well-solved problems, Erik. If you bothered to be as
well-informed on Scheme as you are on CL, you would know that.

> Fact is, Scheme is _not_ extensible in any direction that can be
> construed as being towards Common Lisp.

Fact is, you are confusing your opinions with facts. Try not to repeat
this mistake in the future.

david rush
--
To get anywhere with programming we must be free to discuss and
improve subjective phenomena. and leave the objective metrics to
resultants such as bug reports.
-- The Programmer's Stone (Alan Carter & Colston Sanger)

Tim Bradshaw

unread,
Mar 6, 2001, 9:24:57 PM3/6/01
to

It is, however, a good indication that treating functions as a special
case is a useful and practical thing to do (in *all* languages), while
inventing special cases for every possible type might be less useful.

--tim


Ray Blaak

unread,
Mar 7, 2001, 2:25:38 AM3/7/01
to
Kent M Pitman <pit...@world.std.com> writes:
> Exactly. This is the efficiency issue I mentioned, which cannot be
> duplicated in a Lisp1 without either massive theorem proving (takes lots of
> time) or declarations [...] Consequently, unless you are happy with just
> having programs execute machine level garbage, there are certain function
> calls which are inherently faster in a Lisp2 than in a Lisp1 [...] A Lisp2
> can take advantage of this to check once at definition time, but a Lisp1
> cannot take advantage because it can't (due to the halting problem) check the
> data flow into every (f x) to be sure that f contained a valid
> machine-runnable function.

Another solution to the Lisp1 function-call efficiency problem that does not
require type declarations (although that suits my language preferences just
fine) is to have function bindings be (usually) immutable.

That way one can guarantee not only that a valid function is bound to the
symbol, but that a *particular* function is bound, allowing for further
optimizations.

For example, one could have (in some Lispy language that is not currently
Scheme or CL or anything in particular):

(define-constant foo (lambda (blah) blah))

or

(define (foo blah) blah)

or even

(defun foo (blah) blah)

all create immutable bindings.

If one truly does need functional variables, then the general symbol binding
can still be available to do the job:

(define foo (lambda (blah) blah))

This idea comes from Henry Baker's paper "Critique of DIN Kernel Lisp
Definition Version 1.2" at http://linux.rice.edu/~rahul/hbaker/CritLisp.html

One still has the problem of what to do with local function bindings, e.g.

(let ((foo (lambda (blah) blah)))
...)

Baker also recommends (for more general reasons) that (let ...) should create
immutable bindings by default, which would solve this problem. Alternatively,
data flow analysis in a (let ...) construct might be practical enough to allow
for function calls to be made efficient. Local (defun foo ...) or (define (foo
...) ...) declarations would also work.

At any rate, with such an approach, type declarations are no longer needed for
efficient function calls. Note however, that this approach requires strict
lexical scoping in order to work.
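
A very rough Common Lisp emulation of the idea, just to show its shape
-- DEFINE-IMMUTABLE and *IMMUTABLE-NAMES* are invented names, and a real
compiler would use the recorded promise to wire calls directly rather
than merely complain on redefinition:

(defvar *immutable-names* (make-hash-table :test #'eq)
  "Names whose function bindings are promised never to change.")

(defmacro define-immutable (name lambda-list &body body)
  "Like DEFUN, but records a promise that NAME's function binding is
permanent, so calls to it could be resolved once, at definition time."
  `(progn
     (when (gethash ',name *immutable-names*)
       (error "~S is immutable and may not be redefined." ',name))
     (setf (gethash ',name *immutable-names*) t)
     (defun ,name ,lambda-list ,@body)))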

Frode Vatvedt Fjeld

unread,
Mar 7, 2001, 2:47:24 AM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

> Now supposing at some later time, someone does (setq foo 42). You

> arrange to invalidate the links to the function FOO. [...]

But this means _every_ setq needs to check if the previous value was a
function, no? Thus you've just changed the time of error checking from
apply-time to setq-time, which is still expectedly much more frequent
than defun-time.

--
Frode Vatvedt Fjeld

Duane Rettig

unread,
Mar 7, 2001, 3:16:09 AM3/7/01
to
Michael Parker <des...@pdq.net> writes:

Actually, in Forth _every_ operation is a function call (or, more
properly, a word execution). Even a variable reference is defined
by the <builds does> construct as the execution of a "variable" class
of word which places its address on the stack. Forth goes to the
opposite extreme; instead of everything being data, everything is
code...

But, then again, I suppose you could argue that this verifies your
argument that a function call is not a special thing in Forth, since
it is the _only_ thing.

Janis Dzerins

unread,
Mar 7, 2001, 3:49:48 AM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

But that's a flawed assertion. All you have described is an emulation of
two namespaces with one namespace. How can they be equally efficient?

--
Janis Dzerins

If million people say a stupid thing it's still a stupid thing.

Hallvard B Furuseth

unread,
Mar 7, 2001, 4:07:03 AM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
> Scheme works very, very hard to not to distinguish a function call from
> any other variable reference. And vice versa. At least give them credit
> for having achieved that, even though it is a fundamentally silly thing
> to want to do.

It would be silly in CL, but it seems curiously *right* for Scheme.
Scheme is fond of exposing the programmer to Neat Ideas, and "Code Is
Data" is the Neatest Idea I ever saw in programming. If Scheme didn't
reflect _that_, what would be the point of the language?

--
Hallvard

Bruce Hoult

unread,
Mar 7, 2001, 5:40:25 AM3/7/01
to
In article <31928740...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Bruce Hoult <br...@hoult.org>


> > 2) if the ability to move type checks from the point of use
> > to the point of definition is in fact so important then why
> > do it only for *function* values? Why not do it for integers,
> > floats, chars, strings, arrays, lists?
>
> Because of the distinct way function values are used.

And what is the nature of this "distinct way"? Do you mean that
function calling is extremely common? Or that functions are mostly
"called" while other values are "used"? Or something else? Syntax?


> Incidentally, I prefer to think of arrays as functions and
> I'm annoyed by the way I have to refer to slots in arrays in
> Common Lisp, but Scheme suffers from the same problem, only
> more so.

I don't have a problem with that. Arrays are functions over
discrete-valued arguments. The mutability is interesting, of course,
but at any given time it's a function.


> > Perhaps each symbol should have a slot for a possible
> > integer binding, a slot for a possible float binding, a
> > slot for a possible char binding, a slot for a possible
> > string binding, a slot for a possible array binding,
> > and a slot for a possible pair binding?
>
> Pardon me for being such a party pooper, but what would you
> do with these slots? Do you want integer-let and float-let
> binding forms

Well, *I* wouldn't like such a language, but yes things such as
float-let would probably be what you would do. Or you could go the Perl
route (which I dislike, but I use it).


> the above ridiculous suggestion suggests that you fail
> completely to see the need for cost/benefit analyses of both
> the Lisp2 approach and your own silly exaggerations.

No, not at all. I'm trying to explore what the benefits of a
user-visible function slot in symbols is, and why you'd want that but
not other user-visible slots for different types of values.

One of the very first things I learned when studying software design a
couple of decades ago was that the only numbers that make sense as an
answer to "how many" were "zero", "one", and "unlimited". Now of course
that's not an iron-clad rule, but I think you've got to make a pretty
good case before you break it.


> > At some point doesn't it just become easier to break down and
> > use type declarations and symbols that can be bound to only
> > one value at any given time?
>

> You seem not to grasp the point that has been made several
> times: That the functional value is used very, very differently
> from all other types of values. In fact so differently that
> the benefits of separating it from the rest has a relatively
> low cost and relatively many benefits. As long as you see this
> is in terms of type declarations and general types, you will
> not see any of the benefits that come from separating only the
> functional value from the rest of the types.

I can see reasons to separate it in the implementation -- basically as
some sort of a cached pointer to stuff known to be code -- but I can't
see any good reason to expose this to the programmer.


> But suppose we let arrays be like functions. An array reference
> is then like a function call with the array indices as arguments.
> In a Lisp1 without mandatory and explicit type declarations or
> sufficient type inference, whatever actually does the function
> calls would have to make a decision whether to do the array
> reference or make a function call, for every function call and
> array reference.

Why wouldn't you make the array reference *be* a function call that has
the address of the data as a closure value and grabs the index values
and does the right thing?
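
In Common Lisp terms that would be something like the following
hypothetical wrapper (MAKE-ARRAY-FUNCTION is a made-up name):

;; Wrap an array in a closure, callable with its indices.
(defun make-array-function (array)
  (lambda (&rest indices)
    (apply #'aref array indices)))

;; (funcall (make-array-function #2A((1 2) (3 4))) 1 0)  => 3
;; In a Lisp1 the outer FUNCALL would of course be implicit.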


> > Or is the benefit from not having to type check function calls *so* much
> > greater than the benefit from not having to type check integer addition
> > or CAR/CDR that two namespaces (and not type declarations) is the
> > optimum answer? I wouldn't have thought so.
>

> The Lisp1 mindset almost forces you to think this is about types.
> The fact that the first position in a function call form must have
> the functional type is a consequence of the design decision, not
> the design decision itself.

The design decision itself being?


> This is pretty easy to see if you shed the Lisp1 mindset and
> really understand that type information is an optimization
> that is _optional_.

If it's _optional_, then why force the Lisp2 user to butt his head
against the distinction between function and non-function values, when
you could instead do that entirely within the implementation, as an
optimization?


> If you didn't optimize the way the functional value
> is typed, and did a type check at every function call

Those are not the only options.

A Lisp1 is free to provide a "code" slot along with the necessary data
slot within objects, and if the object is used in a function position
then it can blindly and safely jump to that code, provided only that it
maintains an appropriate invariant.

One method, for example, would be to have every set! using an unknown
value store the new value in the data slot, and as well check the type
of the value and fill the code slot with either a pointer to the actual
executable code or else to an error routine.

This would slow down every set! quite a bit -- but we're assuming that
set! is relatively rare. Another option would be to *always* have set!
blindly store the address of a special function in the code slot, such
that if the object is some time later used as a function the code will
at that time check the data slot to see if it contains a function, and
if so patch the code slot, ready for the next time.

In this way, what looks to the user to be (and is) a Lisp1, can gain
exactly the same benefits you claim to be unique to a Lisp2 -- the cost
being one extra memory store on each set!. Which, being to the same
cache line you're already writing to, is very nearly free on modern
machines.
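
A rough Common Lisp rendering of that second, lazy option, with an
explicit cell structure standing in for the hidden slots; L1-CELL,
CELL-SET! and CELL-CALL are invented names for the sketch:

(defstruct l1-cell
  value
  (code (lambda (&rest args)
          (declare (ignore args))
          (error "Unbound cell called."))))

(defun cell-set! (cell new-value)
  ;; Blind store of the value; the code slot gets a trampoline that
  ;; defers the function check to the first call, then patches itself.
  (setf (l1-cell-value cell) new-value
        (l1-cell-code cell)
        (lambda (&rest args)
          (let ((v (l1-cell-value cell)))
            (if (functionp v)
                (progn (setf (l1-cell-code cell) v)
                       (apply v args))
                (error "~S is not a function." v)))))
  new-value)

(defun cell-call (cell &rest args)
  ;; Callers never check; they just jump through the code slot.
  (apply (l1-cell-code cell) args))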

All this is assuming an implementation that makes no attempt at static
type checking or type propagation.

Which is a very curious level of implementation. It is not an
interpreter, since it is generating machine code from Lisp functions,
but it is being almost perverse in studiously *not* analysing the code
it is generating.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 5:49:50 AM3/7/01
to
In article <sfw8zmi...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> > The issue of whether a particular address contains executable code
> > and whether it would be legal to load that address into the program
> > counter is an issue of linker protocol. Lisp hackers tend to forget
> > about linking because lisp links things on the fly and makes it easy
> > to run a partially linked image.
>
> And modern programmers tend to assume the only hardware Lisp was designed
> for is the stuff you can buy right now. On the PDP10, for example, you
> could load the contents of any address into memory and execute it.
> And you could just JRST or JSP to any memory location. The linker
> was not involved.

Ah, this time it is you rather than me who raises the possibility of
decisions being time-dependent!

The reasons why Common Lisp was made the way it was in 1980 are very
interesting, but I don't think it is reasonable to ignore current
hardware and blindly maintain traditions that may (or may not) only be
appropriate to stuff that you can't buy any more.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:13:15 AM3/7/01
to
In article <sfwhf16...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > I understand that you do, but I have outlined a mechanism that is used
> > in practice and appears to refute your claim.
>
> Then what I'm saying is that you might have a fixnum pointer whose
> backing store held an instruction which was a syntactically valid
> instruction to execute. It could, for example, be seen as a system
> call. And yet you could do (setq x that-fixnum) and if you could just
> funcall to it without checking it for pointerness (as in the PDP10
> bibop scheme, where checking meant consulting an external table), then
> you'd end up jumping to garbage and executing it. (We used to do this
> stuff intentionally in Maclisp. But if you do it by accident, it's
> scary. Now, a loader, either whole-image loader or a dynamic loader,
> might protect you. But it might not. That's my only point.)

You can be protected simply by actually having a code pointer which
always points to valid code, and by arranging for setq to maintain that
invariant. This doesn't require exposing the existence of the code
pointer to the gaze of the programmer.

We're assuming, of course, that you can actually tell a fixnum pointer
from a function pointer by some means -- whether by a tag in the object
referred to, or bitfields in the pointer itself doesn't matter.


> > I am assuming modern hardware.
>
> "current" hardware. My point is that hardware continues to
> change and not all changes are monotonically in a given direction.

All the more reason to keep code slots as a mere implementation
technique, rather than explicitly exposing it to the programmer by
having a different namespace for it.


> I think more short-sighted but the trade-off might be "more
> useful". I'm not taking a position on that. Dylan is an
> example of a language that I think I remember making some
> very specific tactical assumptions about the architecture
> (e.g., for numbers and character codes, maybe other things
> too, like files).

The only assumption I'm aware of (from reading both the original 1992
prefix syntax book and the current reference manual) is that <integer>
should have at least 28 bits of precision. It is implementation-defined
whether <integer> operations are modulo, trap, or overflow into bignums
-- and in fact Harlequin Dylan (now Functional Developer) allows this to
be a compile-time choice via importing <integer> from one of several
different libraries.

Perhaps you could be more clear about what "tactical assumptions" you
are talking about?

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:21:46 AM3/7/01
to
In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
<fro...@acm.org> wrote:

You can easily make the setq merely set the code value to a known
constant function which does the error checking, thus deferring the vast
majority of the work to the first time the object is used as a function.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:26:51 AM3/7/01
to
In article <ey3ae6y...@cley.com>, Tim Bradshaw <t...@cley.com>
wrote:

> * Bruce Hoult wrote:
> > In article <ey3r90b...@cley.com>, Tim Bradshaw <t...@cley.com>
> > wrote:
>
> > I can't think offhand of any language that *doesn't* have special syntax
> > to support function calls, so that's hardly a distinguishing feature of
> > Common Lisp.
>
> It is, however, a good indication that treating functions as a special
> case is a useful and practical thing to do (in *all* languages), while
> inventing special cases for every possible type might be less useful.

A special *syntactic* case, certainly. Function call is commonly
signalled using (), while array indexing is signalled using [], pointer
dereference by * or ^ or ->, and so forth. This doesn't imply anything
about the implementation details.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:31:47 AM3/7/01
to
In article <sfwr90b...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> > 2) if the ability to move type checks from the point of use to the
> > point of definition is in fact so important then why do it only for
> > *function* values?
>

> Because when you illegally reference a pointer, the worst you get is
> generally a pointer into a non-existent page. When you jump to garbage
> thinking it's machine executable data, the worst case can be much worse:
> it could be an integer whose bit configuration coincidentally says
> "delete all my files".

I don't see anyone advocating not providing suitable checks at *all*.
Waterproof type-safety is an *extremely* important characteristic.


> > At some point doesn't it just become easier to break down and use type
> > declarations and symbols that can be bound to only one value at any
> > given time?
>

> No. Because the decision to use only one namespace is expressionally
> limiting. I simply would not want to use only one namespace for
> expressional reasons.

In what way is it limiting? Can you give examples where you would
habitually pun functions and values onto the same name?

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 7:25:20 AM3/7/01
to
In article <m3hf16x...@blight.transcend.org>, Ray Blaak
<bl...@infomatch.com> wrote:

> For example, one could have (in some Lispy language that is not currently
> Scheme or CL or anything in particular):
>
> (define-constant foo (lambda (blah) blah))
>
> or
>
> (define (foo blah) blah)
>
> or even
>
> (defun foo (blah) blah)
>
> all create immutable bindings.
>
> If one truly does need functional variables, then the general symbol binding
> can still be available to do the job:
>
> (define foo (lambda (blah) blah))

Which is exactly what Dylan does.

// immutable bindings
define constant foo = method(blah) blah end;
define method foo(blah) blah end;

// mutable binding
define variable foo = method(blah) blah end;


The difference between the first two examples is that define method
creates an implicit generic function (if it doesn't already exist)
whereas the define constant doesn't.

Both Dylan implementations now provide a "define function" macro that
expands to the "define constant" form.


> One still has the problem of what to do with local function bindings,
> e.g.
>
> (let ((foo (lambda (blah) blah)))
> ...)

Dylan provides a syntax...

local
method a() end,
method b() end,
method c() end;

... which provides immutable bindings for local functions that can be
mutually-recursive. You can of course also make mutable local function
bindings using let.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 7:46:00 AM3/7/01
to
In article <984bmp$1ag21$1...@fido.engr.sgi.com>, rp...@rigden.engr.sgi.com
(Rob Warnock) wrote:

> Bruce Hoult <br...@hoult.org> wrote:
> +---------------
> | Erik Naggum <er...@naggum.net> wrote:
> | > Scheme works very, very hard not to distinguish a function call from
> | > any other variable reference. And vice versa. At least give them
> | > credit for having achieved that, even though it is a fundamentally
> | > silly thing to want to do.
> |
> | How is that? If you see something at the start of a non-quoted list
> | then you know it must be a reference to a function (or possibly, an
> | error).
> +---------------
>
> I think what Erik might be referring to is that Scheme insists that the
> evaluator use *THE EXACT SAME* evaluation rules on the function position
> as on the argument positions. That is, the evaluator basically does this:
>
> (let ((evaled-args (mapcar #'eval exp)))
>   (apply (car evaled-args) (cdr evaled-args)))
>
> [Except the "mapcar" is *not* required to execute left-to-right or
> right-to-left or any other fixed order -- only *some* serializable
> order.]

Plus, of course, the compiler can optimize the hell out of it :-)


> That lets Scheme get away with writing stuff like this, where the
> function position can be an arbitrary expression:
>
> > (define x 13)
> > ((if (odd? x) + *) 2 3)
> 5

You can do the same in Dylan:

-----------------------------------------------------
module: funcall

define function test(x :: <integer>)
if (odd?(x)) \+ else \* end (2, 3)
end
-----------------------------------------------------
descriptor_t * funcallZfuncallZtest_FUN(descriptor_t *orig_sp, long A_x)
{
    descriptor_t *cluster_0_top;
    heapptr_t L_function; /* function */
    descriptor_t L_temp;
    descriptor_t L_temp_2;

    if (((A_x & 1) == 0)) {
        L_function = &dylanZdylan_visceraZV_HEAP;
    }
    else {
        L_function = &dylanZdylan_visceraZPLUS_HEAP;
    }

    L_temp.heapptr = funcallZliteral.heapptr;
    L_temp.dataword.l = 2;
    L_temp_2.heapptr = funcallZliteral.heapptr;
    L_temp_2.dataword.l = 3;
    orig_sp[0] = L_temp;
    orig_sp[1] = L_temp_2;
    cluster_0_top = GENERAL_ENTRY(L_function)(orig_sp + 2, L_function, 2);
    return cluster_0_top;
}
-----------------------------------------------------

(if you put "let x=13" inside the function then it would simply return 5)

> Now do Scheme programmers ever *use* that generality? Actually, very
> seldom. I've used it maybe a couple of times, total, in several years of
> Scheme hacking. I probably wouldn't even miss it much if it were gone.
> (You'd still have "apply", and you can trivially define "funcall" in
> terms of "apply".)

I don't think you'd often use an "if" in that position, but grabbing a
value out of an array, or calling a function that returns a function are
both pretty common, I suspect.

In certain versions of OO implemented in Scheme, styles such as...

((myObj 'set-foo) newVal)

... are natural.
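
Spelled out in Common Lisp, a closure-based object along those lines
might look like this (MAKE-POINT is a made-up example):

(defun make-point (x)
  ;; The object is a closure mapping a message to a method closure.
  (lambda (message)
    (ecase message
      (get-x (lambda () x))
      (set-x (lambda (new-x) (setf x new-x))))))

;; Lisp2 spelling:  (funcall (funcall obj 'set-x) 42)
;; Lisp1 spelling:  ((obj 'set-x) 42)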

-- Bruce

cbbr...@hex.net

unread,
Mar 7, 2001, 7:50:03 AM3/7/01
to
Duane Rettig <du...@franz.com> writes:

> Michael Parker <des...@pdq.net> writes:
> > Forth.
>
> Actually, in Forth _every_ operation is a function call (or, more
> properly, a word execution). Even a variable reference is defined
> by the <builds does> construct as the execution of a "variable" class
> of word which places its address on the stack. Forth goes to the
> opposite extreme; instead of everything being data, everything is
> code...
>
> But, then again, I suppose you could argue that this verifies your
> argument that a function call is not a special thing in Forth, since
> it is the _only_ thing.

Close, but possibly not _quite_ there; when Forth deals with comments,
there winds up being something _very_ vaguely like a CL *readtable*
involved.

And you can connect in your own parser if you need to; the default
behaviour of "everything is a space-separated WORD" is enforced by
WORD, but if you write words that parse input otherwise, then WORD
doesn't get involved, and Stranger Things Can Happen...

But to be sure, just about everything in Forth is a WORD, or "function
call," so that is pretty nearly the only thing...
--
(reverse (concatenate 'string "ac.notelrac.teneerf@" "454aa"))
http://www.ntlug.org/~cbbrowne/sap.html
"I once went to a shrink. He told me to speak freely. I did. The
damn fool tried to charge me $90 an hour."
-- ji...@qis.net (Jim Moore Jr)

Tim Bradshaw

unread,
Mar 7, 2001, 8:07:32 AM3/7/01
to
* Bruce Hoult wrote:

> A special *syntactic* case, certainly. Function call is commonly
> signalled using (), while array indexing is signalled using [], pointer
> dereference by * or ^ or ->, and so forth.

Note that Lisp (and Scheme) do *not* have special syntactic cases for
the things other than function call above. Function call is
considered that special.

> This doesn't imply anything about the implementation details.

OK, I think I'll give up now.

--tim

Kent M Pitman

unread,
Mar 7, 2001, 9:17:53 AM3/7/01
to
Bruce Hoult <br...@hoult.org> writes:

But here you have implementationally two namespaces. You're just hiding one.

Kent M Pitman

unread,
Mar 7, 2001, 9:38:42 AM3/7/01
to
Bruce Hoult <br...@hoult.org> writes:

> In article <sfw8zmi...@world.std.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > > The issue of whether a particular address contains executable code
> > > and whether it would be legal to load that address into the program
> > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > about linking because lisp links things on the fly and makes it easy
> > > to run a partially linked image.
> >
> > And modern programmers tend to assume the only hardware Lisp was designed
> > for is the stuff you can buy right now. On the PDP10, for example, you
> > could load the contents of any address into memory and execute it.
> > And you could just JRST or JSP to any memory location. The linker
> > was not involved.
>
> Ah, this time it is you rather than me who raises the possibility of
> decisions being time-dependent!

No, I observe others doing it. I'm saying design for older hardware is
just as relevant as design for newer hardware in a system that seeks to
be timeless. It's the people who are injecting phrases like "modern
operating systems do xxx" that are being time-dependent. They have perhaps
not had the "luxury" (and I use the term advisedly) of publishing a paper
that refers to "modern" something and then looking back at it 20 years later
to see how laughable it sounds. I've started trying to substitute words
like "contemporary" for modern in words I write to forums that I think will
survive into the future because it doesn't have the pejorative sense of
monotonic wisdom about it that I feel hides behind the use of "modern".



> The reasons why Common Lisp was made the way it was in 1980 are very
> interesting, but I don't think it is reasonable to ignore current
> hardware and blindly maintain traditions that may (or may not) only be
> appropriate to stuff that you can't buy any more.

To the contrary, I think its being built for those operating systems is
what caused CL to be timeless in its design. The people making the
decisions (and I was there at the time, but it wasn't me deciding
things like this, so it's not me patting myself on the back) were
sharp enough to realize that to have the language survive changes in
hardware over time, they ought not rely on features of hardware past
or present or future, but rather make a design that was as neutral as
possible.

To take a neutral (to this topic) example, they could have built in
ASCII encoding but they tolerated EBCDIC. Whether EBCDIC was or is on
the way out is of no relevance; the point is that avoiding ASCII
implicitly avoided assumptions about Unicode, and if/when the world
outgrows Unicode (assuming Unicode doesn't just outright kill the
character sets that got left out), CL's character model will continue
to apply where Dylan's (for example) will have to be revised at the
language level.

Likewise for memory and operating system. Geez, the "modern" operating
systems of the time were the Lisp Machine's. They had hardware assist
for GC and there were tons of cool assumptions they could have built in
to the language about what the operating system and hardware would or
wouldn't do for them. Automatic tracking of invisible pointers was a
big deal and would have allowed much easier implementation of some language
features, but reliance on it would have limited the set of target platforms.

I firmly believe it's short-sighted to assume that what happens today
is "better" than what happened yesterday just because it killed off
yesterday. Things run in cycles more than people like to admit. Good
ideas get killed as much for market or political reasons as technical
ones. Old concepts have a way of coming back--sometimes in our
lifetime if we're lucky enough not to have lost all remembrance of
them and have to invent them from scratch. And the best way I know to
insulate myself from the problems that could cost is not to rely on
either past OR present in situations where I don't need to.

Kent M Pitman

unread,
Mar 7, 2001, 9:55:21 AM3/7/01
to
Bruce Hoult <br...@hoult.org> writes:

> > I think more short-sighted but the trade-off might be "more
> > useful". I'm not taking a position on that. Dylan is an
> > example of a language that I think I remember making some
> > very specific tactical assumptions about the architecture
> > (e.g., for numbers and character codes, maybe other things
> > too, like files).
>
> The only assumption I'm aware of (from reading both the original 1992
> prefix syntax book and the current reference manual) is that <integer>
> should have at least 28 bits of precision.

I'll take your word for this, though I thought there was more.

> It is implementation-defined whether <integer> operations are modulo,
> trap, or overflow into bignums

This may be how they got out of what I thought was an "assumption".
(This sounds awful. Who can do real work not knowing whether they
have modular arithmetic or real arithmetic?)

> Perhaps you could be more clear about what "tactical assumptions" you
> are talking about?

I think I was talking both about the stuff above (had they not been
wishy-washy on the "implementation-defined" part) and also about
character. As I recall, they made it impossible to have something of
type character that was not unicode--in particular that was bigger
than unicode. Am I wrong on this? A couple of times I almost had to
use Dylan for work, but mostly only ever wrote a few small programs to
test it, and read various versions of emerging manuals. I didn't get
very deep into it, nor am I sure we're talking about suitably similar
versions since it changed a lot during the time I'm talking about.

Again with files, there are a lot of notions of files that we encountered
in the CL design that I expect the Dylan pathname (locator? can't
recall what they called them) model doesn't admit. In CL's design we
were up against file systems that had no directory notion, that had multiple
hosts, that had two name components (or that could have either a version
or a type but not both), that had no hierarchy, that used spaces as
component separators, some that used "<" and ">" to separate directory
components directionally ("up" vs "down" in ">foo>bar>baz<x>y>z>w")
while others used them to wrap around the directory part ("<foo.bar.baz>").
And that's just at the syntax level; at the semantics level there were
other differences. It was a lot to unify. I'm near certain Dylan didn't
attempt to follow in the "generality" CL sought, but thrived (such as
they did) on discarding the "baggage" of generality.

I'd rather not lean too heavy on this if we can't be in agreement here.
I thought I was saying something fairly non-controversial about Dylan
(i.e., something I'd heard directly from a Dylan designer--that this was
a difference between CL and Dylan--Dylan was designed specifically for
the Mac/PC platforms, not for arbitrary platforms, because those were
the ones that had "won", i.e., killed off competing platforms).

Kent M Pitman

unread,
Mar 7, 2001, 9:56:47 AM3/7/01
to
Erik Naggum <er...@naggum.net> writes:

> In Fortran, foo(1) is either an array reference or a function call.

In Maclisp, you could declare and use an array this way, too.
I agree this was sometimes useful.

Kent M Pitman

unread,
Mar 7, 2001, 10:00:34 AM3/7/01
to
Bruce Hoult <br...@hoult.org> writes:

> In article <sfwr90b...@world.std.com>, Kent M Pitman
> <pit...@world.std.com> wrote:

> > ... the decision to use only one namespace is expressionally
> > limiting. I simply would not want to use only one namespace for
> > expressional reasons.
>
> In what way is it limiting? Can you give examples where you would
> habitually pun functions and values onto the same name?

(defun foo (list)
  (loop for x in list
        collect (list (first x) (third x))))

I do this all the time. And after 20 years of practice, it continues to
drive me nuts in Scheme when it blows up "needlessly". In Scheme, you
learn to misspell your parameters ("lst") to reduce the risk of this.
I hate that. It is obvious from context what I meant. In the rare case
that I want to funcall my parameter, and it IS rare enough in CL to be
meaningful, I would prefer to designate that by putting in a FUNCALL.

Thant Tessman

unread,
Mar 7, 2001, 10:44:32 AM3/7/01
to

Erik Naggum wrote:

> [...] please inform
> me of how you can give a Scheme real symbols with a functional value in
> addition to the variable value. That is, suppose I want to _experiment_
> with the Common Lisp way from within Scheme where the following code has
> the expected meaning:
>
> (define foo (x) (+ 1 x))
>
> (let ((foo 41))
> (foo foo))

Chez Scheme's syntax-case macro system supports syntax extension of
singleton identifiers. So a CL interpretation of the above Scheme code
should be possible. But my question is why would you want to? I thought
the original reason to keep functions in separate namespaces was to
allow for optimizations that nowadays aren't really relevant. It
certainly seems anti-intuitive to me.

-thant

Kent M Pitman

unread,
Mar 7, 2001, 11:08:16 AM3/7/01
to
[ comp.lang.scheme removed.
http://world.std.com/~pitman/pfaq/cross-posting.html ]

Thant Tessman <th...@acm.org> writes:

No, the original reason is to allow people to use the same name in different
contexts to mean different things.

(defun foo (list)
  (loop for x in list
        collect (list (third x) (first x))))

where the variable in line 1 is what is referred to in line 2, but where
the function call in line 3 is to the good old trusted LIST function.

You may not have a strong desire to do this, but those of us who like
Lisp2's generally do.

This capability is mirrored in natural language, and we're used to doing
it. The equivalent in English might be "He boxed the boxes." There's
no ambiguity here of the two references to box. We are plenty equipped
in wetware to resolve it contextually; Scheme ignores this ability of
people and tries to make us run with half our brain tied behind our back.

This capability also allows us to understand the well-formedness (if
not the meaning) of pathological sentences like the English "Buffalo
buffalo buffalo." The Spanish "Como como como." is yet another, which
has the virtue of being both easier to see as well-formed and
intelligible, and something you actually might want to say sometimes.
(The English is "subject verb object", by the way, while the Spanish is
"clause connective clause". But the point is that the like-spelled words
mean different things in different contexts, and it's not a burden on the
speaker to figure out... other than that the verb "buffalo" is an obscure
English verb to begin with.)

I cite these natural language cases as evidence that there is a natural
desire to make a namespace distinction, and that it is Scheme which is
arguably unnatural for trying to fight this desire.

Thant Tessman

unread,
Mar 7, 2001, 12:52:58 PM3/7/01
to

Erik Naggum wrote:
>
> * Thant Tessman <th...@acm.org>

[...CL-style separate namespaces...]

> > But my question is why would you want to?
>

> That is not for anyone to question.

My question was not rhetorical. I was not arguing that you shouldn't be
able to. I'm fairly convinced that in Scheme you can. I was asking why
you would want to.

I build what might loosely be considered separate namespaces all the
time. But I've always wanted them to be explicit, not built into the
language. Who says that dividing namespaces into functions and
non-functions is always (or even usually) what you want? My 'namespaces'
are usually far more semantically-constrained than that.

Since we're talking about experimenting, it should be mentioned that
it's trivial to build an interpreter in Scheme with whatever evaluation
rules you want. And I assume the same goes for CL. The question merely
becomes: What and how much default behavior do you want to borrow from
(or carry around with) your implementation language?
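
For instance, a toy evaluator with separate variable and function
environments -- a miniature Lisp2 -- is only a few lines of Common Lisp
(EVAL2 and the alist environments here are invented for illustration):

(defun eval2 (form venv fenv)
  (cond ((symbolp form) (cdr (assoc form venv)))     ; variable namespace
        ((atom form) form)
        ((eq (car form) 'quote) (second form))
        ((eq (car form) 'lambda)
         (lambda (&rest args)
           (eval2 (third form)
                  (pairlis (second form) args venv)
                  fenv)))
        (t (apply (cdr (assoc (car form) fenv))      ; function namespace
                  (mapcar (lambda (arg) (eval2 arg venv fenv))
                          (cdr form))))))

;; (eval2 '(list foo foo)
;;        (list (cons 'foo 41))
;;        (list (cons 'list #'list)))  => (41 41)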

I'm not sure what to make of your noun/verb example. In Scheme, whether
a value is to be treated as a 'noun' or 'verb' is, just like in CL, a
matter of how you use it. And in Scheme, just like in CL (if I
understand correctly), a function can be treated just like a value. That
is, in the terms of your example, a verb can be treated as a noun when
necessary. The difference is only that CL requires extra syntax to do
it. And adding extra syntax to a language just to allow an identifier to
denote more than one value within a single expression just seems very
strange to me. Do you have more than your word to convince me that the
decision to implement separate namespaces for functions in CL isn't
about early compiler implementation details, or at best an accident of
history?

If you think the reliance on context to disambiguate the value denoted
by a given identifier is a good thing, check out C++. Each scoping rule
of C++ can be fairly well-justified in isolation. But their interaction
is monstrously complicated.

[...]

-thant

Joe Marshall

unread,
Mar 7, 2001, 1:02:26 PM3/7/01
to
rp...@rigden.engr.sgi.com (Rob Warnock) writes:

> Now do Scheme programmers ever *use* that [the ability to invoke
> computed functions without FUNCALL] generality?

It would depend on the programmer and the program, but I have seen it
used to good effect in several situations. For instance, suppose you
were doing some classical mechanics. You would be working a fair
amount with derivatives of functions. Suppose you had functions
that compute derivatives and partial derivatives. You might wish to
evaluate a partial derivative of a function at a particular
point. You would write: (((partial 2) F) state) rather than
(funcall (funcall (partial 2) f) state)
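
For concreteness, a hypothetical numerical PARTIAL in Common Lisp (the
name, the finite-difference step H, and the argument convention are all
made up for the example):

(defun partial (i &optional (h 1e-6))
  "Return a function mapping F to an approximation of its partial
derivative with respect to argument I."
  (lambda (f)
    (lambda (&rest args)
      (let ((shifted (copy-list args)))
        (incf (nth i shifted) h)
        (/ (- (apply f shifted) (apply f args)) h)))))

;; Lisp1:  (((partial 2) f) x y z)
;; Lisp2:  (funcall (funcall (partial 2) #'f) x y z)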

> Actually, very seldom. I've used it maybe a couple of times, total,
> in several years of Scheme hacking. I probably wouldn't even miss it
> much if it were gone. (You'd still have "apply", and you can
> trivially define "funcall" in terms of "apply".)

Not being able to do this would not be a `showstopper' by any means.



Dorai Sitaram

unread,
Mar 7, 2001, 1:53:12 PM3/7/01
to
In article <sfwr909...@world.std.com>,

Kent M Pitman <pit...@world.std.com> wrote:

Not really. There are _two_ sets of word-strings
where the same word-string is used as both noun and
verb, and Lisp1 allows the larger, more intuitive set
whereas Lisp2 allows the smaller, less intuitive set.
These two sets can be characterized by their
representative members "help" and "can".

The word-string "help" can be used as a noun (=
assistance) and as a verb (= to render assistance).
Ie, it is the same word "help", sometimes used as
a noun and sometimes as a verb. Compare with Lisp1

(help me)
(give me help)

The word-string "can" can be used as a noun (=
container). Its Lisp1-like verb counterpart is "can"
in the sense of "put in a can". So far, this is not
different from the "help" example. But there is also
the auxiliary verb "can" denoting ability. This "can"
is unrelated to the container "can" even though
it is spelled like it. The pair "can" (ability) and
"can" (container) therefore exhibit a Lisp2 rather than
a Lisp1 character.

The Lisp1-like set is by far the larger of the two, eg,

help, dig, can (_not_ in the sense of ability, where it
is also Lisp2), head, lead (_not_ in the sense of Pb,
where it is also Lisp2), bed, love, make, do (hair- or
dog-), throw, catch, request, produce, kill, atrophy,
dye, color, sound, impact, requisition, position,
control, function, use, talk, walk, go, say (noun
in "to have a say in the matter"), matter, mind, think
("you've got another think coming"), read (noun in
computerspeak), write, table, list, start, end, figure,
set, structure, email, program, lisp, scheme

and there is no end in sight because the set is
generative rather than a fixed table. Ie, people use
words in Lisp1 fashion intuitively to the extent of
creating Lisp1 usage even if none is authoritatively
sanctioned, even if this activity is sometimes
unavailingly frowned upon as Haigspeak, and even if
"verbing weirds language". <grin> (which, by the way,
is also Lisp1, as are "smile", "laugh", and "humor").

--d

Ray Blaak

unread,
Mar 7, 2001, 3:25:02 PM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
> If a language is touted as being useful for experimentation, either it is,
> and allows _me_ to think up weird stuff I want to try out, or it is not,
> and allows someone else to tell me what I can be allowed to think up.
> Scheme is touted as good for experimentation, until you actually try to do
> something that some Scheme freak tells you that you should not be
> experimenting with. At that point it is obvious that Scheme is not a
> language suitable for experimentation.

Why be so pedantic about it? Do you really think that I said it is useful
for every single possible kind of experimentation that can exist now or in the
future?

So it is not useful for your kind of experimentation. So what? You use
something else.

The fact of the matter is that Scheme *is* being used for experimentation (it
is primarily a teaching/research language, after all). At the very least *I*
find it useful for experimentation.

At any rate, the original point was that Scheme was missing namespaces, and
even if namespaces were not part of the original design goals, it is still a
good idea to stick them in, because of the current uses of Scheme today.

All this fuss about an off the cuff opinion of mine is definitely not worth
the trouble.

Hallvard B Furuseth

unread,
Mar 7, 2001, 3:39:16 PM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
>* Hallvard B Furuseth <h.b.fu...@usit.uio.no>

>> It would be silly in CL, but it seems curiously *right* for Scheme.
>> Scheme is fond of exposing the programmer to Neat Ideas, and "Code Is
>> Data" is the Neatest Idea I ever saw in programming. If Scheme didn't
>> reflect _that_, what would be the point of the language?
>
> That's the weirdest use of the idea "code is data" I have ever seen.

Well, it's "use" as in illustration more than application.

> I don't think the ability to read source code as data should be confused
> with the values of variables.

Why "source"?

> For instance, you cannot really work with _compiled_ code in any other
> way than to call it.

Sure, but there are other entities you can do very few operations on too
- like variables. Usually a few more operations, but so what? Remember
Hoult's argument that the only numbers that make sense as "how many?"
are "zero", "one", and "unlimited". For that matter, _compiled_ code is
just a special case of _code_ which we can do more with.

It just seems to me to fit well with Scheme and the pro-Scheme mindset
I've seen in the Scheme vs CL wars.

--
Hallvard

Marco Antoniotti

unread,
Mar 7, 2001, 3:46:03 PM3/7/01
to

Ray Blaak <bl...@infomatch.com> writes:

> Erik Naggum <er...@naggum.net> writes:
> > If a language is touted as being useful for experimentation, either it is,
> > and allows _me_ to think up weird stuff I want to try out, or it is not,
> > and allows someone else to tell me what I can be allowed to think up.
> > Scheme is touted as good for experimentation, until you actually try to do
> > something that some Scheme freak tells you that you should not be
> > experimenting with. At that point it is obvious that Scheme is not a
> > language suitable for experimentation.
>
> Why be so pedantic about it? Do you really think that I said it is useful
> for every single possible kind of experimentation that can exist now
> or in the future?
>
> So it is not useful for your kind of experimentation. So what? You use
> something else.
>
> The fact of the matter is that Scheme *is* being used for experimentation (it
> is primarily a teaching/research language, after all). At the very least *I*
> find it useful for experimentation.

The problem is that the "experimental usefulness envelope" of Scheme
is a strict (and small) subset of the "experimental usefulness
envelope" of Common Lisp. That's all.

Hence you should turn to the Dark Side.

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
Like DNA, such a language [Lisp] does not go out of style.
Paul Graham, ANSI Common Lisp

Hallvard B Furuseth

unread,
Mar 7, 2001, 3:55:47 PM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
> * Hallvard B Furuseth <h.b.fu...@usit.uio.no>
>> Why "source"?
>
> Because that is the traditional understanding of "code is data". It is
> _not_ referring to bytes of machine memory that can be regarded as data
> and also be executable code.

Duh. Of course. My problem is, "code is data" reached _me_ at a time
when I lived with a Basic & Machine Code (no, not assembly) dingbat and
no printer, so there was little difference:-)

>> Remember Hoult's argument that the only numbers that make sense as "how
>> many?" are "zero", "one", and "unlimited".
>

> FWIW, I think that "argument" makes absolutely zero sense.

Think "theory is more important than practice" - in short, Scheme.
"Zero" and "one" give nice theoretical solutions to various things,
"two" makes the theory more complicated to analyze even though it can
usually be emulated with a (less practical) "one" solution.

--
Hallvard

Joe Marshall

unread,
Mar 7, 2001, 3:26:48 PM3/7/01
to

In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
<fro...@acm.org> wrote:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > Now supposing at some later time, someone does (setq foo 42). You
> > arrange to invalidate the links to the function FOO. [...]
>
> But this means _every_ setq needs to check if the previous value was a
> function, no?

No, only SETQs to free variables.

Note that writing a value cell often involves more than just a move
instruction. (Consider ephemeral GC, forwarding pointers, etc.) Thus
checking whether the prior value was a function adds little, if any,
overhead.

> Thus you've just changed the time of error checking from
> apply-time to setq-time, which is still expectedly much more frequent
> than defun-time.

SETQs to free variables are much less frequent than SETQs to bound
variables.

It may be true that SETQs to free variables are more frequent than
SETFs of symbol functions, but funcall-time is the dominant factor.
Even if the performance of SETQ were to drop noticably, your code
would have to have a high ratio of SETQs to function calls for it to
make a significant performance difference.

Joe Marshall

unread,
Mar 7, 2001, 3:15:09 PM3/7/01
to
Janis Dzerins <jo...@latnet.lv> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > I'm asserting that a single namespace can be implemented with no
> > less efficiency than a dual namespace, and that such an
> > implementation does not require declarations or complex dataflow
> > analysis.
>
> But that's a flawed assertion. All you have described is an emulation of
> two namespaces with one namespace. How can they be equally efficient?

I wasn't *trying* to do that, but I can see how that could be
construed. What I forgot to include is an outline of why this
mechanism is no less efficient than indirecting through the function
cell.

Consider this function: (defun foo () (bar))

Suppose we had a lisp on an intel processor, with the convention that
ESI will point to the beginning of the current code block, and that
invoking a function could be done via an indirect jump to the function
cell of a symbol.

The call sequence would roughly look like this:

; fetch the pointer to symbol bar
movl ebx,[esi+18]
; fetch the function cell, put function object in esi
movl esi,[ebx-11]
jmp *esi

Now if we arrange for the linker to maintain a `uuo-link' to the function
bar in the code for function foo, the call sequence would be:

jmp pc + 23 ; jump into link table

link table + 23:
jmp <entry point for bar>

This latter will execute quicker.

Here are some numbers. I use the tak function because it is dominated
by function call time. In the Common Lisp version, optimization is
set to 3, safety 0, and I explicitly load the function cells to ensure
the compiler isn't short-circuiting anything.

(defun takx (x y z)
  (declare (fixnum x y z)
           (optimize (speed 3) (safety 0)))
  (cond ((not (< y x)) z)
        (t
         (taka
          (takb (the fixnum (1- x)) y z)
          (takc (the fixnum (1- y)) z x)
          (takd (the fixnum (1- z)) x y)))))

(defun test ()
  (setf (symbol-function 'taka) (symbol-function 'takx))
  (setf (symbol-function 'takb) (symbol-function 'takx))
  (setf (symbol-function 'takc) (symbol-function 'takx))
  (setf (symbol-function 'takd) (symbol-function 'takx))
  (time (dotimes (i 10000) (taka 18 12 6))))

In the scheme version I declare (usual-integrations) (this allows the
compiler to assume that I have not redefined the standard procedures).
I use the fixnum-specific < and decrement operators. I set a switch
in the compiler to tell it to not perform stack checks (as the lisp
version does not do this when speed is at 3). I couldn't figure out
how to instruct the compiler to not poll for interrupts.

(declare (usual-integrations))

(define taka)
(define takb)
(define takc)
(define takd)

(define (takx x y z)
  (cond ((not (fix:< y x)) z)
        (else
         (taka
          (takb (fix:-1+ x) y z)
          (takc (fix:-1+ y) z x)
          (takd (fix:-1+ z) x y)))))

(define (test)
  (set! taka takx)
  (set! takb takx)
  (set! takc takx)
  (set! takd takx)
  (time (lambda ()
          (do ((i 0 (+ i 1)))
              ((= i 10000) #f)
            (taka 18 12 6)))))


On my machine, the lisp version takes 33.5 seconds, the scheme version
takes 29.2 seconds.

This demonstrates that function calls in a single namespace are not
significantly less time efficient than those in a dual namespace under
similar conditions of optimization.

David Fox

unread,
Mar 7, 2001, 4:15:14 PM3/7/01
to
Erik Naggum <er...@naggum.net> writes:

> * Erik Naggum
> > How do I get real symbols in Scheme? How do I get two namespaces? How
> > do I get a programmable reader that the compiler will also obey? How do
> > I get packages?
>
> * David Rush
> > These are all well-solved problems, Erik. If you bothered to be as
> > well-informed on Scheme as you are on CL, you would know that.
>
> Obviously, I'm not, so instead of such idiotic "arguments", please inform


> me of how you can give a Scheme real symbols with a functional value in
> addition to the variable value. That is, suppose I want to _experiment_
> with the Common Lisp way from within Scheme where the following code has
> the expected meaning:
>
> (define foo (x) (+ 1 x))
>
> (let ((foo 41))
> (foo foo))
>

> If this is all well-solved problem, David, you should be able to whip up
> at least one URL for me. Thank you in advance.

I don't see why it is a problem at all. Is there some task that you
can't accomplish because of this?

Ray Blaak

unread,
Mar 7, 2001, 4:28:04 PM3/7/01
to
Erik Naggum <er...@naggum.net> writes:
> One man's off the cuff opinion is another man's serious issue. If you
> post an unsupportable claim about Scheme to some non-Scheme newsgroup, I
> suggest you prepare for a rebuttal. Not all of us think Scheme is the
> 8th wonder of the world.

Fair enough in as far as that goes.

However, I was not posting an unsupportable claim (in the non-pedantic
sense), and I don't think Scheme is the 8th wonder of the world. Furthermore,
nothing I have said indicates that I think it is either.

I *like* it, but I also like many languages. I am also aware of its warts,
again as with many other languages.

I *dislike* freaks for any language, and intolerance in general.

Frode Vatvedt Fjeld

unread,
Mar 7, 2001, 4:34:55 PM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

> In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> <fro...@acm.org> wrote:
>
> > But this means _every_ setq needs to check if the previous value was a
> > function, no?
>
> No, only SETQs to free variables.

Do you mean free variables as opposed to lexically bound variables? I
fail to see why this is so (i.e. why wouldn't you have to check
lexically bound variables)..?

> Note that writing a value cell often involves more than just a move
> instruction. (Consider ephemeral GC, forwarding pointers, etc.)
> Thus checking whether the prior value was a function adds little, if
> any, overhead.

Unless you have some better GC scheme.

--
Frode Vatvedt Fjeld

Thant

unread,
Mar 7, 2001, 10:00:56 AM3/7/01
to

Erik Naggum wrote:

> [...] People _want_ to be able to use the same spelling for
> words of different kinds without having to deal with arbitrary
> conventions in the normal case. There _is_ no extra syntax
> added in the normal case. [...]

What the "normal" case is depends entirely on whether you want to
emphasize the use of higher-order functions as a common programming
idiom or not. If not for higher-order functions, why prefer Scheme *or*
Common Lisp over FORTRAN, COBOL, Pascal, C, Smalltalk, Modula-*, Eiffel,
C++, Java, C#, etc., etc.?

Multiple namespaces add semantic complexity. What does this semantic
complexity buy one other than the fact that it suits you?


> > Do you have more than your word to convince me that the
> > decision to implement separate namespaces for functions
> > in CL isn't about early compiler implementation details,
> > or at best an accident of history?
>

> You'll have to be satisfied with somebody's word, so you
> can keep this up as long as you want, which means I'm not
> going to entertain you.

I got my impressions from the first few chapters of Christian Queinnec's
"Lisp in Small Pieces". Maybe I've misread him. Maybe he's not as
clued-in as you are. But you haven't given me or any others still
bothering to read this thread any reason to give your opinion more
weight than his.

In the mean-time, I'll continue to consider CL's separate namespace for
functions a flaw rather than a feature. Oh yeah, and arguing that Scheme
is different from Common Lisp is not the same thing as arguing that
Scheme is no good for experimenting.

-thant

Joe Marshall

unread,
Mar 7, 2001, 4:44:37 PM3/7/01
to
Ray Blaak <bl...@infomatch.com> writes:

> I *dislike* freaks for any language, and intolerance in general.

That's a rather intolerant view.

Joe Marshall

unread,
Mar 7, 2001, 4:51:08 PM3/7/01
to
Frode Vatvedt Fjeld <fro...@acm.org> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> > <fro...@acm.org> wrote:
> >
> > > But this means _every_ setq needs to check if the previous value was a
> > > function, no?
> >
> > No, only SETQs to free variables.
>
> Do you mean free variables as opposed to lexically bound variables?

Yes.

> I fail to see why this is so (i.e. why wouldn't you have to check
> lexically bound variables)..?

Because it is trivial for the compiler to determine whether they have
the potential to be used as functions.

(let ((answer nil))
  (dotimes (i 30) (push (foo i) answer)))

Since answer is not being used as a function anywhere it is visible,
there is no need to invalidate any links when assigning a non-function
value to it.


> > Note that writing a value cell often involves more than just a move
> > instruction. (Consider ephemeral GC, forwarding pointers, etc.)
> > Thus checking whether the prior value was a function adds little, if
> > any, overhead.
>
> Unless you have some better GC scheme.

GC isn't the only reason to read before a write.

Frode Vatvedt Fjeld

unread,
Mar 7, 2001, 5:31:44 PM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

> Frode Vatvedt Fjeld <fro...@acm.org> writes:
>
> > I fail to see why this is so (i.e. why wouldn't you have to check
> > lexically bound variables)..?
>
> Because it is trivial for the compiler to determine whether they
> have the potential to be used as functions.

Ok. This kind of thing doesn't really give me good vibes, but I
suppose the compiler can determine this in most situations.

But what if it determines that the variable does have the potential to
be used as a function?

> GC isn't the only reason to read before a write.

What are you thinking of?

--
Frode Vatvedt Fjeld

Craig Brozefsky

unread,
Mar 7, 2001, 5:48:28 PM3/7/01
to
Thant <th...@acm.org> writes:

> What is the "normal" case depends entirely on whether you want to
> emphasize the use of higher-order functions as a common programming
> idiom or not. If not for higher-order functions, why prefer Scheme *or*
> Common Lisp over FORTRAN, COBOL, Pascal, C, Smalltalk, Modula-*, Eiffel,
> C++, Java, C#, etc., etc.?
>
> Multiple namespaces add semantic complexity. What does this semantic
> complexity buy one other than the fact that it suits you?

I simply don't find #' or (function blah) a hindrance for doing real
programming with higher-order functions, and the ability to have data
variables and functions with the same name is something that I do
appreciate every day.

I write a lot of code; I spend all day writing code. This particular
optimization for semantic complexity doesn't buy anything for me, while
the CL behavior does (functions and variables with the same name).

Another example is lisp macros versus scheme macros. Scheme macros
are indeed "cleaner", but I find them much less useful. The
"problems" in CL macros they try to avoid have not been real problems
for me in the past, and the CL approach has bought me real advantages.
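
As a concrete illustration of the trade-off (a sketch only --
WITH-DOUBLED-BAD and WITH-DOUBLED are made-up names): the capture
problem that hygienic Scheme macros rule out by construction is handled
in CL by reaching for GENSYM when it matters.

(defmacro with-doubled-bad (expr)
  `(let ((tmp 2))                 ; TMP can shadow a caller's TMP
     (* tmp ,expr)))

(defmacro with-doubled (expr)
  (let ((g (gensym)))             ; a fresh, uninterned name
    `(let ((,g 2))
       (* ,g ,expr))))

;; (let ((tmp 10)) (with-doubled-bad tmp))  => 4, the macro captured TMP
;; (let ((tmp 10)) (with-doubled tmp))      => 20, as intended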

Like I said, I'm not an academic tho. I'm much more like the "working
programmer" of Larry Paulson's "ML for the Working Programmer".


> I got my impressions from the first few chapters of Christian
> Queinnec's "Lisp in Small Pieces". Maybe I've misread him. Maybe
> he's not as clued-in as you are. But you haven't given me or any
> others still bothering to read this thread any reason to give your
> opinion more weight than his.

Well, I think he has. You gave your opinion and Erik and others,
myself included, have given you ours. As for weight to opinions, it
doesn't mean shit. If you really want to know the answer, I'm sure
that some research through the historical documents of the CL
standardization process, or directly interviewing those who
participated in the language's evolution, would be feasible.

Queinnec's opinion could also be affected by the fact that he is
writing a book about implementing lisps, so the implementation
optimization becomes important. If he hasn't done a lot of CL
programming then the other advantages that we have pointed out would
never become apparent to him. I don't recall him supporting this
opinion with documentation, but I can't check since my copy is at
home.

> In the mean-time, I'll continue to consider CL's separate namespace
> for functions a flaw rather than a feature. Oh yeah, and arguing
> that Scheme is different from Common Lisp is not the same thing as
> arguing that Scheme is no good for experimenting.

Sure, to each his own. Arguing that Scheme is simple doesn't mean
it's good for experimentation. I think Erik has done more than just
argue for the difference between the two.

--
Craig Brozefsky <cr...@red-bean.com>
In the rich man's house there is nowhere to spit but in his face
-- Diogenes

Kent M Pitman

unread,
Mar 7, 2001, 5:52:10 PM3/7/01
to
[ comp.lang.scheme removed.
http://world.std.com/~pitman/pfaq/cross-posting.html ]

Thant <th...@acm.org> writes:

> Erik Naggum wrote:
>
> > [...] People _want_ to be able to use the same spelling for
> > words of different kinds without having to deal with arbitrary
> > conventions in the normal case. There _is_ no extra syntax
> > added in the normal case. [...]
>
> What is the "normal" case depends entirely on whether you want to
> emphasize the use of higher-order functions as a common programming
> idiom or not.

Right. And the point is that there is a community that doesn't want to
emphasize this and a community that does.

The CL community QUITE EXPLICITLY (I was there in the room while the
decision was made) did not buy into the notion of higher order
functions being as casual as you're saying. Everyone in our community
and yours likes having them, but we do not all want to be a community
whose central core theme IS higher order programming. We wanted the
case-marking to call attention to the places where it was happening,
for two reasons: one is that we felt it's unusual and worth
highlighting, and two is because we want the value of being able to
get at both namespaces by positional information in the case where we
are NOT doing this.

It is a design trade-off, and we made it one way for reasons of aesthetics
that make no sense to you but are dear to us, while you made it the other
way for reasons of aesthetics that make no sense to us but are dear to you.

> If not for higher-order functions, why prefer Scheme *or*
> Common Lisp over FORTRAN, COBOL, Pascal, C, Smalltalk, Modula-*, Eiffel,
> C++, Java, C#, etc., etc.?

There is a difference between "having" higher order functions and promoting
them as the answer to all the world's problems.

In the CL community, many of us would first seek to solve problems in other
ways before resorting to them. They are an abstraction tool, but they give
no guidance to efficiency. Scheme eschews any effort to communicate notions
of efficiency to the processor at all. The decision to do f(x) vs to inline
the contents of f(x) is an important core distinction in CL. We call
functions when we're willing to take function call overhead. Sometimes we
make functions and then ask for them to be inlined to buy back the overhead
of a function call boundary. This notion of the expense of the function call
boundary is critical all the way back to FORTRAN, where one did not call
functions for something unless it was worth the overhead. Scheme, on the
other hand, has hand-waved away the cost of function calls and said they are
always cheap. This isn't really possible to do. Static analysis can't
always tell you when it's right to inline a function and when it's right to
call it closed, and so Scheme encourages a programming style in which people
are constantly making functions without regard to whether there is overhead
for so doing. CL is not about that. Scheme also doesn't teach people to be
concerned about the difference between (lambda (x) x) and (lambda () x).
These are just both functions in Scheme, but in Lisp one is a function and
one is a closure, and any decent text tells people to be a lot more concerned
about the latter than the former, since the latter will typically cons and
the former will not. And certainly (catch 'foo x) vs (call/cc (lambda (x) ..))
encourages a programming style that will potentially cons in a situation where
CL does not.
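
A minimal CL sketch of that function-versus-closure distinction
(MAKE-IDENTITY and MAKE-CONSTANT are made-up names, and how much any of
this actually conses is implementation-dependent):

(defun make-identity ()
  ;; (lambda (x) x) has no free variables; a compiler may hand back the
  ;; same preallocated function object every time, so no consing.
  #'(lambda (x) x))

(defun make-constant (x)
  ;; (lambda () x) closes over X, so each call to MAKE-CONSTANT
  ;; typically allocates a fresh closure to remember X's binding.
  #'(lambda () x))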

THAT is why I prefer CL over Fortran and Scheme--because it has higher order
functions and it has a notion of what responsible use is. Scheme teaches
that any use of functions is good and any non-use of functions is bad.
I simply don't believe that.

> Multiple namespaces add semantic complexity. What does this semantic
> complexity buy one other than the fact that it suits you?

Multiple letters add complexity, too. Why not just label everything in
binary? Answer: We are equipped in wetware to make this distinction.
No point in ignoring it. If we could not distinguish X from Y then definitely
putting that in our programming languages would be bad. But since we are
worse at the computationally simpler task of distinguishing a binary number
from another binary number than distinguishing a higher-order radix number
(such as a base 10 number or an alphanumeric) from another, we prefer to use
the more complex system.

If you can't see that programming languages EXIST to suit me and that buying
me that privilege is the whole goal, there's probably nothing more to say
between us. You ahve to acknowledge this is a legitimate first-order goal
or I will begin, for debate purposes, not to acknowledge any first-order goal
you might have.

Craig Brozefsky

unread,
Mar 7, 2001, 6:01:04 PM3/7/01
to
Hallvard B Furuseth <h.b.fu...@usit.uio.no> writes:

> Think "theory is more important than practice"

Yes, in theory it is easier for me to write programs in Scheme than
CL.

Yes, in theory I am an American citizen with full democratic rights.

Practice is how I feed myself and my family, so you can understand
that, as someone whose family or government cannot/won't pay for them
to put theory before practice, I subjugate theory to practice and
evaluate its worth based on practical results.

Kent M Pitman

unread,
Mar 7, 2001, 6:02:47 PM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

> In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> <fro...@acm.org> wrote:
>
> > Thus you've just changed the time of error checking from
> > apply-time to setq-time, which is still expectedly much more frequent
> > than defun-time.
>
> SETQs to free variables are much less frequent that SETQs to bound
> variables.

But SETQs to defuns are much less frequent than SETQs to free variables.
So technically I win on the efficiency of a Lisp2.

I did say at the outset that this was a slim claim. This is what I meant.

Bruce Hoult

unread,
Mar 7, 2001, 6:05:04 PM3/7/01
to
In article <sfwr909...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> No, the original reason is to allow people to use the same name in
> different
> contexts to mean different things.
>
> (defun foo (list)
>   (loop for x in list
>         collect (list (third x) (first x))))
>
> where the variable in line 1 is what is referred to in line 2, but where
> the function call in line 3 is to the good old trusted LIST function.
>
> You may not have a strong desire to do this, but those of us who like
> Lisp2's generally do.

I not only have no strong desire to do this, I have a strong desire to
*not* do it.

A number of people have mentioned this exact same example of "list". I
would have thought that creating a 2nd namespace just to deal with one
historically poorly-named primitive is a bit of an overreaction! Either
rename the primitive to "make-list" (which you can do yourself while
importing the library) or else rename the argument. I'd use "L" myself,
and if it's important that the thing passed is actually a list then I'd
indicate that by declaring the type of the argument explicitly.


> This capability is mirrored in natural language, and we're used to doing
> it. The equivalent in English might be "He boxed the boxes." There's
> no ambiguity here of the two references to box. We are plenty equipped
> in wetware to resolve it contextually; Scheme ignores this ability of
> people and tries to make us run with half our brain tied behind our back.

Actually, I find that both these natural language sentences and programs
that pun names (or nearly pun them, such as MyList and myList) are
unambiguous and possible to understand, but they definitely require a
mental "gear shift" each time I see them, and that takes a non-trivial
part of my mental bandwidth. I don't have much to spare, so try to
economise the use of it where possible!

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:14:13 PM3/7/01
to
In article <sfwpuft...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> > <fro...@acm.org> wrote:
> >

> > > Joe Marshall <j...@content-integrity.com> writes:
> > >
> > > > Now supposing at some later time, someone does (setq foo 42). You
> > > > arrange to invalidate the links to the function FOO. [...]
> > >

> > > But this means _every_ setq needs to check if the previous value was
> > > a function, no? Thus you've just changed the time of error checking
> > > from apply-time to setq-time, which is still expectedly much more
> > > frequent than defun-time.
> >

> > You can easily make the setq merely set the code value to a known
> > constant function which does the error checking, thus deferring the
> > vast majority of the work to the first time the object is used as a
> > function.
>
> But here you have implementationally two namespaces. You're just hiding
> one.

No, here I have implemented a cache. This is an implementation
technique, not a language feature, and there is no program for which the
meaning changes as a result.

Two namespaces is a langauge feature whose presense or absense changes
the meanings of programs.

-- Bruce

Ray Blaak

unread,
Mar 7, 2001, 6:17:04 PM3/7/01
to
Joe Marshall <j...@content-integrity.com> writes:

> Ray Blaak <bl...@infomatch.com> writes:
>
> > I *dislike* freaks for any language, and intolerance in general.
>
> That's a rather intolerant view.

Yes indeed. I am fundamentally against fundamentalists.

It is a contradiction I am willing to live with.

Arthur H. Gold

unread,
Mar 7, 2001, 6:25:43 PM3/7/01
to
Julian Morrison wrote:
>
> (warning: this post might spark a language war; if you hate language wars,
> set the "ignore")
>
> Please tell me the parts of (1)Common Lisp (2) Scheme that are "broken as
> designed".
>
> The reason I ask about the faults is: the good stuff is pretty easy to
> spot by reading the spec, but design faults can be pretty subtle (which is
> why they got in there in the first place, obviously).
The strength of Common Lisp is that it is a large language,
with support for multiple namespaces, an extensive series of
support libraries....
The weakness of Common Lisp is that it is a large language,
with support for multiple namespaces, an extensive series of
support libraries.....

The strength of Scheme is that it is a small language, has a
single namespace....
The weakness of Scheme is that it is a small language, has a
single namespace....
--
Artie Gold, Austin, TX (finger the cs.utexas.edu account
for more info)
mailto:ag...@bga.com or mailto:ag...@cs.utexas.edu
--
Verbing weirds language.

Bruce Hoult

unread,
Mar 7, 2001, 6:33:52 PM3/7/01
to
In article <sfwofvd...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > In article <sfw8zmi...@world.std.com>, Kent M Pitman
> > <pit...@world.std.com> wrote:
> >
> > > > The issue of whether a particular address contains executable code
> > > > and whether it would be legal to load that address into the program
> > > > counter is an issue of linker protocol. Lisp hackers tend to forget
> > > > about linking because lisp links things on the fly and makes it easy
> > > > to run a partially linked image.
> > >
> > > And modern programmers tend to assume the only hardware Lisp was
> > > designed for is the stuff you can buy right now. On the PDP10, for
> > > example, you could load the contents of any address into memory and
> > > execute it. And you could just JRST or JSP to any memory location.
> > > The linker was not involved.
> >
> > Ah, this time it is you rather than me who raises the possibility of
> > decisions being time-dependent!
>
> No, I observe others doing it. I'm saying design for older hardware is
> just as relevant as design for newer hardware in a system that seeks to
> be timeless.

I agree, and therefore think that you should not design too closely to
*either*. You shouldn't put in language features that you can't see how
to implement efficiently, but at the same time it's even *more*
important to not put in language features that in some way *depend* on
the machine you happen to be using at the time.

"Two namespaces was better on the PDP-10" strikes me as making the
second mistake.

> To take a neutral (to this topic) example, they could have built in
> ASCII encoding but they tolerated EBCDIC. Whether EBCDIC was or is on
> the way out is of no relevance; the point is that avoiding ASCII
> implicitly avoided assumptions about Unicode, and if/when the world
> outgrows Unicode (assuming Unicode doesn't just outright kill the
> character sets that got left out), CL's character model will continue
> to apply where Dylan's (for example) will have to be revised at the
> language level.

I don't understand this reference.

Dylan includes a "character" type but doesn't define how big it is or
what the native encoding is. Character and string literals can contain
characters either as they are (e.g. "Hello world", in whatever the
current character set is) or else as delimited hex strings such as
"\<44>\<79>\<6c>\<61>\<6e>" (which is "Dylan"). There is no limit on
the size of these hex strings, so Unicode, or extensions to Unicode are
transparently supported.

Now, yes, the encoding is listed as ASCII/Unicode, but I don't see how
that can be avoided. How does Common Lisp allow you to portably
specify, say, an a-umlaut, such that it works on EBCDIC systems?

> I firmly believe it's short-sighted to assume that what happens today
> is "better" than what happened yesterday just because it killed off
> yesterday. Things run in cycles more than people like to admit.

I completely agree. I believe Ivan Sutherland was the first to write
about this.


> Good
> ideas get killed as much for market or political reasons as technical
> ones. Old concepts have a way of coming back--sometimes in our
> lifetime if we're lucky enough not to have lost all remembrance of
> them and have to invent them from scratch. And the best way I know to
> insulate myself from the problems that could cost is not to rely on
> either past OR present in situations where I don't need to.

I totally agree.

-- Bruce

Bruce Hoult

unread,
Mar 7, 2001, 6:50:40 PM3/7/01
to
In article <sfwn1ax...@world.std.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > > I think more short-sighted but the trade-off might be "more
> > > useful". I'm not taking a position on that. Dylan is an
> > > example of a language that I think I remember making some
> > > very specific tactical assumptions about the architecture
> > > (e.g., for numbers and character codes, maybe other things
> > > too, like files).
> >
> > The only assumptions I'm aware of (from reading both the original 1992
> > prefix syntax book and the current reference manual) is that <integer>
> > should have at least 28 bits of precision.
>
> I'll take your word for this, though I thought there was more.
>
> > It is implementation-defined whether <integer> operations are modulo,
> > trap, or overflow into bignums
>
> This may be how they got out of what I thought was an "assumption".
> (This sounds awful. Who can do real work not knowing whether they
> have modular arithmetic or real arithmetic?)

Well, you either know that your data values are nowhere near caring, or
else you explicitly use an <int32> or <bignum> type. And if you read
the manual for your implementation then you'll know anyway.

People seem to get work done in C, which doesn't specify all sorts of
things, including the results of divisions.


> > Perhaps you could be more clear about what "tactical assumptions" you
> > are talking about?
>
> I think I was talking both about the stuff above (had they not been
> wishy-washy on the "implementation-defined" part) and also about
> character. As I recall, they made it impossible to have something of
> type character that was not unicode--in particular that was bigger
> than unicode. Am I wrong on this?

Well, the manual certainly *says* "Unicode", but the syntax for
character constants is '\<xxxx>', which readily admits extension to
larger than unicode if your machine/compiler supports it.

And if your machine has big characters that don't happen to be Unicode
then I guess the compiler gets to use a table mapping unicode numbers to
whatever you actually have. Assuming you want programs using wide
character constants to be portable, which seems like a good thing.


> Again with files, there are a lot of notions of files that we encountered
> in the CL design that I don't expect the dylan pathname (locator? can't
> recall what they called them) model to admit. In CL's design we were up
> against file systems that had no directory notion, that had multiple
> hosts, that had two name components (or that could have either a version
> or a type but not both), that had no hierarchy, that used spaces as
> component separators, some that used "<" and ">" to separate directory
> components directionally ("up" vs "down" in ">foo>bar>baz<x>y>z>w"),
> while others used them to wrap around the directory part
> ("<foo.bar.baz>").

The Dylan Reference Manual says nothing at all about files and I/O.

The two existing implementations (Harlequin/Fun-O and Gwydion) have
agreed on a common library spec that is sufficiently general to support
Unix, MSDOS, and Macintosh. This isn't ideal but it's better than
nothing. I expect any move from this at this point would be in the
direction of using URLs (i.e. basically Unix format plus the protocol)
and let the implementation map that to the local standard.

-- Bruce

Joe Marshall

unread,
Mar 7, 2001, 6:42:44 PM3/7/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Scheme eschews any effort to communicate notions of efficiency to
> the processor at all.

Certainly there is no standard way of providing hints to the scheme
system.

> The decision to do f(x) vs to inline the contents of f(x) is an
> important core distinction in CL.

The fact that the most popular commercial lisp, AllegroCL, ignores
inline declarations seems to indicate that not everyone agrees.

> Scheme, on the other hand, has hand-waved away the cost of function
> calls and said they are always cheap. This isn't really possible to
> do.

You are no doubt familiar with
G.L.Steele: "Debunking the `Expensive Procedure Call' Myth",
Proc ACM National Conference (Seattle, Oct 1977)
also AI Memo 443

Function calls can be made *very* cheap --- usually no more than a
half-dozen instructions. On more modern machines, any overhead
associated with a function call is often more than made up for by
avoiding the instruction-cache penalty you would incur by inlining.

If the software is dynamically optimized (see HP's Dynamo project) or
`JIT' compiled (see Transmeta) the cost is exactly zero. Even
your run-of-the-mill pentium can avoid overhead here.

What *is* expensive is continuation (stack frame) management and
variable lookup when you have to fetch values from lexical contexts.

> Scheme also doesn't teach people to be concerned about the
> difference between (lambda (x) x) and (lambda () x). These are just
> both functions in Scheme, but in Lisp one is a function and one is a
> closure, and any decent text tells people to be a lot more concerned
> about the latter than the former, since the latter will typically
> cons and the former will not.

There is really no reason to be afraid of consing under normal
circumstances. In the past, before ephemeral GC's and megabyte RAMs,
yes, it was a concern.

Joe Marshall

unread,
Mar 7, 2001, 6:55:55 PM3/7/01
to
Kent M Pitman <pit...@world.std.com> writes:

> Joe Marshall <j...@content-integrity.com> writes:
>
> > In article <2hd7but...@dslab7.cs.uit.no>, Frode Vatvedt Fjeld
> > <fro...@acm.org> wrote:
> >
> > > Thus you've just changed the time of error checking from
> > > apply-time to setq-time, which is still expectedly much more frequent
> > > than defun-time.
> >
> > SETQs to free variables are much less frequent that SETQs to bound
> > variables.
>
> But SETQs to defuns are much less frequent than SETQs to free variables.
> So technically I win on the efficiency of a Lisp2.

Not so fast, I get to split hairs as well.

If I arrange for the compiler to annotate where in the object code
SETQs are being performed, and what free variables are being SETQd,
then I can arrange it so that the linker can `patch up' only those
SETQs that modify free variables that could actually be used as a
function.

> I did say at the outset that this was a slim claim. This is what I meant.

It's getting very slim...

Marco Antoniotti

unread,
Mar 7, 2001, 7:08:03 PM3/7/01
to

Thant <th...@acm.org> writes:

...

> In the mean-time, I'll continue to consider CL's separate namespace for
> functions a flaw rather than a feature.

You are free to do so. Meanwhile you will have to re-invent the wheel
any time you use a slightly different implementation of Scheme.

> Oh yeah, and arguing that Scheme is different from Common Lisp is
> not the same thing as arguing that Scheme is no good for
> experimenting.

That is not the argument. The rock-solid-and-so-far-unrefuted claim
is that CL is way better for "experimenting" than Scheme is, because
of the simple and true fact that CL is bigger, fatter, more useful and
more usable than Scheme. The Dark Side wins. :)

And now, back to programming in Java. :)
