
Why separate function namespaces?


Simon H.

Apr 30, 2003, 6:45:56 AM
Of all the functional languages I've used (Scheme, OCaml, etc.), I
notice that Lisp is the only one with a separate namespace for
functions. It just recently struck me that this seems to be a rather
odd and counter-intuitive way of doing things, especially since I
can't think of a single other language that does it. However, I assume
there's some reason for it... Or at least an excuse. ;-)

Pax,
S

Pascal Costanza

Apr 30, 2003, 7:40:15 AM

Here is a paper that explains the difference:
http://www.nhplace.com/kent/Papers/Technical-Issues.html

There are several reasons why I like a separate function namespace more
than a unified namespace. The most important ones are:

+ Functions and values are different, i.e. (eql 3 (lambda () 3)) returns
nil for very good reasons. So in effect it's more intuitive to treat
them differently IMHO. (but see below)

+ You can very easily implement macros that treat variables and
functions differently. You don't have to submit to any kind of
orthogonality aesthetics in case it turns out to be unnatural.

+ If you use a certain name for a global function, you cannot use it
locally anymore in a unified namespace. For example, in Common Lisp it's
perfectly ok to say something like (let ((list ...)) (dosomething ...)),
although list is also a predefined function. This can be very handy, as
the small example below shows.
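
A minimal illustrative sketch of that point, using only standard CL:

(let ((list '(1 2 3)))                 ; LIST as a lexical variable
  (list (first list) (length list)))   ; LIST as the standard function
=> (1 3)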

However, a unified namespace can be more intuitive when you are mainly
programming in a functional/applicative style. Common Lisp doesn't have
a preference for a particular programming style, so that's not as
important as in, say, Scheme.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Adam Warner

Apr 30, 2003, 8:09:28 AM
Hi Simon H.,

> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a separate namespace for
> functions. It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things, especially since I
> can't think of a single other language that does it, however I assume
> there's some reason for it... Or at least an excuse. ;-)

Ooh, a nice Google Groups+Hotmail troll who's a little short of
imagination. So let's be pathological:

(let ((print "variable namespace"))
  (print
   (block print
     (tagbody
       (go print)
       (print "unreachable code")
      print (print "tagbody namespace")
       (return-from print "block namespace")
       nil)))
  (print "we can still print a string")
  (print print))

Will print:

"tagbody namespace"
"block namespace"
"we can still print a string"
"variable namespace"

(Was this functional enough?)

In a nutshell it's all about expressiveness and allowing the easy
avoidance of namespace collisions without having to resort to less
powerful hygienic macros. I recently learned about another reason from
Christian Queinnec's "Lisp In Small Pieces": Separating the world of
functions from the rest of computations "is a profitable distinction that
all good Scheme compilers exploit, according to [Sén89].... A user of
Lisp2 has to do much of the work of the compiler in that way and thus
understands better what it's worth." [page 41]

Code walking is more difficult. And the book also explains that macros
that expand into lambda forms are "highly problematic".

Regards,
Adam

Kaz Kylheku

Apr 30, 2003, 1:32:47 PM
she...@mail.com (Simon H.) wrote in message news:<e9904ec5.03043...@posting.google.com>...

> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a seperate namespace for
> functions.

I'm going to assume that you are not a troll, and point you to this:

http://www.c2.com/cgi/wiki?SingleNamespaceLisp
http://www.dreamsongs.com/Separation.html

Simon H.

Apr 30, 2003, 4:11:31 PM
"Adam Warner" <use...@consulting.net.nz> wrote in message news:<pan.2003.04.30....@consulting.net.nz>...

> (let ((print "variable namespace"))
> (print
> (block print
> (tagbody
> (go print)
> (print "unreachable code")
> print (print "tagbody namespace")
> (return-from print "block namespace")
> nil)))
> (print "we can still print a string")
> (print print))

*laughs* Why grandmother, what pretty spaghetti code you have. You
may consider the clue-by-four delivered.

Pax,
S

Michael J. Ferrador

Apr 30, 2003, 4:18:05 PM

Looking at my init/rc (.cmucl-init) file while reading some Lisp-2,
symbol-properties stuff, I came up with the following. But since similar
code is on CLiki, http://www.cliki.net/CMUCL%20Hints, as well as at
http://www.cons.org/cmucl/doc/prompt.html, and I'm new to CL, I thought
I would submit it to the critical flame of c.l.l.

(in-package :common-lisp)

(defvar *last-package* nil "cache previous package")
(defvar my-prompt "value of package nick or name + >")  ; was *cached-prompt*

(defun my-prompt ()
  (unless (eq *last-package* *package*)
    (setf my-prompt                                      ; was *cached-prompt*
          (concatenate 'string (or (first (package-nicknames *package*))
                                   (package-name *package*))
                       "> "))
    (setf *last-package* *package*))
  my-prompt)                                             ; was *cached-prompt*

(setf *prompt* #'my-prompt)

- It seems a trite example to save one symbol, at the possible expense
of clarity. But now I see the spreadsheet-like "value OR function"
distinction, and that a symbol can be BOTH in a Lisp-2 (and more in CL).

Tj

Apr 30, 2003, 4:58:11 PM
she...@mail.com (Simon H.) wrote in message news:<e9904ec5.03043...@posting.google.com>...
> It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things, especially since I
> can't think of a single other language that does it, however I assume
> there's some reason for it... Or at least an excuse. ;-)

Actually, most languages that are like C do this. Go ahead, define
blah and blah(). So at least the language is as good as C. ;P

I recall a post by a Symbolics guy on the LL1 mailing list mentioning
that being a Lisp-1 would have been likely had there not been such
strong customer pressure; a tool was written to assist porting, but
that apparently wasn't good enough.

Though the extent of my CL is exploring Allegro and Corman, and
reading things like PAIP, it's obvious CL is more powerful than any
language I use. And yet, the separate namespace is like a big zit on
the language. Though I think it's immature of me to find asymmetry
ugly, since oftentimes chaos gives resources useful for elegant
solutions.

Tj

Pascal Costanza

Apr 30, 2003, 5:15:00 PM
In article <ccc7084.03043...@posting.google.com>,
tj_sc...@yahoo.com (Tj) wrote:

> Though the extent of my CL is exploring Allegro and Corman, and
> reading things like PAIP, it's obvious CL is more powerful than any
> language I use. And yet, it's like a big zit on the language. Though
> I think it's immature of me to find asymmetry ugly, since oftentimes
> chaos gives resources useful for elegant solutions.

I have the following definitions in my startup files:

(defun open-curl-macro-char (stream char)
  (declare (ignore char))
  (let ((forms (read-delimited-list #\} stream t)))
    `(funcall ,@forms)))

(set-macro-character #\{ #'open-curl-macro-char)
(set-macro-character #\} (get-macro-character #\)))


Now I can write {f args} instead of (funcall f args). Of course, you
still have to know that you want to access the value, and not the
function, but is this really such a big deal?
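
A quick usage sketch, assuming the reader macro above has been loaded:

(let ((add (lambda (x y) (+ x y))))
  {add 1 2})      ; reads as (funcall add 1 2)  => 3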

As far as I can see it, there are some pragmatic advantages of having a
Lisp-2, whereas the reasons brought up for Lisp-1 are purely of an
aesthetical nature. Or am I missing something?

Pascal

--
"If I could explain it, I wouldn't be able to do it."
A.M.McKenzie

Erann Gat

Apr 30, 2003, 5:57:24 PM
In article <costanza-DEF2B8...@news.netcologne.de>, Pascal
Costanza <cost...@web.de> wrote:

I disagree with the implication that aesthetic and pragmatic issues are
disjoint. Programming languages are user interfaces, so there is
considerable overlap between the aesthetic and the pragmatic.

That said, the case for the aesthetics of Lisp-1 is not at all a
slam-dunk. Simpler and more aesthetic are not equivalent. (Bauhaus
architecture is simpler than Frank Gehry's style. Does that necessarily
make it more aesthetic?) Since there is already an asymmetry between the
CAR and the CDR in Lisp, it's not at all clear that having the same
evaluation semantics for both is necessarily the Right Thing even from a
purely aesthetic point of view.

E.

Gareth McCaughan

Apr 30, 2003, 5:33:03 PM
"Tj" wrote:

> she...@mail.com (Simon H.) wrote in message news:<e9904ec5.03043...@posting.google.com>...
> > It just recently struck me that it seems to be a rather
> > odd and counter-intuitive way of doing things, especially since I
> > can't think of a single other language that does it, however I assume
> > there's some reason for it... Or at least an excuse. ;-)
>
> Actually, most languages that are like C do this. Go ahead, define
> blah and blah(). So at least the language is as good as C. ;P

Um, no.

-------- bad.c begins ---------
int x;
int x(double y) { return 1; }
-------- bad.c ends -----------

$ gcc -c bad.c
bad.c:2: `x' redeclared as different kind of symbol
bad.c:1: previous declaration of `x'

--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc

Pascal Costanza

Apr 30, 2003, 7:35:39 PM
In article <gat-300403...@k-137-79-50-101.jpl.nasa.gov>,
g...@jpl.nasa.gov (Erann Gat) wrote:

> > As far as I can see it, there are some pragmatic advantages of having a
> > Lisp-2, whereas the reasons brought up for Lisp-1 are purely of an
> > aesthetical nature. Or am I missing something?
>
> I disagree with the implication that aesthetic and pragmatic issues are
> disjoint. Programming languages are user interfaces, so there is
> considerable overlap between the aesthetic and the pragmatic.

In general, you are right. But I don't see the practical advantages of a
Lisp-1, and the arguments brought up for a Lisp-1 always sound like
purely aesthetical considerations to me. So my impression is that in
this concrete case the two concerns seem to be separated.

> That said, the case for the aesthetics of Lisp-1 is not at all a
> slam-dunk. Simpler and more aesthetic are not equivalent. (Bauhaus
> architecture is simpler than Frank Gehry's style. Does that necessarily
> make it more aesthetic?) Since there is already an asymmetry between the
> CAR and the CDR in Lisp, it's not at all clear that having the same
> evaluation semantics for both is necessarily the Right Thing even from a
> purely aesthetic point of view.

Again, I agree. I was only talking about the arguments put forward by
Lisp-1 advocates. I haven't heard yet about actual pragmatic advantages
of Lisp-1. Are there any?

I can't imagine that there are cases in which you can actually forget
about the difference between values and functions. In a Lisp-1, it only
looks as if there is no difference. So what's the point?

Tom Lord

Apr 30, 2003, 8:31:39 PM

Pascal:

> But I don't see the practical advantages of a
> Lisp-1,


It's hard to point to _absolute_ advantages for either lisp-1 or
lisp-2 because they aren't very different -- just different in
emphasis and syntax optimizations.

In lisp-2, I could program in a style where I always use funcall
and then I'm programming in lisp-1.

In lisp-1, I could program in a style where I always use something
like (get 'symbol 'function) in the CAR of expressions, and then
I'm programming in lisp-2.

It's a case of "Anything you can do, I can do .... well, pretty much
the same way."


> I haven't heard yet about actual pragmatic advantages of
> Lisp-1. Are there any?

> I can't imagine that there are cases in which you can
> actually forget about the difference between values and
> functions. In a Lisp-1, it only looks as if there is no
> difference. So what's the point?

A few non-absolute pragmatic advantages of lisp-1:

* Um... lexical scoping?

My CL (as compared to my Scheme) is certainly rusty. Perhaps
I am full of s here and I'll humbly accept the fish-slap if so:

Aren't function binding slots per-symbol, and thus not lexically
scoped? In lisp-1, for example, I can take a function body, and
wrap it in a `let' that shadows some globally defined function, and
it all just works. I don't need to go through the body adding
FUNCALL or taking FUNCALL out should I happen to remove the shadow
binding.


* macros

Especially macros that implement binding constructs or have
side-effects on bindings, and macros that have free variables. In
lisp-1, to choose a trivial example, I can have a single macro for
`swap-values' where in lisp-2 I'd need `swap-values' and
`swap-function-bindings' or worse (see "exploratory programming",
below).


* pedagogy

I'm not a college professor, but I'd bet a quarter that lisp-1 is
easier to teach, simply because there's less to learn. Sure,
everybody messes up with `(let ((list ....)) ...)' -- once. (After
which their intuitive understanding of both the evaluation rule and
lexical scoping is improved.)


* automatic code transforms

A generalization of macros. The fewer primitive constructs
in your language, the easier it is to write high-level transforms.
No need for `(cond ((eq (car foo) 'funcall) ...) ...)'.


* simpler implementation

Consider a simple meta-circular interpreter. The lisp-1 version is
smaller and simpler. I gather people don't _really_ transform
everything-binding-related to lambda in the radical manner of
RABBIT.SCM in production compilers, but I'm not so sure it's really
a dead technique.


And the big one, I think, though the most abstract:

* exploratory programming

In lisp-2, I have to decide whether a given function should be
treated as the value of a variable or the function slot binding
of a symbol. My decision then becomes spread throughout
the code in the form of the presence or absence of FUNCALL.
Then I change my mind about that decision.

Worse: I have a package that assumes that decision goes one way,
and applications that use that package that way. Then I want to
use the same package in an application that makes the decision the
other way.

And the "general principles" one, though this is less directly a
pragmatic issue:

* why stop at 2?

Maybe, like C, I want a third binding for, say, structure types.
The step from 1 to 2 is almost always wrong. Either stop at 1,
or make it N. But that's just a rule of thumb.

Binding is binding is binding. Why do we need 2?

So, give me lisp-1 anyday --- but if I weren't, seemingly, the only
person in the world that wants it this way: make it a lisp-1 where
"nil" and "false" (however you spell them) are the same value.

-t

Tim Bradshaw

Apr 30, 2003, 9:03:33 PM
* Tom Lord wrote:
> * Um... lexical scoping?

Um. FLET, LABELS?

> My CL (as compared to my Scheme) is certainly rusty.

Yes, it is.


> * why stop at 2?

> Maybe, like C, I want a third binding for, say, structure types.
> The step from 1 to 2 is almost always wrong. Either stop at 1,
> or make it N. But that's just a rule of thumb.

Maybe, like C, CL has done that. The normal definition, I think,
makes CL a Lisp7:

- Functions & macros
- lexical variables
- special variables
- types and classes
- labels (for GO)
- block names
- symbols in quoted expressions (such as tags for THROW).

(defvar name nil)                                       ;1

(defun name (name)                                      ;2, 3
  (catch 'name                                          ;4
    (block name                                         ;5
      (tagbody
        (when (numberp (locally (declare (special name)) name))
          (go name))                                    ;6
        (throw 'name name)
       name                                             ;7
        (return-from name (+ name (locally (declare (special name))
                                           name)))))))

Typical programs probably add from several to many more namespaces.
Of course, typical Scheme programs add lots of namespaces too, but
that doesn't count, because, erm... well because you don't have to
mention this inconvenient fact when teaching the language, I guess.

--tim

Pascal Costanza

Apr 30, 2003, 9:37:26 PM
Tom,

Thanks for replying - I am really interested in these things!

In article <vb0qnba...@corp.supernews.com>,
lo...@emf.emf.net (Tom Lord) wrote:

> Pascal:
>
> > But I don't see the practical advantages of a
> > Lisp-1,
>
> It's hard to point to _absolute_ advantages for either lisp-1 or
> lisp-2 because they aren't very different -- just different in
> emphasis and syntax optimizations.

OK.

> In lisp-2, I could program in a style where I always use funcall
> and then I'm programming in lisp-1.
>
> In lisp-1, I could program in a style where I always use something
> like (get 'symbol 'function) in the CAR of expressions, and then
> I'm programming in lisp-2.
>
> It's a case of "Anything you can do, I can do .... well, pretty much
> the same way."

No, not quite. As Erann said before in this thread, a programming
language is a user interface - so it is not just a matter of whether you
can do something at all, but rather how easy it is, how well it fits
your mental model, how well it is balanced against other features of the
language, and so on.

So, as I said before, a Lisp-1 is probably better when you want/need to
program in a functional style, say, 90% of the time.

> A few non-absolute pragmatic advantages of lisp-1:
>
> * Um... lexical scoping?
>
> My CL (as compared to my Scheme) is certainly rusty. Perhaps
> I am full of s here and I'll humbly accept the fish-slap if so:
>
> Aren't function binding slots per-symbol, and thus not lexically
> scoped? In lisp-1, for example, I can take a function body, and
> wrap it in a `let' that shadows some globally defined function, and
> it all just works. I don't need to go through the body adding
> FUNCALL or taking FUNCALL out should I happen to remove the shadow
> binding.

Nope, you are wrong in this regard - functions are by default lexically
scoped, and the ANSI standard even explicitly states that local function
definitions cannot be declared to be special (dynamically scoped).

The point is not whether you can redefine a function locally; the point
is what extent the redefinition has. With regard to local function
definitions, Common Lisp behaves just like Scheme.
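
A minimal sketch of lexically scoped local functions (my own example,
using only standard CL):

(defun greet () "global")
(defun caller () (greet))

(flet ((greet () "local"))
  (list (greet) (caller)))
=> ("local" "global")   ; the FLET binding shadows GREET only within its body

(greet)
=> "global"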

> * macros
>
> Especially macros that implement binding constructs or have
> side-effects on bindings, and macros that have free variables. In
> lisp-1, to choose a trivial example, I can have a single macro for
> `swap-values' where in lisp-2 I'd need `swap-values' and
> `swap-function-bindings' or worse (see "exploratory programming",
> below).

No, I think you wouldn't do it like this in Common Lisp. CL defines
rotatef that acts like swap when it is passed two arguments. You can do
the following:

(defun f (x) (print x))

(defun g (x) (print (1+ x)))

(rotatef (symbol-function 'f) (symbol-function 'g))

(f 5)
=> 6

So this means that you have a single macro for both cases, but you need
to pass the right arguments.

> * pedagogy
>
> I'm not a college professor, but I'd bet a quarter that lisp-1 is
> easier to teach, simply because there's less to learn. Sure,
> everybody messes up with `(let ((list ....)) ...)' -- once. (After
> which their intuitive understanding of both the evaluation rule and
> lexical scoping is improved.)

I think students should be taught about both options. It's not up to the
professor to teach students what he/she perceives as The Right Thing,
but he/she should give them sufficient information so that they can make
up their own minds.

> * automatic code transforms
>
> A generalization of macros. The fewer primitive constructs
> in your language, the easier it is to write high-level transforms.
> No need for `(cond ((eq (car foo) 'funcall) ...) ...)'.

I don't really understand what you mean here, probably just because I
don't have enough experience in this regard. Perhaps someone else can
comment on this.

> * simpler implementation
>
> Consider a simple meta-circular interpreter. The lisp-1 version is
> smaller and simpler. I gather people don't _really_ transform
> everything-binding-related to lambda in the radical manner of
> RABBIT.SCM in production compilers, but I'm not so sure it's really
> a dead technique.

Well, "smaller" and "simpler" are aesthetical categories IMHO. ;)

> And the big one, I think, though the most abstract:
>
> * exploratory programming
>
> In lisp-2, I have to decide whether a given function should be
> treated as the value of a variable or the function slot binding
> of a symbol. My decision then becomes spread throughout
> the code in the form of the presence or absense of FUNCALL.
> Then I change my mind about that decision.
>
> Worse: I have a package that assumes that decision goes one way,
> and applications that use that package that way. Then I want to
> use the same package in an application that makes the decision the
> other way.

Hmm, I am not quite sure if I understand you correctly. The default in
Common Lisp is of course to always store functions in function cells.
Functions are only stored in variables when you want to parameterize a
function with another function, so the cases are actually very clear cut.
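
A sketch of that typical parameterized case (my own example):

(defun apply-twice (f x)
  (funcall f (funcall f x)))          ; F is an ordinary variable holding a function

(apply-twice #'1+ 10)                 ; => 12
(apply-twice (lambda (x) (* x x)) 3)  ; => 81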

Well, ok, when a function of an outer scope was called directly in one
version of a program, but needs to be passed in a variable in the next
version, this means that you need to refactor your program in this
regard. Is this what you mean? In this case, I still don't fully
understand the second paragraph (about packages).

> And the "general principles" one, though this is less directly a
> pragmatic issue:
>
> * why stop at 2?
>
> Maybe, like C, I want a third binding for, say, structure types.
> The step from 1 to 2 is almost always wrong. Either stop at 1,
> or make it N. But that's just a rule of thumb.
>
> Binding is binding is binding. Why do need 2?

Common Lisp has more than two namespaces, and you can always create more
by storing new bindings in hash tables or a-lists.

Tom Lord

Apr 30, 2003, 9:57:52 PM
>> * Um... lexical scoping?

> Um. FLET, LABELS?


Ok, there's the fish-slap. Blush blush. Thanks.


>> My CL (as compared to my Scheme) is certainly rusty.
> Yes, it is.

And, yup. Sure.

But wait a minute: Ick!

So now, if I want to write a macro that defines a new binding
construct, do I need different versions of the macro depending on
where I want function bindings and where I want variable bindings?

I'm looking at something like my 4/5-line (lisp-1) `let-values' or
`let*-values'. Now I gather that the CL reply here is that, where I'd
want to use such a thing, I know that I'm looking at variable bindings
not function bindings and callers will use FUNCALL. But that's not
really true:

For the namespace convenience of function bindings, I pay the cost of
having to spread throughout my code decisions about whether or not
I'm referring to function bindings or variable bindings. And if I
change my mind about that, or have two uses for my code that would
make that decision differently, I'm screwed.

Function bindings and variable bindings are apparently pretty
isomorphic -- so why have both?

The only good arguments I've seen for lisp-2 are about namespace
convenience such as regarding the function called `list' -- and that's
as much about a poorly chosen function name as anything else; and it's
really not all that big a deal to learn to work around regardless of
the function name.

I've seen lots of comments along the lines of "it usually doesn't
matter, you know when you should use funcall." That's not really a
"pro" argument -- that's an "it's not too hard to live with" argument.


> Maybe, like C, CL has done that. The normal definition, I think,
> makes CL a Lisp7:

> - Functions & macros
> - lexical variables
> - special variables
> - types and classes
> - labels (for GO)
> - block names
> - symbols in quoted expressions (such as tags for THROW).


Ick again.


> Typical programs probably add from several to many more
> namespaces. Of course, typical Scheme programs add lots of
> namespaces too, but that doesn't count, because, erm... well
> because you don't have to mention this inconvenient fact
> when teaching the language, I guess.

Hmm.

First, I'm not exactly a Scheme (in the sense of R^nRS) fan. I am a
lisp-1 fan and I think there are some good ideas in Scheme.

In my not-quite-standard-Scheme lisp-1 ("systas scheme"), functions,
macros, lexical variables, special variables, types and classes, and
block names all share a namespace. I don't have labels, per se.
Exception names do, in fact, have their own little namespace and I
regard that as a flaw in the design (at least I'm consistent).

Yes, lots of programs introduce "namespaces" of a sort -- heck, many
hash-tables effectively count as one. Generally these don't change
the evaluation rules of the language, though.

-t

Adam Warner

Apr 30, 2003, 10:30:24 PM
Hi Kaz Kylheku,

> http://www.c2.com/cgi/wiki?SingleNamespaceLisp
> http://www.dreamsongs.com/Separation.html

Wow, the wiki link is particularly impressive. I've never seen/grokked
funcall used with quoted symbols to indirect upon a function.
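
For reference, a minimal illustration of that idiom (my own sketch):

(defun double (n) (* 2 n))

(funcall #'double 21)   ; => 42, via the function object
(funcall 'double 21)    ; => 42, looked up through the symbol's global
                        ;    function cell at call time, so a later
                        ;    redefinition of DOUBLE would be picked up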

I've just been playing with redefining local functions and I managed to
come up with this code that seems legal (and could be wrapped in a macro
to avoid use of funcall within the visible body of the code):

;;can't redefine local functions?
(labels ((foo (n) (* n n))
         (foo-foo (n) (* (foo n) (foo n))))
  (print (foo 2))
  (print (foo-foo 2)))

;;try a different approach
(let* ((foo (lambda (n) (* n n)))
       (foo-foo (lambda (n) (* (funcall foo n) (funcall foo n)))))
  (flet ((foo (n) (funcall foo n))
         (foo-foo (n) (funcall foo-foo n)))
    (print (foo 2))
    (print (foo-foo 2))
    (setf foo (lambda (n) (* n n n)))
    (print (foo 2))       ;;expecting 8
    (print (foo-foo 2)))) ;;expecting 64

CMUCL 18 January 2003 build 4523 can't compile it:

; Python version 1.0, VM version Intel x86 on 01 MAY 03 02:24:34 pm.
; Compiling: /home/adam/t/c.lisp 01 MAY 03 02:20:12 pm

; Compiling labels ((foo # #) (foo-foo # #)):
; Compiling let* ((foo #) (foo-foo #)):


Error in function common-lisp::assert-error:
The assertion (eq c::env
(c::lambda-environment
(c::lambda-var-home c::thing))) failed.

Restarts:
0: [continue] Retry assertion.
1: [abort ] Return to Top-Level.

Debug (type H for help)

(common-lisp::assert-error (eq c::env (c::lambda-environment #)) nil nil)
Source:
; File: target:code/macros.lisp
(restart-case (error cond) (continue nil :report (lambda # #) nil))
0]

CLISP and SBCL work fine. Can anyone confirm that the bug is still in a
more recent version of CMUCL? If it is I'll report it to the mailing list.

Regards,
Adam

Jochen Schmidt

Apr 30, 2003, 11:52:48 PM
Adam Warner wrote:

> Hi Kaz Kylheku,
>
>> http://www.c2.com/cgi/wiki?SingleNamespaceLisp
>> http://www.dreamsongs.com/Separation.html
>
> Wow, the wiki link is particularly impressive. I've never seen/grokked
> funcall used with quoted symbols to indirect upon a function.
>
> I've just been playing with redefining local functions and I managed to
> come up with this code that seems legal (and could be wrapped in a macro
> to avoid use of funcall within the visible body of the code):
>
> ;;can't redefine local functions?
> (labels ((foo (n) (* n n))
> (foo-foo (n) (* (foo n) (foo n))))
> (print (foo 2))
> (print (foo-foo 2)))
>
> ;;try a different approach
> (let* ((foo (lambda (n) (* n n)))
> (foo-foo (lambda (n) (* (funcall foo n) (funcall foo n)))))
> (flet ((foo (n) (funcall foo n))
> (foo-foo (n) (funcall foo-foo n)))
> (print (foo 2))
> (print (foo-foo 2))
> (setf foo (lambda (n) (* n n n)))
> (print (foo 2)) ;;expecting 8
> (print (foo-foo 2)))) ;;expecting 64

Try yet another approach:

(defmacro with-single-namespace ((&rest fns) &body forms)
  `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                        (funcall ,',fn ,@,'args))) fns))
     ,@forms))

(with-single-namespace (foo foo-foo)
  (let* ((foo (lambda (n) (* n n)))
         (foo-foo (lambda (n) (* (foo n) (foo n)))))
    (print (foo 2))
    (print (foo-foo 2))
    (setf foo (lambda (n) (* n n n)))
    (print (foo 2))
    (print (foo-foo 2))))

ciao,
Jochen

Jeff Caldwell

May 1, 2003, 12:18:53 AM
FWIW,

(defmacro with-single-namespace ((&rest fns) &body forms)
  `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                        (funcall ,`,fn ,@,'args))) fns))
     ,@forms))

Error: A comma appears outside the scope of a backquote (or there are
too many commas).

(defmacro with-single-namespace ((&rest fns) &body forms)
  `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
                                        `(funcall ,',fn ,@,'args))) fns))
     ,@forms))

WITH-SINGLE-NAMESPACE
Jochen Schmidt wrote:
...

> Try yet another approach:
>
> (defmacro with-single-namespace ((&rest fns) &body forms)
> `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
> (funcall ,',fn ,@,'args))) fns))
> ,@forms))

...

Jochen Schmidt

May 1, 2003, 12:20:05 AM
Jochen Schmidt wrote:


> Try yet another approach:
>
> (defmacro with-single-namespace ((&rest fns) &body forms)
> `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
> `(funcall ,',fn ,@,'args))) fns))
> ,@forms))

Sorry, my editing to fit a limited line length omitted the inner
backquote before the funcall...

Jochen Schmidt

May 1, 2003, 12:24:51 AM
Jeff Caldwell wrote:

> FWIW,
>
> (defmacro with-single-namespace ((&rest fns) &body forms)
> `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
> (funcall ,`,fn ,@,'args))) fns))
> ,@forms))
>
> Error: A comma appears outside the scope of a backquote (or there are
> too many commas).
>
> (defmacro with-single-namespace ((&rest fns) &body forms)
> `(macrolet (,@(mapcar (lambda (fn) `(,fn (&rest args)
> `(funcall ,',fn ,@,'args))) fns))
> ,@forms))

Yes, I realized this after sending the message. I accidentally omitted the
backquote when I split the line to fit into 76 characters.

ciao,
Jochen

Tj

May 1, 2003, 12:26:13 AM
Gareth McCaughan <Gareth.M...@pobox.com> wrote in message news:<slrnbb0g8f.on.G...@g.local>...

> "Tj" wrote:
> > Actually, most languages that are like C do this. Go ahead, define
> > blah and blah(). So at least the language is as good as C. ;P
>
> Um, no.

Thank you for correcting an incredibly embarrassing error. I've been
programming in Java too much and should be perfectly suited for
management. I'll refrain from posting on programming for a month.

Tj

Adam Warner

May 1, 2003, 12:37:56 AM
Hi Jochen Schmidt,

That's very impressive. Thanks Jochen.

Regards,
Adam

Erann Gat

May 1, 2003, 1:18:16 AM
In article <vb0qnba...@corp.supernews.com>, lo...@emf.emf.net (Tom
Lord) wrote:

> * why stop at 2?

As others have already pointed out, CL doesn't actually stop at 2.
However, I think there is an argument to be made for stopping at 2: there is
an asymmetry between handling the CAR and the CDR during program
evaluation. It is arguable that this asymmetry should extend to bindings.

E.

Tim Bradshaw

May 1, 2003, 4:17:12 AM
* Tom Lord wrote:

> For the namespace convenience of function bindings, I pay the cost of
> having to spread throughout my code decisions about whether or not
> I'm referring to function bindings or variable bindings. And if I
> change my mind about that, or have two uses for my code that would
> make that decision differently, I'm screwed.

I really can't see a case where this matters. If you decide to
change your code so something wants to be an operator binding (I made
this term up: I mean `function or macro binding') instead of simply a
normal binding then you already have to change really a whole lot of
stuff: everything that said (op x ...) now needs to say (x ...).
About the least of your problems is going to be that the binding
construct is different. Of course, maybe this change is all in
machine generated code, but then who cares anyway.

And note, even when you have a single namespace you *still* need two
binding constructs, because you need to know whether the scope of the
name being bound includes the binding itself (let / letrec).
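
In CL terms that is the FLET / LABELS distinction; a small sketch of my own:

(labels ((fact (n) (if (< n 2) 1 (* n (fact (1- n))))))
  (fact 5))
=> 120        ; LABELS: the binding is visible in its own body (letrec-like)

(defun fact (n) (declare (ignore n)) :global)
(flet ((fact (n) (if (< n 2) 1 (fact (1- n)))))
  (fact 5))
=> :GLOBAL    ; FLET: FACT inside the body refers to the outer, global FACT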

> Yes, lots of programs introduce "namespaces" of a sort -- heck, many
> hash-tables effectively count as one. Generally these don't change
> the evaluation rules of the language, though.

Now this is just stupid. Of course they do, unless you never write
nontrivial macros and so live entirely within the semantics of the
base language. Common Lisp people don't do that - they define
languages on top of CL with their own evaluation rules all the time.

I think I'm probably wasting my time here though, so I'll stop.

--tim


Dorai Sitaram

May 1, 2003, 9:39:29 AM
In article <b8ocn1$rrg$1...@f1node01.rhrz.uni-bonn.de>,

Pascal Costanza <cost...@web.de> wrote:
>
>There are several reasons why I like a separate function namespace more
>than a unified namespace. The most important ones are:
>
>+ Functions and values are different, i.e. (eql 3 (lambda () 3)) returns
>nil for very good reasons. So in effect it's more intuitive to treat
>them differently IMHO. (but see below)

I saw below, but the example still eludes me. The
non-equality of 3 and (lambda () 3) is assured whether
the namespace is split or not, so what exactly
are you highlighting with this example...?

Patrick O'Donnell

May 1, 2003, 10:40:20 AM
she...@mail.com (Simon H.) writes:
> Of all the functional languages I've used (Scheme, OCaml, etc) I
> notice that Lisp is the only one with a seperate namespace for
> functions.

And, yet, it functions quite well.



> It just recently struck me that it seems to be a rather
> odd and counter-intuitive way of doing things,

I do hope the strike was not painful.

> especially since I can't think of a single other language that does
> it,

Perhaps you have another think coming. I can immediately think of at
least one.



> however I assume there's some reason for it...

Reason further on the topic, and it may come to you.



> Or at least an excuse.

You're excused.


My objection is to "counter-intuitive". "Unaccustomed" is perhaps
more accurate. If you find linguistic distinction between action and
naming counter-intuitive, it may be that your intuitions have been
reshaped to an unnatural degree by considerations of functional
languages. Consider the natural world, and you'll find many examples
of multiple "namespaces". The human mind is much more flexible than
your complaint gives it credit for.

- Pat

Joe Marshall

May 1, 2003, 12:48:04 PM
Pascal Costanza <cost...@web.de> writes:

> I have the following definitions in my startup files:
>
> (defun open-curl-macro-char (stream char)
> (declare (ignore char))
> (let ((forms (read-delimited-list #\} stream t)))
> `(funcall ,@forms)))
>
> (set-macro-character #\{ #'open-curl-macro-char)
> (set-macro-character #\} (get-macro-character #\)))
>
>
> Now I can write {f args} instead of (funcall f args).

Wow. I bet it runs faster, too.

Pascal Costanza

May 1, 2003, 1:59:17 PM
In article <r87ieh...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu>
wrote:

> > Now I can write {f args} instead of (funcall f args).
>
> Wow. I bet it runs faster, too.

You have forgotten the smiley. ;)

Pascal Costanza

May 1, 2003, 2:11:27 PM
In article <b8r82h$7au$1...@news.gte.com>,
ds...@goldshoe.gte.com (Dorai Sitaram) wrote:

> I saw below, but the example still eludes me. The
> non-equality of 3 and (lambda () 3) is assured whether
> the namespace is split or not, so what exactly
> are you highlighting with this example...?

Both expressions always evaluate to 3, so there is no obvious reason why
they shouldn't be regarded equal. However in general, you cannot
determine such properties of functions - you cannot give a reasonable
semantics for (function-eql (lambda (...) ...) (lambda (...) ...)). So
there is a fundamental difference between values and functions.

You always have to keep this fundamental difference in mind, whether the
language tries to hide it or not.

Dorai Sitaram

May 1, 2003, 2:51:14 PM
In article <costanza-9EE5AC...@news.netcologne.de>,

Pascal Costanza <cost...@web.de> wrote:
>In article <b8r82h$7au$1...@news.gte.com>,
> ds...@goldshoe.gte.com (Dorai Sitaram) wrote:
>
>> I saw below, but the example still eludes me. The
>> non-equality of 3 and (lambda () 3) is assured whether
>> the namespace is split or not, so what exactly
>> are you highlighting with this example...?
>
>Both expressions always evaluate to 3, so there is no obvious reason why
>they shouldn't be regarded equal.

Ah, you are making a little joke without the smiley
prop, yes? The .de in your address kind of
blinded me to that possibility...

Thomas A. Russ

May 1, 2003, 12:25:38 PM

Well, there are a number of people (including me) who think that
multiple namespaces are a real boon. This is especially the case with
large systems.

That is why Common Lisp goes to all that trouble to define packages: to
increase the number of namespaces so as to make name collisions less
likely to occur. It is the same with the other namespaces. I think
this is one of the key design issues on which we disagree, since uniform
namespaces make it more difficult to manage large systems.

Now, you may, of course, think differently about some of the
philosophical issues, but I tend to think of functions and values as
being separate things. In other words, when I am thinking about what
functions are available, I don't automatically also think of what values
are available. I tend to concentrate on one or the other. This may, of
course, just be training from working a long time in a Lisp-2, but
(ignoring the function-as-data paradigm) there is a fundamental
difference between a name that denotes a functional value and one that
denotes a non-functional value.

That difference is that if you try to apply a non-functional value to
some arguments you will get an error. Even in Scheme, doing something
like

(let ((x 3)) (x 10)) ==> An error.

In order to use a functional value as a function, you really do need to
know that it is a function. It is syntactically distinguished by both
lisp-1 and lisp-2 languages since functions are used differently than
values.

Now even in a lisp-2 language you can bind function values to the value
of lexical variables using let. You can then treat it just the same as
any other value. For example the following is legal:

(let ((f #'(lambda (a b) (* a b))))
  (print f)
  (cons 'b f))

It also occurs to me that having the lisp-1 makes writing macros a lot
trickier. The problem is that if you have a macro that expands into
some code, you have to worry not only about variable capture, but you
also have to worry about a redefinition or shadowing of the function
names as well. I'm not familiar enough with Scheme's hygienic macros to
know if this is really a problem or not.

What if one wrote a macro FIRST that took the CAR of a list. In CL this
might look like

(defmacro first (x) `(car ,x))

Well, then what if one were to write some snippet of code like the
following:

(let ((car 'ford)
      (desired-vehicles '(bmw mercedes)))
  (eq? car (first desired-vehicles)))

Does the lexical binding of CAR to 'FORD shadow the expanded code from
the macro FIRST and cause it to fail? If so, I claim that this is a bad
property since one doesn't want to have to know what functions a macro
uses in its expansion in order to be sure of avoiding problems.

Now you can run into the same situation in CL with respect to capture if
you use FLETs for functions.

One answer is, of course, not to use built-in function names as variable
names, ever. Of course, this doesn't necessarily help you if the macro
expands into a user-defined function that you didn't know about.

In fact, I would claim that in a Lisp-1 system, in order to safely use
it, you really need to know about the entire namespace of defined
functions, so that you can choose your lexical variables names to avoid
conflicts. Or else you give up on macros.

Separate namespaces really do provide some real advantages, so I think
that having "namespace convenience" is a real bonus. If I didn't like
all of the conveniences of Common Lisp, I would be programming in C.

--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu

Joe Marshall

May 1, 2003, 3:02:46 PM
Pascal Costanza <cost...@web.de> writes:

> In article <r87ieh...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu>
> wrote:
>
> > > Now I can write {f args} instead of (funcall f args).
> >
> > Wow. I bet it runs faster, too.
>
> You have forgotten the smiley. ;)

No I didn't. I was being completely serious.

Erann Gat

May 1, 2003, 3:58:52 PM
In article <costanza-9EE5AC...@news.netcologne.de>, Pascal
Costanza <cost...@web.de> wrote:

> In article <b8r82h$7au$1...@news.gte.com>,
> ds...@goldshoe.gte.com (Dorai Sitaram) wrote:
>
> > I saw below, but the example still eludes me. The
> > non-equality of 3 and (lambda () 3) is assured whether
> > the namespace is split or not, so what exactly
> > are you highlighting with this example...?
>
> Both expressions always evaluate to 3, so there is no obvious reason why
> they shouldn't be regarded equal.

Er, no. (lambda () 3) does not evaluate to 3. ((lambda () 3)) does, but
that's not the same thing.

> However in general, you cannot
> determine such properties of functions - you cannot give a reasonable
> semantics for (function-eql (lambda (...) ...) (lambda (...) ...)). So
> there is a fundamental difference between values and functions.

It's not so clear that you can give a reasonable semantics of value-eql
for all values. Consider:

#1=(1 2 #1# #1#) and #1=(1 2 #2=(1 2 #2#) #2#)

Are these value-eql or not?

> You always have to keep this fundamental difference in mind, whether the
> language tries to hide it or not.

The difference between functions and values is not fundamental. It does,
however, seem to be very strongly ingrained in the human psyche.

E.

Jeff Caldwell

May 1, 2003, 7:24:48 PM
The expressions Dorai posted do not both evaluate to 3.

CL-USER 1 > 3
3

CL-USER 2 > (lambda () 3)
#'(LAMBDA NIL 3)

CL-USER 3 > ((lambda () 3))
3

Jeff


Pascal Costanza wrote:
> ds...@goldshoe.gte.com (Dorai Sitaram) wrote:
>
>>I saw below, but the example still eludes me. The
>>non-equality of 3 and (lambda () 3) is assured whether
>>the namespace is split or not, so what exactly
>>are you highlighting with this example...?

...


> Both expressions always evaluate to 3, so there is no obvious reason why
> they shouldn't be regarded equal.
>

> Pascal
>

Pascal Costanza

May 1, 2003, 8:26:20 PM
In article <gat-010503...@k-137-79-50-101.jpl.nasa.gov>,
g...@jpl.nasa.gov (Erann Gat) wrote:

> It's not so clear that you can give a reasonable semantics of value-eql
> for all values. Consider:
>
> #1=(1 2 #1 #1) and #1=(1 2 #2=(1 2 #2) #2)

You're right. I stand corrected.

Pascal Costanza

May 1, 2003, 8:27:32 PM
In article <b8rqb2$7h5$1...@news.gte.com>,
ds...@goldshoe.gte.com (Dorai Sitaram) wrote:

> >Both expressions always evaluate to 3, so there is no obvious reason why
> >they shouldn't be regarded equal.
>
> Ah, you are making a little joke without the smiley
> prop, yes?

No, I had the wrong mental model and made a mistake.


Pascal

Pascal Costanza

May 1, 2003, 8:28:45 PM
In article <fznyeb...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu>
wrote:

> > > > Now I can write {f args} instead of (funcall f args).
> > >
> > > Wow. I bet it runs faster, too.
> >
> > You have forgotten the smiley. ;)
>
> No I didn't. I was being completely serious.

Hmm, so why do you think it runs faster? {f args} just gets translated
to (funcall f args) at readtime...

?!?

Pascal

Dorai Sitaram

May 2, 2003, 9:08:41 AM
In article <costanza-C564A1...@news.netcologne.de>,

Well, um, OK, I guess. I do wish you'd take this
as a lesson, and curb your overeagerness in dissing
Lisp-1 and Scheme with bogus reasons. It is a bit too
mealy-mouthed to say "I had the wrong mental model and
made a mistake" when your "reasons" are questioned. :-)

Pascal Costanza

May 2, 2003, 9:43:39 AM

Well, fortunately this wasn't the only reason I have given, and I don't
think the others were equally bogus. ;-)

To be serious: Most of the time when people argue for a Lisp-2 they
point to practical advantages, whereas the practical advantages of a
Lisp-1 weren't clear to me before this thread - I haven't read any good
expositions in this regard yet. In this thread, Tom Lord has given the
first practical advantage of a Lisp-1 that I can relate to, namely that
it is easier to refactor your code in a Lisp-1.

In Common Lisp, if you have code like this:

(defun f (x) (...))
(defun g (x) (... (f x) ...))

(g 5)

...and you want to switch from using a global function to a parameter
you have to replace all the calls of the global functions as follows:

(defun g (x f) (... (funcall f x) ...))

(g 5 #'f)

Whereas in Scheme, when you start from:

(define (f x) (...))
(define (g x) (... (f x) ...))

(g 5)

You can switch to the parameterized version like this:

(define (g x f) (... (f x) ...))

(g 5 f)

The important point here is that the body of g doesn't change at all.
That's a real advantage I didn't know about before.


Pascal

--
Pascal Costanza University of Bonn
mailto:cost...@web.de Institute of Computer Science III
http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)

Erann Gat

May 2, 2003, 10:53:28 AM
In article <b8tsme$aoc$1...@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<cost...@web.de> wrote:

> Whereas in Scheme, when you start from:
>
> (define (f x) (...))
> (define (g x) (... (f x) ...))
>
> (g 5)
>
> You can switch to the parameterized version like this:
>
> (define (g x f) (... (f x) ...))
>
> (g 5 f)
>
> The important point here is that the body of g doesn't change at all.
> That's a real advantage I didn't know about before.

If you think about it you will realize that this "advantage" is
*precisely* the same as the classic "disadvantage" of a Lisp-1. Sometimes
you want a lexical named F to shadow the global function F, sometimes you
don't (the classic examples are when F is spelled "LIST" or "CAR").

The Right Answer IMO is to have first-class dynamic environments that have
separate function and value maps. At the user's option the function and
value maps can be set to the same map object thus providing the semantics
of a Lisp-1, or to two separate objects, thus providing the semantics of a
Lisp-2. That way you get the best of both worlds.
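
Purely to illustrate the idea, a toy sketch of my own (ENV and its
accessors are invented names, not part of T or any existing Lisp):

(defstruct env
  (values    (make-hash-table))
  (functions (make-hash-table)))

(defun make-lisp1-env ()
  "Value and function maps are the same object: Lisp-1 semantics."
  (let ((table (make-hash-table)))
    (make-env :values table :functions table)))

(defun make-lisp2-env ()
  "Two distinct maps: Lisp-2 semantics."
  (make-env))

(defun env-value    (env name) (gethash name (env-values env)))
(defun env-function (env name) (gethash name (env-functions env)))
(defun (setf env-value)    (new env name)
  (setf (gethash name (env-values env)) new))
(defun (setf env-function) (new env name)
  (setf (gethash name (env-functions env)) new))

;; In a Lisp-1 style environment a definition shows up in both maps:
(let ((env (make-lisp1-env)))
  (setf (env-function env 'f) (lambda (x) (* x x)))
  (funcall (env-value env 'f) 3))   ; => 9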

E.

Pascal Costanza

May 2, 2003, 11:08:00 AM
Erann Gat wrote:

> The Right Answer IMO is to have first-class dynamic environments that have
> separate function and value maps. At the user's option the function and
> value maps can be set to the same map object thus providing the semantics
> of a Lisp-1, or to two separate objects, thus providing the semantics of a
> Lisp-2. That way you get the best of both worlds.

Do you know Lisps that offer such constructs?

Kent M Pitman

May 2, 2003, 12:03:16 PM
Pascal Costanza <cost...@web.de> writes:

> In Common Lisp, if you have code like this:
>
> (defun f (x) (...))
> (defun g (x) (... (f x) ...))
>
> (g 5)
>
> ...and you want to switch from using a global function to a parameter
> you have to replace all the calls of the global functions as follows:
>
> (defun g (x f) (... (funcall f x) ...))
>
> (g 5 #'f)

In Common Lisp, if you have code like this:

(defvar *table* (make-hash-table))

(defun register (name element list)
  (push (cons element list) (gethash name *table*)))

... and you want to use some new function in the body, you don't have to
go checking the bound variable list to see if it's a good name. Also, when
changing the name of a lexical variable, you don't have to modify existing
references to functions that might be in use. e.g., deciding to use list
instead of cons in what's stored in the table above:

(defvar *table* (make-hash-table))

(defun register (name element list)
  (push (list element list) (gethash name *table*)))

These examples always look stupid in the small but they are common enough
in the large. Scheme programmers know these are an issue and that's why they
religiously misspell names of system functions when using them as arguments.
A Scheme programmer would say the above would never happen either because
he would write

(define (register name element lst) ...)

to start with, so that LIST was not accidentally bound in the first place
or else the Scheme programmer would make the false claim that no one ever
wants to use the name of any system function as a variable.

Common Lisp is full of functions that are utilities for manipulating data.
A great many of these have a constructor by the same name as the type.
Many of the manipulators of the type take just one instance of the type,
and it is natural to use the type name as the argument name.

Historical Aside: A lot of proto-Common Lisp programmers, at the birth
of CLOS, were dubious about how often they'd want multiple dispatch.
Many of those with much experience in what was then still called
object-oriented programming (Smalltalk, Flavors, etc.) kept insisting
that multiple dispatch just didn't come up in their code and that
probably CLOS didn't need it. But the CLOS committee argued that it
didn't come up because people knew it was going to be a pain and
mentally steered things away from something they might prefer to use
if it were there. (I personally was a bit in the middle; I didn't
think the CLOS committee's prediction was likely to be true, but I
didn't see the harm in going their way anyway, just to see.) My
personal subjective assessment is that the CLOS committee turned out
to be right. The incidence rate of multiple dispatch is dramatically
higher than would have been predicted by people from those other
backgrounds where it wasn't available (including myself).

And I think Scheme people should not be so quick to say that no one wants
the separation between functions and variables.

Human languages overload the meanings of words according to type.
We've had this conversation before. Do a google groups search for
"Como como como." (Spanish sentence "I eat how I eat" which
illustrates the differing uses of "como" depending on parts of the
sentence.) The English sentence "Buffalo buffalo buffalo." likewise
has different meanings based on context. I seem to recall that there
was an extended conversation of other languages in which this happens
and you can read about it in the archives. But the point is that that
human wetware does not, left to its own devices, naturally avoid
varying the meaning of words based on syntactic placement in
sentences. Lots of (perhaps almost all) natural languages seem to
gravitate toward this. My informal assumption about why this is so is
"because they can" and "because it would be foolish to have a
capability and not use it", given how strong a need we have for
concise expression when getting complex thoughts out of our brains in
finite form.

Consequently, the spartan "I will deny myself a capability my brain is
obviously perfectly well capable of exploiting fully" attitude of the
Scheme community does not sit well with me. Maybe indeed it makes the
language more complicated to implement; I just don't care. I care
only that (a) it is _possible_ to implement the language and (b) the
language is easy to use. In other words, ease of use and ease of
implementation are frequently at odds, and I prefer ease of use. I want
short user programs, not short language definitions. I think designers
should carefully check that it's feasible to implement what they design,
so that implementors are not held to impossible standards (pardon the
pun), but beyond that their allegiance should be to users.

Users who don't want to use same-named variables and functions are welcome
not to use all the variable names available to them.

As to the issue of funcall and function (#'), those were explicit design
decisions as well. Think of funcall and function as (very approximately)
marshall/unmarshall operations for grabbing something from universe and
forcing it back. CL was designed to _permit_ this kind of thing, but it
was an explicit decision of the design committee not to have it be happening
by accident. The people in the room at the time very clearly said "we
don't want to be _that_ FP". The use of #' makes it easy to find the places
where something stored in the function domain is creeping out of its normal
home and moving to someplace else. The use of the FUNCALL makes it easy to
find where functional parameters are being called.

> Whereas in Scheme, when you start from:
>
> (define (f x) (...))
> (define (g x) (... (f x) ...))
>
> (g 5)
>
> You can switch to the parameterized version like this:
>
> (define (g x f) (... (f x) ...))
>
> (g 5 f)
>
> The important point here is that the body of g doesn't change at
> all. That's a real advantage I didn't know about before.

And it's only half the cases that a real language designer would have to
consider in making a good choice. That is, there are techniques for
exhaustively exploring a syntax space methodically so you don't get blindsided
by things you should have thought of. And doing so yields the fact
that the world is more complex than you cite. Consider:

* Renaming a bound variable might intentionally capture a contained name.
* Renaming a bound variable might unintentionally capture a contained name.
* Renaming a contained name might cause it to be intentionally captured.
* Renaming a contained name might cause it to be unintentionally captured.

Your example looks at the case where you set up the circumstances for the
intentional case and you neglect where it happens by accident. But in the
full world, the situation is balanced. The issue is not that the things
programmers need when renaming are hard in one language and easy in another,
but rather that each language makes some renamings easy and some hard.
The case you need to make is therefore not one of "I might want to do x"
but "doing x is more statistically predicted".

And the things that we in the CL community observed is that Scheme
people routinely get name collisions because they so densely pack
their single namespace, such that they are often voluntarily
misspelling their function names. We decided we don't like this.

My guess is that people reject Lisp2 more out of kneejerk hatred of anything
different than because they've given it a chance and found it to be
unworkable. I also see bogus theories of aesthetic appeal that usually
begin with the implied assertion "The only possible theory of aesthetics
is the following" and then go on to show why we've violated that, without
ever asking "Might there be another possible theory of aesthetics?" But
then, I think the same mindset that leads one to believe that there is only
one namespace needed is the same mindset that leads one to believe that
there is only one theory of aesthetics needed. So there ya go.

All in all, I mostly think the communities are best off separated from
one another because of deep divisions in how people think which are revealed
by these superficial issues. It's better if you really disagree with me
on this that you go find people you agree with, than that you try to
convince me otherwise.

"It's easier to learn to like something
than to learn to not like something."
-- unknown source

I think the underlying point of the saying, though, is that learning to like
something is about learning to see that it has value, and understanding that
all things in the world are about trade-offs between values and problems.
Understanding something, and even liking it, is not a way of putting aside
what you thought before--it's monotonic. It's about understanding that there
are multiple ways to think about something. Can you think in Lisp1? Certainly.
But can you think in Lisp2? Certainly, too. At that point, it's a simple
choice. Lisp2 people choose it not because they are crazy, but because they
choose it. Lisp1 people seem to think Lisp2 is not a choice. I don't care
if they ever choose it. I do care that they come to understand that it is
a valid choice.

"There are two kinds of people in the world.
People who think there are two kinds of people,
and people who don't."
-- another unknown source

Pascal Costanza

May 2, 2003, 1:02:52 PM
Kent,

Thanks a lot for your excellent posting. [1]

As an addendum to the current thread, someone else has sent me a
solution how to avoid the introduction of funcall in the example I have
given:

> Pascal Costanza <cost...@web.de> writes:
>
>>In Common Lisp, if you have code like this:
>>
>>(defun f (x) (...))
>>(defun g (x) (... (f x) ...))
>>
>>(g 5)
>>
>>...and you want to switch from using a global function to a parameter
>>you have to replace all the calls of the global functions as follows:
>>
>>(defun g (x f) (... (funcall f x) ...))
>>
>>(g 5 #'f)

...what you can also do in this case is the following:

(defun g (x fn)
  (flet ((f (x) (funcall fn x)))
    (... (f x) ...)))

(g 5 #'f)

So even this one is not a real issue in a Lisp-2.
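
Concretely, a throwaway instance of the same trick (made-up definitions):

(defun f (x) (* x x))             ; the original global function

(defun g (x fn)
  (flet ((f (x) (funcall fn x)))  ; restore normal call syntax locally
    (+ (f x) 1)))                 ; call sites stay (f x), no FUNCALL

(g 5 #'f)                         ; => 26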

You could even write a macro for that (not tested!):

(defmacro with-function (f &body body)
  (with-gensyms (args)
    `(flet ((,f (&rest ,args) (apply ,f ,args)))
       ,@body)))

...and then...

(defun g (x f)
  (with-function f
    (... (f x) ...)))
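
By the way, WITH-GENSYMS is not part of ANSI CL; a minimal version,
assuming the usual behavior, would be something like:

(defmacro with-gensyms ((&rest names) &body body)
  ;; Bind each NAME to a fresh uninterned symbol for use in expansions.
  `(let ,(mapcar (lambda (n) `(,n (gensym ,(string n)))) names)
     ,@body))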


Thank god I can stick to Common Lisp! ;) Or to put it differently, I
still wonder what the merits of a Lisp-1 are...


Pascal


[1] I also especially like the historical notes you give every now and
then. (Perhaps it'd be a good a idea to collect them and make them
available somewhere...)

Erann Gat

unread,
May 2, 2003, 1:17:01 PM5/2/03
to
In article <b8u1kk$kqe$1...@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<cost...@web.de> wrote:

> Erann Gat wrote:
>
> > The Right Answer IMO is to have first-class dynamic environments that have
> > separate function and value maps. At the user's option the function and
> > value maps can be set to the same map object thus providing the semantics
> > of a Lisp-1, or to two separate objects, thus providing the semantics of a
> > Lisp-2. That way you get the best of both worlds.
>
> Do you know Lisps that offer such constructs?

Not yet :-)

T had first-class environments, but it was a Lisp-1. But I think T's
design makes a good starting point.

E.

Erann Gat

unread,
May 2, 2003, 1:32:36 PM5/2/03
to
In article <b8u8bu$hvk$1...@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
<cost...@web.de> wrote:

> You could even write a macro for that (not tested!):
>
> (defmacro with-function (f &body body)
> (with-gensyms (args)
> `(flet ((,f (&rest ,args) (apply ,f ,args)))
> ,@body)))
>
> ...and then...
>
> (defun g (x f)
> (with-function f
> (... (f x) ...)))

Say, that's a cute trick. We could even take it one step further:

(defmacro define ((fn &rest args) &body body)
  `(defun ,fn ,args (with-functions ,args ,@body)))
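
WITH-FUNCTIONS is assumed here to wrap each name with WITH-FUNCTION from
above; a rough, untested sketch (required parameters only):

(defmacro with-functions ((&rest names) &body body)
  ;; Wrap BODY in one WITH-FUNCTION per NAME.
  (if names
      `(with-function ,(first names)
         (with-functions ,(rest names) ,@body))
      `(progn ,@body)))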

E.

Pascal Costanza

unread,
May 2, 2003, 2:23:19 PM5/2/03
to

...and then:

(defmacro define (car &body body)
  (if (listp car)
      (destructuring-bind (fn &rest args) car
        `(defun ,fn ,args (with-functions ,args ,@body)))
      (with-gensyms (sym)
        `(progn (defvar ,sym ,@body)
                (define-symbol-macro ,car ,sym)))))

;-)

Would it make sense to analyze body in the second case in order to turn
this into a function in the case of (define f (lambda (...) ...))?


Pascal

Thomas F. Burdick

unread,
May 2, 2003, 2:36:54 PM5/2/03
to
Pascal Costanza <cost...@web.de> writes:

> Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> still wonder what the merits of a Lisp-1 are...

Convenience. Of course, "convenience" only makes sense when you
understand what domain you're talking about -- so, convenience in
functional programming. This particular convenience feature has
*huge* drawbacks for other programming styles, so CL's Lisp-7-ness is
generally a feature. But if you *really* *really* want to program in
an all-functional style, what you lose from a Lisp-1 is probably less
than what you gain. Lisp being Lisp, you can make yourself a little
Lisp-1 world inside of a Lisp-2, and work inside of there, but in that
case you're really in your own world (I stand by my wording here :).
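
For instance, a rough, untested sketch of such a little Lisp-1 world,
using a made-up LISP1-LET that binds each name in both namespaces at once:

(defmacro lisp1-let (bindings &body body)
  ;; Each NAME becomes both a variable and a local function that
  ;; calls whatever the variable currently holds.
  (let ((names (mapcar (lambda (b) (if (consp b) (first b) b)) bindings)))
    `(let ,bindings
       (flet ,(mapcar (lambda (name)
                        `(,name (&rest args) (apply ,name args)))
                      names)
         ,@body))))

(lisp1-let ((f #'1+)
            (g (lambda (x) (* x x))))
  (f (g 3)))   ; => 10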

BTW, if you still don't get how a Lisp-1 is significantly convenient
to use, try learning some Scheme. Just be sure to put your brain in
academic-exercise mode first.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Kent M Pitman

unread,
May 2, 2003, 2:42:38 PM5/2/03
to
Pascal Costanza <cost...@web.de> writes:

> (defmacro with-function (f &body body)
> (with-gensyms (args)
> `(flet ((,f (&rest ,args) (apply ,f ,args)))

(declare (ignorable #',f))

> ,@body)))

You want this so that others can insert this macro into situations where
in fact the f argument may or may not be used functionally.

Adrian Kubala

unread,
May 2, 2003, 2:21:48 PM5/2/03
to
On 1 May 2003, Thomas A. Russ wrote:
> there is a fundamental difference between a name that denotes a
> functional value and one that denotes a non-functional value.

You could say that about any type. To someone used to a functional style,
having to specify that you want the /functional/ binding makes about as
much sense as having to do:

(let ((x 3))
  (print (int-value x)))

Erann Gat

unread,
May 2, 2003, 3:24:45 PM5/2/03
to
In article
<Pine.LNX.4.44.030502...@gwen.sixfingeredman.net>, Adrian
Kubala <adr...@sixfingeredman.net> wrote:

That's not quite as outrageous as it might seem. For a long time BASIC
had separate namespaces for integers, floats, and strings (denoted X%, X,
and X$). And Hungarian notation, which I am given to understand is still
very much in fashion in certain circles, is basically a user-level hack to
make a separate name space for every type.

E.

Tim Bradshaw

unread,
May 2, 2003, 5:36:22 PM5/2/03
to
* Erann Gat wrote:

> That's not quite as outrageous as it might seem. For a long time
> BASIC had separate namespaces for integers, floats, and strings
> (denoted X%, X, and X$). And Hungarian notation, which I am given
> to understand is still very much in fashion in certain circles, is
> basically a user-level hack to make a separate name space for every
> type.

Perl.

Kent M Pitman

unread,
May 2, 2003, 8:10:27 PM5/2/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

Of course, you can say it feels like anything you like. But it's not really
the same. Because there is no reason that PRINT couldn't print the
integer x directly. There IS a reason that

(f x)

and

(funcall f x)

are different. That's exactly to allow programs in which the user's set
of operations required (e.g., CAR, CDR, CONS) do not inhibit his ability
to use various variables (e.g., if he's dealing with automobiles, to use
the variable CAR). Consider that if DOLIST were a macro that expands into
references to the CAR function, the following program would need to work:

(defun wash (car-list)
  (dolist (car car-list)
    (wash1 car)))

This works in Scheme because they added infinite hair in their hygienic
macro system in order to remember whose car FIRST expands into; but it works
in CL because our rules and customs for macro expansion and namespace
separation do not lead to any name conflict using a _far simpler_ macro
system. Not just simpler to implement (which I don't care about at all),
but simpler to _use_, which I care about a lot.
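
A rough illustration of that (untested; WASH1 stubbed just so it runs):

(defun wash1 (car)                        ; stub for illustration
  (format t "washing ~a~%" car))

(defmacro my-dolist ((var list-form) &body body)
  ;; A naive DOLIST-like macro whose expansion calls the CAR and CDR
  ;; functions directly.
  (let ((tail (gensym "TAIL")))
    `(do ((,tail ,list-form (cdr ,tail)))
         ((null ,tail))
       (let ((,var (car ,tail)))
         ,@body))))

(defun wash (car-list)
  (my-dolist (car car-list)               ; CAR the variable...
    (wash1 car)))                         ; ...never collides with CAR the function

(wash '(honda toyota))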

_ XL1201 _ Sebek _ Budo _ Kafka @hotmail.com Michael Israel

unread,
May 2, 2003, 9:44:29 PM5/2/03
to

"Joe Marshall" <j...@ccs.neu.edu> wrote in message
news:fznyeb...@ccs.neu.edu...

You can even write:

{format t "~%C Sucks."} ;; ;)===>:)
{format t "~%&so doesWindows."} ;; 2tru2B_funny.


Erann Gat

unread,
May 3, 2003, 2:48:54 AM5/3/03
to

Oh yeah. That too.

:-)

E.

Christopher C. Stacy

unread,
May 3, 2003, 10:22:45 PM5/3/03
to
>>>>> On 02 May 2003 20:10:27 -0400, Kent M Pitman ("Kent") writes:

Kent> Adrian Kubala <adr...@sixfingeredman.net> writes:
>> On 1 May 2003, Thomas A. Russ wrote:
>> > there is a fundamental difference between a name that denotes a
>> > functional value and one that denotes a non-functional value.
>>
>> You could say that about any type. To someone used to a functional style,
>> having to specify that you want the /functional/ binding makes about as
>> much sense as having to do:
>>
>> (let ((x 3))
>> (print (int-value x)))

Kent> Of course, you can say it feels like anything you like. But it's not really
Kent> the same. Because there is no reason that PRINT couldn't print the
Kent> integer x directly. There IS a reason that

Kent> (f x)

Kent> and

Kent> (funcall f x)

Kent> are different. That's exactly to allow programs in which the user's set
Kent> of operations required (e.g., CAR, CDR, CONS) do not inhibit his ability
Kent> to use various variables (e.g., if he's dealing with automobiles, to use
Kent> the variable CAR). Consider that if DOLIST were a macro that expands into

And for telephony applications I've developed, Call Detail Records.

Joe Marshall

unread,
May 5, 2003, 3:07:20 PM5/5/03
to
Pascal Costanza <cost...@web.de> writes:

> Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> still wonder what the merits of a Lisp-1 are...

You might want to take a look at `Structure and Interpretation of
Classical Mechanics' by Sussman and Wisdom. They use Scheme to
explain Lagrangian and Hamiltonian mechanics. In many of the
examples, the objects being dealt with are higher order functions.
For instance, an object's position may be described as a function of
time to coordinates. A coordinate transformation is a function from
one coordinate system to another. Applying the coordinate transform
to the function allows you to view an object's position in a different
coordinate frame. You end up with some very abstract functions and
some deep nesting. This leads to an abundance of FUNCALLs in the
Lisp-2 version.

;; Scheme version
(define ((F->C F) local)
  (->local (time local)
           (F local)
           (+ (((partial 0) F) local)
              (* (((partial 1) F) local)
                 (velocity local)))))

;; CL version
(defun F->C (F)
  #'(lambda (local)
      (->local (time local)
               (funcall F local)
               (+ (funcall (funcall (partial 0) F) local)
                  (* (funcall (funcall (partial 1) F) local)
                     (velocity local))))))

Pascal Costanza

unread,
May 5, 2003, 4:38:49 PM5/5/03
to
In article <vfwpci...@ccs.neu.edu>, Joe Marshall <j...@ccs.neu.edu>
wrote:

> Pascal Costanza <cost...@web.de> writes:


>
> > Thank god I can stick to Common Lisp! ;) Or to put it differently, I
> > still wonder what the merits of a Lisp-1 are...
>

[...]


>This leads to an abundance of FUNCALLs in the
> Lisp-2 version.
>
> ;; Scheme version
> (define ((F->C F) local)
> (->local (time local)
> (F local)
> (+ (((partial 0) F) local)
> (* (((partial 1) F) local)
> (velocity local)))))
>
> ;; CL version
> (defun F->C (F)
> #'(lambda (local)
> (->local (time local)
> (funcall F local)
> (+ (funcall (funcall (partial 0) F) local)
> (* (funcall (funcall (partial 1) F) local)
> (velocity local))))))

Thanks for this good example.

In the meantime I have thought about it and have come to the following
conclusion: The advantage of a Lisp-1 doesn't lie specifically in the
unified namespace. Instead, if you want to do XFP ("extreme functional
programming" ;) you regularly want to apply functions that are
themselves results of higher-order functions. This means that you not
only want the argument positions of an sexpr to be evaluated, but you
also want the function position to be a "normal" arbitrary
expression. (and the example you have posted illustrates this nicely)

So ((some-expression) ...) should evaluate (some-expression) in order to
determine the actual function to be applied. Now, when you have (f ...),
f should also be interpreted as a "normal" expression - and this
consequently means that f must be a variable, and not something
special.
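
In CL you can approximate that reading without touching the evaluator,
e.g. with a rough helper of your own (the name $ is made up, untested):

(defmacro $ (op-form &rest args)
  ;; Evaluate OP-FORM as an ordinary expression, then call its value.
  `(funcall ,op-form ,@args))

;; ($ ($ (partial 0) F) local) then reads roughly like (((partial 0) F) local)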

So Schemers don't want Lisp-1 in the first place, but only as a
consequence of another feature in their language.

I feel enlightened. ;) [1]


Pascal

[1] ...but this doesn't mean that I will switch to Scheme. I don't want
to do XFP... ;)

Erann Gat

unread,
May 5, 2003, 5:55:44 PM5/5/03
to
In article <costanza-E189AA...@news.netcologne.de>, Pascal
Costanza <cost...@web.de> wrote:

> So ((some-expression) ...) should evaluate (some-expression) in order to
> determine the actual function to be applied.

Note that this feature could be added as a backwards-compatible extension
to Common Lisp.

E.

Adrian Kubala

unread,
May 6, 2003, 1:06:42 AM5/6/03
to
On 2 May 2003, Kent M Pitman wrote:
> [...]

> This works in Scheme because they added infinite hair in their hygienic
> macro system in order to remember whose car FIRST expands into; but it works
> in CL because our rules and customs for macro expansion and namespace
> separation do not lead to any name conflict using a _far simpler_ macro
> system. Not just simpler to implement (which I don't care about at all),
> but simpler to _use_, which I care about a lot.

In other words, CL exports the complexity involved in making macros
hygienic into the whole rest of the language, and it /still/ doesn't
completely solve the problem of unintentionally-shadowed bindings.

Kent M Pitman

unread,
May 6, 2003, 1:59:29 AM5/6/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

> On 2 May 2003, Kent M Pitman wrote:
> > [...]
> > This works in Scheme because they added infinite hair in their hygienic
> > macro system in order to remember whose car FIRST expands into; but it works
> > in CL because our rules and customs for macro expansion and namespace
> > separation do not lead to any name conflict using a _far simpler_ macro
> > system. Not just simpler to implement (which I don't care about at all),
> > but simpler to _use_, which I care about a lot.
>
> In other words, CL exports the complexity involved in making macros
> hygienic into the whole rest of the language,

And the power. That is, the so-called complexity is useful for other
kinds of namespacing that Scheme does not aspire to do. That ought
not be taken as a value judgment on Scheme, but neither should you
take it as a slight on CL. CL uses a package system that partitions
not just programs but data; Scheme's lexical contours can only
partition programs. Each has their use. All in all, though, it's easier
to implement Scheme using CL data structures than CL using Scheme data
structures. In CL, you can just make a SCHEME package and put Scheme
there; in Scheme, you can't use Scheme symbols at all straightforwardly
as CL symbols.

> and it /still/ doesn't
> completely solve the problem of unintentionally-shadowed bindings.

I don't believe that's so. I believe it solves the problem to the
exact same extent that Scheme does.

This sounds like flamebait since you did not cite a problem, merely made
a broad sweeping claim with no foundation.

If you are having trouble figuring out how to write something material
in CL, perhaps you could offer your example of a real programming problem
that you are inhibited from doing using CL. But I doubt that is so.

Whatever theoretical problems you're having are probably due to fighting
the paradigm rather than trying to learn it.

If CL isn't your cup of tea, you're welcome not to use it. The rest
of us are using it just fine, and have been for several decades now.
I've been employed at more than one vendor and can say with some
reliability that, to round numbers, no complaints are ever received
about the package system failing to allow them to write macros that
stand up under heavy stress.

Pascal Costanza

unread,
May 6, 2003, 3:30:04 AM5/6/03
to

You need a code walker in order to implement this, right? Or is there a
simpler way?


Pascal

Pascal Costanza

unread,
May 6, 2003, 3:34:36 AM5/6/03
to

Hygienic macros make as much sense as static typing. They repel
elephants. From http://c2.com/cgi/wiki?StaticTypingRepelsElephants :

Salesman: "Want to buy some elephant repellent?"
Customer: "But there aren't any elephants around here!"
Salesman: "See how well it works?"

;-)

Paul F. Dietz

unread,
May 6, 2003, 5:57:28 AM5/6/03
to
Adrian Kubala wrote:

> In other words, CL exports the complexity involved in making macros
> hygienic into the whole rest of the language, and it /still/ doesn't
> completely solve the problem of unintentionally-shadowed bindings.

Which is ok, since it's not a problem in practice. Why screw up
the language to solve a nonproblem?

Paul


Erann Gat

unread,
May 6, 2003, 12:14:38 PM5/6/03
to
In article <sfwr87c...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> no complaints are ever received
> about the package system failing to allow them to write macros that
> stand up under heavy stress.

This is true, but one does regularly hear other complaints about the
package system, the most common being:

(foo)

Error: undefined function FOO ; Doh! Forgot to load a module

(require 'foo)

Loading FOO

(use-package 'foo)

Error: importing symbol FOO:FOO would result in a name conflict ; AAARRGGHH!

This situation gets particularly annoying when there are hundreds of
symbols involved.
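
One standard way out is to shadow the conflicting symbols one by one --
a sketch, and exactly the tedium in question when there are many of them:

(shadowing-import 'foo:foo)   ; prefer FOO:FOO over the symbol already interned here
(use-package 'foo)            ; now succeeds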

E.

Adrian Kubala

unread,
May 6, 2003, 6:54:03 PM5/6/03
to
On Tue, 6 May 2003, Pascal Costanza wrote:
> Adrian Kubala wrote:
> > In other words, CL exports the complexity involved in making macros
> > hygienic into the whole rest of the language, and it /still/ doesn't
> > completely solve the problem of unintentionally-shadowed bindings.
>
> Hygienic macros make as much sense as static typing. They repel
> elephants. From http://c2.com/cgi/wiki?StaticTypingRepelsElephants :

I don't see the purpose of static typing as catching "type errors" (at
least in the common, limited definition of a type error) -- it's an
attempt at formalizing certain properties of your programs in a
computer-understandable way. In the same way, Scheme's macros formalize
rules about how bindings are captured (or not) by macros.

It's probably true that both these ideas are still too immature and not
pragmatic in the short term.

Adrian Kubala

unread,
May 6, 2003, 7:06:38 PM5/6/03
to
On 6 May 2003, Kent M Pitman wrote:

> Adrian Kubala <adr...@sixfingeredman.net> writes:
> > In other words, CL exports the complexity involved in making macros
> > hygienic into the whole rest of the language,
>
> All in all, though, it's easier to implement Scheme using CL data
> structures than CL using Scheme data structures. In CL, you can just
> make a SCHEME package and put Scheme there; in Scheme, you can't use
> Scheme symbols at all straightforwardly as CL symbols.

That should hardly be taken as an argument for CL -- the purpose of a
language is to limit the number of easily-expressible programs. Structured
languages cannot express "goto" -- however, since they can express almost
all the "good" uses of goto, and none of the "bad" ones, they are
preferred. Not that I'm prepared to argue this, but one could say that
such is the case with Lisp 1 + hygienic macros -- it allows one to do all
the useful things one does in Lisp 2, while avoiding more possible bugs.

Kent M Pitman

unread,
May 6, 2003, 8:02:46 PM5/6/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

One could say this, but they'd be wrong. ;)

I had an actual mid-term test in compiler design at MIT on which I was
asked: Gotos are (a) good (b) bad. I have to say, I just don't believe
in absolute good and absolute bad in this sense. It was this that caused
me to realize I was studying religion, not science.

I also don't believe the purpose of the language should be to limit the number
of easily expressible programs. Mostly I think the opposite. You need
more exposition here.

Kent M Pitman

unread,
May 6, 2003, 8:05:31 PM5/6/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

It's probably also true that even when mature, some of us will not
want to bother with them. Our resistence to them is not their
maturity. It is the lack of demonstrated need. They are solving a
problem we don't have. It's fine for them to experiment this to
death, but we have our own problems to experiment with. That's WHY
we are different languages.

Duane Rettig

unread,
May 6, 2003, 8:06:56 PM5/6/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <adr...@sixfingeredman.net> writes:
> > > In other words, CL exports the complexity involved in making macros
> > > hygienic into the whole rest of the language,
> >
> > All in all, though, it's easier to implement Scheme using CL data
> > structures than CL using Scheme data structures. In CL, you can just
> > make a SCHEME package and put Scheme there; in Scheme, you can't use
> > Scheme symbols at all straightforwardly as CL symbols.
>
> That should hardly be taken as an argument for CL -- the purpose of a
> language is to limit the number of easily-expressible programs.

That is _definitely_ not Common Lisp's purpose. Your statement might be
accurate for a _specific_ language, but not all languages. Perhaps Scheme
desires to restrict, but CL definitely desires to enable.

> Structured
> languages cannot express "goto" -- however, since they can express almost
> all the "good" uses of goto, and none of the "bad" ones, they are
> preferred.

Again, CL doesn't claim to be a Structured language, any more than it
claims to be a functional language (although it can allow either or both
kinds of programming). Note that CL _does_ have goto, as in CL:GO.

> Not that I'm prepared to argue this, but one could say that
> such is the case with Lisp 1 + hygienic macros -- it allows one to do all
> the useful things one does in Lisp 2, while avoiding more possible bugs.

This is a different argument than you are making above. It may indeed be
the case that CL gives you a bigger gun to hang yourself with, or more rope
with which to shoot yourself in the foot, and I like it like that.

--
Duane Rettig du...@franz.com Franz Inc. http://www.franz.com/
555 12th St., Suite 1450 http://www.555citycenter.com/
Oakland, Ca. 94607 Phone: (510) 452-2000; Fax: (510) 452-0182

Nikodemus Siivola

unread,
May 6, 2003, 8:30:28 PM5/6/03
to
Adrian Kubala <adr...@sixfingeredman.net> wrote:

> the purpose of a language is to limit the number of easily-expressible
> programs

What!? That's like saying "the purpose of harmony is chaos", or any other
oxymoron of your choice... Please explain.

Cheers,

-- Nikodemus

Kent M Pitman

unread,
May 6, 2003, 9:37:36 PM5/6/03
to
Nikodemus Siivola <tsii...@kekkonen.cs.hut.fi> writes:

I hate to speak for another, so Adrian can correct me if he's wrong, but
I have to assume that what he means implies that languages allow one to
express 'good' things and 'bad' things and that to the extent that one can
make illegal the bad things that one can easily reach, one has reduced the
amount of runtime grief. This is sort of like childproofing your house.

This argument is presently playing out in US politics where it comes as
a surprise to a generation of people who have grown up without the need
to contemplate the design of their own political system--they are merely
consumers of it--and they have been surprised to learn that 'freedom'
doesn't imply 'safety' nor vice versa. Worse, many (falsely) imagine
that this is something the founding fathers did not intend and are busy
'fixing' things by making things more safe, even at the expense of freedom.

It is certainly the case that one can trade freedom for safety, but many
of us don't want this kind of trade.

Reference: Disneyland with the Death Penalty
Wired Magazine, Issue 1.04, Sep/Oct 1993
http://www.wired.com/wired/archive/1.04/gibson.html

"They that can give up essential liberty to obtain a
little temporary safety deserve neither liberty nor safety."
-- Benjamin Franklin
Historical Review of Pennsylvania
http://www.bartleby.com/100/245.1.html

I don't want to push the predictive nature of this metaphor too
precisely because there are some fundamental properties of the
linguistic world that are different than the real world. For example,
in the linguistic realm, one does not _live_ in the language and one
can use languages for one purpose and not another. However, the basic
notion that freedom and safety are in tension with one another does
mostly carry over, and is worth giving serious heed to. And for those
of us to whom linguistic freedom is important, the value of using
tools that let us express ourselves the way we want and not in some
government-approved "safe" way is not to be ignored even if the
consequences are less dramatic in the linguistic domain.

Michael Livshin

unread,
May 7, 2003, 5:50:17 AM5/7/03
to
Kent M Pitman <pit...@world.std.com> writes:

> I hate to speak for another, so Adrian can correct me if he's wrong

priceless. :)

> , but I have to assume that what he means implies that languages
> allow one to express 'good' things and 'bad' things and that to the
> extent that one can make illegal the bad things that one can easily
> reach, one has reduced the amount of runtime grief. This is sort of
> like childproofing your house.

let's see. how do you view CL's lack of general pointers to
arbitrary memory locations, like those they have in C? is that a
good thing or a bad thing?

the problem with hygienic macros, in my view, is not that they restrict
you, but precisely that they don't buy you anything if you are using
CL.

the (forced) abstaining from using pointers, on the other hand,
obviously does buy you some beneficial language properties.

so I wouldn't want to make grand sweeping claims about desirability
of semantic restrictions or lack thereof. it's always a trade-off.

--
Purely applicative languages are poorly applicable.
-- Alan Perlis

_ XL1201 _ Sebek _ Budo _ Kafka @hotmail.com Franz Kafka

unread,
May 7, 2003, 7:34:32 AM5/7/03
to
Michael Livshin:
>
> How do you view CL's lack of general pointers to

> arbitrary memory locations, like those they have in C? is that a
> good thing or a bad thing?
>

Very Very Very Good Thing!!!!!!!!

I personally think that the lack of pointers is a good thing. It is one of
the major reasons why I chose Lisp over C for most of my coding.

I found that pointers gave me way too much rope and
I ended up trashing the system by mangling memory locations
when I was learning C.

I've read that you needed pointers to write interesting programs
in one of my comp. sci. books, but I found that I could
write those same programs in Lisp without using pointers,
and without worrying about trashing a memory location.

I think pointers are like goto, not in function, so don't
flame about that, but because there are many other
ways to do what pointers do--and they all are
a lot safer to do.

Lisp hides pointers--like loops hide goto. It's
still there but it's not as easy to shoot your
hand off with.

Pascal Costanza

unread,
May 7, 2003, 8:25:00 AM5/7/03
to
Michael Livshin wrote:

> the (forced) abstaining from using pointers, on the other hand,
> obviously does buy you some beneficial language properties.
>
> so I wouldn't want to make grand sweeping claims about desirability
> of semantic restrictions or lack thereof. it's always a trade-off.

No, it's not. The lack of pointers means that it is relatively hard to
implement a straightforward FFI. The lack of a powerful UFFI is
perceived to be one of the most prominent obstacles in making Common
Lisp more popular. Or to put it the other way around: As soon as you
include an FFI into Common Lisp, you need to include pointers in some
way or the other.

The idea behind language restrictions is always that the world would be
a better place if everyone would behave the same, at least in certain
respects. However, the world doesn't function like that - you can create
secluded compartments, but when the lack of certain facilities prevents
people from expressing certain things, they will switch to a different
compartment. It doesn't make sense to deny this fact.

_ XL1201 _ Sebek _ Budo _ Kafka @hotmail.com Franz Kafka

unread,
May 7, 2003, 8:42:38 AM5/7/03
to

"Pascal Costanza" <cost...@web.de> wrote in message
news:b9atut$s7a$1...@f1node01.rhrz.uni-bonn.de...

> Michael Livshin wrote:
>
> > the (forced) abstaining from using pointers, on the other hand,
> > obviously does buy you some beneficial language properties.
> >
> > so I wouldn't want to make grand sweeping claims about desirability
> > of semantic restrictions or lack thereof. it's always a trade-off.
>
> No, it's not. The lack of pointers means that it is relatively hard to
> implement a straightforward FFI. The lack of a powerful UFFI is
> perceived to be one of the most prominent obstacles in making Common
> Lisp more popular. Or to put it the other way around: As soon as you
> include an FFI into Common Lisp, you need to include pointers in some
> way or the other.
>

Lisp provides pointers--they're just under the hood. If there were a portable
way to get at pointers, that function should start with '%' because this
would warn Lisp programmers that using it could corrupt
memory.

'(a) ;; [value: A]---pointer--->[value:Nil]

Some of Lisp's "destructive operators" deal directly with pointers.
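
For instance (a throwaway example):

(let* ((x (list 'a 'b))
       (y x))              ; X and Y name the same cons cell
  (setf (car x) 'z)        ; a "destructive operator"
  y)                       ; => (Z B) -- the change shows through Y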

Because linked data structures are built into the language,
the lack of pointers is not as big of an issue.

If the programmer had full access to pointers it would
be harder to garbage collect; if anyone can
think of how to implement pointers in such a way
that people who need them can use them but
programmers are not forced to use them, that
would be great. I just don't know how this
would be done, or if it could be done in a
standardized way.

I only know one thing: all Common Lisps would
have to treat them in the same way, or the Lisp
code that used them would not be portable.


Pekka P. Pirinen

unread,
May 7, 2003, 8:41:59 AM5/7/03
to
Tim Bradshaw <t...@cley.com> writes:
> The normal definition, I think, makes CL a Lisp7:
>
> - Functions & macros
> - lexical variables
> - special variables
> - types and classes
> - labels (for GO)
> - block names
> - symbols in quoted expressions (such as tags for THROW).

I count four more than Norvig, discounting symbols (details in
<http://groups.google.com/groups?selm=ix6734k5a0.fsf%40gaspode.cam.harlequin.co.uk>):
- structures
- setf methods
- compiler macros
- methods
- method combinations
plus, as you say, any number of user-defined namespaces. Lisp11+.

Some of those could not be squeezed into a single namespace without
ugly hacks, such as C++ name mangling for methods.
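
A quick illustration (untested) of one name living in several of these
namespaces at once:

(defclass point () ())                              ; class namespace
(defun point (x y) (cons x y))                      ; function namespace
(define-compiler-macro point (x y) `(cons ,x ,y))   ; compiler-macro namespace
(defvar point (make-instance 'point))               ; (special) variable namespace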
--
Pekka P. Pirinen
The worst book in a trilogy is the fourth.

_ XL1201 _ Sebek _ Budo _ Kafka @hotmail.com Franz Kafka

unread,
May 7, 2003, 8:50:08 AM5/7/03
to

"Franz Kafka" <Symbolics _ XL1201 _ Sebek _ Budo _ Kafka @ hotmail . com>
wrote in message news:cb6ua.4695$c22....@news01.roc.ny.frontiernet.net...

> Michael Livshin:
> >
> > How do you view CL's lack of general pointers to
> > arbitrary memory locations, like those they have in C? is that a
> > good thing or a bad thing?
> >
>
> Very Very Very Good Thing!!!!!!!!
>

Even though I think pointers are bad, I don't think Lisp should
avoid them if, like Pascal said, they have uses.

Lisp provides many ways to do the same task, and
many features so that programmers can write
what they want to write more easily.

I also think that a call/cc-like operator
should have been included in ANSI CL
so that Lisp programmers could use it
to implement new control structures.
I know it is going to be harder in CL than
it was in Scheme--don't post about that--CL
is a bigger language.

The avoid-it mentality of the 'goto' crowd
is to be avoided. If you need to add a
new looping construct to Lisp, (go <tag>)
is one way to do it.


Michael Livshin

unread,
May 7, 2003, 8:52:48 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> Michael Livshin wrote:
>
>> the (forced) abstaining from using pointers, on the other hand,
>> obviously does buy you some beneficial language properties.
>> so I wouldn't want to make grand sweeping claims about desirability
>> of semantic restrictions or lack thereof. it's always a trade-off.
>

> The idea behind language restrictions is always that the world would
> be a better place if everyone would behave the same, at least in
> certain respects. However, the world doesn't function like that - you
> can create secluded compartments, but when the lack of certain
> facilities hinders people to express certain things, they will switch
> to a different compartment. It doesn't make sense to deny this fact.

I'm not denying this fact. I think that dividing the world into
compartments (delineated by semantic restrictions) is clearly a good
thing. I'm not sure what you are arguing against. I think that a
property stated as "if your program limits itself to core CL + safe
extensions, then it won't crash" is very nice.

let's try a (hopefully) better example of desirable semantic
restriction in CL: the lack of call/cc. I guess the benefits (such
as implementable `unwind-protect') are pretty obvious, and the need
to introduce call/cc in order to interface to other compartments is
not very pressing.

(a world where the most popular computing platform is the Scheme
Machine would be another matter entirely, of course. but then, there
would be the much more serious problem of having the lambda calculus
be part of the elementary school curriculum...)

--
In many cases, writing a program which depends on supernatural insight
to solve a problem is easier than writing one which doesn't.
-- Paul Graham

Joe Marshall

unread,
May 7, 2003, 9:19:57 AM5/7/03
to
Kent M Pitman <pit...@world.std.com> writes:

> I had an actual mid-term test in compiler design at MIT on which I was
> asked: Gotos are (a) good (b) bad.

I hope you answered `yes'.

Pascal Costanza

unread,
May 7, 2003, 9:21:03 AM5/7/03
to
Michael Livshin wrote:

> let's try a (hopefully) better example of desirable semantic
> restriction in CL: the lack of call/cc. I guess the benefits (such
> as implementable `unwind-protect') are pretty obvious, and the need
> to introduce call/cc in order to interface to other compartments is
> not very pressing.

The lack of call/cc in Common Lisp is not a deliberate restriction, but
rather a consequence of the fact that it is hard to integrate with other
features of the language. In general, programming language features do
not combine very well, and that's when language designers have to make
choices.

Kent Pitman once had a description of the problems involved when
integrating call/cc into Lisp (unwind-protect vs. continuations) - I am
not able to find it right now... (Kent?)

Joe Marshall

unread,
May 7, 2003, 9:32:16 AM5/7/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

> the purpose of a language is to limit the number of
> easily-expressible programs.

Perl and C++ fulfill that purpose admirably.


The purpose of a language is to get the computer to do what you want.

The main desiderata of any computer language are

to make it easy to express what you want,

to make it easy to modify the computer's behavior when you change
your mind about what you want,

to make it easy to determine what you wanted from what you
expressed,

to make it obvious to both you and the computer when what you
expressed is different from what you wanted.

Michael Livshin

unread,
May 7, 2003, 9:40:51 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> Michael Livshin wrote:
>
>> let's try a (hopefully) better example of desirable semantic
>> restriction in CL: the lack of call/cc. I guess the benefits (such
>> as implementable `unwind-protect') are pretty obvious, and the need
>> to introduce call/cc in order to interface to other compartments is
>> not very pressing.
>
> The lack of call/cc in Common Lisp is not a deliberate restriction,
> but rather a consequence of the fact that it is hard to integrate with
> other features of the language. In general, programming language
> features do not combine very well, and that's when language designers
> have to make choices.

pointers are a generalization of memory access.
call/cc is a generalization of control flow.

for all I know, you could implement a Common Lisp on top of Scheme,
using call/cc to implement the various control features. as long as
you don't provide call/cc as a user-level feature, you can provide
working `unwind-protect' in such an implementation.

it's the same with pointers. CL implementations obviously use them.
FFI's use them. but they are not in the ANSI spec for a good reason:
their availability to the user would make too many important
assumptions and invariants incorrect, and perhaps would make some ANSI
CL features completely unimplementable (no, "you can use this, but be
careful with pointers" doesn't cut it, thankyouverymuch). so, just
like call/cc.

(the question whether a restriction is deliberate or not is orthogonal
and irrelevant.)

--
The PROPER way to handle HTML postings is to cancel the article, then
hire a hitman to kill the poster, his wife and kids, and fuck his dog
and smash his computer into little bits. Anything more is just
extremism. -- Paul Tomblin, in SDM

Pascal Costanza

unread,
May 7, 2003, 10:00:22 AM5/7/03
to
Michael Livshin wrote:

> for all I know, you could implement a Common Lisp on top of Scheme,
> using call/cc to implement the various control features. as long as
> you don't provide call/cc as a user-level feature, you can provide
>> working `unwind-protect' in such an implementation.

Not quite. Assume you have stored a continuation before you call
unwind-protect, and within the extent of that unwind-protect you call
the stored continuation. Does this mean that the cleanup form should be
executed or not? There are cases in which it makes sense and there are
cases in which it doesn't make sense. How do you distinguish between them?
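
With only CL's own exits the answer is always clear, because an exit is
one-shot and outward-only -- a quick illustration:

(defun demo ()
  (block top
    (unwind-protect
         (return-from top :escaped)                ; unwinds past the protect form
      (format t "cleanup runs exactly once~%"))))  ; cleanup runs exactly once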

It seems to be hard to come up with a good language design in this
respect. Kent Pitman's article on this issue seems to suggest that it is
unsolvable (but I am not sure whether I understood this correctly).

Combined with the fact that there seems to be no real need for
continuations in the "real world" [1], it's really hard to justify the
inclusion of such a feature.

Pointers are totally different in this regard because it's immediately
clear what you could use them for. (It's better to talk about FFI here,
and not about pointers as an isolated feature.)


Pascal


[1] http://makeashorterlink.com/?F5CF25974

Lars Brinkhoff

unread,
May 7, 2003, 9:58:01 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:
> Kent Pitman once had a description of the problems involved when
> integrating call/cc into Lisp (unwind-protect vs. continuations) - I
> am not able to find it right now.

http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html

Nils Goesche

unread,
May 7, 2003, 10:10:40 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> Michael Livshin wrote:
>
> > the (forced) abstaining from using pointers, on the other hand,
> > obviously does buy you some beneficial language properties. so I
> > wouldn't want to make grand sweeping claims about desirability of
> > semantic restrictions or lack thereof. it's always a trade-off.
>
> No, it's not. The lack of pointers means that it is relatively hard
> to implement a straightforward FFI.

The ``lack´´ of pointers? Lisp's semantics can be perfectly well
described using an ``everything is a pointer´´ approach. I personally
do not like this wording very much. If /everything/ is a `foo´, the
statement ``X is a foo´´ contains zero information, after all. But
nevertheless, using the ``everything is a pointer´´ model can be very
helpful for explaining Lisp's semantics to newbies having
misconceptions about Lisp's semantics. As long as they think ``X is a
pointer´´ is a meaningful and interesting statement, they are still
working with a flawed model of Lisp's semantics. Once they get it,
using the ``pointer´´ word is no longer necessary. Somewhat like
Wittgenstein's ladder :-)

No, FFI calls to C are non-trivial simply because C is a different
language having totally incompatible objects that have to be modelled
somehow in the calling language. C's pointers are no different from
C's other objects in that regard. Note that the other way around is
even harder: Just imagine trying to call a function FOO like this

(handler-case
    (foo 'bar)
  (error () 42))

from C! Would you say that it is C's ``lack´´ of symbols and
conditions that make this hard? Even if C /had/ builtin symbols and
conditions, calling FOO this way would be /no easier/ because in all
likelihood they would be totally incompatible with Lisp's, anyway.

Speaking of a ``lack´´ of pointers sounds as if you could easily add
them to Lisp and somebody simply decided not to do so, for whatever
reason. But I don't think this is so. Even if you added something to
Lisp and called it ``pointers´´, what you'd get would be something
totally different and incompatible to C's pointers, anyway, because
the whole idea behind C's pointers is totally meaningless in a Lisp
context and hence inapplicable. And, consequently, the FFI wouldn't
be any simpler, either.

Regards,
--
Nils Gösche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x0655CFA0

Kent M Pitman

unread,
May 7, 2003, 10:24:43 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> The lack of call/cc in Common Lisp is not a deliberate restriction,
> but rather a consequence of the fact that it is hard to integrate with
> other features of the language. In general, programming language
> features do not combine very well, and that's when language designers
> have to make choices.
>
> Kent Pitman once had a description of the problems involved when
> integrating call/cc into Lisp (unwind-protect vs. continuations) - I
> am not able to find it right now... (Kent?)

http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html

Michael Livshin

unread,
May 7, 2003, 10:30:51 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> Michael Livshin wrote:
>
>> for all I know, you could implement a Common Lisp on top of Scheme,
>> using call/cc to implement the various control features. as long as
>> you don't provide call/cc as a user-level feature, you can provide
>> working `unwind-protect' in such an implementation.
>
> Not quite. Assume you have stored a continuation before you call
> unwind-protect, and within the extent of that unwind-protect you call
> the stored continuation.

how am I supposed to /call/ it, exactly? I don't have any way to do
it. I'm writing in CL, remember?

> Kent Pitman's article on this issue seems to suggest that it
> is unsolvable (but I am not sure whether I understood this
> correctly).

that's my understanding of it, too.

> Pointers are totally different in this regard because it's immediately
> clear what you could use them for. (It's better to talk about FFI
> here, and not about pointers as an isolated feature.)

it's indeed important to finally decide what you are talking about,
yes.

let's see:

I said that sometimes it is a good idea to restrict a language,
because well-chosen restrictions let you assume all kinds of useful
things. as an example, I brought up general pointers.

you said that you need pointers for FFI's and at best, given the
existence of the so-called Real World, we can talk about
compartmentalization and not outright omission of certain semantic
features from the language.

I said "fine", and brought up call/cc as a possibly better example. I
thought this example to be better /exactly/ because there's no
Real-World-related need to include it in CL. I hoped this would help
you see my point.

but it didn't help. oh well.

--
Due to the holiday next Monday, there will be no garbage collection.

Kent M Pitman

unread,
May 7, 2003, 10:33:11 AM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> It seems to be hard to come up with a good language design in this
> respect. Kent Pitman's article on this issue seems to suggest that

> [the integration of call/cc and unwind-protect] is unsolvable

> (but I am not sure whether I understood this correctly).

I think it's unsolvable without changing Scheme. And you might as well
be asking to change the church's icon from a cross to a pistol or a
guillotine when you ask to change the way continuations are managed
in Scheme. Nevertheless, the article to which I previously alluded
( http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html )
does contain proposed fixes, just for purposes of conversation. I don't
seriously expect anyone to accept the fixes, but not because I think the
proposals are not technically sound. There is too much community vested
interest in keeping call/cc as it is, and so they'd rather just close their
eyes and pretend that there's no problem. That, at least, is my assessment.
YMMV.

> Combined with the fact that there seems to be no real need for
> continuations in the "real world" [1], it's really hard to justify
> the inclusion of such a feature.

I don't know if I'd go so far as to say no 'need' in the sense of 'no use'.
There is call for them, and they could be handy if we had them, but the
price is too high. And if we offered call/cc in the way I propose, it
probably would just annoy the Scheme community. Hmmm.... (sound of
wheels turning)

> Pointers are totally different in this regard because it's immediately
> clear what you could use them for. (It's better to talk about FFI
> here, and not about pointers as an isolated feature.)

Actually, I think the desire to have pointers instead of a GC'd world,
and the desire to have unhygienic macros instead of hygienic ones is
pretty analogous. The difference is not in the structure of the argument
but rather in the need of the community. This community has expressed
a need for automatic memory management and a willingness to give up the
fredom of pointers to get it. This community has not expressed a need
for hygiene (because we have it in other ways), and so we have no desire
to give up the full generality of our macro system for a more restrictive
one.

Pascal Costanza

unread,
May 7, 2003, 10:35:39 AM5/7/03
to
Nils Goesche wrote:
> Pascal Costanza <cost...@web.de> writes:
>
>>Michael Livshin wrote:
>>
>>>the (forced) abstaining from using pointers, on the other hand,
>>>obviously does buy you some beneficial language properties. so I
>>>wouldn't want to make grand sweeping claims about desirability of
>>>semantic restrictions or lack thereof. it's always a trade-off.
>>
>>No, it's not. The lack of pointers means that it is relatively hard
>>to implement a straightforward FFI.
>
> The ``lack´´ of pointers? Lisp's semantics can be perfectly well
> described using an ``everything is a pointer´´ approach.

That's a misunderstanding. Michael talked about "abstaining from using
pointers" - this is incompatible with "everything is a pointer".

The term pointer is used very ambiguously throughout computer science,
and this causes some confusions. So sorry for not having been clear
enough. What's missing in Common Lisp in order to make an FFI work is a
notion of pointers _as understood in the C world_ (or better:
machine-level addresses). So especially, you cannot pass addresses of
Common Lisp objects or functions to C code.

Pascal Costanza

unread,
May 7, 2003, 10:57:55 AM5/7/03
to
Michael Livshin wrote:

> it's indeed important to finally decide what you are talking about,
> yes.

Hmm, I thought we were talking about how hard programming language
design is. (But we might also be talking about how hard newsgroup
discussions are. ;)

> let's see:
>
> I said that sometimes it is a good idea to restrict a language,
> because well-chosen restrictions let you assume all kinds of useful
> things. as an example, I brought up general pointers.
>
> you said that you need pointers for FFI's and at best, given the
> existence of the so-called Real World, we can talk about
> compartmentalization and not outright omission of certain semantic
> features from the language.
>
> I said "fine", and brought up call/cc as a possibly better example. I
> thought this example to be better /exactly/ because there's no
> Real-World-related need to include it in CL. I hoped this would help
> you see my point.
>
> but it didn't help. oh well.

OK, I seem to have misunderstood you. However, your wording is a little
dangerous: "sometimes it is a good idea to restrict a language, because

well-chosen restrictions let you assume all kinds of useful things".

This sounds like the usefulness of the assumptions that follow from a
restriction is a sufficient condition for making that restriction, and I
have interpreted your postings as arguments that support this view.

Now I guess what you really wanted to say is that sometimes it is a good
idea to restrict a language because well-chosen restrictions can result
in properties that are more useful in practice than what has been taken
away. (But you already hinted toward that by saying that it is always a
trade-off. So probably I haven't read you thoroughly enough - sorry for that.)


Pascal

Kent M Pitman

unread,
May 7, 2003, 11:01:37 AM5/7/03
to
Michael Livshin <use...@cmm.kakpryg.net> writes:

> > Kent Pitman's article on this issue seems to suggest that it
> > is unsolvable (but I am not sure whether I understood this
> > correctly).
>
> that's my understanding of it, too.

Btw, I just this minute made a change to the article
http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
in order to add some examples to make it clear what the options
for fixing it would look like in terms of code.

If you read the version where the proposed changes had no code examples,
you might want to take a second look.

Hopefully this will avoid the sense that the problem is pragmatic
and will make it more clear that the problem is political.

Adrian Kubala

unread,
May 7, 2003, 11:53:33 AM5/7/03
to

Each new level of abstraction limits the number of possible choices, and
in so doing makes it possible to express the desired choice in fewer bits.
Ideally, you would like your language to offer you only one choice --
"DWIM" -- which represents the precise program you wish to write. I'll
just ignore Turing equivalence, so when I say "expressible", I mean easily
expressible, without writing a Turing machine emulator in the language.

Hardware offers you the freedom to draw doodles with solder or build
little transistor sculptures. Assembly language denies you that but
continues to offer you the ability to alter and jump to arbitrary pieces
of memory. C adds the concept of functions, which keep you from jumping
about willy-nilly and enforce stack convention. Then you add
bounds-checking. This limits your ability to access unallocated memory,
which hardly anybody misses. Then you get garbage collection, which
(mostly) keeps you from hanging onto resources longer than you need them,
which again not many people miss. Then you get objects, which enforce
(often useful) conventions for associating functions to data. And types,
which enforce that functions do what they say they do in a rough sense.

This is the whole principle behind abstraction -- you see yourself doing
the same thing many times, and you formalize it so that you do in fact do
the same thing, and not possible variations thereof. You do this until you
can't do it anymore, and you've built a domain language in which precisely
the things you need to do are expressible, and no more.


Adrian Kubala

unread,
May 7, 2003, 12:00:54 PM5/7/03
to
On 6 May 2003, Kent M Pitman wrote:
> Adrian Kubala <adr...@sixfingeredman.net> writes:
> > It's probably true that both these ideas are still too immature and not
> > pragmatic in the short term.
> It's probably also true that even when mature, some of us will not
> want to bother with them. Our resistence to them is not their
> maturity. It is the lack of demonstrated need.

"Need" is so subjective -- none of us need computers, do we? But, IF it
were possible for your computer to PROVE that your program didn't have
a certain set of bugs, that would be a good thing, would it not? Static
typing is the only direction which will allow this, and the set of bugs it
can disprove grows larger with more research. Eventually, the cost will
decrease to the point that it's worthwhile no matter how rarely you make
mistakes. Anything which offloads work to the computer, no matter how
small, is a step in the right direction.

Nils Goesche

unread,
May 7, 2003, 12:29:49 PM5/7/03
to
Pascal Costanza <cost...@web.de> writes:

> Nils Goesche wrote:
> > Pascal Costanza <cost...@web.de> writes:

> >> The lack of pointers means that it is relatively hard to
> >> implement a straightforward FFI.

> > The ``lack´´ of pointers? Lisp's semantics can be perfectly well
> > described using an ``everything is a pointer´´ approach.

> That's a misunderstanding. Michael talked about "abstaining from
> using pointers" - this is incompatible with "everything is a
> pointer".
>
> The term pointer is used very ambiguously throughout computer
> science, and this causes some confusions. So sorry for not having
> been clear enough. What's missing in Common Lisp in order to make
> an FFI work is a notion of pointers _as understood in the C world_
> (or better: machine-level addresses).

I guess I still don't get it. First of all, until now I used to
believe that I'm using a ``working FFI´´ all the time in the Lisp
implementation I use, and it is an implementation of Common Lisp, not
some other language with a notion of pointers. Sure, I can do

(fli:make-pointer :address #x04000800 :type :int)

but note that this thing is an ordinary Lisp struct, which is used to
model a C pointer by the FFI. And ok, I can even do some black magic
like

(setf (fli:dereference (fli:make-pointer :address #x04000800
                                          :type :int))
      42)

although the consequences of this are undefined. Something else I
could do is

CL-USER 1 > (defparameter *pointer*
              (fli:allocate-foreign-object
               :type :int
               :initial-element 42))
*POINTER*

CL-USER 2 > (fli:dereference *pointer*)
42

CL-USER 3 > (setf (fli:dereference *pointer*) 17)
17

CL-USER 4 > (fli:dereference *pointer*)
17

CL-USER 5 > (fli:free-foreign-object *pointer*)
#<Pointer to type :INT = #x00000000>

So now I have a freaky new kind of container. But it's not that
Lisp's semantics have been significantly changed by some new kind of
thing called ``pointer´´.

> So especially, you cannot pass addresses of Common Lisp objects or
> functions to C code.

That wouldn't help you much, either. Suppose there is a foreign C
function

struct blark {
    int x;
    char y;
    short z;
};

int foo(struct blark *);

Now, I /could/ call that thing in Lispworks like this:

(defstruct foo
  x
  y
  z)

(fli:define-foreign-function (foo "foo" :source)
    ((arg-1 (:pointer (:struct blark))))
  :result-type :int
  :language :ansi-c)

(let ((obj (sys:in-static-area (make-foo :x 42 :y 17 :z 13))))
  (foo (fli:make-pointer :type '(:struct blark)
                         :address (sys:object-address obj))))

but this is not going to do much good :-) About the only use for
SYS:OBJECT-ADDRESS is when a C library wants some opaque void *
pointer that it will pass back to me in some callback. There is
absolutely nothing useful I can do with such an address from Lisp
(other than converting it back to a real Lisp object, hoping I didn't
forget to allocate it in the static area and keep a Lisp reference to
it around, so GC has neither moved nor removed it in the mean time).
The main reason being that in Lisp we, thank God, do not think of
objects as being an array of octets. The garbage collector being
another one. Unlike C, these ``addresses´´ do not have any real
semantic meaning in Lisp. (Actually, in these days of MMU's they are
just an abstraction in C, too). In C, these addresses are part of the
very semantics of the language! Have just a glance into

http://citeseer.nj.nec.com/papaspyrou01denotational.html

to see what I mean.

Or simply consider

int bar(int *x);

and

(let ((x 42))
  (bar (what-now? x)))

What in the world is WHAT-NOW? supposed to do for this to make sense?
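
For contrast, a rough sketch of what one actually does with the FLI shown
above, assuming BAR has been wrapped with DEFINE-FOREIGN-FUNCTION: allocate
a foreign int, pass that, and read the result back.

(let ((p (fli:allocate-foreign-object :type :int :initial-element 42)))
  (unwind-protect
       (progn (bar p)               ; BAR here is the foreign-function wrapper
              (fli:dereference p))  ; read the possibly-updated value back
    (fli:free-foreign-object p)))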

So, we have a working FFI in LispWorks, and yes, there is even a way
of obtaining a memory address of Lisp objects (although this is not
very significant, as what we really have to pass is some special kind
of object that bears little resemblance to a Lisp FOO struct, anyway),
but I wouldn't say that now we have ``pointers´´ in LispWorks. There
is not some particular new language construct called ``pointer´´ in
the Lisp as implemented by LispWorks, which is integrated into the
language in a way that might justify saying that this language now
doesn't ``lack´´ pointers. When you say something like

> What's missing in Common Lisp in order to make an FFI work is a
> notion of pointers _as understood in the C world_ (or better:
> machine-level addresses).

I would expect that then it should be somehow possible to slightly
change Lisp's core semantics, yielding a language where calling C is
easy and ``pointers´´ have some kind of semantic meaning in Lisp, too.
But I believe this is not true. If you change Lisp enough for this to
be the case, what you'd get will not be a Lisp at all anymore. And as
long as you don't, a ``pointer´´, whatever that means, will always be
a totally alien concept in Lisp, thus making FFIs clumsy.

Kent M Pitman

unread,
May 7, 2003, 12:34:57 PM5/7/03
to
Adrian Kubala <adr...@sixfingeredman.net> writes:

> On 6 May 2003, Kent M Pitman wrote:
> > Adrian Kubala <adr...@sixfingeredman.net> writes:
> > > It's probably true that both these ideas are still too immature and not
> > > pragmatic in the short term.
> > It's probably also true that even when mature, some of us will not
> > want to bother with them. Our resistance to them is not their
> > maturity. It is the lack of demonstrated need.
>
> "Need" is so subjective -- none of us need computers, do we? But, IF it
> were possible for your computer to PROVE that your program didn't have
> a certain set of bugs, that would be a good thing, would it not?

I am leery of politicians hawking things as "free".

Do not use the one-place predicate "would x be good?" because it forces
an answer that is misleading.

Tell me the cost and ask me if it's worth the cost.

If the cost is restricting my programming style, I can already get dozens
of programming languages that do this.

> Static typing is the only direction which will allow this, and the
> set of bugs it can disprove grows larger with more research.

A human being can also disprove these by "reading code", even untyped code.
I'd rather see energy spent in that direction, leaving my code as it is,
than have the problem declared "solved" by giving up a critical freedom.

> Eventually, the cost will
> decrease to the point that it's worthwhile no matter how rarely you make
> mistakes.

The cost is already low enough and high enough. Low enough in the sense that
it already finds plenty of bugs. High enough in the sense that it restricts
my programming freedom more than I want.

I want it to infer the declarations I need. Eventually the cost of that
will decrease to the point it will impress even you. Meanwhile, I plan to
program in the futuristic style that this change in inference capability
will support, in anticipation of such cool compilation.

Programming in a restricted way that presumes there will be no advances
in inferencing, and that I must make do with what can be inferred now,
seems shortsighted.
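
(As a side note, and purely my own illustration rather than anything
from Kent's post: Common Lisp's declaration machinery already sketches
the starting point he is contrasting with. Declarations are optional,
and a compiler that wants to, CMUCL's compiler for instance, can check
and propagate them at compile time without the language imposing a
static discipline.)

;; Optional declaration; the program is valid Common Lisp without it.
(declaim (ftype (function (string) string) greet))

(defun greet (name)
  (concatenate 'string "Hello, " name))

;; A declaration-aware compiler can warn at compile time about a call
;; such as (greet 42), yet nothing in the language forces the
;; declaration to be written in the first place.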

> Anything which offloads work to the computer, no matter how
> small, is a step in the right direction.

No, that is wrong.

There are several fallacies built into this.

- Things that computers don't do right are included in what you say.
For example, is offloading "art" to the computer a step in the
right direction?

- Things that involve judgment are included in what you say.
For example, is the judge's or jury's role in a trial best
offloaded to a computer?

- Things that require more work to offload to the computer than the
work they save are not best done by computer. For example, is adding
2+2 best done by punching it into a calculator?

- Things where there is more than one solution are not necessarily
best off put into a computer unless you ALSO do the work to assure
that the program is as flexible as a non-program. For example,
is leaving your vacation planning up to a computer the best thing?
It may do very well at finding fares to Rio, but if it is just going
to say "Going to Rio is your only option for less than $200" are you
going to believe it? Sometimes I stay home on my vacations or go on
a day-trip. What if all the options are not in the computer?

- Things that restrict freedom are not necessarily better. Is it better
to leave the planning of your trip across town to a guidance computer?
I haven't seen MAPQUEST.COM or MAPS.YAHOO.COM ever plot me a best course
on any path I knew personally.

- Things that are potential hill-climbing problems are not best
programmed into a computer unless you're sure you're not climbing
a bad hill. You didn't offer any caveats.

I'm sure this list is not exhaustive.

Pascal Costanza

May 7, 2003, 1:21:43 PM
Adrian Kubala wrote:

> "Need" is so subjective -- none of us need computers, do we? But, IF it
> were possible for your computer to PROVE that your program didn't have
> a certain set of bugs, that would be a good thing, would it not?

No, because it necessarily restricts the language so that I can do less
in a statically typed language than in a dynamically typed one. The
question is, if in doubt, do we prefer safety or flexibility?

One might not need total flexibility 80% of the time (make that 90%, or
95%, it doesn't really matter). So this leads many people to believe
that it is quite ok to restrict the language. However, the real counter
argument by proponents of dynamically typed languages is that the kind
of bugs that are caught by static type systems simply do not occur in
practice. So a restriction of a language in order to make it safer
becomes gratuitous. Why bother?

Here is a quote, as recently seen on the ll1 mailing list:

: I was reading Joe Armstrong's paper "The development of Erlang" from
: ICFP '97 and noticed this interesting point when talking about the
: new type system being applied to the standard libraries:
:
: "The type system has uncovered no errors. The kernel libraries
: were written by Erlang "experts"--it seems that good programmers
: don't make type errors."

Here is another recent article by Robert C. Martin who claims to have
been a fan of static typing in the past:
http://www.artima.com/weblogs/viewpost.jsp?thread=4639

Here is another article on a language construct that sucks in large
programs as soon as it gets statically checked:
http://www.mindview.net/Etc/Discussions/CheckedExceptions

Another example: in the past, there have been languages that statically
checked array bounds, an approach that has since been largely abandoned
in almost all languages.

So at least you have to admit that this could be a pattern, and that
dynamic checking could turn out as generally superior to static
checking. (I am not sure this is the case, but it could be.)

The real problem I see in the usual arguments in favor of static
checking is that they start by defining a set of bugs they want to get
rid of, then define a static type system that can get rid of these bugs,
and finally "prove" the usefulness of that type system by formally
proving that the type system indeed helps to get rid of these bugs. This
is a circular argument.

What is really needed is an empirical evaluation of static type systems:
Under what settings do they help? Are there certain kinds of
applications that require static type systems more than others? Is this
perhaps just a matter of style? Does a higher criticality require more
static checks, or rather a more complete test suite in practice? And so
forth.

Such empirical studies are largely missing, and this is clearly a bad
sign for the field of computer science in general.

Until we have some hard data on these issues, we are simply left alone
with our own prejudices and experiences.

However, one thing should be clear: it's the job of the proponents of
static type systems to make a good case for them, and not the job of the
proponents of dynamic type systems to disprove their value. You can't
come up with an arbitrary piece of technology, claim that it increases
the quality of software, and then require other people to disprove this
claim. Such an approach gives us RUP, UML, and the like...

> Static
> typing is the only direction which will allow this, and the set of bugs it
> can disprove grows larger with more research. Eventually, the cost will
> decrease to the point that it's worthwhile no matter how rarely you make
> mistakes. Anything which offloads work to the computer, no matter how
> small, is a step in the right direction.

This is what proponents of static type systems believe in. But you
should not portray this as a given fact (or as "the only direction").

To the contrary, it can be shown that static proofs of program
correctness are limited by the halting problem. This boundary means that
you can only statically check a strict subset of all programs that might
be dynamically safe. It is your hope that you will reach a subset that
is complete enough to cover all relevant cases, and that's ok, but you
shouldn't disguise hope as truth.
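
(A small illustration of that strict subset, mine rather than Pascal's:
the function below is perfectly safe for the calls shown, because its
result type depends on the run-time value of its argument, and that is
exactly the kind of dependency a conventional static type system
cannot express, so a checker that demands one fixed result type has to
reject or weaken such code.)

;; Hypothetical example. The result is an INTEGER or a SYMBOL
;; depending on the *value* of SPEC, not on its type.
(defun parse-token (spec)
  (if (digit-char-p (char spec 0))
      (parse-integer spec)
      (intern (string-upcase spec))))

;; Both uses are safe at run time:
;; (1+ (parse-token "41"))           => 42
;; (symbol-name (parse-token "foo")) => "FOO"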

Pascal Costanza

May 7, 2003, 1:34:18 PM
Nils Goesche wrote:

> I guess I still don't get it. First of all, until now I used to
> believe that I'm using a ``working FFI´´ all the time in the Lisp
> implementation I use, and it is an implementation of Common Lisp, not
> some other language with a notion of pointers.

OK, maybe I am missing something - how does LispWorks' FFI communicate
with C and implement callbacks, for example? The C code needs _some_
kind of handle to access Common Lisp objects, doesn't it? (and vice
versa...)

Michael Livshin

May 7, 2003, 1:53:33 PM
Kent M Pitman <pit...@world.std.com> writes:

it is, of course, entirely political.

another indication is the fact that all (I think) serious Scheme
implementations have primitive support for at least one of the
following:

exception handling
upward-only continuations
one-shot continuations

no `unwind-protect', though.

--
All ITS machines now have hardware for a new machine instruction --
CIZ
Clear If Zero.
Please update your programs.

Nils Goesche

May 7, 2003, 2:19:32 PM
Pascal Costanza <cost...@web.de> writes:

> Nils Goesche wrote:
>
> > I guess I still don't get it. First of all, until now I used to
> > believe that I'm using a ``working FFI´´ all the time in the Lisp
> > implementation I use, and it is an implementation of Common Lisp,
> > not some other language with a notion of pointers.
>
> OK, maybe I am missing something - how does LispWorks' FFI
> communicate with C and implement callbacks, for example? The C code
> needs _some_ kind of handle to access Common Lisp objects, doesn't
> it? (and vice versa...)

The C code cannot do anything with Common Lisp objects. If a C
function wants to operate on a struct

struct blark {
    int x;
    char y;
    short z;
};

we cannot simply pass it a real Lisp object like a struct

(defstruct blark
  x
  y
  z)

because an object of this type will have a totally different memory
layout, anyway. What gets ultimately passed by LispWorks is a /C/
struct, not a Lisp struct (under the hood, of course). I tell
LispWorks about what kind of C function I want to call (its type
signature), then I have some macros that will let me construct some
special Lisp objects which /represent/ C objects for the internal FFI
code, and when I finally call the C function with such a
representation as argument, the FFI will, under the hood, convert that
thing into something C can operate on sensibly. But the whole process
is rather opaque, and you do not have any new semantic Lisp constructs
you could use for anything /but calling C/. The situation for
callbacks is similar: You can make a special kind of function that C
will be able to call. You have to specify the C type signature of the
function you want, and C will call a function /with C objects/ again.
The FFI will convert or wrap these C objects into special kinds of
Lisp objects you can at least manipulate from Lisp. Once there, you
can call ordinary Lisp functions with ordinary Lisp objects as
arguments.

The whole process is somewhat awkward and non-trivial to use, but this
is because Lisp objects and C objects are so hopelessly different.
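
(A rough sketch of that round trip for the BLARK example; this is my
own illustration, not from the original post, using the usual
LispWorks FLI operators. Details such as how :CHAR slots are
converted are glossed over.)

;; 1. Describe the C struct to the FLI:
(fli:define-c-struct blark
  (x :int)
  (y :char)
  (z :short))

;; 2. Declare the C function's type signature:
(fli:define-foreign-function (foo "foo" :source)
    ((arg-1 (:pointer (:struct blark))))
  :result-type :int
  :language :ansi-c)

;; 3. Build a *foreign* BLARK (one of those special Lisp objects that
;;    represent C objects), fill its slots, and pass a pointer to it:
(fli:with-dynamic-foreign-objects ((b (:struct blark)))
  (setf (fli:foreign-slot-value b 'x) 42
        (fli:foreign-slot-value b 'z) 13)
  (foo b))

;; 4. Going the other way, a callback: C calls this with C objects,
;;    and the FLI converts them before the Lisp body runs.
(fli:define-foreign-callable ("my_callback" :result-type :int)
    ((n :int))
  (* n n))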

Note that machine addresses of Lisp objects are not involved.
Moreover, it's not like you could use in ``LispWorks Lisp´´ some
significant new language construct called ``pointer´´.

When I hear ``Lisp lacks FOO´´, or ``Lisp abstains from using FOO´´,
this would mean, to me, that FOO is something that could be added to
the HyperSpec such that it would have some semantic meaning to Lisp
code operating with Lisp objects, not only for calling C. Like
call/cc, for instance. I do not believe that there is such a thing
that could sensibly be called ``pointer´´.

And the ``everything is a pointer´´ approach I mentioned is not
totally irrelevant to this, either. The implementation /could/ simply
send the ordinary object descriptors it uses internally anyway, with
tag bits and all, down to C functions. These object descriptors do
have some resemblance to pointers. Of course, they are different from
C pointers, and libjpeg wouldn't like them at all, but this is not
surprising, as C structs are implemented differently from Lisp structs,
too.

William D Clinger

May 7, 2003, 3:07:36 PM
Kent M Pitman wrote:
> Btw, I just this minute made a change to the article
> http://www.nhplace.com/kent/PFAQ/unwind-protect-vs-continuations.html
> in order to add some examples to make it clear what the options
> for fixing it would look like in terms of code.
>
> If you read the version where the proposed changes had no code examples,
> you might want to take a second look.
>
> Hopefully this will avoid the sense that the problem is pragmatic
> and will make it more clear that the problem is political.

Actually, I think the problem is that we don't understand why you
aren't willing to consider implementing UNWIND-PROTECT as a simple
macro and registering it via the SRFI process.

Your article is written as though there were some technical impediment
to this, but I've never been able to figure out what it might be.

Will
