One of T's built-in features was first-class environments, which are data
structures that map symbols to values. Environments were designed to
serve the same purpose as packages in CL: to prevent name collisions
between program modules. The difference between packages and environments
is that packages allow you to have multiple symbols with the same print
name, while environments let you have multiple values associated with a
single symbol. (This is not to be confused with returning multiple values
from a function, which is a completely different issue.)
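To make the package half of that concrete, here is a small sketch (package
names made up for illustration): two symbols can share the print name "FOO"
and still be distinct objects, because they live in different packages.

(defpackage :app-a (:use :cl))
(defpackage :app-b (:use :cl))

(eq (intern "FOO" :app-a) (intern "FOO" :app-b))  ; => NIL, two distinct symbols
(symbol-name (intern "FOO" :app-a))               ; => "FOO"
(symbol-package (intern "FOO" :app-b))            ; => #<PACKAGE "APP-B">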
A mentor of mine at the time (1988 or so) once expounded on the virtues of
environments and the evils of packages. He made an offhand comment that I
still remember to this day: "Common Lisp" he said "really has a flat
namespace." What he meant was that there was little effective difference
between packages as they exist in CL, and a Lisp with only one package and
a reader hack that attached the prefix "package:" (where "package" is
replaced with the value of the global variable *package*) to all symbol
names that didn't already have colons in them before interning them. Now,
this isn't quite right, of course, because of import, export, and
use-package, but his fundamental point has stuck with me through the
years, that discriminating between multiple interpretations of a symbol
name at read time rather than at eval time was the Wrong Thing. (This is
not to say that I necessarily agree with this point of view, just that I
remember it.)
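Roughly, the hack he had in mind was something like this (my reconstruction,
not his code; the helper name is made up):

(defun flat-name (string &optional (pkg *package*))
  ;; Qualify an unqualified name with the current package, as if the whole
  ;; system had just one big flat namespace of "PACKAGE:NAME" strings.
  (if (find #\: string)
      string
      (concatenate 'string (package-name pkg) ":" string)))

(flat-name "FOO")   ; => "COMMON-LISP-USER:FOO" in the default package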
Through the years I have always found the package system to be pretty
annoying. I often find myself doing the following:
> foo
ERROR: foo is unbound ; Doh! Forgot that foo is in foo-package
> (use-package 'foo-package)
ERROR: symbol FOO-PACKAGE:FOO conflicts with symbol FOO ; Arrggh!!!
So I've always considered packages as more of a necessary evil than a
feature. I was told that this view was common in the CL world, as
reflected in one of Guy Steele's many subtle jokes in CLTLn: in both
editions of the book the section dealing with packages is "chapter 11",
which in the American legal system refers to a set of laws dealing with
bankruptcy.
But in the recent discussion of what "first class symbol" means a number
of people have made statements that at least imply (and sometimes
explicitly state) that they like packages, that they think they really are
a cool feature. For those of you who feel this way I have the following
question: were you aware of the eval-time alternative for resolving name
conflicts provided by environments? And if you were, what is it that
makes you prefer packages?
Thanks,
E.
The original description of T (as told to me by the implementors
when they had just started) was: "industrial strength Scheme".
I sympathize, but I expect to get singed on the
REPL all the time, even in Scheme. I mainly use the
REPL to observe how things crash and burn.
>So I've always considered packages as more of a necessary evil than a
>feature. I was told that this view was common in the CL world, as
>reflected in one of Guy Steele's many subtle jokes in CLTLn: in both
>editions of the book the section dealing with packages is "chapter 11",
>which in the American legal system refers to a set of laws dealing with
>bankruptcy.
:-) :-) All these years with that book, and I
didn't realize that till now.
>But in the recent discussion of what "first class symbol" means a number
>of people have made statements that at least imply (and sometimes
>explicitly state) that they like packages, that they think they really are
>a cool feature. For those of you who feel this way I have the following
>question: were you aware of the eval-time alternative for resolving name
>conflicts provided by environments? And if you were, what is it that
>makes you prefer packages?
Environments are forbiddingly expensive? I mean,
Scheme, whose midwives were well aware of first-class
environments (SICP 1/e mentions them) would have
standardized on them in a jiffy otherwise, no?
--d
> One of T's built-in features was first-class environments, which are data
> structures that map symbols to values. Environments were designed to
> serve the same purpose as packages in CL: to prevent name collisions
> between program modules.
WRONG! Just because you don't value symbolic processing doesn't mean
that others can't. PLEASE notice that symbols in Lisp ARE VALUES and
can be used as such. Therefore, when passing symbols around, one MUST
be sure that application X's symbol is not the same as application Y's
symbol due to some unintentional name-clash. That is what packages
solve.
> A mentor of mine at the time (1988 or so) once expounded on the virtues of
> environments and the evils of packages.
Amazing that of two things that can cooperate, one is evil and one is good.
> Through the years I have always found the package system to be pretty
> annoying. I often find myself doing the following:
> > foo
> ERROR: foo is unbound ; Doh! Forgot that foo is in foo-package
> > (use-package 'foo-package)
> ERROR: symbol FOO-PACKAGE:FOO conflicts with symbol FOO ; Arrggh!!!
So choose the restart to unintern it from the current package.
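In code, that comes down to something like this (a sketch, using the package
name from the transcript):

(unintern 'foo)                      ; drop the freshly-interned CL-USER::FOO
(use-package 'foo-package)           ; now succeeds
;; or, keeping both symbols around but preferring the imported one:
(shadowing-import 'foo-package:foo)
(use-package 'foo-package)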
> But in the recent discussion of what "first class symbol" means a number
> of people have made statements that at least imply (and sometimes
> explicitly state) that they like packages, that they think they really are
> a cool feature. For those of you who feel this way I have the following
> question: were you aware of the eval-time alternative for resolving name
> conflicts provided by environments?
No, since environments map symbols to values, not strings to symbols.
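That is, roughly (a sketch of the distinction):

(find-symbol "CAR" :common-lisp)   ; string -> symbol : what a package does
(symbol-value '*print-base*)       ; symbol -> value  : what an environment does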
--
-> -/ - Rahul Jain - \- <-
-> -\ http://linux.rice.edu/~rahul -=- mailto:rj...@techie.com /- <-
-> -/ "Structure is nothing if it is all you got. Skeletons spook \- <-
-> -\ people if [they] try to walk around on their own. I really /- <-
-> -/ wonder why XML does not." -- Erik Naggum, comp.lang.lisp \- <-
> A mentor of mine at the time (1988 or so) once expounded on the virtues of
> environments and the evils of packages. He made an offhand comment that I
> still remember to this day: "Common Lisp" he said "really has a flat
> namespace." What he meant was that there was little effective difference
> between packages as they exist in CL, and a Lisp with only one package and
> a reader hack that attached the prefix "package:" (where "package" is
> replaced with the value of the global variable *package*) to all symbol
> names that didn't already have colons in them before interning them. Now,
> this isn't quite right, of course, because of import, export, and
> use-package, but his fundamental point has stuck with me through the
> years, that discriminating between multiple interpretations of a symbol
> name at read time rather than at eval time was the Wrong Thing. (This is
> not to say that I necessarily agree with this point of view, just that I
> remember it.)
It's quite possible to build a system on top of the `flat' package
namespace which allows it to be quite non-flat. I have a system
called conduits (and there is at least one other such system I think),
which adds a couple of options to DEFPACKAGE which let you define a
package as `extending' another one. For a long time I didn't really
use this in anger but I have done so now, and the package structure of
the system I'm working on now is layered quite deeply.
I have a whole bunch of low-level packages, and then something like:
(defpackage :weld-low
  (:use)
  (:extends ...n packages...))

and then further up

(defpackage :weld-crs-implementation
  (:use :cl :weld-low))

...

(defpackage :weld
  (:use)
  (:extends :weld-low :weld-crs-implementation ...))
and others. Of course it's `just a hack' and not hygienic or anything, so it
doesn't really qualify as a proper program in our brave, pure, new
world. It also doesn't address the hierarchical package *name* issue
at all, which I think ACL does, for instance.
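Very roughly, and ignoring the bookkeeping the real thing does, the :EXTENDS
option amounts to re-exporting the used packages' external symbols.  A sketch,
with made-up package names:

(defpackage :weld-strings (:use :cl) (:export #:trim))
(defpackage :weld-tables  (:use :cl) (:export #:lookup))
(defpackage :weld-low (:use))

(dolist (p '(:weld-strings :weld-tables))
  (do-external-symbols (s p)
    (import s :weld-low)     ; make each external symbol present in :WELD-LOW
    (export s :weld-low)))   ; and re-export it from there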
Still, it's very useful, I find.
--tim
I begin to see where you come from and where you went wrong, Erann. This
is such a stupid thing to say that I marvel at the respect you must have
felt for this mentor. I hope _I_ shall never have a student who does not
have in him the ability to say "that's a load of crap, Erik" if I make a
similarly retarded point and expect people to believe it, not laugh.
How about the function and variable values of a symbol? It is just as if
we had used Scheme evaluation rules, but had a reader hack that prefixed
the first position of a form with "operator-" and every other symbol with
"variable-". See? No material difference between Scheme and Common Lisp
-- they are equivalent modulo a reader hack. Right? If you see how
_retarded_ it would be to explain the difference this way, which I
suspect you will, since I am not your highly respected mentor and you are
free to think instead of believe, it should help you realize just how
retarded the explanation that you heard about packages was, too.
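To spell the analogy out with a trivial sketch: the same symbol carries both
a function binding and a variable binding, and it is the position in the form
at evaluation time, not the reader, that decides which one is meant.

(defun foo (x) (* 2 x))   ; the function cell of FOO
(defvar foo 10)           ; the value cell of the very same symbol FOO
(foo foo)                 ; => 20 -- operator position uses the function,
                          ;          argument position uses the variable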
| Now, this isn't quite right, of course, because of import, export, and
| use-package, but his fundamental point has stuck with me through the
| years, that discriminating between multiple interpretations of a symbol
| name at read time rather than at eval time was the Wrong Thing.
This is just too incredible to be for real. There is no discrimination
between multiple interpretations of a symbol at read time. That is just
_so_ confused, no wonder you have serious problems. Have you actually
ever understood Common Lisp at all, Erann? I have to wonder. At read
time, a string is interned in only one package to return a symbol. There
_is_ no symbol without the package. There are only the characters in a
symbol's name.
Again and again, you show us that you are _horribly_ confused and that
much of your dislike of Common Lisp comes not from Common Lisp, but from
some staggeringly stubborn misconception and refusal to listen to others.
| (This is not to say that I necessarily agree with this point of view,
| just that I remember it.)
The more I read from you, the more I think you are a Scheme person.
This is not a compliment.
| Through the years I have always found the package system to be pretty
| annoying. I often find myself doing the following:
|
| > foo
| ERROR: foo is unbound ; Doh! Forgot that foo is in foo-package
| > (use-package 'foo-package)
| ERROR: symbol FOO-PACKAGE:FOO conflicts with symbol FOO ; Arrggh!!!
Such are the results of incompetence at work. And how typical to react
emotionally at something so fundamentally silly. How come I never have
this problem? Could it be that I use symbol completion in Emacs? Could
it be that when I once in a while do get an _error_ like this, one of the
available restarts is to unintern the old symbol and proceed, so I have
no _problem_ with it? Could it be that I build my packages with care and
actually want to know which package a symbol is in -- _because_ I do not
consider packages and symbols evil?
| So I've always considered packages as more of a necessary evil than a
| feature.
Yop, Scheme is for you, this is becoming clearer and clearer.
| I was told that this view was common in the CL world, as reflected in one
| of Guy Steele's many subtle jokes in CLTLn: in both editions of the book
| the section dealing with packages is "chapter 11", which in the American
| legal system refers to a set of laws dealing with bankruptcy.
I find this unfunny, not because it is offensive, but because it is so
stupid. It could have been a cute pun if it had been rephrased as
something like "Common Lisp packages are in Chapter 11", but as stated it
is just dumb, mostly because "Chapter 11" is nothing more than a silly
coincidence and it has to be in an expression that relates it to the
U.S. Bankruptcy Code to make any sense.
But your interpretation of what "Chapter 11" sort of "means" is actually
a very good example of how _wrong_ it is to do eval-time interpretation
instead of read-time. It may make for puns (good, bad, atrocious), but
not understanding. Unless one wants to publish books without a Chapter
11 like some hotels do not have a 13th floor because of a similarly bad
case of numerology and mysticism, there will have to be a Chapter 11 in
books with more than 10. E.g., exceptions are bankrupt in Java in your
"interpretation" because they are in Chapter 11 in both editions of "The
Java Language Specification", also written by Steele, et al, so surely
this is not accidental, either. I am sure this is "fun" to people who do
eval-time interpretation with random environments, but I find it _stupid_.
BTW, Chapter 11 of the U.S. Bankruptcy Code (Title, not Chapter, 11 of
the United States Code) is concerned with reorganization. It applies
when your creditors have some hope that they will recover their money if
they can (forcibly) reorganize the company. Chapter 7 (liquidation)
applies when there is no hope and the creditors accept whatever share
they get from the proceeds of its liquidation. This should have been
common knowledge to anyone who has any interest in referring to the
U.S. legal system. A remarkable level of intellectual sloppiness is
necessary to make this connection and to think it has any meaning.
| But in the recent discussion of what "first class symbol" means a number
| of people have made statements that at least imply (and sometimes
| explicitly state) that they like packages, that they think they really
| are a cool feature.
You know, there are some people who move to other countries, or other
places in the same country, whatever, and who manage to _like_ the place
they have moved to, even though it has lots of flaws and problems and
unlikable people and funny smells and such, probably just like the place
they moved from, which they probably also made up their mind to like, but
it may not have worked out, or they wanted something _different_, not for
lack of like. Then there are people who know what they will and will not
like and who are _never_ satisfied or happy with anything, and who are
more than likely to _avoid_ doing anything than doing something they do
not like through and through. Scheme has a history of not doing anything
because they _feared_ that they would not like it, or that somebody had made
up their mind they did not like it even before they tried.
I think the ability to make up your mind to like something you _want_ to
use is kind of crucial to survival. There is so much wrong with the
world that one has to accept flaws and imperfections and just get on with
one's purpose in life. People who _have_ no purpose, do not need this.
There is no such thing as a necessary evil. Thinking in such terms is
itself destructive, not only to one's ability to do what is necessary,
but to appreciate the results. If you have to deal with necessary evils all
the time, you will be in a state of permanent frustration, and what
better way of expressing this than "I like Common Lisp, but ...", and of
becoming so frustrated with Common Lisp that one spends one's time
griping about the language instead of just getting on with one's life.
But more importantly, the attitude problems that result in one's
inability to "like" something one has to use or do because of many other
factors, such as liking _them_, are actually pandemic to a personality.
People who have to set _all_ the rules, cannot _follow_ any rules. We
have seen this in people who publicly favor going behind their clueless
boss instead of working _with_ him, we have seen this in people who make
their _living_ from other people's trust that they can follow a standard
specification go public with statements that it is "braindamaged".
However, there are people who are actually able to live and prosper in a
system that involves respecting authorities, the rule of law, your boss and
the chain of command, and so on -- one does not _have_ to be a criminal to make
money or become rich, but it helps some, like Bill Gates and Enron and
such, but by far most criminals end up rather destitute or dead at an
early age. Protesting against authority is an immature youth thing, part
of breaking free from your parents, but some people actually figure out
that authority itself is not what made it necessary to break with your
parents, and they do not have to break with every other authority the
same way.
| For those of you who feel this way I have the following question: were
| you aware of the eval-time alternative for resolving name conflicts
| provided by environments?
Yes, we call them hashtables using strings for keys. Oh, right, Scheme
does not have hashtables. Common Lisp also supports hashtables that use
keywords for keys.
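That is, nothing more exotic than this (a trivial sketch):

(defvar *by-string*  (make-hash-table :test #'equal))   ; strings as keys
(setf (gethash "foo" *by-string*) 42)
(defvar *by-keyword* (make-hash-table :test #'eq))      ; keywords as keys
(setf (gethash :foo *by-keyword*) 42)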
| And if you were, what is it that makes you prefer packages?
This question makes absolutely no sense. If I am aware of the existence
of white meat, why do I prefer red meat? I like both, I eat both, but on
different occasions. I use hashtables with string keys for some things
and keyword for others. I sometimes use the package system because the
reader is such a convenient function that I do not want to reimplement
it, and the package system has some really nifty features that, if you
can come to like what you have, you can find good use for.
If you cannot like what you have freely accepted, there is something
wrong with you. If you do not accept something, but you cannot break
free from it, there is something wrong with you. You have but one life
to live, and it is too short to remain too long with your mistakes and
too long _not_ to leave them behind in time.
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
g...@jpl.nasa.gov (Erann Gat) writes:
> In my formative years I was a fan of T, which for those of you who don't
> know, is a member of the Lisp family of languages ;-) that resembles
> Scheme more than Common Lisp. It is a Lisp-1, but it had a lot of
> features not found in Scheme, like an object system, and a setf-like
> feature called "setters". ("SETTER" was a function which when called on
> another function would return a function that when called would set the
> location referred to by the second function. Whew! e.g.: ((setter car)
> x 1) == (rplaca x 1).)
...
> But in the recent discussion of what "first class symbol" means a number
> of people have made statements that at least imply (and sometimes
> explicitly state) that they like packages, that they think they really are
> a cool feature. For those of you who feel this way I have the following
> question: were you aware of the eval-time alternative for resolving name
> conflicts provided by environments?
Being one of the central parties making this noise, I'll do something I'm
not big on and actually cite my credentials to show that I am not blathering
out of ignorance:
(1) I, along with Jonathan Rees and Norman Adams, was one of a 3-person
team that kicked off the T effort at Yale. I did the initial language
prototype. I basically wanted to design a language but not a compiler,
and Jonathan wanted to design a compiler but not a language, which is
how he and I teamed up. (Norm was new to it all then and his major
contributions I think came after I left.)
I'm personally responsible for the creation of the
"locale" concept because Jonathan didn't like the "global" notion
and yet I was convinced people wanted the metaphor even if it had
problems in practice. So we made locales, which were, as I initially
described them, "locally global" workspaces that could serve as
bottom-out points for lexicality to home all otherwise-free references.
I'm also the person who, at Jonathan's request (i.e., it was his idea
but I validated its feasibility), first did the hack, in my Maclisp-based
emulator of making message-sending functions basically use their own
identity as the value to be sent. This is a huge awful pun that could
only work cleanly in a Lisp1 (and we knew that) but basically we did
something like (if I recall the technique, though I've forgotten the
operator names--I only know we spelled out whole words and didn't use
abbrevs like I've used here):
(define (make-op)
  (letrec ((op (lambda (self . args) (apply send self op args))))
    op))
(define kar (make-op)) ;Make a KAR operation
(define kdr (make-op)) ;Make a KDR operation
So that the handler for an object doing this message would handle
a message whose name was the function that sent the message. (Weird.)
So one day I changed over the operation handlers from something that
did (cond ((eq? msg 'kar) ...) ...) to (cond ((eq? msg kar) ...) ...)
which had the neat [lisp1-only] property that if you renamed the kar
message operation, you also renamed its key. So someone in another
environment used to calling it FIRST and LAST could do
(cond ((eq? msg first) ...) ...)
and didn't have to know that in the sender's environment the name of the
message was KAR, not FIRST. Further, since SETTER is an object message
itself, a renaming of KAR means that SETTER gets called on the
right thing and so (SETTER FIRST) in the environment with FIRST renamed
does exactly the right thing.
It's all very cool. But, then again, it's one of those oh-too-cute
kludges that they're referring to when they say "try not to be
too clever". It's a hack. A hack that works. But a serious
pun. It isn't something that falls out of the beauty of Lisp1. It's
a kludge constructed tenuously from the felicity ("accidental truth")
of Lisp1 and other accidental factors--like that you can compare
procedures with EQ? ... there was a move afoot in Scheme a while back
to say that EQ? should not be a "total" function; that is, that it
should be an error to call it on certain types, functions among them.
Had that been so, the entire kludge that was that cool thing in T
would have not worked, Lisp1 or not. Message objects would have had to
be some other datum than the function itself. But it didn't come out
that way and the punsters of the world (Jonathan and I) won out that day.
I _like_ puns mind you, and I don't feel guilty. I loved
this pun in particular and am glad you remember it fondly. But I
have a real laugh when people talk about Lisp2 as if they are the
home of all puns and Lisp1 is free somehow of the urge to pun.
In any language, people will make use of the semantics to their
advantage. Where namespaces are folded on themselves, people
will use that fact; where they are not, people will use that.
We're all sinners. Let he who thinks he is not cast the first
stone... (Oh, ouch, that's right--they have. OK. You can stop now.)
(2) I am among the authors of the Scheme report. I don't get my way a
lot in certain votes, but I like to think that it sometimes matters
that I am there. And in any case, I am aware of how the Scheme
people think about modules. It's been discussed many times among
the members over the years.
(3) I have used Scheme in workplace settings for paid work. I do know how
Scheme works from a practical point of view.
(4) I served on the ISLISP committee, for a time as US Representative from
(X3)J13 and as Project Editor. [I continue as Project Editor although not
a J13 member because this is a neutral, non-technical role and does not
require me to be a technical participant nor national representative,
just a helpful guy.] Modules and packages are among the topics
that we discussed some years back but deferred for "future work".
In short, if your question was, am I just making up my preference for packages
because I am ignorant of the alternatives, the answer is: No.
> And if you were, what is it that makes you prefer packages?
Do you want me to repeat the stuff I mentioned under the data hygiene thread?
The short answer is that non-symbols are fine, but they do not a symbolic
processing language make. Some days I don't mind passing around Scheme's
symbols (with their characteristic "no associated meaning") + an environment
(to give the symbols meaning). Other days, I like the power of being able
to have symbols like in CL, where the symbol means something all by itself.
(Most days, in fact.) Passing non-symbols is fine computation, but is not
the essence of symbolic computation.
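To make the contrast concrete with a trivial sketch: in CL the symbol itself
can carry the association, with no separate environment object needed to give
it meaning.

(setf (get 'water 'boiling-point) 100)   ; stored on the symbol WATER itself
(get 'water 'boiling-point)              ; => 100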
> Being one of the central parties making this noise, I'll do something I'm
> not big on and actually cite my credentials to show that I am not blathering
> out of ignorance:
I won't follow your example because I don't know if I'm blathering out of
ignorance or not. (That's what I'm trying to figure out.) But if you
want to see my credentials my CV is on the Web. Suffice it to say I've
built a successful career in part by writing Common Lisp code. I feel
fairly confident that I'm sufficiently non-idiotic that if someone can't
explain something in this area so that I can understand it there's a good
chance that it's their fault and not mine.
> > And if you were, what is it that makes you prefer packages?
>
> Do you want me to repeat the stuff I mentioned under the data hygiene thread?
No, but I would like you to expand on something:
> I cannot later 'relate', at the symbol
> level, your person-1 with my person-1, without doing a global renaming.
What does that mean? Can you restate this (which seems to be the crucial
point) without scare quotes and without squishy phrases like "at the
symbol level"?
> The short answer is that non-symbols are fine, but they do not a symbolic
> processing language make. Some days I don't mind passing around Scheme's
> symbols (with their characteristic "no associated meaning") + an environment
> (to give the symbols meaning). Other days, I like the power of being able
> to have symbols like in CL, where the symbol means something all by itself.
> (Most days, in fact.) Passing non-symbols is fine computation, but is not
> the essence of symbolic computation.
Suppose CL had no symbols. It seems I could easily add them by defining
two classes: "symbol" with slots name, value, plist, etc. and "package"
that's really just a hash table that maps strings onto instances of the
symbol class. Probably 20-30 LOC at most. Would that be sufficient? Or
would I still be missing some essential capability? What would it be?
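Roughly the sort of thing I have in mind (an untested sketch, names made up):

(defclass my-symbol ()
  ((name  :initarg :name :reader my-symbol-name)
   (value :initform nil  :accessor my-symbol-value)
   (plist :initform nil  :accessor my-symbol-plist)))

(defclass my-package ()
  ((table :initform (make-hash-table :test #'equal) :reader my-package-table)))

(defun my-intern (string pkg)
  ;; Return the existing symbol with this name, or make and register one.
  (or (gethash string (my-package-table pkg))
      (setf (gethash string (my-package-table pkg))
            (make-instance 'my-symbol :name string))))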
Maybe the problem is that I don't really understand what people mean by
"symbolic processing". A concrete non-contrived example of "symbolic
processing" that's easy in CL and hard in Scheme would probably help.
E.
Since everyone else is weighing in on this, I might as well offer my uninformed
opinion.
In the years I have been hacking lisp, I have on many occasions cursed
the package system, mainly for making me restart my lisp to undo whatever
random damage happened when I read something in the wrong package.
On the other hand, I have *never once* had the occasion to say
`Wow, damn glad that package system saved my ass on *this*!'
It is hard to tell if you are serious about this, but do you actually
mean that? Are you aware of the fact that it might be a benefit
of the package system that you had no reason so far to complain
when you used different packages from different parties for your
own projects? Or is the above in fact an example of very fine
irony which wanted to say this very thing? In that case I feel
very embarrassed for bothering you...
Regards,
--
Nils Goesche
Ask not for whom the <CONTROL-G> tolls.
PGP key ID 0xC66D6E6F
Good things come in small packages.
When I broke up a monolithic wadge of code into reasonable packages (for
no particular reason other than it seemed The Right Thing), to get clean
compiles I had to refactor -- not too much, but enough to realize the
code was being improved in each case by the exercise.
Note that this was not about name conflicts, just classic stuff like a
utility package calling functions from a higher level package it should
not have known about, and less so the higher-level package going after
unexported implementation stuff.
--
kenny tilton
clinisys, inc
---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor, and
I'm happy to state I finally won out over it." Elwood P. Dowd
> In article <sfwg030...@shell01.TheWorld.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > Being one of the central parties making this noise, I'll do something I'm
> > not big on and actually cite my credentials to show that I am not blathering
> > out of ignorance:
>
> I won't follow your example because I don't know if I'm blathering out of
> ignorance or not. (That's what I'm trying to figure out.) But if you
> want to see my credentials my CV is on the Web. Suffice it to say I've
> built a successful career in part by writing Common Lisp code. I feel
> fairly confident that I'm sufficiently non-idiotic that if someone can't
> explain something in this area so that I can understand it there's a good
> chance that it's their fault and not mine.
Absolutely true. It is certainly not your fault if people cannot
(so far) explain in all detail how they arrived at what they are
talking about now. Everybody is aware of your impressive
credentials, and you know much, much more about Lisp than I do.
But I do know something about ideas. What you are doing now
looks a bit unfair to me -- Kent seems to have had a sudden,
epiphanic, idea on what might be the source, the root of the
difference in character between Lisp and Scheme; why don't you
give this idea a chance? Of course he cannot prove it formally,
let alone right now. But isn't it worth considering for a while?
Sorry for following up to myself, but I'd like to add something:
Why don't you try to prove it yourself? I have often found that
it is a good way, even if you don't believe in an idea, to try to
prove it nevertheless: You'll find out what the major obstacles
are and thus gain valuable insight -- that would be much more
productive than demanding a formal prove from others.
> productive than demanding a formal prove from others.
proof
Darn, sometimes even English spelling is hard. Sorry.
> I begin to see where you come from and where you went wrong, Erann. This
> is such a stupid thing to say that I marvel at the respect you must have
> felt for this mentor. I hope _I_ shall never have a student who does not
> have in him the ability to say "that's a load of crap, Erik" if I make a
> similarly retarded point and expect people to believe it, not laugh.
That's a load of crap, Erik. Having respect for someone doesn't mean that
you believe everything they say.
> Have you actually ever understood Common Lisp at all, Erann?
I don't know. That's what I'm trying to figure out. You're not being
much help.
> I have to wonder. At read
> time, a string is interned in only one package to return a symbol. There
> _is_ no symbol without the package.
Uninterned symbols have no package. Have you ever understood Common Lisp
at all, Erik?
> Again and again, you show us that you are _horribly_ confused
That's true. I am horribly confused. Whether that's my fault or the
fault of the people trying to explain things to me is unclear.
> and that much of your dislike of Common Lisp
That is false. I like Common Lisp very much.
> comes not from Common Lisp, but from
> some staggeringly stubborn misconception and refusal to listen to others.
Do you even realize that earlier in this very same post you berated me not
for my refusal to listen, but for having listened?
> | Through the years I have always found the package system to be pretty
> | annoying. I often find myself doing the following:
> |
> | > foo
> | ERROR: foo is unbound ; Doh! Forgot that foo is in foo-package
> | > (use-package 'foo-package)
> | ERROR: symbol FOO-PACKAGE:FOO conflicts with symbol FOO ; Arrggh!!!
>
> Such are the results of incompetence at work.
One could respond the same way to every frustration anyone has ever had
with any programming language. On this view every problem anyone has
with, say, C++, is not a problem with C++ but merely incompetence on the
part of the user. Some people actually believe this.
E.
> In article <32253120...@naggum.net>, Erik Naggum <er...@naggum.net> wrote:
> > I begin to see where you come from and where you went wrong,
> > Erann. This is such a stupid thing to say that I marvel at the
> > respect you must have felt for this mentor.
> That's a load of crap, Erik. Having respect for someone doesn't
> mean that you believe everything they say.
Then it was worse, it was blind faith.
> > Have you actually ever understood Common Lisp at all, Erann?
> I don't know. That's what I'm trying to figure out. You're not
> being much help.
It helps if you try to understand what he's telling you and stop
bringing up rants from misguided people who confuse orthogonal
concepts.
> > I have to wonder. At read time, a string is interned in only
> > one package to return a symbol. There _is_ no symbol without
> > the package.
> Uninterned symbols have no package. Have you ever understood Common
> Lisp at all, Erik?
It's a good thing you removed (from your mind?) the rest of the
context in order to make it look wrong. You were talking about
choosing the right package to intern into at read time.
> > Again and again, you show us that you are _horribly_ confused
> That's true. I am horribly confused. Whether that's my fault or the
> fault of the people trying to explain things to me is unclear.
That's easy. If you don't understand something that many others
understand and use effectively, they obviously learned it somehow. Why
don't you try learning it, too, instead of trying to figure out whom
to blame next?
> > and that much of your dislike of Common Lisp
> That is false. I like Common Lisp very much.
You seem to complain about it more than enough and intentionally
misunderstand it enough. And as one of the basic principles of Common
Lisp, a dislike of packages implies a dislike of Lisp and symbolic
processing itself.
> > comes not from Common Lisp, but from some staggeringly stubborn
> > misconception and refusal to listen to others.
> Do you even realize that earlier in this very same post you berated
> me not for my refusal to listen, but for having listened?
Blind faith involves the listener consciously choosing to stop
listening at that point.
> > | Through the years I have always found the package system to be pretty
> > | annoying. I often find myself doing the following:
> > |
> > | > foo
> > | ERROR: foo is unbound ; Doh! Forgot that foo is in foo-package
> > | > (use-package 'foo-package)
> > | ERROR: symbol FOO-PACKAGE:FOO conflicts with symbol FOO ; Arrggh!!!
> >
> > Such are the results of incompetence at work.
>
> One could respond the same way to every frustration anyone has ever had
> with any programming language. On this view every problem anyone has
> with, say, C++, is not a problem with C++ but merely incompetence on the
> part of the user. Some people actually believe this.
If someone has a problem with understanding what a feature of C++
means, it's the fault of that person for not understanding it, not the
fault of the language for including it. Maybe it was designed for
people with different goals. In this case, it's clearly a result of
the fact that you forgot to think about what you were doing before
doing it. If you realized that packages formed contexts within which
symbols were interpreted, you wouldn't have had this problem. It's
like looking in the wrong module for a value in a scheme system. Is it
scheme's fault that you chose to look where you didn't want to?
> You seem to complain about it more than enough and intentionally
> misunderstand it enough. And as one of the basic principles of Common
> Lisp, a dislike of packages implies a dislike of Lisp and symbolic
> processing itself.
Geez! Erann actually *likes* CL, probably more than Scheme, but
(I'm guessing) doesn't happen to think it is Absolutely Perfect. As a
result, he perhaps complains about things once in a while.
And anyone who complains must therefore dislike CL!?
One must think CL packages are really the best thing ever, or else one
not only dislikes CL in general, but also Lisp, and "symbolic
processing itself"?
Puhleez.
> If someone has a problem with understanding what a feature of C++
> means, it's the fault of that person for not understanding it, not the
> fault of the language for including it.
Actually, that's not quite true. One of the tasks of a language
designer is to make the language easy to understand. C++ more or less
fails on that score (at least as judged by the conceptual elegance of
both CL and Scheme). This is a fault to be chalked up to the language
designer as much as to the people who have trouble understanding the
result.
Thomas
> Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
>
> > You seem to complain about it more than enough and intentionally
> > misunderstand it enough. And as one of the basic principles of Common
> > Lisp, a dislike of packages implies a dislike of Lisp and symbolic
> > processing itself.
>
> Geez! Erann actually *likes* CL, probably more than Scheme, but
> (I'm guessing) doesn't happen to think it is Absolutely Perfect. As a
> result, he perhaps complains about things once in a while.
He seems to not like actually using it, then.
> One must think CL packages are really the best thing ever, or else one
> not only dislikes CL in general, but also Lisp, and "symbolic
> processing itself"?
CL packages are merely an implementation of namespacing for
symbols. Claiming that they should be replaced with a totally
orthogonal system is claiming that symbols should not be namespaced,
which means that having multiple symbolic processing systems operating
on the same data is subject to name clashes.
> Actually, that's not quite true. One of the tasks of a language
> designer is to make the language easy to understand. C++ more or less
> fails on that score (at least as judged by the conceptual elegance of
> both CL and Scheme). This is a fault to be chalked up to the language
> designer as much as to the people who have trouble understanding the
> result.
Except that this is the sole feature which allows people to use the
language in its professed primary purpose, symbolic processing. If
you misunderstood what the point of the "new" operator in C++ was and
claimed that it should be replaced with interned strings, would that
be a failing of C++?
> tb+u...@becket.net (Thomas Bushnell, BSG) writes:
>
> > Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> >
> > > You seem to complain about it more than enough and intentionally
> > > misunderstand it enough. And as one of the basic principles of Common
> > > Lisp, a dislike of packages implies a dislike of Lisp and symbolic
> > > processing itself.
> >
> > Geez! Erann actually *likes* CL, probably more than Scheme, but
> > (I'm guessing) doesn't happen to think it is Absolutely Perfect. As a
> > result, he perhaps complains about things once in a while.
>
> He seems to not like actually using it, then.
So he must like everything about it, or he's against it?
There are lots of things about my Debian GNU/Linux systems that I
complain about. It does not follow that I don't like Debian, or that
I think Posix is a waste.
> CL packages are merely an implementation of namespacing for
> symbols. Claiming that they should be replaced with a totally
> orthogonal system is claiming that symbols should not be namespaced,
> which means that having multiple symbolic processing systems operating
> on the same data is subject to name clashes.
Erann didn't make any such claim at all. He didn't express an
opinion; he just said that packages got in his way once or twice, and
he'd heard from really well-connected people that they were (at least
once) regarded rather as a necessary wart on Common Lisp than as a great
feature. So he was wondering why they were now being
touted as a great feature.
He didn't claim that they should be replaced at all.
> Except that this is the sole feature which allows people to use the
> language in its professed primary purpose, symbolic processing.
I think this is a great misunderstanding of what "symbolic processing"
is about. However, I'm not interested in having that conversation
with you. (Kent, if you find it worth discussing, then I am
interested in having that conversation with you.)
Thomas
> Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> > tb+u...@becket.net (Thomas Bushnell, BSG) writes:
> > > Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> > > Geez! Erann actually *likes* CL, probably more than Scheme, but
> > > (I'm guessing) doesn't happen to think it is Absolutely Perfect.
> > > As a result, he perhaps complains about things once in a while.
Once in a while? What else has he done in c.l.l in the entire time
I've been reading it?
> > He seems to not like actually using it, then.
> So he must like everything about it, or he's against it?
No, but he doesn't seem to like anything about it that he couldn't get
elsewhere, along with other features that he would like better.
> There are lots of things about my Debian GNU/Linux systems that I
> complain about. It does not follow that I don't like Debian, or that
> I think Posix is a waste.
But do you spend most of your posts complaining about debian on
debian...@l.d.o? Not that I've seen. Note that Erann hasn't made
any suggestions on how to accommodate symbol namespacing without CL
packages yet, and neither have you.
> Erann didn't make any such claim at all. He didn't express an
> opinion; he just said that packages got in his way once or twice, and
> he'd heard from really well-connected people that they were (at least
> once) regarded rather as a necessary wart on Common Lisp than as a great
> feature. So he was wondering why they were now being
> touted as a great feature.
> He didn't claim that they should be replaced at all.
He claimed that he agreed with someone who said that they should be
replaced. I don't see what the material difference is unless you're
claiming that he didn't know what he was agreeing with.
> > Except that this is the sole feature which allows people to use the
> > language in its professed primary purpose, symbolic processing.
> I think this is a great misunderstanding of what "symbolic processing"
> is about.
Then you misunderstand that name clashes are a fatal situation in
symbolic processing. It's like doing side-effects while ignoring
structure-sharing.
> However, I'm not interested in having that conversation with you.
> (Kent, if you find it worth discussing, then I am interested in
> having that conversation with you.)
I agree that Kent is much more knowledgeable than I am. I don't see how
that makes my input useless, but I guess that's your personal bias.
> No, but he doesn't seem to like anything about it that he couldn't get
> elsewhere, along with other features that he would like better.
The fact that he may complain about features X, Y, and Z does not mean
that he doesn't like A, B, and C.
> But do you spend most of your posts complaining about debian on
> debian...@l.d.o? Not that I've seen. Note that Erann hasn't made
> any suggestions on how to accommodate symbol namespacing without CL
> packages yet, and neither have you.
I can't speak for Erann. I've already said several times that, in the
general framework of Common Lisp, I think that Common Lisp packages
are the best way I know of to get what someone would want from them.
But that doesn't mean that they always do just what I wish; there are
things about them that I wish were different, but seem essentially
required by the way they have to work to get the other things that are
important, and are well done. This is, I suspect, why Guy Steele also
thought they were a little wartish.
> Then you misunderstand that name clashes are a fatal situation in
> symbolic processing. It's like doing side-effects while ignoring
> structure-sharing.
You are confusing one implementation of symbolic processing with the
very concept of symbolic processing.
It is your fault. Confusion can never be blamed on anyone else. If you
think it can, you are horribly confused.
* Erik Naggum
> Such are the results of incompetence at work.
* Erann Gat
| One could respond the same way to every frustration anyone has ever had
| with any programming language.
Well, there is a crucial distinction between incompetence at the designer
end of the language and incompetence at the user end of the language. You
seem to conclude that there can be incompetence only at one end, despite
your deriding Common Lisp "flaws". I wonder how you reconcile this with
yourself, but you probably do not bother.
It is the other way around. Only those who are against it, express their
dislike of it. Those who are for it, think in a very different way: "It
would be even better if ...". The difference between dislike and like
less than some hypothetical situation can seem hard to grasp, but think
of a line, with zero in the middle. Dislike is on the left of the zero.
Like is on the right of the zero. "Like less than some hypothetical"
ought to have both on the right of the zero, and that is certainly the
case for "it would be even better if ...". But in effect, yes, one's
opinion of all features of something one has chosen to work with and
wishes to become proficient and productive in should all be on the
positive side. That does not mean that one has to defend them all or that
one does not prefer one way over another, perhaps never using a feature,
but active dislike is so psychologically destructive that you have people
constantly complain about it instead of just getting over it and on with
their lives. You and Erann Gat keep bad-mouthing Common Lisp all the
time. To those who wish to become proficient and productive in Common
Lisp, this is actually really destructive. It is like going to a bar
with your friends and having to suffer someone who berates something or
other about your girlfriend or wife or kids or cat or perhaps your car or
your job or your house or whatever _all_ the time. This pisses most
people off _tremendously_, to the point where people actually resort to
violence to make the shitheads shut up or leave. For a community like
ours, it is like having someone come to a bar regularly only to whine
about the beer or the hygiene or the furniture or the decorations or the
bartender or whatever. That someone would be beaten up if he refused to
leave the others in peace to enjoy themselves. Common decency strongly
suggests that such naysayers and shitheads just get the hell out of the
place, but for some reason, shitheads like you and Erann Gat keep coming
to the same bar and keep deriding and denigrating something that a lot of
the people there actually come there _because_ they like. Now, to keep
this metaphor, everybody will appreciate a suggestion to make something
better if they perceive it to be in good will, i.e., with respect for
what has already been done, but if your suggestion is not adopted, it is
very _rude_ to keep suggesting it over and over (no matter how politely
it has been suggested). To change the scene a little: If someone should
argue that your cat would look a lot better with short hair, the only
appropriate answer is that then it would have to be a different cat. To
really make you destructive shitheads realize what you are doing, suppose
you talked to a black guy in the highly racially sensitized United States
and told him that he would probably be a much better person if he were
white. That probably overstates the case a little, but having you and
Erann Gat and shitheads like you around here to rag Common Lisp all the
time is extremely unwelcome. And the message I keep trying to send to
you obnoxious assholes is just that: Get the fuck out of here if all you
come here to do is deride and berate and denigrate the _one_ common
interest to the people in this forum. People will generally listen to
your problems and try to help you if you have a constructive purpose to
some frustration or other, but if all you want to do is complain that
what you want to do is not perfectly acceptable and you do not _want_ a
solution, people _will_ tire of you, some more quickly than others.
This arrogance is probably the reason why you never _listen_ to anyone.
There is probably no way to make you revert to a mental state where you
realize that you know less than those who try to tell you something.
You have established that you will only listen to very particular points
that people raise, and not listen to their "philosophical" answers. You
even ridicule me for trying to answer your philosophical _question_. In
order for you to stop being such a prick, you have to snap out of the "I
know all I need to know, thank you" attitude. Your confusion comes from
thinking you know something you do not in fact know. The only possible
solution for you is to stop thinking you know, but judging from how you
respond to every suggestion I make, you will take this to mean that you
should be humiliated and ridiculed because that is what happens when you
flaunt your arrogance. However, if you could start to think and listen,
and actually try to _learn_ something from what people tell you, instead
of going on your fucking annoying fault-finding missions _all_ the time,
you would be unconfused in no time.
I would actually like to help you and any other victim of confusion, but
when you are so dead certain that you know everything you need to know
and can _arbitrarily_ decide that "I don't get it, and it's his fault",
there is no incentive to try to explain anything to you at all. Only the
people who might read your incessant whining are worth talking to.
How could it? You have obviously never actually set it up so it _could_
have saved your ass.
But what I keep wondering, however, is why "save my ass" is a criterion
for the value of something. I have heard a few people argue that if they
are bumbling fools and resoundingly incompetent, they are "saved" by some
feature or another, and this is somehow valuable. I think the opposite:
I think if people are bumbling fools and resoundingly incompetent, they
should crash and burn, the sooner the better, because saving them does
only one thing: it makes it possible for them to continue to be bumbling
fools and resoundingly incompetent.
Have you ever heard anyone say "Man, I am so glad I have studied
mathematics and acquired a mathematical way of thinking! That has
_really_ saved my ass!"? If not, let us tear down the mathematics
department at every university and burn the math books!
How would you rate a medical doctor who exclaimed "I'm really happy I
know the spleen. That really saved my ass!". I would rate him as
dangerously incompetent, just as I rate your comment.
But you are a Scheme freak, are you not?
A question: what is for you the relation between "L has X",
and "X can be implemented (with effort, without effort) in L"?
> I can't speak for Erann. I've already said several times that, in the
> general framework of Common Lisp, I think that Common Lisp packages
> are the best way I know of to get what someone would want from them.
Well, as someone who seems to have been occasionally tarred with the
`thinks CL is perfect' brush, I'll say that I think this is not so. I
think the CL package system is really not that great. *However* I
don't have any coherent suggestions about how it could be made
compatibly better, so I don't tend to complain too vociferously about
it.
Here are some of the things I'd like to see be different:
Locks on packages (how to do them)?

User-defined external-only packages and automatic-self-binding
packages (like KEYWORD).

Package inheritance (like my conduits system, except better thought
out). This can be done as a user, but it really needs system-level
support to make sure all the errors that should be signaled really
are.

Hierarchical names for packages and some notion of a `current package
directory' and per-directory nicknames. If I'm in the `COM.CLEY'
context I want to be able to say that, as well as WELD referring to
COM.CLEY.WELD, NET refers to ORG.ALU.NET-COMPATIBILITY or something.

User-controlled interning. I want to be able to step in at the point
where the reader has got a string which is a symbol candidate and tell
it what to do. This makes it much easier to use READ in `unsafe'
contexts without interning a mass of symbols that you can't later
find.
The only thing I've even thought about hard is the third of these: I
think that, even if this is an uncontentious set of things people
would like (unlikely), there is a lot of fairly serious design
work here (not to mention looking at existing implementations).
--tim
> In the years I have been hacking lisp, I have on many occasions cursed
> the package system, mainly for making me restart my lisp to undo whatever
> random damage happened when I read something in the wrong package.
> On the other hand, I have *never once* had the occasion to say
> `Wow, damn glad that package system saved my ass on *this*!'
I have. More times than I can count.
And, in general, I have a meta-rule (as in "rule of thumb", not "rule of law"):
If person X perceives a problem and person Y doesn't perceive a problem,
then unless you have reason to believe person X to be clinically nuts,
if you know no other piece of information about the problem, then person
X is probably right and person Y is probably wrong.
For example:
(a) Mary complains that Joe sexually harassed her.
Sally says, "gosh, Joe never sexually harassed me."
Has Sally refuted Mary? No, Sally has said "I wasn't present
and have no information."
(b) If Fred says "The thing I like about Kathy is that she always praises
people for the good work they do." and Joe says "I've never heard
her do this for me." and Fred says "I've heard her praise you."
Is it more likely that (1) Kathy said it and Joe didn't hear or
(2) Fred is a pathological liar and just made this up to irritate
or confuse Joe? Obviously either is possible, but I tend by default to
assume (1) unless shown strong evidence of (2).
So given the choice between thinking I'm looney for liking packages or
you're oblivious for not understanding the value that packages offer you,
I'm reluctantly pressed into the bad position of suggesting you may be
creating a situation where to continue productively in the conversation,
I have to call you oblivious. It's not my first choice to approach a
discussion by addressing a speaker's personal characteristics since
ordinarily that is an ad hominem, but in this particular case you have
created what I guess I'll call a "passive ad hominem" (in honor of the
notion of "passive aggressive") by observing a personal characteristic
in yourself and then citing it as a reference for a technical point. There
is really no other way of proceeding according to proper rules of procedure
in an evidentiary hearing than to examine the credibility of the witness.
And besides, the package system is _designed_ not to overwhelm you with
the sense that it is there at all, so you're not _supposed_ to notice
when it saves you. Its whole purpose in life is to give you the "I own
the whole universe feel." If we didn't want to do this, we'd have just
made whose actual name was TV:FOO (6 letters) rather than omittable
package names because then sharing would always be explicit. The whole
notion of omitting the package is intended to draw attention away from
notions of sharing, even though those notions of sharing are often
absolutely critical to the correct function of your application. It's
not that you're not supposed to know; it's only that you're not supposed
to focus on that at every instant.
I don't really know *how* to set it up so that it could save my ass.
This isn't to say that I don't know how the package system works, it is that
I think the particular tools that the package system provides are
rather crude, and although they can be used as a serviceable namespace
partitioning, they quickly reveal limitations.
> But what I keep wondering, however, is why "save my ass" is a criterion
> for the value of something. I have heard a few people argue that if they
> are bumbling fools and resoundingly incompetent, they are "saved" by some
> feature or another, and this is somehow valuable. I think the opposite:
> I think if people are bumbling fools and resoundingly incompetent, they
> should crash and burn, the sooner the better, because saving them does
> only one thing: it makes it possible for them to continue to be bumbling
> fools and resoundingly incompetent.
>
> Have you ever heard anyone say "Man, I am so glad I have studied
> mathematics and acquired a mathematical way of thinking! That has
> _really_ saved my ass!"?
I have occasionally thought that myself.
> If not, let us tear down the mathematics
> department at every university and burn the math books!
I wouldn't suggest an all-or-nothing solution. I happen to dislike the
package system, but that doesn't mean that I'm going to abandon Lisp
and embrace Visual Basic.
> How would you rate a medical doctor who exclaimed "I'm really happy I
> know the spleen. That really saved my ass!". I would rate him as
> dangerously incompetent, just as I rate your comment.
I think I would prefer him over a medical doctor that said
``Oh. What's this organ then?''
But perhaps I should have been less colloquial. What I was implying
is that the package system has caused me an inordinate amount of
difficulty compared to the problem it is designed to solve. `Saving
my ass' should be read as hyperbole, not as criterion.
> But you are a Scheme freak, are you not?
I am tempted to quote scripture: It is you who say that I am.
I like Scheme, and enjoy programming in it. On the other hand, I like
Common Lisp, too, and have written orders of magnitude more code
in Common Lisp than I have in Scheme. I feel comfortable with either
language.
Oh, I don't mind being called oblivious. Erann Gat solicited opinions
and I gave what I explicitly said was my `uninformed opinion'.
> And besides, the package system is _designed_ not to overwhelm you with
> the sense that it is there at all, so you're not _supposed_ to notice
> when it saves you. Its whole purpose in life is to give you the "I own
> the whole universe feel."
I understand this. However, I have found that the package system does
a poor job of this. If I *never* noticed it, that would be fine, but
I notice it far more frequently than I'd like, and when I *do* notice it,
I find that it can be difficult to get it back to a useful state
without rebooting.
If you would like, I can give you several technical examples where the
package system has gotten in the way.
Yes, in this case I am not being facetious.
> Are you aware of the fact that it might be a benefit
> of the package system that you had no reason so far to complain
> when you used different packages from different parties for your
> own projects?
Obviously there is a need for *some* mechanism to allow people
to develop code independently without them having to worry
about what names other people might want to use in their code.
However, I think the package system is the wrong way to go about
this.
> In article <sfwg030...@shell01.TheWorld.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > Being one of the central parties making this noise, I'll do
> > something I'm not big on and actually cite my credentials to show
> > that I am not blathering out of ignorance:
>
> I won't follow your example because I don't know if I'm blathering out of
> ignorance or not. (That's what I'm trying to figure out.) But if you
> want to see my credentials my CV is on the Web.
I'm not questioning your credentials. I thought you had asked effectively
for mine since your question seemed to ask "is it just the case that the
other side is speaking out of ignorance". Anyone can chime in and tell me
if my credentials seem insufficient--I'm not going to do like VeriSign and
offer a self-validating credential where I both assert credentials and tell
you that you can trust them--but I'm comfortable with them for myself and
willing to have them open to inspection. I _think_ I know what the other
options are. I _wish_ I could feel comfortable using the other options,
but I have repeatedly tried and found both Lisp1 and module-based solutions
inadequate to my personal needs.
> Suffice it to say I've
> built a successful career in part by writing Common Lisp code.
No quibble with your credentials here. I'm not suggesting you are not
entitled to engage in this conversation in any way. I'm just suggesting
you're wrong in point of fact if you think there is no valid defense of a
package system. Packages and modules are orthogonal structures and both
have a place. I'm not anti-module, I'm anti-module-qua-panacea.
> I feel fairly confident that I'm sufficiently non-idiotic that if
> someone can't explain something in this area so that I can
> understand it there's a good chance that it's their fault and not
> mine.
I use this same rule of thumb myself, so I can relate. Then again, I
think you should not conclude from someone's inability to articulate a
natural language rule to explain a philosophically complex behavior
that they do not have deep-seated and real beliefs, nor even perhaps
well-founded beliefs. Sometimes people understand but cannot
describe. Sometimes adequate description requires the person
receiving the description to 'fess up enough data about their
confusions that a custom explanation can be rendered adequate to their
needs. I have no doubt you are trying. Nor have I any doubt that
my belief in packages is technically, not just emotionally or
historically, well-founded. I'm happy to engage in this discussion
with you on that basis. I'm even happy for you to conclude "Because
Kent and others cannot say what I need to hear, I will continue to
believe what I believe." (That is, I'm happy to live with the belief
that you are wrong. Heh... Which incidentally is not a statement that
you are wrong, it is just a statement about my belief structure.) I'm
not happy for you to assert (and I hope you are not asserting, I'm
just checking) that "The inability of various people to articulate a
defense is a de facto proof that no such defense exists." or that kind
of thing.
> > > And if you were, what is it that makes you prefer packages?
> >
> > Do you want me to repeat the stuff I mentioned under the data
> > hygiene thread?
>
> No, but I would like you to expand on something:
>
> > I cannot later 'relate', at the symbol
> > level, your person-1 with my person-1, without doing a global renaming.
>
> What does that mean? Can you restate this (which seems to be the crucial
> point) without scare quotes and without squishy phrases like "at the
> symbol level"?
I'll be as concrete as I can. I'm trying to be brief, too, so I may have
to add detail later if I've left something out, but here's a sketch of a
sample scenario I intended to address. Examples have the virtue of having
detail that abstract discussions do not. Re-generalize anything you see here
in concrete form back to the abstract when done; my point is not to say
"Here is the problem that worries me and if you fix it the world is well."
My point is rather to say, "My belief that there is a problem would lead
me to hypothesize a specific set of concrete and observable design problems.
Here is an example of one." Further, you should understand that when I say
"design problem" I don't mean "There is no way to compute an answer to this
problem in another system." since we are not talking about computability.
The Turing Equivalence of Scheme and CL is not in question. Rather, we are
talking about the use of a specific paradigm to express a particular problem;
so if in "solving" the technical problem, you switch paradigms, then you have
made my case. The intent of this example is specifically to show a use of
symbols, so saying "we can substitute non-symbols to solve this" makes my case,
it does not refute it... Ok, enough prelude. Here we go...
CL allows me, simply in the symbol data, to control whether I'm doing various
kinds of sharing. For example, I can separate my spaces, as in:
(in-package "FLOWER")
(setf (get 'rose 'color) 'red)
(setf (get 'daisy 'color) 'white)
(defun colors-among (things)
  (loop for thing in things
        when (get thing 'color)
          collect (get thing 'color)))
(in-package "PEOPLE")
(setf (get 'rose 'color) 'white)
(setf (get 'daisy 'color) 'black)
(defun colors-among (things)
  (loop for thing in things
        when (get thing 'color)
          collect (get thing 'color)))
(in-package "OTHER")
(flower:colors-among '(flower:rose people:rose flower:daisy))
=> (FLOWER:RED FLOWER:WHITE)
(flower:colors-among '(flower:rose people:daisy people:rose))
=> (FLOWER:RED)
(people:colors-among '(flower:rose people:rose flower:daisy))
=> (PEOPLE:WHITE)
(people:colors-among '(flower:rose people:daisy people:rose))
=> (PEOPLE:BLACK PEOPLE:WHITE)
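(The forms above assume the three packages already exist and export the
symbols written with a single colon. A minimal setup, not part of the
original example, might be:
  (defpackage "FLOWER" (:use "CL")
    (:export "ROSE" "DAISY" "RED" "WHITE" "COLORS-AMONG"))
  (defpackage "PEOPLE" (:use "CL")
    (:export "ROSE" "DAISY" "WHITE" "BLACK" "COLORS-AMONG"))
  (defpackage "OTHER" (:use "CL"))
Note also that each package's COLORS-AMONG uses its own COLOR property
indicator -- FLOWER::COLOR vs. PEOPLE::COLOR -- which is precisely the
separation being demonstrated.)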
Some examples of the value here:
- The information is manifest in symbols, a primitive data structure
in the language
- Both the objects and the properties can be named by symbols,
which are easy to type.
- When debugging either the PEOPLE or FLOWER package, you don't worry
about package prefixes and just type short names.
- In OTHER contexts, when you want to mix people and flowers in the
same list, it's easy to do without perturbing already chosen symbolic
data structures and without yielding confusion.
> > The short answer is that non-symbols are fine, but they do not a symbolic
> > processing language make. Some days I don't mind passing around Scheme's
> > symbols (with their characteristic "no associated meaning") + an environment
> > (to give the symbols meaning). Other days, I like the power of being able
> > to have symbols like in CL, where the symbol means something all by itself.
> > (Most days, in fact.) Passing non-symbols is fine computation, but is not
> > the essence of symbolic computation.
>
> Suppose CL had no symbols. It seems I could easily add them by defining
> two classes: "symbol" with slots name, value, plist, etc. and "package"
> that's really just a hash table that maps strings onto instances of the
> symbol class. Probably 20-30 LOC at most. Would that be sufficient?
No. The critical value is the ability to TYPE the symbol without
doublequotes or other impediment in most cases. Yes, you have to write ":",
but only where an ambiguity is possible, in order to resolve that
ambiguity. In Scheme, you also don't write ":" in the unambiguous cases, but
you have no notation to fall back on to resolve the ambiguous cases.
You could create a notation wherein for Scheme doing TV:FOO would refer to
(GET-FROM-ENVIRONMENT 'FOO TV). And I vaguely recall in T (Yale Scheme)
we had talked about this notation: #>foo>bar>baz to mean
(GET-FROM-ENVIRONMENT 'BAZ
  (GET-FROM-ENVIRONMENT 'BAR
    (GET-FROM-ENVIRONMENT 'FOO ROOT-ENVIRONMENT)))
though I don't recall the name of the operator for GET-FROM-ENVIRONMENT.
It probably had "LOCALE" in its name. Anyway, though, the point is that
what this syntax would resolve to is not a symbol, nor is the data that it
would retrieve. This is a computationally ok answer, but it is a refutation
of any claim that the language is using "symbol" processing at that point.
Symbols are used incidentally in here (since they are one of the arguments
to some of the functions involved), but saying that means it's symbol processing
is like saying that numerical processing is what's going on in:
(defun find-longest-name (array-of-symbols)
  (let ((longest (aref array-of-symbols 0))) ;!!! assumes at least one elt
    (loop for i from 1 below (length array-of-symbols)
          for contender = (aref array-of-symbols i)
          when (> (length (string contender))
                  (length (string longest)))
            do (setf longest contender))
    longest))
Yes, numbers are involved, but that's not the essence of
"numerical processing". I think of numerical processing as something where
the core data are encoded as numbers. I think of symbolic processing as
something where the core data are encoded as symbols.
> Or
> would I still be missing some essential capability? What would it be?
You'd have a symbol datatype but not a serviceable one for me.
Even ignoring lack of packages, this is the reason I can't stand Java
symbols--the syntax is too cumbersome.
FOO is a symbol. foo is a symbol. I don't mind typing 15\r or |15r|
to preserve case. I don't mind typing tv:foo if I'm not in package TV
to get to that symbol. But basically I want unimpeded access to the name
as part of my syntax for literals if it's going to be what I think of as
a symbol. I don't mind quoting symbols. In Macsyma, I could do
X:10;
F(X,Y):=X+Y;
F(X+3,'X+3);
and get back X+16 which contains one symbol and one number. It doesn't
matter to me what the surrounding syntax is when talking about whether it's
symbol processing. I consider infix/prefix to be orthogonal, and lousy
though I find dylan syntax to be I can't see why they couldn't have
made the above kind of syntax work for them.
> Maybe the problem is that I don't really understand what people mean by
> "symbolic processing". A concrete non-contrived example of "symbolic
> processing" that's easy in CL and hard in Scheme would probably help.
The above programs are contrived, but I hope show enough meat that you
can see the problem. Try casting them into Scheme in whatever way suits you
and we can proceed to the next conversational step from there.
> tb+u...@becket.net (Thomas Bushnell, BSG) writes:
>
> > However, I'm not interested in having that conversation with you.
> > (Kent, if you find it worth discussing, then I am interested in
> > having that conversation with you.)
Thomas, I'm enjoying chatting with _both_ of you guys, first of all.
But, just fyi, a quick scan of the last 100 messages shows that you
have replied to nearly every single message of the 20 or so Rahul has
sent in the last day. If you want not to engage him in conversation,
the conventional tool is not to hit "r" when viewing one of his
posts. ;)
> I agree that Kent is much more knowledgable than I am. I don't see how
> that makes my input useless, but I guess that's your personal bias.
I don't personally like mustard or olives or horseradish. That doesn't
make those things without worth. Someone is still buying them from the
grocery store even when I don't.
Worth is a complex commodity and people spend too much time here
debating the worth of others as if any finite number of people were
even in a position to judge this, or as if having judged it there were
any materially interesting action that could then be taken that was
justified by such a judgment. I recommend simply ignoring those you
don't find of worth. Others may still find them of worth, or not, but
the outcome will likely be the best.
Conversations are usefully made of people of all skill levels. Even
people with no skill come with intuitions and these are often
conversationally useful among people who engage in respectful conversation.
Incidentally, Thomas, my entire rant on "data hygiene" was a _direct_
response to something Rahul said. Had he not posted his remarks, I
wouldn't have had occasion to sit for a while pondering that answer,
which I'm quite happy to have penned. I'm not saying Rahul's posts
have no content value, but even if they accomplished only catalyzing
the speech of others, I would not discount that as conversational
value. We each offer value in our own way and at our own time.
Further, I think a much neglected value metric in this forum is "what it
takes to keep the 'interesting' people here". Everyone has their notion
of who's interesting and who's not, so I'm not trying to single anyone
out, but I often feel as if people think that they want the newsgroup
to have only the interesting people and to have the others stand down.
But sometimes the interesting people are here only because of others who
may be disliked, or cause a stir, or be otherwise problematic. In fact,
this is an ecology, and I urge everyone not to think that killing the
mosquitos (whoever you think those people might be) will have no effect
on the food chain. Just bite into the food you find tasty and avoid
the things that taste bad or sting and be happy you live in the richness
of the jungle instead of the arctic where few things grow ...
> If you would like, I can give you several technical examples where the
> package system has gotten in the way.
I'm sure there are examples of places where modern medicine gets in
the way, too. But that isn't a refutation of its many benefits.
Nevertheless, since I'm curious what in the world you're talking about,
yeah, I'd be interested to hear...
> Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> > No, but he doesn't seem to like anything about it that he couldn't get
> > elsewhere, along with other features that he would like better.
> The fact that he may complain about features X, Y, and Z does not mean
> that he doesn't like A, B, and C.
He can get A, B, and C without X, Y, and Z from a different community,
from what I've seen about A, B, C, X, Y, and Z.
> > Then you misunderstand that name clashes are a fatal situation in
> > symbolic processing. It's like doing side-effects while ignoring
> > structure-sharing.
> You are confusing one implementation of symbolic processing with the
> very concept of symbolic processing.
The concept of avoiding name clashes is an implementation of symbolic
processing?
You are confusing constructive criticism with whining. Now figure out
what you really want from symbol namespaces and explain how to get
that or accept them as they are and either use them or don't.
> Well, as someone who seems to have been occasionally tarred with the
> `thinks CL is perfect' brush, I'll say that I think this is not so.
CL is by no means perfect, it's just a good compromise for the
features that the Lisp community wanted at the time.
> I think the CL package system is really not that great.
I think it's more than good enough for what it does. Most of what you
talk about below are layers on top of packages or extra features in
other systems which use packages and symbols.
> *However* I don't have any coherent suggestions about how it could
> be made compatibly better, so I don't tend to complain too
> vociferously about it.
Yes, that's what I'm trying to explain to Erann and Thomas. They're
merely complaining about a feature without providing suggestions for
improvement.
> Here are some of the things I'd like to see be different:
> Locks on packages (how to do them)?
ACL has an implementation of this, no? I think it's time to
standardize on an API for it. With more and more Lisp applications out
there, we'll need more and more to ensure that they don't accidentally
do nasty things to symbols they don't own.
> User-defined external-only packages and automatic-self-binding
> packages (like KEYWORD).
That's a good idea. I could especially use the external-only
packages. Not needed, but a nice convenience. Hmm, but there are
issues with local variables used in the definitions of symbols in that
package. I'm not exactly sure how to effectively handle that
situation.
> Hierarchical names for packages and some notion of a `current package
> directory' and per-directory nicknames. If I'm in the `COM.CLEY'
> context I want to be able to say that, as well as WELD referring to
> COM.CLEY.WELD, NET refers to ORG.ALU.NET-COMPATIBILITY or something.
ACL and CMUCL implement the same API for hierarchical names, but I
don't think they have per-directory nicknames. That would be a nice
addition.
> User-controlled interning. I want to be able to step in at the point
> where the reader has got a string which is a symbol candidate and tell
> it what to do. This makes it much easier to use READ in `unsafe'
> contexts without interning a mass of symbols that you can't later
> find.
This is one feature of my generic-reader which is a subproject of
DefDoc. I designed it both so that I can check package access controls
as well as read normal text. All the usual caveats about time apply.
> What I was implying is that the package system has caused me an
> inordinate amount of difficulty compared to the problem it is
> designed to solve.
So how would you design a namespacing system for symbols which would
cause less difficulty?
I'd start by researching the state of the art vis-a-vis module systems....
But seriously, I think a lot of the issues I have with the package
system could be ameliorated if 1. read didn't cause visible side effects
and 2. symbol resolution was deferred until runtime
READ causes noticeable side effects. This can create dependencies
on the order in which things are read. In the `normal' case of
reading a file in order to compile it (or the analogous case of
interning symbols in a fasl file), there are other dependencies
that also have to be obeyed (like macros being defined before
use). So this isn't so much of a day-to-day problem.
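A minimal illustration of the side effect in question (assuming a name
like FROB-42 hasn't been mentioned before in the current package):
(find-symbol "FROB-42")       ; => NIL, NIL        -- looking doesn't create it
(read-from-string "frob-42")  ; => FROB-42, 7      -- merely READing interns it
(find-symbol "FROB-42")       ; => FROB-42, :INTERNAL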
Where it *was* a problem was on the Lisp Machine. When you opened
a file in Zmacs, it would parse the file mode and it would
parse some of the file contents. It would attach properties to
the symbols mentioned in the file.
When you were browsing code, you might open a file that was
in a package that was not yet defined. Well, the package became
defined and a slew of symbols from the file would get interned
in it. Unfortunately, it was unlikely that the package was
correctly set up, so when you went to evaluate any code it
wouldn't work, or when you went to fix the packages, you'd run
into thousands of name conflicts.
Additionally, if you accidentally fasloaded a file before the
packages were set up, you would end up with functions pointing
at the wrong symbols. Patching up the packages by uninterning
the conflicting symbols didn't fix this, it just made the functions
point at UNINTERNED wrong symbols.
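A small sketch of why the unintern "fix" doesn't help (DEMO is a
hypothetical package):
(defpackage "DEMO")
(defvar *old-foo* (intern "FOO" "DEMO"))  ; the symbol the bad functions captured
(unintern *old-foo* "DEMO")
(eq *old-foo* (intern "FOO" "DEMO"))      ; => NIL -- re-reading makes a fresh FOO
The already-loaded functions still hold the old, now-uninterned symbol.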
Now there is a `standard' mode of operating that avoids most of
this. You make *absolutely sure* that the *very first thing* you
load into any lisp is a set of defpackage forms that set up the
*entire* package structure. You also make sure that this file
is loaded both at compile time *and* at run time. This isn't hard
to do if you put it in your `init' file.
So, on to more obscure problems. You can't use `forward references'
to symbols that don't exist in packages that don't exist (yet).
In your init file, you'd *like* to be able to customize some
parameters before you start building the system. Perhaps you have
some macros that are parameterized, whatever. As an example, I
once worked at a place that had a `verbosity' switch that I wanted
set.
The obvious thing is to write (setq utility::*verbosity* t)
but the utility package isn't loaded yet. Never mind that the
utility package *will* be there when I call this function, it
isn't there when I *read* the code. Instead I write:
(let* ((utl-pkg (find-package :utility))
       (verbosity (intern (symbol-name :*verbosity*) utl-pkg)))
  (setf (symbol-value verbosity) t))
The same goes for functions that I want to call. I can't
write (progn (load "java-tools.lisp") (java-tools:make-all))
The way around this problem is to write a function that
defers interning until runtime. Then you can write
(progn (load "java-tools.lisp") (funcall-in-package :java-tools :make-all))
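FUNCALL-IN-PACKAGE is not a standard operator; a minimal sketch of such a
helper, which defers the name lookup until the call actually happens, might be:
(defun funcall-in-package (package name &rest args)
  ;; look the symbol up only now, when the package is known to exist
  (let ((symbol (or (find-symbol (string name) package)
                    (error "No symbol named ~A in package ~A." name package))))
    (apply symbol args)))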
The package system allows me to `split' names. I can have
my symbol FOO be different from your symbol FOO. It doesn't
allow me to `join' names. I cannot have my symbol BAR be
the same as your symbol FOO. (Or perhaps more illustrative,
I can't have `JRM:ORIGINAL-MULTIPLE-VALUE-BIND' be the same
as `CL-USER:MULTIPLE-VALUE-BIND'.) On the lisp machine, you
actually *could* do this by forwarding the memory, and it
comes in handy on occasion.
Another example: when writing a tiny language, it is nice to
embed that language in Lisp. You can make a package containing
your language constructs, and use macros to `translate' them
into standard lisp for compilation. However, CL:IF is a special
form `known' to the compiler. FOO:IF is not. The compiler will
not generate a conditional unless you use the symbol in the CL
package. On the other hand, the point of putting the symbols
into your own package was to *not* use the same symbols.
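(A sketch of the macro-translation approach being referred to, with
illustrative names; the embedded language shadows IF so its symbol is
distinct from CL:IF, and a macro hands the work back to the real special form:
(defpackage "TINY" (:use "CL") (:shadow "IF"))
(in-package "TINY")
(defmacro if (test then &optional else)
  `(cl:if ,test ,then ,else))
As the surrounding text notes, this only works by giving TINY::IF a macro
definition of its own, which is exactly the kind of baggage being objected to.)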
Finally, a really nasty one. The `topology' of the package
structure is a global resource. When we were building a new
Lisp machine, we had a brand new compiler. Obviously, it goes
in the `compiler' package. Well, on the old machine, there
already was a compiler in the compiler package. So we had
to load the compiler into a different package. But the
compiler had to `appear' as if it were in the `compiler'
package in the fasl file. We modified the package inheritance
in the new machine. A number of symbols were moved to
different packages, but were to remain visible. The problem
was that the current machine wouldn't work if you did this.
Our `solution' was a rather hairy thing that twiddled the
guts of the package system on the fly to satisfy whatever
compiler happened to want to be using it. It was a nightmare.
Now I realize that name conflicts are a problem (look at emacs).
But I don't think the solution is to hack the READER. With
a little careful thought, I'm sure that a better way could be
found that defers identifier->symbol mapping until later in
the game.
> "Rahul Jain" <rj...@sid-1129.sid.rice.edu> wrote in message
> news:87u1re2...@photino.sid.rice.edu...
> > So how would you design a namespacing system for symbols which would
> > cause less difficulty?
>
> I'd start by researching the state of the art vis-a-vis module systems....
Module systems (according to any definition I've seen) aren't
namespaces for symbols, but first-class environments, where symbols
have local mappings to values.
> But seriously, I think a lot of the issues I have with the package
> system could be ameliorated if 1. read didn't cause visible side effects
> and 2. symbol resolution was deferred until runtime
That wouldn't make sense. How would I use a symbol if it didn't exist
when I was trying to use it?
> Yes, that's what I'm trying to explain to Erann and Thomas. They're
> merely complaining about a feature without providing suggestions for
> improvement.
I guess I have to say it again:
I don't have any complaints about the Common Lisp package system.
It's not perfect, but I can't see any better way to solve the problems
that it does solve for Common Lisp. And, from what I understand, that
was Guy Steele's feelings about it too.
Thomas
Depends on the use. The vast majority of uses of symbols is as identifiers
in programs. These symbols simply don't need to be interned.
> Where it *was* a problem was on the Lisp Machine. When you opened
> a file in Zmacs, it would parse the file mode and it would
> parse some of the file contents. It would attach properties to
> the symbols mentioned in the file.
Yes, I think that's a dangerous way to deal with editing source. The
editor should never use INTERN, only FIND-SYMBOL.
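Something like the following is the pattern being suggested (the names are
illustrative); the editor only annotates symbols that already exist:
(let* ((pkg (find-package "SOME-PACKAGE"))
       (sym (and pkg (find-symbol "SOME-FUNCTION" pkg))))
  (when sym
    (setf (get sym 'source-file) "foo.lisp")))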
> Additionally, if you accidentally fasloaded a file before the
> packages were set up, you would end up with functions pointing
> at the wrong symbols. Patching up the packages by uninterning
> the conflicting symbols didn't fix this, it just made the functions
> point at UNINTERNED wrong symbols.
Wouldn't the IN-PACKAGE form at the top of the file cause an error,
preventing the file from being loaded? (or asking if you would like to
implicitly create it, etc)
> So, on to more obscure problems. You can't use `forward references'
> to symbols that don't exist in packages that don't exist (yet).
Again, this should cause an error to be signalled, at which point, you
should load the definition of the other package or just implicitly
create it.
> In your init file, you'd *like* to be able to customize some
> parameters before you start building the system.
So you load the defpackage form(s) before playing with symbols in
those packages.
> The package system allows me to `split' names. I can have
> my symbol FOO be different from your symbol FOO. It doesn't
> allow me to `join' names.
Yes, a sort of symbol-alias would be nice. I suppose the symbol could
print as its name in its home package without any real loss.
> Another example: when writing a tiny language, it is nice to
> embed that language in Lisp. You can make a package containing
> your language constructs, and use macros to `translate' them
> into standard lisp for compilation. However, CL:IF is a special
> form `known' to the compiler. FOO:IF is not. The compiler will
> not generate a conditional unless you use the symbol in the CL
> package. On the other hand, the point of putting the symbols
> into your own package was to *not* use the same symbols.
Right, so either define FOO:IF as a macro that expands to CL:IF or
import CL:IF. The package system doesn't help you when you don't tell
it to do what you want it to do.
> Finally, a really nasty one. The `topology' of the package
> structure is a global resource.
Yes, having unnamed packages and dynamic name->package bindings might
be a useful addition, but it's so uncommonly needed that I doubt
people have really thought about how to do it so that the resulting
packages are actually useful. I guess these would be called
"first-class packages".
Hmm, actually, that might be simple to add to my generic-reader.
Simply have the token-interpretation-fn be a closure around some
name->package EQUAL hashtable. Or have a *packages* hashtable that can
be dynamically shadowed. Probably providing both options would be a
good idea.
> Now I realize that name conficts are a problem (look at emacs).
> But I don't think the solution is to hack the READER. With
> a little careful thought, I'm sure that a better way could be
> found that defers identifier->symbol mapping until later in
> the game.
Identifiers in Lisp are symbols. Symbol->value mapping is what occurs
later. Use the right abstraction at the right time. It's just a matter
of learning when each of these conversions takes place and choosing the
conversion that matches your desired semantics.
couldn't that be because he thinks ngs are for a little bit more than
mutual admiration societies? something like "i like cl, but have
problems with this feature, so what do others think is good about it
or what alternatives are there"?
> ...
hs
--
don't use malice as an explanation when stupidity suffices
> In article <87lmcrs...@photino.sid.rice.edu>,
> Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> > tb+u...@becket.net (Thomas Bushnell, BSG) writes:
> >> Rahul Jain <rj...@sid-1129.sid.rice.edu> writes:
> >> > tb+u...@becket.net (Thomas Bushnell, BSG) writes:
> >> > > Geez! Erann actually *likes* CL, probably more than Scheme, but
> >> > > (I'm guessing) doesn't happen to think it is Absolutely Perfect.
> >> > > As a result, he perhaps complains about things once in a while.
> > Once in a while? What else has he done in c.l.l in the entire time
> > I've been reading it?
> couldn't that be because he thinks ngs are for a little bit more than
> mutual admiration societies?
But they are for a little bit more than people just complaining that
"you lied to me!".
> something like "i like cl, but have problems with this feature, so
> what do others think is good about it or what alternatives are
> there"?
Erann never asked questions about what is better, he complained to us
that Python is better than CL, and it's our fault that he thought
otherwise. And then he claimed that packages should be removed and
replaced with modules. That is not a useful idea for people who need
to use symbols as objects, a common occurrence in the Lisp community,
from all that I can gather.
> Incidentally, Thomas, my entire rant on "data hygiene" was a _direct_
> response to something Rahul said. Had he not posted his remarks, I
> wouldn't have had occasion to sit for a while pondering that answer,
> which I'm quite happy to have penned. I'm not saying Rahul's posts
> have no content value, but even if they accomplished only catalyzing
> the speech of others, I would not discount that as conversational
> value. We each offer value in our own way and at our own time.
Thanks, Kent. :)
By the same logic, Thomas and Erann are at least as valuable to this
thread as I am, and I would have never enumerated my list of features
I find useful in CL if it weren't for their remarks. So thanks to
them, too. :)
> By the same logic, Thomas and Erann are at least as valuable to this
> thread as I am, and I would have never enumerated my list of features
> I find useful in CL if it weren't for their remarks. So thanks to
> them, too. :)
Yeah, and to join the love fest, I was over-harsh to Rahul in my
stream of recent messages. He too has stimulated my thinking in
productive ways. One thing I can say is that the whole conversation
has certainly helped me understand the whole durn thing better than I
did before, and that's a good thing, all round.
Thomas
I don't think fasl files have an `in-package' form at the top.
> > So, on to more obscure problems. You can't use `forward references'
> > to symbols that don't exist in packages that don't exist (yet).
>
> Again, this should cause an error to be signalled, at which point, you
> should load the definition of the other package or just implicitly
> create it.
The error is the problem. I want to refer to a symbol that I *know*
will exist a short time in the future. I have no intention of
using or manipulating the symbol until it exists and the package
system is set up, I just want to refer to the name. Even so, I'm
out of luck.
> > In your init file, you'd *like* to be able to customize some
> > parameters before you start building the system.
>
> So you load the defpackage form(s) before playing with symbols in
> those packages.
In general, init files are loaded *before* anything else. It is
hard to get the defpackage forms to load prior to the init forms.
> > Another example: when writing a tiny language, it is nice to
> > embed that language in Lisp. You can make a package containing
> > your language constructs, and use macros to `translate' them
> > into standard lisp for compilation. However, CL:IF is a special
> > form `known' to the compiler. FOO:IF is not. The compiler will
> > not generate a conditional unless you use the symbol in the CL
> > package. On the other hand, the point of putting the symbols
> > into your own package was to *not* use the same symbols.
>
> Right, so either define FOO:IF as a macro that expands to CL:IF or
> import CL:IF. The package system doesn't help you when you don't tell
> it to do what you want it to do.
I can't tell it to do what I want it to do. I *want* to hand forms
to the compiler of the sort '(if (foo) (bar)), but I *don't* want
the symbol IF in my package to be CL:IF (for various reasons, perhaps
I'm mucking around with the function cell). Now I could walk the form
replacing FOO:IF with CL:IF (and so forth for all the special forms),
but this is idiotic. The fact of the matter is that the compiler
is not using the value cell, the function cell, the plist, the package,
or the print name of 'CL:IF. It is only using it as a unique name.
But nonetheless, if I want to use it the same way the compiler uses it,
I have to drag along all the other baggage that comes with it.
> > Finally, a really nasty one. The `topology' of the package
> > structure is a global resource.
>
> Yes, having unnamed packages and dynamic name->package bindings might
> be a useful addition, but it's so uncommonly needed that I doubt
> people have really thought about how to do it so that the resulting
> packages are actually useful. I guess these would be called
> "first-class packages".
>
> Hmm, actually, that might be simple to add to my generic-reader.
> Simply have the token-interpretation-fn be a closure around some
> name->package EQUAL hashtable. Or have a *packages* hashtable that can
> be dynamically shadowed. Probably providing both options would be a
> good idea.
I wouldn't recommend that. The first thing we found when we tried that
was that we needed `package closures', i.e., a way to bind a function
to a particular package topology.
> > Now I realize that name conficts are a problem (look at emacs).
> > But I don't think the solution is to hack the READER. With
> > a little careful thought, I'm sure that a better way could be
> > found that defers identifier->symbol mapping until later in
> > the game.
>
> Identifiers in Lisp are symbols.
Yes, but the vast majority need not be. If I write this:
(defun fact (x) (if (zerop x) 1 (* x (fact (1- x)))))
The identifier X is interned as a symbol, yet the code does not use
the value cell, function cell, print name, package, or plist. Not
only that, but the identity of the identifier need not even last
beyond the compilation! However, were I to enter that function,
and then attempt to import a symbol named X from another package,
I'd get a name conflict. What is conflicting?
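Concretely, following the FACT definition above (the package name here is
illustrative):
(defpackage "ELSEWHERE" (:export "X"))
(import 'elsewhere:x)
;; => correctable error: a distinct symbol named "X" is already accessible
;;    in the current package -- the X interned just by READing FACT.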
> Symbol->value mapping is what occurs later.
Yes. It occurs as late in the process as possible: just before you
need the value for a computation.
But identifier->symbol mapping occurs extremely early in the process,
where it isn't obvious that the symbol will be needed at all.
> READ causes noticable side effects. This can create dependencies
> on the order in which things are read.
EVAL and/or COMPILE-FILE causes noticeable side-effects too. This
creates dependencies on the order in which Scheme expressions can be
processed.
Moreover, the Scheme environment is among the harshest I've ever seen
for a programming language because it tends to take the attitude that
if your program is in error, all bets are off. The Scheme spec doesn't
waste any time at all trying to tell you that you have any expectation
of hope if your program ever makes a mistake. And certainly among all
of that is that you are expected to have the initial environment set up
perfectly for your initial work. No advice about how to avoid problems.
No operators or paradigms for correction if problems occur.
And it's CL you're complaining about? CL has terminology for talking
about these problem states and correcting some of them, as well as for
inspecting and manipulating the environment to determine whether
you're going to get stung. Hmm.
> Where it *was* a problem was on the Lisp Machine. When you opened
> a file in Zmacs, it would parse the file mode and it would
> parse some of the file contents. It would attach properties to
> the symbols mentioned in the file.
I'm familiar with the problem you're talking about, but you can't
blame this on the package system. I don't think this is a fair
criticism. The Zmacs editor chose to put things on the property list
of symbols prematurely and got in trouble. (Had it put them on the
property list of Scheme-like symbols, the same problems would have been
even worse).
> When you were browsing code, you might open a file that was
> in a package that was not yet defined. Well, the package became
> defined and a slew of symbols from the file would get interned
> in it. Unfortunately, it was unlikely that the package was
> correctly set up, so when you went to evaluate any code it
> wouldn't work, or when you went to fix the packages, you'd run
> into thousands of name conflicts.
Just because someone can design a problem for which symbols ought not
be used and use symbols does not make symbols a bad data structure.
I likewise had problems with what Zmacs did, but I attribute the
problem very, very differently.
> Additionally, if you accidentally fasloaded a file before the
> packages were set up, you would end up with functions pointing
> at the wrong symbols.
Just as if you accidentally load Scheme code when things are not fully
set up, you can get confusing effects. I don't see your point here.
> Patching up the packages by uninterning
> the conflicting symbols didn't fix this, it just made the functions
> point at UNINTERNED wrong symbols.
Bad defaults? When I get package errors in LispWorks, I usually take
the default correction and I'd say 95% of the time I don't have to do
anything more than press RETURN. Not that I'd think it awful if I had
to think about or intervene more than this.
> Now there is a `standard' mode of operating that avoids most of
> this. You make *absolutely sure* that the *very first thing* you
> load into any lisp is a set of defpackage forms that set up the
> *entire* package structure.
Or you code your editor to not require this. I'm not comfortable
blaming this "requirement" on the design of the package system when
it's very plainly a problem with the Zmacs architecture.
> You also make sure that this file
> is loaded both at compile time *and* at run time. This isn't hard
> to do if you put it in your `init' file.
>
> So, on to more obscure problems. You can't use `forward references'
> to symbols that don't exist in packages that don't exist (yet).
> In your init file, you'd *like* to be able to customize some
> parameters before you start building the system. Perhaps you have
> some macros that are parameterized, whatever. As an example, I
> once worked at a place that had a `verbosity' switch that I wanted
> set.
>
> The obvious thing is to write (setq utility::*verbosity* t)
> but the utility package isn't loaded yet. Never mind that the
> utility package *will* be there when I call this function, it
> isn't there when I *read* the code. Instead I write:
> (let* ((utl-pkg (find-package :utility))
> (verbosity (intern (symbol-name :*verbosity*) utl-pkg)))
> (setf (symbol-value verbosity) t))
>
> The same goes for functions that I want to call. I can't
> write (progn (load "java-tools.lisp") (java-tools:make-all))
>
> The way around this problem is to write a function that
> defers interning until runtime. Then you can write
> (progn (load "java-tools.lisp") (funcall-in-package :java-tools :make-all))
I usually did:
(load "java-tools.lisp")
(java-tools:make-all)
as two different forms. In the few cases where this wasn't possible, I did:
(progn (load "java-tools.lisp")
       (eval (read-from-string "(java-tools:make-all)")))
or
(progn (load "java-tools.lisp")
       (funcall (or (find-symbol "MAKE-ALL" "JAVA-TOOLS")
                    (error "JAVA-TOOLS:MAKE-ALL is not defined."))))
Hardly a major problem. And it doesn't require use of any non-standard
operators.
> The package system allows me to `split' names. I can have
> my symbol FOO be different from your symbol FOO. It doesn't
> allow me to `join' names.
Scheme doesn't either, btw.
> I cannot have my symbol BAR be
> the same as your symbol FOO. (Or perhaps more illustrative,
> I can't have `JRM:ORIGINAL-MULTIPLE-VALUE-BIND' be the same
> as `CL-USER:MULTIPLE-VALUE-BIND'.) On the lisp machine, you
> actually *could* do this by forwarding the memory, and it
> comes in handy on occasion.
This is a legitimate gripe. However, I believe CL can rise above it.
I see it as an API problem, not a data structure problem.
> Another example: when writing a tiny language, it is nice to
> embed that language in Lisp. You can make a package containing
> your language constructs, and use macros to `translate' them
> into standard lisp for compilation.
By this I assume you mean:
(defmacro if (test then &optional else)
  `(cl:if ,test ,then ,else))
> However, CL:IF is a special
> form `known' to the compiler. FOO:IF is not. The compiler will
> not generate a conditional unless you use the symbol in the CL
> package.
Well, it's supposed to macroexpand your form before reaching
that conclusion.
> On the other hand, the point of putting the symbols
> into your own package was to *not* use the same symbols.
> Finally, a really nasty one. The `topology' of the package
> structure is a global resource. When we were building a new
> Lisp machine, we had a brand new compiler. Obviously, it goes
> in the `compiler' package. Well, on the old machine, there
> already was a compiler in the compiler package. So we had
> to load the compiler into a different package.
I don't see any problem here that adding more power to the package
system wouldn't fix. In particular, original Zetalisp had
hierarchical packages. Symbolics Genera usefully expanded on this by
having "package universes" associated with Syntax. Symbolics Genera
had about 5 to 7 syntaxes (package universes) loaded into the same
core image at the same time, depending on the configuration. I wrote
the code that maintained that separation. I never got complaints from
anyone saying the implementation had major technical problems. Most
people were surprised that Zetalisp and Common Lisp were able to be
co-resident at the same time, given they had needs for packages to be
called different things.
I certainly don't by any stretch think this problem would have been
fixed by removing complexity of packages and using Scheme-style
packages.
> But the compiler had to `appear' as if it were in the `compiler'
> package in the fasl file. We modified the package inheritance in
> the new machine. A number of symbols were moved to different
> packages, but were to remain visible. The problem was that the
> current machine wouldn't work if you did this. Our `solution' was a
> rather hairy thing that twiddled the guts of the package system on
> the fly to satisify whatever compiler happened to want to be using
> it. It was a nightmare.
I'm probably missing something without more details. I think the term
"overconstrained" existed before Lisp, much less its package system.
And I'm not surprised that it's possible to overconstrain problems in
Lisp since it's possible to overconstrain them in life. I'm hard
pressed to say this is a package system problem.
> Now I realize that name conficts are a problem (look at emacs).
> But I don't think the solution is to hack the READER. With
> a little careful thought, I'm sure that a better way could be
> found that defers identifier->symbol mapping until later in
> the game.
Yes, those may be possible workarounds in some of the cases you've
mentioned. I'm not sure they exhaust the space of possible workarounds,
nor that the presence of such a workaround is a proof that what CL does
is bad.
I have not at any time said modules are something people should not use.
I've said that packages are something that people can and do usefully use.
What you're relating is very much akin to the issue of early vs late
binding in declarations. Sometimes one adds a declaration because they
know something about data and they want early (compile time) processing
of that information. Sometimes one knows something about a name and
one uses packages to carry this information. Sometimes one doesn't know
what the name-to-package association is, and there are ways of accommodating
that. But often one does know.
In
(defmacro foist (x) `(car ,x))
there is no material change in knowledge between read time and macro
expansion time such that delaying the semantic attachment of CAR's meaning
will do anything other than make it more difficult to figure out what symbol
is what. So most of the problems to do with symbols and macros might as well
be sorted out at read time as at macro expansion time.
In the case you describe, if I follow you correctly, and I kind of
doubt I do totally, you have some weird situation where your compile
time packages and runtime packages are not aligned. (Note that CL
says this is not valid CL, so you are on your own, since every time
Scheme says you are doing something not valid the Scheme folks are
happy to say you're SOL. I don't know why a double standard of how
much help you should expect should apply.) In general, I think the
problem is more that CL does not seek to handle this problem than that
CL would not be capable of handling this problem with available data
structures if it chose to provide you the necessary operators. I don't
see a data structure problem here; perhaps an API weakness. But new APIs
can be written without changing the language spec.
> "Rahul Jain" <rj...@sid-1129.sid.rice.edu> wrote in message
> news:87sn6yz...@photino.sid.rice.edu...
> > Wouldn't the IN-PACKAGE form at the top of the file cause an error,
> > preventing the file from being loaded? (or asking if you would like to
> > implicitly create it, etc)
> I don't think fasl files have an `in-package' form at the top.
Hmm, yeah. :)
That's what defsystem is for anyway...
> > > So, on to more obscure problems. You can't use `forward references'
> > > to symbols that don't exist in packages that don't exist (yet).
> > Again, this should cause an error to be signalled, at which point, you
> > should load the definition of the other package or just implicitly
> > create it.
> The error is the problem. I want to refer to a symbol that I *know*
> will exist a short time in the future.
So load the definition that allows the name to exist.
> > So you load the defpackage form(s) before playing with symbols in
> > those packages.
> In general, init files are loaded *before* anything else. It is
> hard to get the defpackage forms to load prior to the init forms.
Couldn't you load it in the init file?
Or do something like (setf (symbol-value (find-symbol ...)) ...)?
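Spelled out, using the *VERBOSITY*/UTILITY names from the earlier example
(only a sketch):
(let ((pkg (find-package "UTILITY")))
  (setf (symbol-value (or (and pkg (find-symbol "*VERBOSITY*" pkg))
                          (error "UTILITY::*VERBOSITY* doesn't exist yet")))
        t))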
> I can't tell it to do what I want it to do. I *want* to hand forms
> to the compiler of the sort '(if (foo) (bar)), but I *don't* want
> the symbol IF in my package to be CL:IF (for various reasons, perhaps
> I'm mucking around with the function cell). Now I could walk the form
> replacing FOO:IF with CL:IF (and so forth for all the special forms),
> but this is idiotic. The fact of the matter is that the compiler
> is not using the value cell, the function cell, the plist, the package,
> or the print name of 'CL:IF. It is only using it as a unique name.
> But nonetheless, if I want to use it the same way the compiler uses it,
> I have to drag along all the other baggage that comes with it.
Right, that's why you should do
(defmacro foo:if (&rest args)
  `(cl:if ,@args))
If you don't want the whole symbol, you'll have to define which exact
parts you want to take from the other symbol anyway.
> > Hmm, actually, that might be simple to add to my generic-reader.
> > Simply have the token-interpretation-fn be a closure around some
> > name->package EQUAL hashtable. Or have a *packages* hashtable that can
> > be dynamically shadowed. Probably providing both options would be a
> > good idea.
> I wouldn't recommend that. The first thing we found when we tried that
> was that we needed `package closures', i.e., a way to bind a function
> to a particular package topology.
That sounds like modules to me...
> > Identifiers in Lisp are symbols.
> Yes, but the vast majority need not be.
They need to be because that's how Lisp code is defined.
> The identifier X is interned as a symbol, yet the code does not use
> the value cell, function cell, print name, package, or plist.
But it uses the symbol X in the code to indicate what identifier you
are using. That way the evaluator can compare two symbols and
determine if they are the same identifier or not.
> Not only that, but the identity of the identifier need not even last
> beyond the compilation! However, were I to enter that function, and
> then attempt to import a symbol named X from another package, I'd
> get a name conflict. What is conflicting?
The names you used. If you want to have lexical-extent symbols, I
think I can understand, but I can't think of a practical way of
actually doing it.
> > Symbol->value mapping is what occurs later.
> Yes. It occurs as late in the process as possible: just before you
> need the value for a computation.
> But identifier->symbol mapping occurs extremely early in the process,
> where it isn't obvious that the symbol will be needed at all.
If the symbol is used, it's needed. I don't see how this can be
removed, resulting in a language that can really be called
"Lisp". This distinction of read, compile, eval, macroexpand, load,
etc times is one of the distinctive, important, and useful features of
lisp. If you don't like those features, you'll have to form a
different (possibly overlapping) community, since many of the current
leaders of the Lisp community seem to prefer dealing with the times
the way we do now. I think it just comes down to being comfortable
with the way they are defined and being precise about what times
things occur when you write your code.
Essentially, this distinction is one of the axioms of Lisp, as I see
it.
> "Rahul Jain" <rj...@sid-1129.sid.rice.edu> wrote in message
> news:87sn6yz...@photino.sid.rice.edu...
> > "Joe Marshall" <prunes...@attbi.com> writes:
> >
> > > Additionally, if you accidentally fasloaded a file before the
> > > packages were set up, you would end up with functions pointing
> > > at the wrong symbols. Patching up the packages by uninterning
> > > the conflicting symbols didn't fix this, it just made the functions
> > > point at UNINTERNED wrong symbols.
> >
> > Wouldn't the IN-PACKAGE form at the top of the file cause an error,
> > preventing the file from being loaded? (or asking if you would like to
> > implicitly create it, etc)
>
> I don't think fasl files have an `in-package' form at the top.
Well, they are required to bind *PACKAGE* around the entire load.
And they have to give that value a binding. They may not do
IN-PACKAGE per se, but it's going to amount to the same.
> > > So, on to more obscure problems. You can't use `forward references'
> > > to symbols that don't exist in packages that don't exist (yet).
> >
> > Again, this should cause an error to be signalled, at which point, you
> > should load the definition of the other package or just implicitly
> > create it.
>
> The error is the problem. I want to refer to a symbol that I *know*
> will exist a short time in the future. I have no intention of
> using or manipulating the symbol until it exists and the package
> system is set up, I just want to refer to the name. Even so, I'm
> out of luck.
Its name is a string. You can certainly refer to its name. ;)
...
> But identifier->symbol mapping occurs extremely early in the process,
> where it isn't obvious that the symbol will be needed at all.
No, you mean where it happens not to be needed in this case.
You want it delayed until load time.
But in normal compilation, the identifier to symbol mapping _is_ done
at compile time, in order to process semantics, so it is not normally
delayed that long. It is typical of interactive and bootstrap code
that this problem comes up, but interactive and bootstrap code is not
typical of real code.
g...@jpl.nasa.gov (Erann Gat) writes:
> A mentor of mine at the time (1988 or so) once expounded on the virtues of
> environments and the evils of packages. He made an offhand comment that I
> still remember to this day: "Common Lisp" he said "really has a flat
> namespace." What he meant was that there was little effective difference
> between packages as they exist in CL, and a Lisp with only one package and
> a reader hack that attached the prefix "package:" (where "package" is
> replaced with the value of the global variable *package*) to all symbol
> names that didn't already have colons in them before interning them. Now,
> this isn't quite right, of course, because of import, export, and
> use-package, but his fundamental point has stuck with me through the
> years, that discriminating between multiple interpretations of a symbol
> name at read time rather than at eval time was the Wrong Thing. (This is
> not to say that I necessarily agree with this point of view, just that I
> remember it.)
Erann,
your mentor had it right, basically. Let me explain why.
First, we should say what a "symbol" really is: A symbol is a little
data structure, a record if you will. (The data structure contains a
number of slots. One slot, for example, could hold the symbol's
package (if it is interned), another its name within that package,
another its global value, another its function binding, another its
property list. I do not claim completeness here.) What a symbol
really is is not important for the purpose of this discussion -- only
that it exists and has an identity that is independent of the name
that refers to it.
Second, what is a package? A package is a structure that (at least)
consists of a finite map mapping names (strings of characters) to
existing (interned) symbols (where by "symbol" I mean those little
data structures introduced above).
For the time being, let's say that a package *is* its map from names to symbols.
(Of course, this is not the whole picture as far as CL is concerned. But it
will illustrate the idea without getting bogged down in unimportant but tedious
technicalities.)
If we call the domain of names "name", the domain of symbols "symbol",
and the domain of packages "package", then we can write the domain equation:
package = name |--> symbol
Now, packages themselves also have names, which means that there is a
global finite map PACK (that is also mutable, but, we postpone this issue)
that maps strings to packages. So, in a fashion analogous to the
above, we say that if m is the name of an existing package p, then
PACK(m) = p
or
PACK : name |--> package
or
PACK : name |--> (name |--> symbol)
In other words, to look up a symbol given the name m of its package and
the name n of the symbol within the package named by m, we simply do
PACK(m)(n)
(I am sure you recognize PACK: it is CL's "find-package", only grossly simplified
for the purpose of this discussion.)
--------
At this point, let's step back for a moment and look at total maps.
For every total map f that maps A's to total maps from B's to C's:
f : A --> (B --> C)
there is a total map, let's call it "uncurry(f)" from pairs (x, y) \in A * B
to C:
uncurry(f) : A * B --> C
Moreover, for every g : A * B --> C there is a curry(g) : A --> B -->
C. Finally, both "curry o uncurry" and "uncurry o curry" are
identities, so "uncurry" (like "curry") is an isomorphism. (And, not
unimportantly, computability is also preserved by "curry" and
"uncurry".)
Some minor but annoying technical difficulties arise once we try to
apply the same reasoning to partial (in particular: finite) maps
because the domains
A |--> B |--> C and A * B |--> C
are not isomorphic. The problem is that in the curried case an
application can fail at either of the two stages while in the
uncurried case these two failure modes are collapsed into one.
Translated to the PACK case this would mean that one couldn't
distinguish between looking up a non-existing symbol in an existing
package and looking up anything in a non-existing package.
But notice that these issues *can* be finessed, e.g., by "lifting" the
domains (supplying an explicit failure result -- nil in the case of
find-package) and lifting the second stage lookup function
accordingly. On the uncurried side this leads to the need for an
auxiliary map from package names to booleans (i.e., effectively a set
of package names) to keep track which packages exist and which ones
don't.
The point is: There /is/ an isomorphism, and where there is an
isomorphism between two explanations (here: curried and uncurried
lookup), one can say that one can be /viewed in terms of the other/.
Of course, one has to be careful: Once you decide which view to take,
you have to be consistent. Curried and uncurried functions are not
the same, they are merely related by an isomorphism. If one gets
mixed up between the different views, arbitrary nonsense occurs.
---------
Anyway, coming back to your mentor's "offhand comment":
- The members of the cross-product of two string domains could be
represented as the concatenation of the original strings -- provided there
is some way of recognizing the boundary so that the original two strings
can be recovered. (In CL there is such a way.)
- Thus, an uncurried lookup function (which is the essence of a "flat" domain)
could, indeed, take the symbol name with the package name prefixed.
- This is a plausible way of viewing how things work. Of course, it
does not mean that things actually /do/ work that way in terms of how
they are implemented. They probably are not (although I would claim --
with enough finesse they could be, no matter how pointless that would probably
be).
- I do not see how "import", "export", or "use-package" pose insurmountable
difficulties with the flat model.
---------
Finally, how to deal with the mutability? Well, it is actually not that
hard. Essentially the same kind of isomorphism like the one above can
be constructed, only the details are a bit messier. That's why I leave the
construction to the interested reader.... :-)
Matthias
> Moreover, the Scheme environment is among the harshest I've ever seen
> for a programming language because it tends to take the attitude that
> if your program is in error, all bets are off.
This is pretty much normal, I think. Nearly every modern language
specification says that an undefined behavior is, well, undefined.
This is especially true for errors which are not required to be
caught, and it seems to me that Scheme is conservative about requiring
compilers to catch runtime errors in order to enable them to produce
maximally efficient code.
I think this is, in effect, the same for Common Lisp; if you do
something undefined, the result is undefined.
> I have not at any time said modules are something people should not use.
> I've said that packages are something that people can and do usefully use.
Hrm. If I have a good module system, and symbols are "lightweight",
that is, they don't have plists or dynamic values, then what
advantages would be gained by adding packages? Maybe I've missed it,
but all the advantages seem to depend on either plists or dynamic
values (which are, it seems to me, really examples of the same
thing). I fully grant that if you have either of those, then packages
become an absolute necessity. But if you don't then what would
packages give you that a module system doesn't?
Thomas
> I fully grant that if you have either of those [plists or dynamic
> values], then packages become an absolute necessity. But if you
> don't then what would packages give you that a module system
> doesn't?
Is this a trap? I could just imagine the response I'd get from you if
I actually answered this question yet again.
> tb+u...@becket.net (Thomas Bushnell, BSG) writes:
>
>
> > I fully grant that if you have either of those [plists or dynamic
> > values], then packages become an absolute necessity. But if you
> > don't then what would packages give you that a module system
> > doesn't?
>
> Is this a trap? I could just imagine the response I'd get from you if
> I actually answered this question yet again.
Your answer is "without plists or dynamic values, you can't do
symbolic programming", but I don't believe that's true.
Thomas
> Your answer is "without plists or dynamic values, you can't do
> symbolic programming", but I don't believe that's true.
*scratches head*
Heh. Oh well. I guess it's not worth trying to convince someone of
something they didn't bother to listen to in the first place.
I suppose I'm done with this thread now.
> Hrm. If I have a good module system, and symbols are "lightweight",
> that is, they don't have plists or dynamic values, then what
> advantages would be gained by adding packages?
In none of my arguments have I meant to suggest that either plists or
dynamic values are in any way important to me. You should understand
that _all_ of my arguments are about OBJECT IDENTITY only.
> Maybe I've missed it,
> but all the advantages seem to depend on either plists or dynamic
> values (which are, it seems to me, really examples of the same
> thing).
No, these facilities have been used only incidentally as exemplars of
the general notion of object identity in data.
Even if symbols had no plists and you had only hash tables, there is
still a difference in the operation of
(defun get-color (x)
(gethash x *color-table*))
when x is a symbol from one package or x is a symbol from another package
and x has no intrinsic slots at all.
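To spell that out, a small sketch with two made-up packages A and B, each
contributing a symbol named RED-THING:

(defpackage "A") (defpackage "B")
(defvar *color-table* (make-hash-table))  ; default EQL test: keyed by identity

(setf (gethash (intern "RED-THING" "A") *color-table*) 'crimson)
(setf (gethash (intern "RED-THING" "B") *color-table*) 'scarlet)

(gethash (intern "RED-THING" "A") *color-table*)  ; => CRIMSON
(gethash (intern "RED-THING" "B") *color-table*)  ; => SCARLET
;; Same name, different symbol objects, hence different table entries.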
> I fully grant that if you have either of those, then packages
> become an absolute necessity. But if you don't then what would
> packages give you that a module system doesn't?
Separation of data from one application to another.
Modules are about program separation, not data separation.
Data is about identity, nothing more.
This notion of slots is utterly and completely orthogonal to the problem.
> Even if symbols had no plists and you had only hash tables, there is
> still a difference in the operation of
> (defun get-color (x)
> (gethash x *color-table*))
> when x is a symbol from one package or x is a symbol from another package
> and x has no intrinsic slots at all.
Sure, but then I wonder:
When do you actually want to have two different packages, with no
antecedent connections between them, contributing to the same hash
table?
If they *do* have an antecedent connection, then what is the nature of
the connection?
In other words, it seems really clear to me how to do all this kind of
data manipulation; one simply needs to represent data by more than
plain symbols--but it's still pretty darn easy.
Thomas
> tb+u...@becket.net (Thomas Bushnell, BSG) writes:
>
> > Your answer is "without plists or dynamic values, you can't do
> > symbolic programming", but I don't believe that's true.
>
> *scratches head*
>
> Heh. Oh well. I guess it's not worth trying to convince someone of
> something they didn't bother to listen to in the first place.
You missed my point.
I'm not asking why plists and dynamic values are important. I take it
you think they are essential to symbolic programming, and I don't, but
that's not the point I'm asking about.
My question is: in a language (like Scheme) which lacks plists and
dynamic values, and given a decent module system, how, exactly, are
adding packages to the mix an asset. Kent's examples almost all
revolve around using GET (and thus are only relevant if you have
plists); other people have mentioned SYMBOL-VALUE (and thus only
relevant if you have dynamic values).
Now, if you think that even in a language like Scheme, with modules
but without plists and dynamic values, packages are of importance (for
symbolic programming or anything else), I'm very interested in
concrete examples... If you have something concrete to suggest, I'm
all ears! But just saying "it's essential for symbolic computation"
is really not a very concrete answer.
Thomas
> Kent M Pitman <pit...@world.std.com> writes:
>
> > Even if symbols had no plists and you had only hash tables, there is
> > still a difference in the operation of
> > (defun get-color (x)
> > (gethash x *color-table*))
> > when x is a symbol from one package or x is a symbol from another package
> > and x has no intrinsic slots at all.
>
> Sure, but then I wonder:
>
> When do you actually want to have two different packages, with no
> antecedent connections between them, contributing to the same hash
> table?
When you use symbols as all-purpose data containers.
The very simplest of these would be:
(in-package "PERSON")
(defun person-named (name)
(find-symbol name "PERSON"))
(defun person? (thing)
(and (symbolp thing)
(eq (find-symbol (symbol-name thing) "PERSON")
thing)))
(defun count-people (list) (count-if #'person? list))
(in-package "FLOWER")
(defun person-named (name)
(find-symbol name "FLOWER"))
(defun flower? (thing)
(and (symbolp thing)
(eq (find-symbol (symbol-name thing) "FLOWER")
thing)))
(defun count-flowers (list) (count-if #'flower? list))
(in-package "OTHER")
(count-flower (list (make-flower "ROSE") (make-person "ROSE")))
=> 1
> If they *do* have an antecedent connection, then what is the nature of
> the connection?
As with all identity issues, one shares predicates and constructors and
annotators and mutators and whatever with others who have a similar object.
For example, a CLIM window and an X Window may not be the same kind of
thing because the people who conceptualized them gave them different kinds
of attributes. I start with an initial hypothesis of separation, but my
eyes are always open to the option of sharing. So if I notice someone else
has a person representation, and I'm operating on people, I might ask
"can I share that" or should I start over.
None of the answers to these questions would be different if you were talking
about instances of non-symbols.
The usefulness of symbols, however, is that you get automatic bookkeeping of
identity and a simple way of referring to things. Talking about JOE is easier
than talking about #<PERSON SSN=555-55-5555 32434>. As noted in much earlier
AI work, symbols are visually evocative, they are a natural way people want
to express themselves, they accommodate circularity, they take a natural bias
for sharing of identity among same-named things into account, etc. Also as
has come up in practice, sometimes two people use the same symbol for
incompatible purposes, and good data separation among such worlds is critical
to not having to rewrite lots of code when those two worlds are co-loaded.
> In other words, it seems really clear to me how to do all this kind of
> data manipulation; one simply needs to represent data by more than
> plain symbols--but it's still pretty darn easy.
I don't know what "more than plain symbols" is, but if it means
#<PERSON SSN=555-55-5555 32434> or #S(PERSON :SSN "555-55-5555")
and if referring to a list containing this person twice means writing
(#1=#S(PERSON :SSN "555-55-5555") #1#) instead of (JOE JOE), then I
think it suffers something in the translation.
If instead you mean that packaged symbols are more than plain symbols,
then the nice thing is that in isolation, packaged symbols look like plain
symbols, and the visual strangeness of even the pkg: prefix comes up exactly
and only in a place where the simple model of symbols would break down.
Rather than (X:FRED Y:FRED) one cannot write simply (FRED FRED) unless one
invents some additional datum that rides along side saying (X Y), or unless
one makes the list contain ((X FRED) (Y FRED)) and each of these have grave
problems.
I'm a little tired at this point of offering worked examples and having people
take potshots. I want someone to take some of the worked examples of what
I've written and show me how some of those examples would both be simpler in
a single namespace language and still also involve the use of symbols as the
core means of representing. I'm not asserting other objects can't be used,
I'm just saying that if symbols get used, they need to be this complex in
nature. And so far, I've not seen anyone taking the time to offer
any rebuttal in code, only in vagaries of words.
> Talking about JOE is easier than talking about #<PERSON
> SSN=555-55-5555 32434>.
But I don't get JOE, of course, I get something like PERSON:JOE and
FLOWER:ROSE.
Why is this radically better than (person . joe) and (flower . rose)?
Oops, Scheme people will get mad at me for my habit of using improper
lists. :) (person joe) and (flower rose) then.
In other words, it seems to me that if I had an application where I
needed this, I could simply make a hash table, uniquify such pairs,
and proceed.
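Roughly, the sort of thing I have in mind (a sketch; the names are made up):

(defvar *name-table* (make-hash-table :test #'equal))

(defun canonical-name (namespace name)
  ;; Return a unique (NAMESPACE NAME) list: the same pair always yields
  ;; the same (EQ) object, so identity comparisons still work.
  (let ((key (list namespace name)))
    (or (gethash key *name-table*)
        (setf (gethash key *name-table*) key))))

;; (eq (canonical-name 'person 'joe) (canonical-name 'person 'joe)) => T
;; (eq (canonical-name 'person 'joe) (canonical-name 'flower 'joe)) => NIL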
> > In other words, it seems really clear to me how to do all this kind of
> > data manipulation; one simply needs to represent data by more than
> > plain symbols--but it's still pretty darn easy.
>
> I don't know what "more than plain symbols" is, but if it means
> #<PERSON SSN=555-55-5555 32434> or #S(PERSON :SSN "555-55-5555")
> and if referring to a list containing this person twice means writing
> (#1=#S(PERSON :SSN "555-55-5555") #1#) instead of (JOE JOE), then I
> think it suffers something in the translation.
No, it means referring to things by, say, the sorts of lists I used
above.
> If instead you mean that packaged symbols are more than plain symbols,
> then the nice thing is that in isolation, packaged symbols look like plain
> symbols, and the visual strangeness of even the pkg: prefix comes up exactly
> and only in a place where the simple model of symbols would break down.
> Rather than (X:FRED Y:FRED) one cannot write simply (FRED FRED) unless one
> invents some additional datum that rides along side saying (X Y), or unless
> one makes the list contain ((X FRED) (Y FRED)) and each of these have grave
> problems.
So that's clearly what my solution is supposing I would do. Please
entertain me; what is the "grave problem" with using (X FRED) instead
of X:FRED? I have to work out a uniquification rule, and so forth (or
just use equal instead of eq), and it Just Works.... doesn't it?
> I'm a little tired at this point of offering worked examples and
> having people take potshots.
I'm really not trying to take potshots; I'm trying to understand what
exactly is being gained through the package mechanism that can't be
gained through a module system. I want to "push back" at your
assertions, because they don't jibe with my experience and intuitions,
but not because I want to win some debate. If you like, I want to make
you give the strongest case possible. :)
> I want someone to take some of the worked examples of what I've
> written and show me how some of those examples would both be simpler
> in a single namespace language and still also involve the use of
> symbols as the core means of representing. I'm not asserting other
> objects can't be used, I'm just saying that if symbols get used,
> they need to be this complex in nature. And so far, I've not seen
> anyone taking the time to offer any rebuttal in code, only in
> vagaries of words.
Oh, I agree completely that this kind of complexity is inherent in
doing certain kinds of symbolic computation. But that complexity is a
matter of writing about four teeny little hash table functions; why
not just do that when you want it? I don't have any objection to
having them in a library either. If that's what a "package system"
is, then Scheme can have one in about five minutes anytime it's
wanted.
But the CL package system is more than just these kinds of relativized
hash tables and such. It includes a certain style of interaction with
the reader, and the complex rules for shadowing and such. This is
very important in CL, and it's this stuff that I don't see the
inherent value for.
It's also not clear to me why the print syntax is *so* important;
why using 2-element lists instead of package/symbolname pairs is such
a lose; and why one can't do "symbolic computation" without such
"symbols" entirely--even structs actually work, and if you want the
print syntax to be prettier, that's a teeny job.
Thomas
I can sympathize with this problem, as you have described it, but I
wonder if you have ever tried to dump a new image with the packages
loaded. That has been how I have done things for the longest time. I
even tend to build my own Emacs that way. It not only saves a lot of
time in the startup phase and avoids loading a lot of files every time,
but building and dumping a new image would of course be done from
scratch, without an init file. The dumped image already has all the
packages loaded so now your init file loads without problems.
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
> Kent M Pitman <pit...@world.std.com> writes:
>
> > Talking about JOE is easier than talking about #<PERSON
> > SSN=555-55-5555 32434>.
>
> But I don't get JOE, of course, I get something like PERSON:JOE and
> FLOWER:ROSE.
That is not true. Most of the time, you can just talk about JOE, and
have it automagically mean the right thing, i.e. either PERSON:JOE
or FLOWER:JOE, depending on context. That is the very nice thing
about packages: It allows people to write their code most of the time
as if they owned the universe, indeed, they don't even have to know
about the package system at all, and yet I can use that package system
to keep their system from clashing with other systems.
For example, someone writes the following into a file:
;; The use of the symbols plist is only accidental here, any kind of
;; globally shared place will do...
(defun name-person (symbol name)
(setf (get symbol 'name) name))
I've also got another file that uses "the" name property of symbols
for something else entirely.
In order to prevent clashes, I can easily wrap the loading and/or
compilation of said file in the following way:
(defpackage "PERSON-STUFF"
(:use :CL)
(:export
;; Whatever you want exported, possibly next to nothing
))
(let ((*package* (find-package "PERSON-STUFF")))
;; Works similarly for compile-file
(load "person-stuff.lisp"))
Now all the code in person-stuff.lisp that talks about NAME will
transparently refer to the symbol PERSON-STUFF::NAME, instead of some
other symbol with the symbol-name NAME, which I might want to prevent
from happening.
OTOH I can also take the view that I want to share the symbol with the
code in person-stuff.lisp, and set up my package structure in order
for PKG2::NAME and PERSON-STUFF::NAME to refer to the same symbol.
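A minimal sketch of that second option (assuming a package PKG2 already
exists, and ignoring the name-conflict error that import would signal if
PKG2 already had its own distinct NAME symbol):

(import (intern "NAME" "PERSON-STUFF") "PKG2")

;; Now PKG2::NAME and PERSON-STUFF::NAME are the very same symbol object:
;; (eq (find-symbol "NAME" "PKG2") (find-symbol "NAME" "PERSON-STUFF")) => T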
And in reality, I don't normally have to resort to the "bind package
from the outside" hack/solution, since people usually just stick
(defpackage "PERSON-STUFF"
(:use :CL)
(:export
;; Whatever you want exported, possibly next to nothing
))
(in-package "PERSON-STUFF")
forms into their files, all by themselves. But again, that isn't
necessary. One of the important points about the package system is
that it is non-invasive, i.e. it doesn't require much in the way of
cooperation and changes from user's code, either my own, or, more
importantly other people's code, whose existence I might not even be
aware of.
> Why is this radically better than (person . joe) and (flower . rose)?
> Oops, Scheme people will get mad at me for my habit of using improper
> lists. :) (person joe) and (flower rose) then.
>
> In other words, it seems to me that if I had an application where I
> needed this, I could simply make a hash table, uniquify such pairs,
> and proceed.
a) Your "solution" requires that all participants in such a scheme
agree beforehand that they are going to use it. It is hard
to change non-cooperative code to cooperative code after the fact.
b) It is again not trivial to have (person joe) and (employee joe)
be considered the same joe after all. I.e. you get complete
isolation, but at the cost of making it very expensive to break
that isolation. This point is not about after-the-fact merging of
symbols, BTW. Making (person joe) and (employee joe) refer to the
same joe, i.e. to be considered equivalent, requires special-casing
in the equality predicate, since uniquification can't make the two
lists eq.
In order to make that slightly more palatable -- you want to allow
eq to be used to test for the equality of "symbols", which also
allows references to symbols be cheap pointers and not expensive
consed data -- you could of course map your lists to some other
uniquified data structure (possibly gensymed symbols, and/or
freshly consed lists, whatever). Of course now you are well on
your way to reinventing packages, but badly, since you are not
using specially optimized data-structures, nor do you have reader
support for ease of entry.
c) Of course you can encode any kind of data structure using just lists
and symbols (in fact, if you are really bent on it, you can encode
anything using just lists).
And this can even make sense for something you seldom use, or are
just prototyping, or there are other overriding reasons why you
want the thing to be encoded with lists.
But just as modern Lisps have come to the conclusion that encoding
strings as lists of symbols is not adequate, and that a specialized
data-structure is in order, for both characters and strings, Lisp
users have also seen that symbols are important enough, and their
usage frequent enough, to warrant their own specialized system.
Any kind of argument that goes "but you can just encode that using
lists and symbols" is just missing the point. It's not about can,
which like turing equivalence is a given, it is about _adequacy_,
which includes such things as effort, performance, readability,
safety, low cohesion, etc.
> No, it means referring to things by, say, the sorts of lists I used
> above.
Again, you'll have to either use the (#1=(PERSON JOE) #1#) reader
syntax, or you'll have to explicitly pass the two lists through your
uniquification function. Otherwise eq doesn't work anymore, with all
the implications that has.
> So that's clearly what my solution is supposing I would do. Please
> entertain me; what is the "grave problem" with using (X FRED) instead
> of X:FRED? I have to work out a uniquification rule, and so forth (or
> just use equal instead of eq), and it Just Works.... doesn't it?
Using equal instead of eq means
a) eq doesn't give me object identity anymore, which among other
things means that all code that ever gets passed my "fake symbols"
must be aware of that, and use equal for all operations, whereas it
could get away with using eq(l) before.
b) all my references to X:FRED, which were just a pointer in CL, now
cost me two cons cells with 3 pointers,
c) "Identity" on such objects now costs me quite a number of memory
accesses, type- and pointer tests, instead of a simple pointer
compare.
This might be justifiable for a one-off thing, it isn't justifiable
for something that is going to be used all of the time.
Regs, Pierre.
--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein
Woah!!! Who said anything about Scheme?
I'm just pointing out the problems I have with the Common Lisp
package system, not proselytizing.
So, back to the discussion... I sort of expect EVAL to have
side effects. It would be *nice* if COMPILE-FILE did *not* have
side effects unless there are the appropriate `eval-when' forms
in it (modulo macros, etc.)
> > Where it *was* a problem was on the Lisp Machine. When you opened
> > a file in Zmacs, it would parse the file mode and it would
> > parse some of the file contents. It would attach properties to
> > the symbols mentioned in the file.
>
> I'm familiar with the problem you're talking about, but you can't
> blame this on the package system. I don't think this is a fair
> criticism. The Zmacs editor chose to put things on the property list
> of symbols prematurely and got in trouble.
I think I'll still blame this on the package system. If Common Lisp
did not have a package system (let's imagine that namespace conflicts
don't occur and we can intern everything in one package), then Zmacs
would have no problem at all attaching properties to symbols.
Putting properties on symbols isn't a problem. It's interning
the symbols for the sole purpose of putting properties on them that
is causing the problem. And the interning itself wouldn't be a
problem if the packages were set up correctly beforehand. It all
leads back to the package system. (Admittedly, Zmacs shouldn't be
using READ anyway. I once made a file with -*- #.(si:%halt) -*-
in the header....)
> > When you were browsing code, you might open a file that was
> > in a package that was not yet defined. Well, the package became
> > defined and a slew of symbols from the file would get interned
> > in it. Unfortunately, it was unlikely that the package was
> > correctly set up, so when you went to evaluate any code it
> > wouldn't work, or when you went to fix the packages, you'd run
> > into thousands of name conflicts.
>
> Just because someone can design a problem for which symbols ought not
> be used and use symbols does not make symbols a bad datastructure.
>
> I likewise had problems with what Zmacs did, but I attribute the
> problem very, very differently.
I see where you are coming from, but I disagree. Zmacs wanted to
associate properties with names, i.e. establish a mapping between
a sequence of characters and a value. The sequence of characters
has the syntax of a symbol (by definition!), so it seems that symbols
would be the right thing to use.
Manipulating a structure full of symbols and lists (a lisp source file)
seems to me to be the epitome of symbolic processing, yet I can't use
symbols?
> > So, on to more obscure problems. You can't use `forward references'
> > to symbols that don't exist in packages that don't exist (yet).
> > In your init file, you'd *like* to be able to customize some
> > parameters before you start building the system. Perhaps you have
> > some macros that are parameterized, whatever. As an example, I
> > once worked at a place that had a `verbosity' switch that I wanted
> > set.
> >
> > The obvious thing is to write (setq utility::*verbosity* t)
> > but the utility package isn't loaded yet. Never mind that the
> > utility package *will* be there when I call this function, it
> > isn't there when I *read* the code. Instead I write:
> > (let* ((utl-pkg (find-package :utility))
> > (verbosity (intern (symbol-name :*verbosity*) utl-pkg)))
> > (setf (symbol-value verbosity) t))
> >
> > The same goes for functions that I want to call. I can't
> > write (progn (load "java-tools.lisp") (java-tools:make-all))
> >
> > The way around this problem is to write a function that
> > defers interning until runtime. Then you can write
> > (progn (load "java-tools.lisp") (funcall-in-package :java-tools :make-all))
>
> I usually did:
>
> (load "java-tools.lisp")
> (java-tools:make-all)
This is obviously not quite kosher in a file.
> as two different forms. In the few cases where this wasn't possible, I did:
>
> (progn (load "java-tools.lisp")
> (eval (read-from-string "(java-tools:make-all)")))
>
> or
>
> (progn (load "java-tools.lisp")
> (funcall (or (find-symbol "MAKE-ALL" "JAVA-TOOLS")
> (error "JAVA-TOOLS:MAKE-ALL is not defined."))))
>
> Hardly a major problem. And it doesn't require use of any non-standard
> operators.
No, it isn't a `major problem', and as I pointed out you can write
a `funcall-in-package' form that allows you to
(funcall-in-package :java-tools :make-all), but it *is* a nuisance.
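Such a `funcall-in-package' is only a few lines -- a sketch of what I mean,
not a standard operator:

(defun funcall-in-package (package name &rest args)
  ;; Defer the name->symbol mapping until run time, when the package
  ;; is guaranteed to exist.
  (let ((symbol (or (find-symbol (symbol-name name) package)
                    (error "~A is not defined in ~A." name package))))
    (apply symbol args)))

;; (progn (load "java-tools.lisp")
;;        (funcall-in-package :java-tools :make-all))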
> > The package system allows me to `split' names. I can have
> > my symbol FOO be different from your symbol FOO. It doesn't
> > allow me to `join' names.
>
> > I cannot have my symbol BAR be
> > the same as your symbol FOO. (Or perhaps more illustrative,
> > I can't have `JRM:ORIGINAL-MULTIPLE-VALUE-BIND' be the same
> > as `CL-USER:MULTIPLE-VALUE-BIND'.) On the lisp machine, you
> > actually *could* do this by forwarding the memory, and it
> > comes in handy on occasion.
>
> This is a legitimate gripe. However, I believe CL can rise above it.
> I see it as an API problem, not a data structure problem.
I'm not sure that it makes a difference what the nature of the problem
is. And I'd love to see CL rise above it. Whether by `fixing',
`adding functionality', or `replacing' the package system is somewhat
immaterial. If we can keep existing code from breaking *and* eliminate
some (all?) of the `misfeatures' of the current package system, I'd
be happy.
> > Finally, a really nasty one. The `topology' of the package
> > structure is a global resource. When we were building a new
> > Lisp machine, we had a brand new compiler. Obviously, it goes
> > in the `compiler' package. Well, on the old machine, there
> > already was a compiler in the compiler package. So we had
> > to load the compiler into a different package.
>
> I don't see any problem here that adding more power to the package
> system wouldn't fix. In particular, original Zetalisp had
> hierarchical packages. Symbolics Genera usefully expanded on this by
> having "package universes" associated with Syntax. Symbolics Genera
> had about 5 to 7 syntaxes (package universes) loaded into the same
> core image at the same time, depending on the configuration. I wrote
> the code that maintained that separation. I never got complaints from
> anyone saying the implementation had major technical problems. Most
> people were surprised that Zetalisp and Common Lisp were able to be
> co-resident at the same time, given they had needs for packages to be
> called different things.
I solved this by adding `package-environments' (with triple colons to
name them!). It could get pretty hairy. In one package-environment,
interning the name "FOO" in package "BAR" would yield 'BAR::FOO, yet
in another interning the name "FOO" in package "BAR" would yield
'SYS:FOO. There were degenerate cases where you could end up with
a symbol associated with a `non-existent' package.
It *did* solve the problem, though. But only at the expense of
causing a serious amount of complexity. The intent was to abandon
this `solution' as soon as we had bootstrapped the new environment.
> > Now I realize that name conficts are a problem (look at emacs).
> > But I don't think the solution is to hack the READER. With
> > a little careful thought, I'm sure that a better way could be
> > found that defers identifier->symbol mapping until later in
> > the game.
>
> Yes, those may be possible workarounds in some of the cases you've
> mentioned. I'm not sure they exhaust the space of possible workarounds,
> nor that the presence of such a workaround is a proof that what CL does
> is bad.
The fact that you call them `workarounds' indicates that there is something
to be `worked around'. But this isn't meant to criticize CL. On the
contrary, the existence of the API to the package system makes it
possible. (Consider trying to fix an analogous problem in C++.)
> I have not at any time said modules are something people should not use.
> I've said that packages are something that people can and do usefully use.
Agreed. I use them myself. I just wish there were something better.
> What you're relating is very much akin to the issue of early vs late
> binding in declarations. Sometimes one adds a declaration because they
> know something about data and they want early (compile time) processing
> of that information. Sometimes one knows something about a name and
> one uses packages to carry this information. Sometimes one doesn't know
> what the name-to-package association is, and there are ways of accommodating
> that. But often one does know.
>
> In
> (defmacro foist (x) `(car ,x))
> there is no material change in knowledge between read time and macro
> expansion time such that delaying the semantic attachment of CAR's meaning
> will do anything other than make it more difficult to figure out what symbol
> is what. So most of the problems to do with symbols and macros might as well
> be sorted out at read time as at macro expansion time.
I'd assert the opposite: most of the problems to do with symbols and macros
might as well be sorted out at macro expansion time (or even later).
You hit the nail right on the head when you drew an analogy between
this and the issue of `early' vs. `late' binding. I'd like to see
symbols be `late bound' as well.
> In the case you describe, if I follow you correctly, and I kind of
> doubt I do totally, you have some weird situation where your compile
> time packages and runtime packages are not aligned. (Note that CL
> says this is not valid CL, so you are on your own.)
Exactly! The package system cannot accommodate differences between
run-time and compile-time package hierarchies. I think that rather
than `outlawing' this case, it would be preferable to remove the
requirement.
> In general, I think the
> problem is more that CL does not seek to handle this problem than that
> CL would not be capable of handling this problem with available data
> structures if it chose to provide you the necessary operators. I don't
> see a data structure problem here; perhaps an API weakness. But new APIs
> can be written without changing the language spec.
Well, I'm not suggesting a change to the language spec. I just want
to not see spurious `name conflict' errors.
But then I can't refer to the symbol in the same file that loads
the code that defines it. I am saying `I can't use forward
references', and you are saying `don't use them, then'. That isn't
quite the solution I had in mind.
> > > So you load the defpackage form(s) before playing with symbols in
> > > those packages.
>
> > In general, init files are loaded *before* anything else. It is
> > hard to get the defpackage forms to load prior to the init forms.
>
> Couldn't you load it in the init file?
Nope. Not if there is processing to be done between initialization
and loading of the packages (which there may be).
> Or do something like (setf (symbol-value (find-symbol ...)) ...)?
Yes. But this is the exact problem. Anywhere else in the system
I'd write (setq foo:*bar* 22), but in my init file I have to
kludge up
(setf (symbol-value
        (or (find-symbol (symbol-name :*bar*)
                         (or (find-package :foo)
                             (error "no package")))
            (error "no symbol")))
      22)
This seems like an excessive amount of verbosity.
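One can of course bottle the kludge up once (a hypothetical helper, just to
show the shape of it), but that only hides the verbosity, it doesn't remove
the underlying problem:

(defun set-global (package-name symbol-name value)
  ;; Late-bound version of (setq pkg:sym value), usable even though PKG
  ;; does not exist at read time (it still must exist by the time this runs).
  (let* ((package (or (find-package package-name)
                      (error "no package ~A" package-name)))
         (symbol  (or (find-symbol (string symbol-name) package)
                      (error "no symbol ~A in ~A" symbol-name package-name))))
    (setf (symbol-value symbol) value)))

;; (set-global :foo :*bar* 22)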
> > > Hmm, actually, that might be simple to add to my generic-reader.
> > > Simply have the token-interpretation-fn be a closure around some
> > > name->package EQUAL hashtable. Or have a *packages* hashtable that can
> > > be dynamically shadowed. Probably providing both options would be a
> > > good idea.
>
> > I wouldn't recommend that. The first thing we found when we tried that
> > was that we needed `package closures', i.e., a way to bind a function
> > to a particular package topology.
>
> That sounds like modules to me...
No, it's ugly. In one `package environment' package FOO would inherit
from packages BAR and BAZ and import CL:IF, in another, package FOO
would inherit from CL and BAZ and shadowing import BAR:IF.
It was the case that the cross-compiler would only work with the
second setup, but the emulator would only work with the first. What
we wanted was automatic switching of the topology when calling a
compiler or emulator function.
> > > Identifiers in Lisp are symbols.
>
> > Yes, but the vast majority need not be.
>
> They need to be because that's how Lisp code is defined.
Yes, I know how it works. It's what I'm complaining about.
> > The identifier X is interned as a symbol, yet the code does not use
> > the value cell, function cell, print name, package, or plist.
>
> But it uses the symbol X in the code to indicate what identifier you
> are using. That way the evaluator can compare two symbols and
> determine if they are the same identifier or not.
>
> > Not only that, but the identity of the identifier need not even last
> > beyond the compilation! However, were I to enter that function, and
> > then attempt to import a symbol named X from another package, I'd
> > get a name conflict. What is conflicting?
>
> The names you used. If you want to have lexical-extent symbols, I
> think I can understand, but I can't think of a practical way of
> actually doing it.
Lexical-extent symbols?!
No, I want referential transparency. When I import a symbol in a package,
it ought not depend on whether I used a lexical variable of the same name in
an unrelated function in that package.
> > > Symbol->value mapping is what occurs later.
>
> > Yes. It occurs as late in the process as possible: just before you
> > need the value for a computation.
>
> > But identifier->symbol mapping occurs extremely early in the process,
> > where it isn't obvious that the symbol will be needed at all.
>
> If the symbol is used, it's needed.
Not necessarily. If you compile this function:
(defun fact (x) (if (zerop x) 1 (* x (fact (1- x)))))
and look at the fasl file, you will not see symbol X *anywhere*.
You don't need it.
> I don't see how this can be
> removed, resulting in a language that can really be called
> "Lisp". This distinction of read, compile, eval, macroexpand, load,
> etc times is one of the distinctive, important, and useful features of
> lisp. If you don't like those features, you'll have to form a
> different (possibly overlapping) community, since many of the current
> leaders of the Lisp community seem to prefer dealing with the times
> the way we do now.
Who said I didn't like these? And I doubt that the leaders of the
CL community really enjoy dealing with symbol conflicts. My impression
is that they tolerate them because life without some kind of namespace
mechanism would be impossible.
> I think it just comes down to being comfortable
> with the way they are defined and being precise about what times
> things occur when you write your code.
I don't think it is a good idea to just become comfortable with
`the way things are'. Having to be precise about the timing of READ
is annoying. I don't have to be precise about the order of definitions
in (or across) files, I don't have to be precise about what is or
is not defined when I run the program (I run partially written code
all the time), but suddenly I *do* have to be precise when I type
a symbol name?
> Essentially, this distinction is one of the axioms of Lisp, as I see
> it.
I think I could do without errors being raised by the package system
as an axiom.
Well, suppose we have this file:
(in-package "FOO")
(defun mystery (x) (eq x 'if))
Now when we compile this, what gets stored in the fasl file?
(Yes, it will depend on whether 'foo:if is eq to 'cl:if or not.
Assume that foo:defun = cl:defun and foo:eq is cl:eq)
>> If instead you mean that packaged symbols are more than plain symbols,
>> then the nice thing is that in isolation, packaged symbols look like plain
>> symbols, and the visual strangeness of even the pkg: prefix comes up exactly
>> and only in a place where the simple model of symbols would break down.
>> Rather than (X:FRED Y:FRED) one cannot write simply (FRED FRED) unless one
>> invents some additional datum that rides along side saying (X Y), or unless
>> one makes the list contain ((X FRED) (Y FRED)) and each of these have grave
>> problems.
>
> So that's clearly what my solution is supposing I would do. Please
> entertain me; what is the "grave problem" with using (X FRED) instead
> of X:FRED? I have to work out a uniquification rule, and so forth (or
> just use equal instead of eq), and it Just Works.... doesn't it?
But that's not the same thing: in (LIST 'X:FRED 'X:FRED) the list
contains the same object twice, (LIST (LIST 'X 'FRED) (LIST 'X 'FRED))
doesn't. Sure, you can find workarounds for just about anything,
even in C. But that doesn't mean C supports everything other
languages do, although C programmers often believe that.
Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."
PGP key ID 0x42B32FC9
> Kent M Pitman <pit...@world.std.com> writes:
>
> > Talking about JOE is easier than talking about #<PERSON
> > SSN=555-55-5555 32434>.
>
> But I don't get JOE, of course, I get something like PERSON:JOE and
> FLOWER:ROSE.
JOE and PERSON:JOE are the same. It is an artifact of the printer that
you can make it print either way. In the FLOWER package, you will see
ROSE and PERSON:JOE. In the PERSON package you will see JOE and FLOWER:ROSE.
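Concretely (a sketch, assuming the PERSON and FLOWER packages from the
earlier example exist and neither exports its symbols):

(let ((*package* (find-package "PERSON")))
  (print 'person::joe)    ; prints JOE           -- home package is current
  (print 'flower::rose))  ; prints FLOWER::ROSE  -- needs a prefix here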
> Why is this radically better than (person . joe) and (flower . rose)?
Those are not symbols. This is critical to a representation that says
that conses are containers and symbols are terminals. If terminals are
conses that contain symbols, then it becomes painful programmatically
(not to mention much slower) to detect leafness on recursive descent.
> Oops, Scheme people will get mad at me for my habit of using improper
> lists. :) (person joe) and (flower rose) then.
Waste of extra conses, too, then.
> In other words, it seems to me that if I had an application where I
> needed this, I could simply make a hash table, uniquify such pairs,
> and proceed.
You are utterly missing the point here. The data structure design is done
at the time you have no mix of applications. MACSYMA was designed for 20
years before it was ever mixed with any other application in the same address
space. The use of symbols was integrated through 100,000 lines of code
internally to itself, not to mention published papers on its internals, and
researchers world-wide who were familiar with its data structures.
At the point where it was finally dumped into an address space with other
programs, there was no practical hope of saying "oh, look, I have an
application that needs me to redesign its data structures". It's too late.
So either what you are suggesting is that every application MUST do this
whether needed or not (since later redesign is out of the question) or your
proposed solution doesn't work. The package system, by contrast, hides the
packageness utterly out of view as long as you don't mix packages, and only
introduces complexity at the time that you need complexity--when two
applications are joined.
> > > In other words, it seems really clear to me how to do all this kind of
> > > data manipulation; one simply needs to represent data by more than
> > > plain symbols--but it's still pretty darn easy.
> >
> > I don't know what "more than plain symbols" is, but if it means
> > #<PERSON SSN=555-55-5555 32434> or #S(PERSON :SSN "555-55-5555")
> > and if referring to a list containing this person twice means writing
> > (#1=#S(PERSON :SSN "555-55-5555") #1#) instead of (JOE JOE), then I
> > think it suffers something in the translation.
>
> No, it means referring to things by, say, the sorts of lists I used
> above.
Lists are not symbols. You're basically suggesting that an integration
program in a calculus program should change its design from:
(INTEGRATE (SIN X) X 0 3)
to
((CALCULUS INTEGRATE) ((CL SIN) (USER X)) 0 3)
I don't know about you, but I can't even _read_ that simple example clearly,
not just because of nesting but because the nesting is for different purposes.
In the thing Lispers are trained to do, such as:
(+ F (G X) Y)
or [Scheme only]
((F G) (H (M X) 3))
all of the parens mean "functional enclosure". If I have a need for the
stuff you're suggesting, parens sometimes mean "functional enclosure" and
sometimes "namespacing", so I get:
(((FRED F) (FRED G)) ((FRED H) ((FRED M) (FRED X)) 3))
and (a) it looks visually like FRED is an operator, (b) wastes visual space
on something I don't need all the time, and (c) means I need a kind of LEAFP
predicate that can tell on recursive descent that (FRED F) is a terminal
where atoms used to be terminals. I'd at minimum want bracketing to be able
to read this easier, so let's say [...] instead of parens.
(([FRED F] [FRED G]) ([FRED H] ([FRED M] [FRED X]) 3))
Maybe also if [FRED x] were the default, we could suppress FRED showing up
to make it easier to read. So let's say it looks like
(([F] [G]) ([H] ([M] [X]) 3))
Great. So now you've taken a system in which I had used lists and symbols
in a certain way and you have turned it into something where I used lists and
a new thing called a leaf in a certain way, which is effectively the same
except the leafp test is slower than the atom test was. And you've taken a
thing where symbols print as their name and turned it into something where
symbols print as something with no more semantic info than their name but just
take more characters to do it. To me, that is not entirely an improvement.
It doesn't change the shape of anything.
[This use of "shape" may be obscure. I don't mean structural shape, I
mean conceptual shape. It may or may not clarify it for me to refer
to my training in the late Bill Martin's computational linguistics
class at MIT long ago, where we used to work in a representation language
called XLMS which used list-like recursive structures and heavy use
of anaphoric interconnection (like #n= and #n# on steroids) to
represent human knowledge. There were symbolic identifiers in these
networks created to name things, and to "ground" them with reality,
but I remember Martin speculating that perhaps it didn't actually
matter what the names were because to really understand something, it
had to be so connected with the universe that there may only have
been one possible set of interconnections, and really all that was
needed was shape. I guess the reason I have to mention this side
trip is to motivate my use of the word "shape" in the following
statement, since probably it's an otherwise obscure use.]
> > If instead you mean that packaged symbols are more than plain
> > symbols, then the nice thing is that in isolation, packaged
> > symbols look like plain symbols, and the visual strangeness of
> > even the pkg: prefix comes up exactly and only in a place where
> > the simple model of symbols would break down. Rather than (X:FRED
> > Y:FRED) one cannot write simply (FRED FRED) unless one invents
> > some additional datum that rides along side saying (X Y), or
> > unless one makes the list contain ((X FRED) (Y FRED)) and each of
> > these have grave problems.
>
> So that's clearly what my solution is supposing I would do. Please
> entertain me; what is the "grave problem" with using (X FRED) instead
> of X:FRED?
Timing.
You have to always do it. The point of symbol design is to
write systems that do not now interface to something else but that might
ultimately, without redesign, accommodate an interface to something else.
Your remarks seem to take a notion that we used to take 20 years ago that
there were "applications to be joined with others" and "applications that
won't". Over time, I have come to the conclusion that the proper names for
these sets are "applications that it is not worth wasting time on ever
in your life" and "things that will succeed and become useful to someone".
Because things that become successful do, statistically, get connected
to other things in ways they don't anticipate. It is the job of the language
designer to think about this exactly because programmers often do not.
Because otherwise, people complain that the language boxed them into a
corner. (They do anyway, but one should reduce their need for this where
anticipatable, and this is such a case.)
> I have to work out a uniquification rule, and so forth (or
> just use equal instead of eq), and it Just Works.... doesn't it?
"Just" using EQUAL instead of EQ is no small change.
EQ is often compiled open as a single instruction. EQUAL, being
recursive, is rarely compiled open and is usually a minimum of an
order of magnitude slower. It may not have to recurse long if the
object to be compared is not EQUAL, but it can take quite a while.
> > I'm a little tired at this point of offering worked examples and
> > having people take potshots.
>
> I'm really not trying to take potshots; I'm trying to understand what
> exactly is being gained through the package mechanism that can't be
> gained through a module system. I want to "push back" at your
> assertions, because they don't jibe with my experience and intuitions,
> but not because I want to win some debate. If you like, I want to make
> you give the strongest case possible. :)
But if you offered a simple example, I could tell you in concrete terms
where the bug would occur.
I don't enjoy being "made" to make arguments.
Moreover, I need something from this discussion, too. I need to understand
what you think you want. It may be that I am wrong, and this will not be
elicited by your simply pushing me harder and harder.
Conversation is give and take, and you are not giving. Moreover, you
have self-identified as trying to manipulate me, and I do not appreciate
that whatsoever. This will hamper my desire to respond further to you
in conversation.
Because mine is not the only opinion here and much though I have a desire
to offer my arguments, I have no desire to offer them against a vacuum.
I can't even tell if you are hearing me unless I see you do a more elaborate
example than you have, because I can't tell if (a) you are not reading what
I wrote, (b) not thinking hard about it, (c) thinking hard about it but not
articulating a perfectly sound alternate theory. I am not bold enough to
assume "not (c)" on the basis of no information, and while it may seem that
I am headstrong enough to believe there is no other information, that is
simply not true.
I feel a lot like a monkey in a box being prodded right now. "Oh, let's
see if we can make him make a new sound." I do NOT like it.
[Rest of message not responded to because I did not choose to read farther.]
> In article <sfwg030...@shell01.TheWorld.com>, Kent M Pitman
> <pit...@world.std.com> wrote:
>
> > Being one of the central parties making this noise, I'll do something I'm
> > not big on and actually cite my credentials to show that I am not blathering
> > out of ignorance:
>
> I won't follow your example because I don't know if I'm blathering out of
> ignorance or not. (That's what I'm trying to figure out.) But if you
> want to see my credentials my CV is on the Web. Suffice it to say I've
> built a successful career in part by writing Common Lisp code. I feel
> fairly confident that I'm sufficiently non-idiotic that if someone can't
> explain something in this area so that I can understand it there's a good
> chance that it's their fault and not mine.
No. I've also gone through some email exchange with you, and in my
opinion it is mainly your problem and you should really use more time
to understand what others are writing.
Your CV looks impressive, but after having this conversation with you
and reading so much about your problems on c.l.l., I do not believe
anymore that it tells the truth. Maybe some other person on this list
knowing you personally could confirm it?
Nicolas.
Yes, and this helps tremendously.
Unfortunately, at one place I worked the boss was insistent that we
always do a `clean build' before every check-in.
What I ended up doing was installing a hook to be called right after
the packages were loaded. The hook was a symbol in the user package,
and my init file would hang a function on that.
> (flower:colors-among '(flower:rose people:rose flower:daisy))
> => (FLOWER:RED FLOWER:WHITE)
IIUC, what you're saying is,
You can write simple, self-contained Scheme code to do X. However, when
you start mixing in code from different sources, the things you have to
do in Scheme to avoid name conflicts are not as clean or convenient as
with CL packages.
In this case, X happens to be symbolic processing. Couldn't you say
the same thing for any X? It's interesting that CL packages partition
symbol plists and not just their associated functions/variables, but
we're still talking about the same concept.
> - The information is manifest in symbols, a primitive data structure
> in the language
I'd like an example that stresses this feature of CL apart from the
package system.
Regardless of the discussion among CLers about whether its package
system is the best solution, I think it's a given that CL's package
system is better than nothing, which is what R5RS Scheme has.
> The above programs are contrived, but I hope show enough meat that you
> can see the problem. Try casting them into Scheme in whatever way suits you
> and we can proceed to the next conversational step from there.
I doubt whether Erann is likely to take you up on this, but with a less
package-centric example I would.
--
<brlewis@[(if (brl-related? message) ; Bruce R. Lewis
"users.sourceforge.net" ; http://brl.sourceforge.net/
"alum.mit.edu")]>
> It's interesting that CL packages partition symbol plists and not
> just their associated functions/variables, but we're still talking
> about the same concept.
this sounds quite confused.
the package system partitions _symbols_. the objects themselves. it
doesn't care specifically for any slots in the symbol.
FOO:JOE and BAR:JOE are different symbols (objects!).
the source form "(get 'BAZ:COOKIE 'JOE)" gives different results when
evaluated in the FOO package and in the BAR package. that's because
"'JOE" gets interned as either FOO:JOE or BAR:JOE, and these are
different symbols (objects!). but the plist in question is the same
in both situations, because it belongs to BAZ:COOKIE, which, being named
explicitly with its package prefix, means the same object everywhere.
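concretely, a sketch (assuming FOO and BAR both use the CL package and BAZ
exports COOKIE):

(defun read-in (package-name string)
  (let ((*package* (find-package package-name)))
    (read-from-string string)))

;; (read-in "FOO" "(get 'baz:cookie 'joe)") => (GET 'BAZ:COOKIE 'FOO::JOE)
;; (read-in "BAR" "(get 'baz:cookie 'joe)") => (GET 'BAZ:COOKIE 'BAR::JOE)
;; same BAZ:COOKIE (and plist) either way; different JOE symbols.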
perhaps this is what you meant?
--
Shit doesn't hurt the world. It provides valuable fertilizer to help
plants grow, grass to help cows, and at least 33% of the content of
off-brand chocolates. --Dave Lynch in rec.music.progressive
> "Kent M Pitman" <pit...@world.std.com> wrote in message
> news:sfwu1re...@shell01.TheWorld.com...
> > "Joe Marshall" <prunes...@attbi.com> writes:
> >
> > > READ causes noticable side effects. This can create dependencies
> > > on the order in which things are read.
> >
> > EVAL and/or COMPILE-FILE causes noticeable side-effects too. This
> > creates dependencies on the order in which Scheme expressions can be
> > processed.
>
> Woah!!! Who said anything about Scheme?
This whole discussion of packages came from a discussion of modules vs
packages which came from a discussion of what symbol processing is which
came from a discussion of what is criterial to a lisp which came from a
discussion of whether Scheme is a Lisp... or so I thought. Wasn't it
obvious? ;)
Ok, sorry for misjudging your sense of context. My reply was probably
out of frame if you weren't in the same frame of reference...
> I'm just pointing out the problems I have with the Common Lisp
> package system, not proselytizing.
>
> So, back to the discussion...
Heh.. Ok, we'll resynch as best we can.
> I sort of expect EVAL to have
> side effects. It would be *nice* if COMPILE-FILE did *not* have
> side effects unless there are the appropriate `eval-when' forms
> in it (modulo macros, etc.)
If I had it to do over again, I would vote against any statement that
COMPILE-FILE has no side-effects. Reader effects are the least of it.
EVAL-WHEN is the major issue, and we maintain the fiction that the compile
time environment is somehow detached from the runtime environment, but that
isn't true in most lisps. The practical effect is that most people who do
very reasonable things at compile time with the compile time environment
have to worry they are cheating in various subtle ways.
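A typical instance of the "reasonable things" I mean, as a sketch: a macro
whose expander needs a helper function, so the helper has to be made
available in the compile-time environment.

(eval-when (:compile-toplevel :load-toplevel :execute)
  (defun frob-name (symbol)
    ;; Called by the macro expander, hence needed at compile time;
    ;; compiling the file thus side-effects the compiling image.
    (intern (concatenate 'string "FROBBED-" (symbol-name symbol)))))

(defmacro define-frobbed (name)
  `(defun ,(frob-name name) () ',name))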
I agree that the package issue is complex, but so is any conversation
that allows indirection. The battle against complexity is not lost
merely because you have to engage complex lingo to accommodate it; it
is lost when you insist that the problem will go away if you fail to
evolve complex lingo.
> > > Where it *was* a problem was on the Lisp Machine. When you opened
> > > a file in Zmacs, it would parse the file mode and it would
> > > parse some of the file contents. It would attach properties to
> > > the symbols mentioned in the file.
> >
> > I'm familiar with the problem you're talking about, but you can't
> > blame this on the package system. I don't think this is a fair
> > criticism. The Zmacs editor chose to put things on the property list
> > of symbols prematurely and got in trouble.
>
> I think I'll still blame this on the package system. If Common Lisp
> did not have a package system (let's imagine that namespace conflicts
> don't occur and we can intern everything in one package), then Zmacs
> would have no problem at all attaching properties to symbols.
As nearly as I can tell, this argument comes down to what in legal terms
would be called an "attractive nuisance". That is, by virtue of having the
data structure available, it seems to invite use in some cases.
The reason I think the editor is "special" is that code to be edited
is not code in core.  It stands at the brink and is really just a huge
string in stringquotes.  I regard your bug the same as if
someone had written some tool that scans files and was interning
symbols it found in stringquotes or comments. There are very
dangerous things that one finds quoted and there is a reason that
evaluation is suppressed. Programs which have only a superficial
understanding have no business doing what you are suggesting ought to
be the right answer in a universe with packages but ought not be the
right solution in a universe without packages.
The fact is that the Zmacs solution fell down on several other points which
include:
(a) File contains (defun #+FOO a #-FOO b ...)
because not only might I still need to edit that definition, it has
multiple names. It would be a design error even to assume you can
do (MEMBER :FOO *FEATURES*) because you don't know whether I will do
(let ((*features* (adjoin :foo *features*))) (load "thefile.lisp"))
and so you don't know that the "edit time" environment is the
environment of choice for processing the file. This will affect more
than just packages and so is a pervasive feature of the language not
corrected by replacing packages with modules.
(b) myfile.lisp.7 and myfile.lisp.8 contain incompatible versions of
definitions, or even the definition is missing in one or the other.
Really good editing tools (not sure if such exist) will model these
issues; but my point is that it should not be surprising when an editor
does stupid things if it doesn't use a representation adequate to
understanding the real situation. It violates Pitman's 2-bit rule,
that says when you have two bits of information, you should use at least
two bits to represent it.
(c) It doesn't address even acl-foo.lisp vs lw-foo.lisp when I go to
edit foo.  It would think there are two definitions and would not know
that only one will get loaded in the dynamic flow of code.
(d) It doesn't address the possibility that I am editing code for Scheme
or some other language that merely looks like Common Lisp (such as
Maclisp) that doesn't treat : specially.
You know, I had a similar problem with Zmail a few years back when I revived
my old mail files (from the mid 1980's) from backups and it tried to parse
and intern all the domain names. You may or may not be aware (heh) that
the host names have changed since then. I got a lot of failed parses for
missing hosts like MIT-MC.ARPA or even PLANTAIN.SCRC.Symbolics.COM and it
was a mess. Does this mean the domain name system is wrong? Or does it mean
that some editing programs that are not actually trying to talk to the net
yet shouldn't be prematurely doing host lookup just based on the fact that
they saw some plausible looking text go by? Because I think the latter.
> Putting properties on symbols isn't a problem.
When the thing to be represented _is_ a symbol.
I claim that FOO:X in text in an editor buffer is _not_ a symbol. It's a
piece of text that if parsed at a particular point in time would be a symbol.
This very discussion contains references to some package FOO with symbol
X that an editor would be in error to put properties on because it's a
hypothetical symbol not related to anything that someone doing Meta-.
(find function definition) should be expecting to find.
> It's interning
> the symbols for the sole purpose of putting properties on them that
> is causing the problem.
I won't expound in detail upon the philosophical issues here of what
"them" are and whether you've made a definite reference. I'll just
say that FOO is used in a lot of conversations, and putting properties
on FOO in a Scheme-like package-free system will definitely lose.
Putting properties on a CL package will mostly work but _only_ if you
do the work (which no one does, that I know of) to dispense proper
hypothetical names (almost like gensyms, but really just a more
complicated bookkeeping system) to deal with versioning, multiple
sources, etc. Then again, there are still "morning star/evening star"
situations that could come up, where packages thought to be distinct
have to be merged later. And that's messy. But the mess is something
that philosophers have discussed for ages, long before there was Lisp,
and the philosophically sticky problem will not be fixed by a clever
choice of syntax or data layout on our part.
[Mathematicians reason by reducing a difficult problem to a specific set
of problems with known computational complexity. So they say a problem
is a "traveling salesman" problem even when it has nothing to do with that.
This effectively avoids spinning one's wheels uselessly.
I have, as a career philosopher engaged regularly in this kind of thing,
sought to reduce philosophical problems to problems of known difficulty
in the philosophy domain, and when I find one that is an unresolved paradox,
I do not descend further into it conversationally for the same reasons.
I think philosophers routinely do this, though I don't think they have good
terminology for talking about the fact that they do or the thing it solves.]
> And the interning itself wouldn't be a
> problem if the packages were set up correctly beforehand. It all
> leads back to the package system. (Admittedly, Zmacs shouldn't be
> using READ anyway. I once made a file with -*- #.(si:%halt) -*-
> in the header....)
Ick.
But this makes an interesting point related to this: you'd think when
you saw a symbol like CL-USER or a list like (FOO :USE CL-USER) that READ
would be the obvious parsing function, but it doesn't follow that it's
the correct computational choice. Your arguments about packages are
really analogous to this. No real problem using symbols for what it does,
but assuming that the packages referred to in the file are correctly
seen as the packages in the runtime environment is a stretch...
> > > When you were browsing code, you might open a file that was
> > > in a package that was not yet defined. Well, the package became
> > > defined and a slew of symbols from the file would get interned
> > > in it. Unfortunately, it was unlikely that the package was
> > > correctly set up, so when you went to evaluate any code it
> > > wouldn't work, or when you went to fix the packages, you'd run
> > > into thousands of name conflicts.
> >
> > Just because someone can design a problem for which symbols ought not
> > be used and use symbols does not make symbols a bad datastructure.
> >
> > I likewise had problems with what Zmacs did, but I attribute the
> > problem very, very differently.
>
> I see where you are coming from, but I disagree. Zmacs wanted to
> associate properties with names, i.e. establish a mapping between
"a relationship". The relationship need not be utterly simplistic.
For example, Meta-. doesn't take you directly to a symbol. It first looks
to see if the symbol exists in the current package. If it doesn't, it
applies "package dwim" to find a lookalike to go to. It can also multiple
definitions in varying ways. The relationships have to be explainable
but they don't have to be simple, and the most usable UI behavior does
not come from an overly simple mapping.
> a sequence of characters and a value. The sequence of characters
> has the syntax of a symbol (by definition!), so it seems that symbols
> would be the right thing to use.
"seems like it should work this way" is not a very useful term of art
in programming design. ;) What mankind really learns from programming is
how much more complex the world of the mind is than most people think it is.
> Manipulating a structure full of symbols and lists (a lisp source file)
> seems to me to be the epitome of symbolic processing, yet I can't use
> symbols?
To me, symbolic processing is what you do when you've read that file.
For example, we do macros that way. That's how we got here conversationally.
But prior to loading, we are in text-land. And text land is more complex
for a reason. It is quoted. In text-land the symbol FOO:X and BAR:X may
be importantly, intentionally, different (for some reason of compatibility
with another system or future system) and cannot be merged even though the
environment may know them to be the same. Query Replace (some call it
"Find and Replace") should not assume that replacing FOO:X for Y should
disturb BAR:X just because something is known about the environment unless
you want to say the editor is only good for editing code related to the
environment that is loaded now, and the editor is of no value for editing
future versions of itself or programs written in another language or for
another platform.
> > > So, on to more obscure problems. You can't use `forward references'
> > > to symbols that don't exist in packages that don't exist (yet).
> > > In your init file, you'd *like* to be able to customize some
> > > parameters before you start building the system. Perhaps you have
> > > some macros that are parameterized, whatever. As an example, I
> > > once worked at a place that had a `verbosity' switch that I wanted
> > > set.
> > >
> > > The obvious thing is to write (setq utility::*verbosity* t)
> > > but the utility package isn't loaded yet. Never mind that the
> > > utility package *will* be there when I call this function, it
> > > isn't there when I *read* the code. Instead I write:
> > > (let* ((utl-pkg (find-package :utility))
> > > (verbosity (intern (symbol-name :*verbosity*) utl-pkg)))
> > > (setf (symbol-value verbosity) t))
> > >
> > > The same goes for functions that I want to call. I can't
> > > write (progn (load "java-tools.lisp") (java-tools:make-all))
> > >
> > > The way around this problem is to write a function that
> > > defers interning until runtime. Then you can write
> > > (progn (load "java-tools.lisp") (funcall-in-package :java-tools
> :make-all))
> >
> > I usually did:
> >
> > (load "java-tools.lisp")
> > (java-tools:make-all)
>
> This is obviously not quite kosher in a file.
Sure it is. I don't know what you're talking about. I have files that
say just exactly that. (Well, different function and package names.)
If you mean I can't put "(defun" ... ")" around it, yes, that's so. But
I can put it in a file of its own and just load the file. The file itself
will act like parens.
> > as two different forms.  In the few cases where this wasn't possible, I did:
> >
> > (progn (load "java-tools.lisp")
> > (eval (read-from-string "(java-tools:make-all)")))
> >
> > or
> >
> > (progn (load "java-tools.lisp")
> > (funcall (or (find-symbol "MAKE-ALL" "JAVA-TOOLS")
> > (error "JAVA-TOOLS:MAKE-ALL is not defined."))))
> >
> > Hardly a major problem. And it doesn't require use of any non-standard
> > operators.
>
> No, it isn't a `major problem', and as I pointed out you can write
> a `funcall-in-package' form that allows you to
> (funcall-in-package :java-tools :make-all), but it *is* a nuisance.
Having seen how smoothly some very, very large systems ported from old
dialects of Lisp to Zetalisp (which was mostly Maclisp compatible) and were
able to interoperate with each other, I disagree.
I was also able to load in Macsyma, rename the Macsyma package, and load in
a later version and use the package system to help me compare results of
old and new systems. But it's worth noting again that even though the package
system let me do this, other parts of the system were not designed for this,
so the system herald kept saying only the more recent version of the system
was loaded. In this regard, I would say the package system was better
engineered to accommodate change in symbol data than most other parts of the
system which were not package-based were designed to accommodate object
identity and renaming...
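For concreteness, here is a minimal sketch of the kind of FUNCALL-IN-PACKAGE
helper mentioned in the quoted text above; the name and calling convention
come from the quote, but the body is only an assumed implementation.

(defun funcall-in-package (package-name function-name &rest args)
  ;; Defer interning to run time: the package need not exist when this
  ;; form is READ, only when it is actually called.
  (let* ((package (or (find-package package-name)
                      (error "Package ~A not found." package-name)))
         (symbol (find-symbol (string function-name) package)))
    (unless (and symbol (fboundp symbol))
      (error "~A has no function named ~A." package-name function-name))
    (apply symbol args)))

;; e.g. (progn (load "java-tools.lisp")
;;             (funcall-in-package :java-tools :make-all))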
> > > The package system allows me to `split' names. I can have
> > > my symbol FOO be different from your symbol FOO. It doesn't
> > > allow me to `join' names.
> >
> > > I cannot have my symbol BAR be
> > > the same as your symbol FOO. (Or perhaps more illustrative,
> > > I can't have `JRM:ORIGINAL-MULTIPLE-VALUE-BIND' be the same
> > > as `CL-USER:MULTIPLE-VALUE-BIND'.) On the lisp machine, you
> > > actually *could* do this by forwarding the memory, and it
> > > comes in handy on occasion.
> >
> > This is a legitimate gripe. However, I believe CL can rise above it.
> > I see it as an API problem, not a data structure problem.
>
> I'm not sure that it makes a difference what the nature of the problem
> is. And I'd love to see CL rise above it. Whether by `fixing',
> `adding functionality', or `replacing' the package system is somewhat
> immaterial. If we can keep existing code from breaking *and* eliminate
> some (all?) of the `misfeatures' of the current package system, I'd
> be happy.
You've raised a lot of problems but you haven't said what featurism you
expect. As long as your only complaint is that it can be used wrong, it's
hard to respond to. If you are looking for it to keep you from doing a
certain thing or allow you to do a certain thing, it would be easier to
respond to. As detailed as you've been, I still can't generalize from
your specific examples to anything other than "this isn't good for me"
and so I can't do much to help. Even a guess at what would be better would
again make the discussion concrete for me.
> > > Finally, a really nasty one. The `topology' of the package
> > > structure is a global resource. When we were building a new
> > > Lisp machine, we had a brand new compiler. Obviously, it goes
> > > in the `compiler' package. Well, on the old machine, there
> > > already was a compiler in the compiler package. So we had
> > > to load the compiler into a different package.
> >
> > I don't see any problem here that adding more power to the package
> > system wouldn't fix. In particular, original Zetalisp had
> > hierarchical packages. Symbolics Genera usefully expanded on this by
> > having "package universes" associated with Syntax. Symbolics Genera
> > had about 5 to 7 syntaxes (package universes) loaded into the same
> > core image at the same time, depending on the configuration. I wrote
> > the code that maintained that separation. I never got complaints from
anyone saying the implementation had major technical problems.  Most
> > people were surprised that Zetalisp and Common Lisp were able to be
> > co-resident at the same time, given they had needs for packages to be
> > called different things.
>
> I solved this by adding `package-environments' (with triple colons to
> name them!).
The Symbolics Genera solution with "syntaxes" having "package universes"
was likewise. CL:::USER:X was what ZL called CL-USER:X and ZL:::USER:X
was ZL-USER:X to ZL.
> It could get pretty hairy. In one package-environment,
We call these syntaxes and simply had a command to move you among them.
Using Set Lisp Syntax ANSI-Common-Lisp would put you in the same mode as
if you had prefixed all symbols with ANSI-CL::: and using ANSI-CL:::CL-USER::X
gets you what ZL calls FUTURE-COMMON-LISP-USER::X.
> interning the name "FOO" in package "BAR" would yield 'FOO::BAR, yet
> in another interning the name "FOO" in package "BAR" would yield
> 'SYS:BAR. There were degenerate cases where you could end up with
> a symbol associated with a `non-existent' package.
You mean with package not found errors, like in any package universe that
was absent a package, no?
> It *did* solve the problem, though. But only at the expense of
> causing a serious amount of complexity. The intent was to abandon
> this `solution' as soon as we had bootstrapped the new environment.
I think in the Symbolics rendition, people hardly even noticed. They only
noticed if they tried to cross-call, which worked, you just had to learn
some detail. We had a good set of operations like FIND-PACKAGE-FOR-SYNTAX,
and stuff like that which were underneath the FIND-PACKAGE for each dialect,
so that most people who stayed inside a package universe never knew there
were all those other incompatible co-resident dialects.
> > > Now I realize that name conflicts are a problem (look at emacs).
> > > But I don't think the solution is to hack the READER. With
> > > a little careful thought, I'm sure that a better way could be
> > > found that defers identifier->symbol mapping until later in
> > > the game.
> >
> > Yes, those may be possible workarounds in some of the cases you've
> > mentioned. I'm not sure they exhaust the space of possible workarounds,
> > nor that the presence of such a workaround is a proof that what CL does
> > is bad.
>
> The fact that you call them `workarounds' indicates that there is something
> to be `worked around'. But this isn't meant to criticize CL. On the
> contrary, the existence of the API to the package system makes it
> possible.  (Consider trying to fix an analogous problem in C++.)
I don't believe people should use a data structure just because it
"looks right". They should use it because it has the right properties,
in fact.
What you are talking about, as nearly as I can tell, is choosing a wrong
operator just because it's cheap and easy and is conceptually on par with
calling read to parse a token even though the token is not offered by its
creator as "a token in lisp syntax". So if the guy types 003 and you get 3,
you have lost information because the reader was not meant to carry this
difference, but it's wrong to blame READ. It's your choice to use READ
that seems wrong.
The problem with this is that while I believe in the concept of late binding,
I want to bind at the earliest time that I as the programmer know the meaning
I intend to apply. And when I write
(defmacro foist (x) `(car ,x))
I know at program design time that the CAR I mean is a particular one.
Waiting to do it later is like saying that all programs should do
(FUNCALL (SYMBOL-VALUE '+) (EVAL '3) (EVAL '4))
because maybe the meanings of + and 3 and so on will change later. I
don't expect they will, so I want to bind earlier. Indeed, and nontrivially,
I might want to write
(FUNCALL #'FUNCALL (SYMBOL-VALUE '+) (EVAL '3) (EVAL '4))
because maybe I want to defer the meaning of funcall until runtime, but
then how will I call funcall. I have to add a FUNCALL. And maybe I should
defer its meaning as well.  At some point you have to say that I really know
what I mean. And I think the issue is not late binding is good or early
binding is good but "binding at the time we know" is good. There are ways
in CL to delay information; that's what quoting is for. You are repeatedly
remarking that you don't like quoting, but that is to say you don't want to
address the notation that is there for you to use to cause the effect you
want. I can't help you there. Yes, you can make a notation in which
everything is deferred (that's what Scheme does) but then you find that
you have to solve other problems (like that macros have to worry that
symbols they mix and match have no meaning, and you have to resort to
painting them so you don't get them confused).
> > In the case you describe, if I follow you correctly, and I kind of
> > doubt I do totally, you have some weird situation where your compile
> > time packages and runtime packages are not aligned. (Note that CL
> > says this is not valid CL, so you are on your own.)
>
> Exactly! The package system cannot accomodate differences between
> run-time and compile-time package hierarchies. I think that rather
> than `outlawing' this case, it would be preferable to remove the
> requirement.
I think honestly that you were wrong when you said you aren't arguing for
Scheme.  Scheme has a solution (with its incumbent problems--they are not
being inconsistent per se, they are just trading one problem for another).
I don't like the problems created by their solution. But you sound like
you would.
> > In general, I think the
> > problem is more that CL does not seek to handle this problem than that
> > CL would not be capable of handling this problem with available data
> > structures if it chose to provide you the necessary operators. I don't
> > see a data structure problem here; perhaps an API weakness. But new APIs
> > can be written without changing the language spec.
>
> Well, I'm not suggesting a change to the language spec. I just want
> to not see spurious `name conflict' errors.
The errors are not spurious. They happen for a reason. I'm sure you
realize this, but I'm not sure you see the significance of realizing this.
> Bruce Lewis <brl...@yahoo.com> writes:
>
> > It's interesting that CL packages partition symbol plists and not
> > just their associated functions/variables, but we're still talking
> > about the same concept.
>
> this sounds quite confused.
>
> the package system partitions _symbols_. the objects themselves. it
> doesn't care specifically for any slots in the symbol.
>
> FOO:JOE and BAR:JOE are different symbols (objects!).
>
> the source form "(get 'BAZ:COOKIE 'JOE)" gives different results when
> evaluated in the FOO package and in the BAR package. that's because
> "'JOE" gets interned as either FOO:JOE or BAR:JOE, and these are
> different symbols (objects!). but the plist in question is the same
> in both situations, because it belongs to BAZ:COOKIE, which, being named
> explicitly, means the same object everywhere.
The above can be restated slightly more clearly as:
The _string_ "(get 'BAZ:COOKIE 'JOE)" will result in two different
_source forms_, when _read_ in the FOO package and the BAR package,
namely in the source forms (get 'BAZ:COOKIE 'FOO::JOE) vs. (get
'BAZ:COOKIE 'BAR::JOE). Since they are different source forms, they
will generally yield different results when being evaluated.
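A small sketch of that restatement (the package names FOO, BAR, and BAZ are
just the ones from the example; the defpackage forms are mine):

(defpackage :baz (:use :cl) (:export :cookie))
(defpackage :foo (:use :cl :baz))
(defpackage :bar (:use :cl :baz))

(let ((*package* (find-package :foo)))
  (read-from-string "(get 'baz:cookie 'joe)"))
;; => (GET 'BAZ:COOKIE 'FOO::JOE)

(let ((*package* (find-package :bar)))
  (read-from-string "(get 'baz:cookie 'joe)"))
;; => (GET 'BAZ:COOKIE 'BAR::JOE)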
The interesting thing is that this separation of symbols will help you
in all cases where symbols are used as data (either keys or values or
whatever) in "global" data-structures, including, but not limited to,
the property lists of symbols themselves (when the same symbol is
shared between applications), but also other "global" lists, or
hash-tables, or other data-structures.
The language is obliged to say that it either stores the symbol by its
package, or by its availability in the current package, or records the
need to check for coincidences.  The rational thing is to
do the same as what the printer would do, since that's what people will
expect. CL further constrains things by obliging you to make the load
time environment match the compile time environment, such that this doesn't
ever matter in valid code. But this problem is not unique to the compiler;
the same issue occurs in the printer and is simpler to discuss.
> Those are not symbols. This is critical to a representation that says
> that conses are containers and symbols are terminals. If terminals are
> conses that contain symbols, then it becomes painful programmatically
> (not to mention much slower) to detect leafness on recursive descent.
So I did mention the need for a uniqueness step (just like one gets
for "free" with symbols--that is, not free, but built in).
However, that doesn't take care of everything, and I thank you for
your comments; they have brought out several issues I had not thought
about in that way before.
> Those are not symbols. This is critical to a representation that says
> that conses are containers and symbols are terminals. If terminals are
> conses that contain symbols, then it becomes painful programmatically
> (not to mention much slower) to detect leafness on recursive descent.
So I *did* say that uniquification was a part of the solution. I
do grant that this is an increased cost (perhaps a very high one,
depending on the context).
However, thanks for your reply; I think I now understand what you were
getting at better than I did before.
> Conversation is give and take, and you are not giving. Moreover, you
> have self-identified as trying to manipulate me, and I do not appreciate
> that whatsoever. This will hamper my desire to respond further to you
> in conversation.
Eek, you've totally misconstrued my point.
It's that I had thought pretty seriously about the topic, and
concluded that packages don't add much if you already have modules and
other relevant facilities. I was asking you to give some of your
examples and the reasoning that leads you to conclude otherwise.
When I criticized particular examples, it was not to take a "pot
shot", but rather because I thought those examples did not quite prove
your point, and my goal is to see if that point is really true or
not. (I'm still not sure, but now I see more of the depth of the
issue than I did at first.)
When I said "I want to provoke you to make the strongest possible
argument", that's not a desire to manipulate! I'm sorry if that was
an unhelpful way for me to express my desire.  I was just saying that
what seem to be pot-shots are not pot-shots, but actually cases where
it seemed to me that your argument was weak in a relevant way.
> Because mine is not the only opinion here and much though I have a desire
> to offer my arguments, I have no desire to offer them against a vacuum.
> I can't even tell if you are hearing me unless I see you do a more elaborate
> example than you do because I can't tell if (a) you are not reading what
> I wrote, (b) not thinking hard about it, (c) thinking hard about it but not
> articulating a perfectly sound alternate theory.
I'm certainly doing A and B; I'm a little too modest to do C.
> I feel a lot like a monkey in a box being prodded right now. "Oh, let's
> see if we can make him make a new sound." I do NOT like it.
Eek, that was *not* my intention. I want to understand the subject,
but that takes more than just taking someone's word for it. That's
why I argue back--because I want to test the *argument* (not you), and
to understand.
I'm sorry you abandoned reading just because you misunderstood my
intention.
Thomas
> Kent M Pitman <pit...@world.std.com> writes:
>
> > (flower:colors-among '(flower:rose people:rose flower:daisy))
> > => (FLOWER:RED FLOWER:WHITE)
>
> IIUC, what you're saying is,
>
> You can write simple, self-contained Scheme code to do X. However, when
> you start mixing in code from different sources, the things you have to
> do in Scheme to avoid name conflicts are not as clean or convenient as
> with CL packages.
>
> In this case, X happens to be symbolic processing. Couldn't you say
> the same thing for any X?
The identity-level issues are the same but central to the notion of a symbol
is interning, which folds two objects into the same object.
In other situations, as with (MAKE-INSTANCE 'MY-CONTAINER) the default is
not to fold but to separate.
Identity folding should not be done without a strong understanding of
the consequences of confusion. It comes down to a statistical
question of whether it is better in the face of no information to
assume two objects distinct or to assume them the same. I think
there's no way you can argue that people should defaultly assume every
object is the same object. (Though I'll listen while you try to make
such a case.)
So the question is really over names, and the question goes on in the presence
of an understanding that everyone likes short names. Folding all objects
named JOHN into the same space means every parent in the world has to name
their kid differently.  By separating namespaces, one can stay in the realm
of names but still refer to different objects. That's what packages do.
That's what the Scheme system does not do.
Yes, you can move to non-symbols, but then that's just to say that you can
move to a paradigm which doesn't do as much name-folding by default, which is
to say you're admitting that the name-folding was the problem, which is to
say you're admitting that the CL naming system is less prone to the problem
than the Scheme one, which is kind of losing the argument. Or so it seems
to me.
> It's interesting that CL packages partition
> symbol plists and not just their associated functions/variables, but
> we're still talking about the same concept.
No, we're only talking about identity.
The use of functions, variables, and plists is just felicity because
it means we have some sample operations to try. People seem to be not
understanding that functions and variables are already carefully described
by the language not to be "slots" so that you can store them outboard by
doing [approximately]:
(defun symbol-function (name)
(gethash name *the-functions*))
since many symbols don't have functions and vendors involved in the design
of the standard didn't want to require storage to be consumed by all symbols.
As such, we are talking only of identity, and nothing else. This is not
about symbol structure nor primitive operations. It might as well be
(defun get-favorite-movie (person-name)
(gethash person-name *peoples-favorite-movies*))
There is no structural difference other than that we supposedly had to
define one fewer function to make a coherent example. But since people
have managed to get confused by the choice of an existing function, I guess
we can't use short examples. (And I find that sad.)
Identity. Identity. Identity.
That is the only thing under discussion. Everything else is a derived
consequence of identity or an orthogonal issue.
> > - The information is manifest in symbols, a primitive data structure
> > in the language
>
> I'd like an example that stresses this feature of CL apart from the
> package system.
I've lost too much context because you elided stuff. I'm not going to go
digging, but if you dredge up more context and reask the question I will
try to answer.
> Regardless of the discussion among CLers about whether its package
> system is the best solution, I think it's a given that CL's package
> system is better than nothing, which is what R5RS Scheme has.
A happy outcome for me in this would be an agreement to just refer to
scheme symbols, at least in a language-neutral discussion, as
"keywords" and not really as "symbols". This discussion is starting
to look to me very similar to the question of whether C has integers.
I think Lisp has the moral right to say it has integers, and I'm happy
calling C's truncated integers ints. And these languages can call
their internal data structures whatever suits their language
community.  But when interchange is needed, I think either you have to
agree that no one may use the name that occurs in more than one place
or you must agree that someone has more moral right to it in an
interdisciplinary conversation than someone else... Of course, we're
not involving the Scheme people in this discussion, so I'm not likely
to get a useful agreement on terminology in this forum. But let's call
it part of the "consensus building" process toward some greater end.
> > The above programs are contrived, but I hope show enough meat that you
> > can see the problem. Try casting them into Scheme in whatever way suits you
> > and we can proceed to the next conversational step from there.
>
> I doubt whether Erann is likely to take you up on this, but with a less
> package-centric example I would.
I couldn't parse this.
> The interesting thing is that this separation of symbols will help you
> in all cases where symbols are used as data (either keys or values or
> whatever) in "global" data-structures, including, but not limited to,
> the property lists of symbols themselves (when the same symbol is
> shared between applications), but also other "global" lists, or
> hash-tables, or other data-structures.
Kent pointed out something broader--which I had missed until recently.
This is not just true of global data structures. It's also true of
data structures that the packages export to some third place, which
then combines them. That is, even if it is all really local, it still
works.
However, it seems to me that there is still a curious kind of "global
variable" thing going on here, though I can't quite put my finger on
it yet. I need to think more.
Thomas
> [Mathematicians reason by reducing a difficult problem to a specific set
> of problems with known computational complexity. So they say a problem
> is a "traveling salesman" problem even when it has nothing to do with that.
> This effectively avoids spinning one's wheels uselessly.
> I have, as a career philosopher engaged regularly in this kind of thing,
> sought to reduce philosophical problems to problems of known difficulty
> in the philosophy domain, and when I find one that is an unresolved paradox,
> I do not descend further into it conversationally for the same reasons.
> I think philosophers routinely do this, though I don't think they have good
> terminology for talking about the fact that they do or the thing it solves.]
As an actual career philosopher here:
Suppose one wants to talk about P, but it looks like one's opinions
about P are going to depend on (say) how one deals with the
other-minds problem.  And the other-minds problem is a famous rat's
nest.
So one thing to do is called "bracketing". You bracket the
other-minds problem, which means to leave it unexplored for the
moment, and see how far you can get in talking about P without
worrying about the deeper harder problem. This is close to the
operation you describe above.
However, thinking about the other-minds problem is part of the bread
and butter of philosophy--that is, bracketing is a strategy for making
progress in talking about P, but it's not a satisfactory strategy full
stop--because we don't mark areas "off limits" and say "you can't talk
about this one any more". Another way to put it is that there isn't
such a thing as an unresolvable paradox, really, only paradoxes that
haven't yet been resolved.
A lighter weight version of bracketing is noting that something is
controversial. I might say "ah, P is true, because the right solution
to the other-minds problem is XXX." But the person I'm talking with
will say "oh, but the other-minds problem is very controversial".
That's a signal that the conversation has two turns. One, I need to
prove XXX--and then we will probably never get back to P. Or, I can
try to demonstrate P without presuming the solution XXX to the
other-minds problem. If the other person does not *want* a discussion
of XXX, rather than leaving both options, they will say "let's bracket
XXX for a moment" (read, "forever").
For example, I think that the solution to the other minds problem is
nicely given by taking Turing's test seriously; I'm a
Turing-test-believer. If I'm talking to someone else who shares that
opinion, then we can assume it in talking about P--no need for
bracketing.
Thomas
> Yes, you can move to non-symbols, but then that's just to say that you can
> move to a paradigm which doesn't do as much name-folding by default, which is
> to say you're admitting that the name-folding was the problem, which is to
> say you're admitting that the CL naming system is less prone to the problem
> than the Scheme one, which is kind of losing the argument. Or so it seems
> to me.
When I think about this, my inclination is to say that for many of
these programming problems, the answer in Scheme is going to be "don't
use the symbol type". Indeed, that means the Scheme symbol type is
less suitable for certain kinds of computations.
But the Scheme advice is going to be to use things like procedures
instead for all these things. Now, you point out that whatever other
things you use might well have uglier print syntax--though this is not
inherently clear to me, and even if true, it doesn't really matter all
that much to me. I don't think it's a huge problem, or much of a
problem at all.
Of course, I'm not saying anything here more than a somewhat vague
sketch. It seems to me that in a language like Scheme one simply does
symbolic programming by using different (more flexible) datatypes than
Scheme symbols. I don't think any serious reduction in expressive
power is going on here, though, but I admit I have no good argument
for that at present other than my general intuitions and experience.
Thomas
> I'm not questioning your credentials.
I didn't think you were.
> I thought you had asked effectively
> for mine since your question seemed to ask "is it just the case that the
> other side is speaking out of ignorance".
No, that wasn't my intent. What did I say that gave you that impression?
> No quibble with your credentials here. I'm not suggesting you are not
> entitled to engage in this conversation in any way. I'm just suggesting
> you're wrong in point of fact if you think there is no valid defense of a
> package system. Packages and modules are orthogonal structures and both
> have a place. I'm not anti-module, I'm anti-module-qua-panacea.
I never said nor meant to imply that there "is no valid defense of a
package system." (I called it a "necessary evil", which is not a glowing
review, but not a condemnation either.) But the only reason I asked the
question that began this thread is that people I respect have a differing
view, and I want to understand why.
> > I feel fairly confident that I'm sufficiently non-idiotic that if
> > someone can't explain something in this area so that I can
> > understand it there's a good chance that it's their fault and not
> > mine.
>
> I use this same rule of thumb myself, so I can relate. Then again, I
> think you should not conclude from someone's inability to articulate a
> natural language rule to explain a philosophically complex behavior
> that they do not have deep-seated and real beliefs, nor even perhaps
> well-founded beliefs.
I have drawn no such conclusions. If I had I would no longer be
participating in this discussion. (If you want to see how I participate
in an arena where I *have* drawn such conclusions go look up my postings
on comp.lang.perl.)
> I have neither any doubt that
> my belief in packages is technically, not just emotionally or
> historically, well-founded.
Yes, I believe you, and that's what I want to understand.
> I'm happy to engage in this discussion with you on that basis.
Good! That's the basis on which I want to hold the discussion.
> I'm even happy for you to conclude "Because
> Kent and others cannot say what I need to hear, I will continue to
> believe what I believe."
My position is: because Kent and others have not explained it to my
satisfaction (yet) I am going to tentatively continue to believe what I
believe, but fully open to the possibility that I might be wrong. (It is
my hope that anyone engaging me in conversation is also open to the
possibility that they might be wrong, but that's not a prerequisite. What
I *do* insist on is that people not try to ascribe my lack of
understanding to some character flaw, like a refusal to listen or being an
idiot. That point of view I reject a priori. You certainly haven't done
that, Kent, but you are not my only correspondent.)
> (That is, I'm happy to live with the belief
> that you are wrong.  Heh... Which incidentally is not a statement that
> you are wrong, it is just a statement about my belief structure.)  I'm
> not happy for you to assert (and I hope you are not asserting, I'm
> just checking) that "The inability of various people to articulate a
> defense is a de facto proof that no such defense exists." or that kind
> of thing.
No. I just want the person on the other end of this conversation to
accept part of the responsibility for successful communication. I am
weary of Erik and Rahul who just rail at me for being an idiot when I
don't get it. Maybe I am an idiot, but being repeatedly told so just
doesn't help.
[Technical discussion begins here.]
[Much snippage]
> No. The critical value is the ability to TYPE the symbol without
> doublequotes or other impediment in most cases.
I presume you mean "type" as in the action of pushing keys on a keyboard,
not "type" as in data type.
This sounds like you're saying that being able to type:
'FOO
is significantly better (on *technical* grounds) than having to type:
(setf foo (make-symbol-object :name "foo"))
and then typing "foo" rather than "'foo" from then on.
Is that right?
[T alternative syntax description snipped]
> This is a computationally ok answer, but it is a refutation
> of any claim that the language is using "symbol" processing at that point.
OK, this tells me what symbol processing *is*, but I still don't
understand your reason for believing that symbol processing on this
definition is a Good Thing (tm). In fact, you seem to implicitly be
making the counter argument here. If the T way is "computationally OK" as
you say, what are the technical grounds for preferring symbol processing?
The amount of typing involved can be the same either way with appropriate
reader macros.
Have you read Drew McDermott's paper "Artificial Intelligence Meets
Natural Stupidity"? There he argues that AI gets itself into trouble by
attaching too much implicit meaning to the print names of symbols. This
is (I just realized) implicitly a general argument against what you seem
to be calling "symbol processing". Do you agree with that assessment? If
so, do you disagree with McDermott's argument? (BTW, McDermott is not the
mentor I alluded to in the posting that started this thread, though as my
thesis advisor's thesis advisor he has of course had a significant
influence on my own intellectual development.)
> FOO is a symbol. foo is a symbol. I don't mind typing 15\r or |15r|
> to preserve case. I don't mind typing tv:foo if I'm not in package TV
> to get to that symbol. But basically I want unimpeded access to the name
> as part of my syntax for literals if it's going to be what I think of as
> a symbol.
Why is it not enough to bind the symbol to a variable (or a global
constant) and refer to the symbol by the name of the variable?
> I don't mind quoting symbols. In Macsyma, I could do
> X:10;
> F(X,Y):=X+Y;
> F(X+3,'X+3);
> and get back X+16 which contains one symbol and one number.
Ah! This is an enlightening example. The crucial point here, it seems to
me, is that you can refer to the symbol whose name is the string "X" by
typing "'X", and you can refer to the value of that symbol by typing
simply "X". Is that right?
So consider the following alternative design. Define a reader macro, say,
=X, that expanded to (symbol-value X). (Note that there is no quote in
front of the X.) Now we write:
(defconstant x (make-instance 'symbol-class :name "X" :value 10))
(f x =x) ; == (f 'x x)
Is that "just as good"? If not, why?
E.
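For what it's worth, a rough sketch of the =X notation Erann describes above
(SYMBOL-CLASS, its slots, and the reader macro are all hypothetical, taken
from his description; note that taking over #\= also steals it from the
standard numeric = at read time, so a real version would install this in a
readtable of its own):

(defclass symbol-class ()
  ((name  :initarg :name  :reader sym-name)
   (value :initarg :value :accessor sym-value)))

;; =FORM reads as (SYM-VALUE FORM).  CL:SYMBOL-VALUE only works on real
;; symbols, so the accessor stands in for it here.
(set-macro-character #\=
                     (lambda (stream char)
                       (declare (ignore char))
                       (list 'sym-value (read stream t nil t))))

(defconstant x (make-instance 'symbol-class :name "X" :value 10))
;; (f x =x) now passes the symbol object and its value, roughly the
;; analogue of (f 'x x) in ordinary Lisp.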
> "Pierre R. Mai" <pm...@acm.org> writes:
>
> > The interesting thing is that this separation of symbols will help you
> > in all cases where symbols are used as data (either keys or values or
> > whatever) in "global" data-structures, including, but not limited to,
> > the property lists of symbols themselves (when the same symbol is
> > shared between applications), but also other "global" lists, or
> > hash-tables, or other data-structures.
>
> Kent pointed out something broader--which I had missed until recently.
>
> This is not just true of global data structures. It's also true of
> data structures that the packages export to some third place, which
> then combines them. That is, even if it is all really local, it still
> works.
That is what I label under "global", or "shared" as I've also put it
(since a resource that is shared between two, even unrelated, packages
might not necessarily be available just for everyone).
> However, it seems to me that there is still a curious kind of "global
> variable" thing going on here, though I can't quite put my finger on
> it yet. I need to think more.
And this global vs. local thing is completely unrelated to the issue
of global vs. local variables. E.g. a hash-table can be a global, or
shared resource, even if it is only ever bound to a lexical variable,
like in:
(let ((my-secret-hash (make-hash-table)))
  (defun add-definition (name definition)
    (setf (gethash name my-secret-hash) definition))
  (defun lookup-definition (name)
    (gethash name my-secret-hash)))
Given that other packages/programs are given access to add-definition
and lookup-definition, the hash-table bound to my-secret-hash becomes
a shared or possibly global resource. The package system will ensure
that people in different packages can use symbols as definition names,
without having to worry that they trample on other people's
definitions.
_Furthermore_ the package system even allows distinct packages to
cooperate in an orderly fashion, allowing one package to see the
definitions of another package for some things, but not clash with
other things.
You can either disallow shared data structures, or disallow interned
symbols (i.e. people will have to explicitly share gensyms/uninterned
symbols/strings in order to both refer to the same symbol), or use a
package system to deal with such a situation. I'm very certain that I
don't want to live with the former two "solutions".
[ Aside: Many CL programmers would prefer making the hash-table in the
above available through a special variable, instead of hiding it
away inside closures, aiding interactive debugging ]
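To make the point concrete, a tiny usage sketch of the code above (the
package names FOO and BAR and the registered values are made up): two
packages can both register something under the name THING without clashing,
because FOO::THING and BAR::THING are distinct symbols.

(defpackage :foo (:use :cl))
(defpackage :bar (:use :cl))

(add-definition 'foo::thing "foo's definition")
(add-definition 'bar::thing "bar's definition")

(lookup-definition 'foo::thing)  ; => "foo's definition"
(lookup-definition 'bar::thing)  ; => "bar's definition"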
> ... It seems to me that in a language like Scheme one simply does
> symbolic programming by using different (more flexible) datatypes than
> Scheme symbols. ...
It's fine to do this, but then you are inventing at the user level
things that the compiler won't be able to recognize idiomatically and
help you with using operators and data layouts that are primitively
efficient.
I feel about this the same as I felt in Maclisp when people used to
do copy-tree by (subst nil nil x) or in Teco when people would compute
absolute value by using -3^@^@ [don't ask--just take my word for it
that control-@ was not intended to do absolute value]. There is a
difference between implementation and expression, and CL allows me to
express my need, not just implement my need. I don't want to substitute
an alternate implementation when I can say directly my desire and let
CL do the data layout and management.
> I don't think any serious reduction in expressive
> power is going on here,
Yes, it is. The shift from expressive tasks to implementation tasks
is the essence
of the difference between Lisp and other languages for me. Every time you
chip away at it, you chip away at Lisp's power. I said before that this
is not about Turing power, and this remark of yours is an appeal to Turing
power. Expressive power is exactly about concise targeted notation.
Suppose we forget Lisp for a moment and talk only Goedel numbers.
Your argument is the same as saying "you don't need a low-numbered
function, since a higher numbered function will work just as well".
But if you really think this, then you don't understand why foo.com is
more valuable as an intellectual property than
thisnameshouldsellforjustasmuchasfoodotcomdoes.com.  They may be Turing
equivalent (each just a pointer) and you might claim there is more
"character" in the longer name, but expressiveness of languages is
about choosing the right things to be short. What Goedel numbering
tells us is that there are a finite number of short names and after
that there are only long names. Expressiveness is about keeping
programs short in the face of that.  It's a form of Huffman coding, if
you will, but the game is not linearly arranged so you don't always
recognize it thus.
> though, but I admit I have no good argument
> for that at present other than my general intuitions and experience.
I'm not denying you have intuitions, and I value intuitions, but
intuitions cannot win the day. They inform goalsetting but they
cannot dictate goalsetting without forcing bad decisions.
As to experience, its power and relevance is only correctly judged in
the context of a correct (possibly "subjectively correct" and
"subjective judgment", alas) understanding of the problem. Let me be
concrete about this subjectivity: It happened several times that I
came up against people in the design of ISLISP who were national
representatives much older and "more experienced" by some metric.
Certainly they had been programming longer. But I had done 10 years of
Lisp language design, and they had not. And moreover, I had been
present for the design, deployment, and later bug reports of several
dialects of Lisp, something they had never done. Those people brought
important experience that I did not have but also vice versa. We
learned to inform and respect each other.
Although a standard resume would have shown them to simply be "more
experienced than I". I have come to understand experience as
massively a multi-dimensional issue. People are used to this
distinction in resume-writing when they say "n years experience in
buzzword-37" but they are not used to making this distinction at a
finer granularity. But I think the fact that they are not articulate
about their microexperience doesn't mean it doesn't matter. Whether
or not you have experience in general is not in question; whether or
not you have experience at all in the issues I'm raising is an open
question. (You might legitimately say the same of me. I don't mean
this to be insulting in any way. I would not be insulted by the same
statement said back.) The conversationally tricky part is getting to
a point of view where people even agree on the problem.
... And, incidentally, part of the problem in so doing is that words
we use in English have delayed attachment of semantics until the point
of use; they might do better by early attachment of semantics
according to package systems so that more refined uses of words (or,
as I once in a while do in the HyperSpec because I know there is only
one glossary of relevance, subscripted uses of words) are possible.
We say we are all talking about symbols, but we trip over our failure
to say cl:symbol or (scheme symbol) in the conversation in order to
disambiguate whose notion of symbol we mean... So your experience in
having navigated this conversation, and its complexity, due to
late-binding and the absence of an English-based package system should
perhaps tell you that a package system would be helpful. But somehow
it doesn't. So maybe you mean something different about experience
than the experience I see you having. Getting a foothold for an
intelligent conversation is much harder than people generally
acknowledge. They often prefer to ascribe the difficulty to people
being obstinate.  It's often more fun to punch someone out than to
ask them for a detailed explanation of how they define all the words
they're using. (And if you're like Clinton and helpfully volunteer
that you've observed a difference in your own meaning of "is" and the
meaning that others are inferring, people don't say "thank you", they
accuse you of obfuscating... go figure.)
It is not my impression that that is what Erann does. I don't always agree
with his approach but I am interested in his opinions and his accounts of
his personal experiences.
> > > He seems to not like actually using it, then.
>
> > So he must like everything about it, or he's against it?
>
> No, but he doesn't seem to like anything about it that he couldn't get
> elsewhere, along with other features that he would like better.
I think that sentiment is more than being overly sensitive to
criticism/confusion/disagreement. (Just MO)
--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
> Because mine is not the only opinion here and much though I have a desire
> to offer my arguments, I have no desire to offer them against a vacuum.
But there isn't a vacuum here. I read this
newsgroup every day with the hope of reading code
or practical examples of CL solutions and I must
say that your last three posts (that I have
bookmarked) have given me a lot of food for
thought.
It's considered bad netiquette to post "me too" or
"I agree" on USENET, so I don't, but maybe this
post makes you feel somewhat better. You helped at
least one newbie to better understand CL.
--
Eduardo Muñoz
The discussion has been all over the map. I've been trying to limit
my comments to *only* those things about the package system that have
bothered me. In Erann Gat's original post, he says
``So I've always considered packages as more of a necessary evil than a
feature. I was told that this view was common in the CL world.''
That is pretty much my opinion, and that's what I said initially.
> You've raised a lot of problems but you haven't said what featurism you
> expect. As long as your only complaint is that it can be used wrong, it's
> hard to respond to. If you are looking for it to keep you from doing a
> certain thing or allow you to do a certain thing, it would be easier to
> respond to. As detailed as you've been, I still can't generalize from
> your specific examples to anything other than "this isn't good for me"
> and so I can't do much to help.  Even a guess at what would be better would
> again make the discussion concrete for me.
Well, I cited a number of anecdotes that have made me say `damn this
@#$%! package system!' What I'd prefer is for the package system to
do what it is doing now, but without the annoyances. Yes, it's a
subjective problem, and yes, the solution is hard.
> I think honestly that you were wrong when you said you aren't arguing for
> Scheme.  Scheme has a solution (with its incumbent problems--they are not
> being inconsistent per se, they are just trading one problem for another).
> I don't like the problems created by their solution. But you sound like
> you would.
Is there a post where I said `Scheme solves the problem'?
And what is the Scheme solution? Scheme48 has a module system.
MzScheme has a different one. MIT-Scheme has yet another.
> > > In general, I think the
> > > problem is more that CL does not seek to handle this problem than that
> > > CL would not be capable of handling this problem with available data
> > > structures if it chose to provide you the necessary operators.  I don't
> > > see a data structure problem here; perhaps an API weakness.  But new APIs
> > > can be written without changing the language spec.
> >
> > Well, I'm not suggesting a change to the language spec. I just want
> > to not see spurious `name conflict' errors.
>
> The errors are not spurious. They happen for a reason. I'm sure you
> realize this, but I'm not sure you see the significance of realizing this.
Well, when I type
=> foo
Unbound variable `foo'
=> (use-package "FOO")
Error: symbol 'foo' already exists
This, to me, is a spurious error. I made a single typo, but I have
to make two trips through the error handler?
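For what it's worth, the usual way out of that particular trap (a sketch
following the transcript above; many implementations also offer a restart on
the conflict error that does the equivalent) is to unintern the symbol the
typo created before retrying:

=> (unintern 'foo)        ; discard the FOO the typo interned here
T
=> (use-package "FOO")
T
=> foo                    ; now reads as FOO:FOO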
> > No. The critical value is the ability to TYPE the symbol without
> > doublequotes or other impediment in most cases.
>
> I presume you mean "type" as in the action of pushing keys on a keyboard,
> not "type" as in data type.
Yes.
> This sounds like you're saying that being able to type:
>
> 'FOO
>
> is significantly better (on *technical* grounds) than having to type:
>
> (setf foo (make-symbol-object :name "foo"))
Yes.
Though actually you meant to say
(or (find-symbol-object-named "foo")
    (make-symbol-object :name "foo"))
or perhaps just
(intern "foo")
since that's what INTERN does. As to whether a SETQ is done or not,
that's a context thing in code; because of INTERN, the SETQ is sometimes
not needed.
> and then typing "foo" rather than "'foo" from then on.
> Is that right?
Yep.
> [T alternative syntax description snipped]
>
> > This is a computationally ok answer, but it is a refutation
> > of any claim that the language is using "symbol" processing at that point.
>
> OK, this tells me what symbol processing *is*, but I still don't
> understand your reason for believing that symbol processing on this
> definition is a Good Thing (tm).
Because although designed before there was an issue over this, its importance
came to be understood later as a counter-theory to what we came to know
as "procedurally embedded data". The more information one can put into
data, the less is in the code. That in turn leads (not all at once but in
many tiny design steps) to the notion of the program that is reprogrammable
without recompilation. Every time you pull a decision back into the code,
you must recompile the code in order to change that decision. I'm not saying
that every datum in a program should be symbolic, but I am saying that in the
cases where people make that information symbolic, there is substantial
opportunity to revise the program behavior by merely pushing symbols around.
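A generic illustration of that last point (the example is mine, not from the
thread): when the dispatch information lives on symbols as data, behavior can
be extended without touching, or recompiling, the code that consumes it.

(setf (get 'circle 'area-fn) (lambda (radius) (* pi radius radius)))
(setf (get 'square 'area-fn) (lambda (side) (* side side)))

(defun area (shape argument)
  ;; dispatch is driven entirely by data stored on the symbol
  (funcall (get shape 'area-fn) argument))

(area 'circle 2.0)  ; => ~12.566
;; adding a new shape later is just more data; AREA itself never changes:
(setf (get 'triangle 'area-fn) (lambda (base*height) (/ base*height 2)))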
> In fact, you seem to implicitly be
> making the counter argument here. If the T way is "computationally OK" as
> you say, what are the technical grounds for preferring symbol processing?
There were a lot of programs in the mid-1980's at the advent of AI
Winter where the assertion was made "this program can be translated to
C++ therefore C++ is better". But there was never any proof that
those programs could have arisen in C++. It's _often_ the case that
once you have gone through design and reached stability, you are going
to have a good chance that knowledge can be procedurally embedded for
better effect. I think this is what Newell and Simon meant by
practice effects. I think if you look even at massively reflective
systems like 3lisp, you'll see that the reflectivity is not used most
of the time and that in those cases 3lisp will compile down to the
same kind of code as other languages will, or at least, I understand
that to be the thesis. But the ability to compile 3lisp out of a
deployed program is not a proof that 3lisp has no place. The ability
to play tennis better after the mechanics are optimized into something
rigid and non-symbolic is not proof that tennis is better learned
mechanically than abstractly. Symbol processing is a tool for
abstract manipulation and it's hardly surprising on a case by case
basis that you can work the symbolic processing out in the end. What
would be surprising is if you could show that FORTRAN or C could just
as well have been a straightforward implementation strategy for AI
systems.
> The amount of typing involved can be the same either way with appropriate
> reader macros.
Not unless you make A, B, C etc be reader macros, IMO. But even then,
the point is that you will only make an isomorphism to symbols at that point.
Show me a worked example and I'll be clearer.
> Have you read Drew McDermott's paper "Artificial Intelligence Meets
> Natural Stupidity"? There he argues that AI gets itself into trouble by
> attaching too much implicit meaning to the print names of symbols. This
> is (I just realized) implicitly a general argument against what you seem
> to be calling "symbol processing". Do you agree with that assessment?
If I've read it, I don't recall. But I am not arguing for what symbolic
processing is useful for, except as an auxiliary issue when you ask if
one could do without it. I don't see that as what this main debate is
about. I am arguing for what it _is_, and I can't imagine that Drew can
make his arguments without first acknowledging a framework such as what
I'm saying. I'm sure there are people who don't like symbol processing,
but that's different than saying those people should be in charge of what
symbol processing is. That's like putting C people in charge of Lisp design.
Design and control of terminology and tools should be done by advocates,
not opponents. It's too easy to sabotage a facility you don't believe in.
The question is whether an advocate can do something fun with it.
> If so, do you disagree with McDermott's argument?
I have no info. Is it webbed? If not, maybe he'll volunteer to either
web it or physmail me a copy.
> (BTW, McDermott is not the
> mentor I alluded to in the posting that started this thread, though as my
> thesis advisor's thesis advisor he has of course had a significant
> influence on my own intellectual development.)
Drew was at Yale when I was there designing T. He's a smart guy with some
interesting perspectives on things, which is why I'd be interested in the
paper, whether it disagrees with me or not.
Incidentally, nothing I've said suggests that a reliance on symbol names
is a good thing. So I might not disagree with him. I keep saying "identity".
As it happens, symbols have a name, and that's important to obtaining the
identity. But that's all bootstrap and coding. What the program operates
on at runtime is not the name, but the pointer id. The significance of the
name is not that it has _that_ name, but that it has _a_ name; that is,
that it _can be named_. This is a different issue entirely than the question
of whether (concatenate 'string 'foo- x) is going to work; that is a use of
a symbol as a designator for a string, and I personally tend to use
(concatenate 'string "FOO-" x). I don't see that as symbol processing since
a symbol was never used for its identity here, only its name. If you
substitute (concatenate 'string 'tv:foo- x) you get the same answer.
But if I do (get 'tv:foo- 'author) or (get 'foo- 'author), it matters what
package the various key symbols are, not by their name but by their identity,
and the effect is not changed if I later do [which isn't allowed in CL, but
you get the idea]: (setf (symbol-name 'tv:foo-) "BAR-"). The symbol that
was previously TV:FOO- and is now TV:BAR- will still be the same identical
symbol and all prior operations on it will be the same; it may just be harder
to find, in exactly the same way that (3 4) will be harder to find in an
EQUAL hash-table if you SETF its CAR to 5.
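Two tiny illustrations of that identity point (the values stored are made up):

(setf (get 'foo- 'author) "someone")
(get 'foo- 'author)                    ; => "someone", found by identity, not by name

;; And the EQUAL hash-table analogy: store under the key (3 4), then mutate
;; the key.  The standard leaves the consequences unspecified, and in
;; practice the entry becomes hard to find.
(let ((table (make-hash-table :test #'equal))
      (key   (list 3 4)))
  (setf (gethash key table) 'found)
  (setf (car key) 5)                   ; the key object is now (5 4)
  (gethash '(3 4) table))              ; => NIL -- effectively lost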
> > FOO is a symbol. foo is a symbol. I don't mind typing 15\r or |15r|
> > to preserve case. I don't mind typing tv:foo if I'm not in package TV
> > to get to that symbol. But basically I want unimpeded access to the name
> > as part of my syntax for literals if it's going to be what I think of as
> > a symbol.
>
> Why is it not enough to bind the symbol to a variable (or a global
> constant) and refer to the symbol by the name of the variable?
Because there is no operational way to tell there is a symbol there.
And that's not what symbol processing is about. If I could find that
symbol independently and use its identity as a key to other operations,
then the symbol is a symbolic name for some other data.
> > I don't mind quoting symbols. In Macsyma, I could do
> > X:10;
> > F(X,Y):=X+Y;
> > F(X+3,'X+3);
> > and get back X+16 which contains one symbol and one number.
>
> Ah! This is an enlightening example. The crucial point here, it seems to
> me, is that you can refer to the symbol whose name is the string "X" by
> typing "'X", and you can refer to the value of that symbol by typing
> simply "X". Is that right?
It is the notion of interning. And the notion that this name-reference
accesses an object that will be compared by identity, not by string, even
though the access to get it in the first place was by name. In DB terms,
it is the object, not the name, that is the foreign key to the database
reference backing up SYMBOL-VALUE if you store your values in a database.
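The interning point in two lines, as a sketch:

(eq (intern "X") (intern "X"))            ; => T   -- one shared object
(eq (make-symbol "X") (make-symbol "X"))  ; => NIL -- same name, distinct objects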
> So consider the following alternative design. Define a reader macro, say,
> =X, that expanded to (symbol-value X). (Note that there is no quote in
> front of the X.) Now we write:
>
> (defconstant x (make-instance 'symbol-class :name "X" :value 10))
In this case, all you've done is haired up the syntax of symbolness needlessly.
That doesn't make it a non-symbol. But it does make your language "designed
badly" if you want it to be a symbolic processing language.
You've left out the interning issue, which is central. I'll continue as
if you had not.
> (f x =x) ; == (f 'x x)
>
> Is that "just as good"? If not, why?
MDL (the language in which Zork was programmed) does this, I think.
I'd have to study it further to see if I think it a Lisp but I'm not
informed enough.
But certainly a problem with this syntax is that you need (f x =x) to
be quoted data because otherwise you have no way to denote the list
containing F, X, and =X. If (f (f x =x) x =x) means not to evaluate
the first arg, then why does the outer form get evaluated? And if the
inner form _is_ evaluated, then you'll need a quoter to inhibit it. As
in (f '(f x =x) x =x). But that means 'x and x are the same thing.
And that means that '=x has to be explained. Or else you have to have
=(f (f x =x) x =x) which is really kind of bizarre if you have to always
be forcing evaluation. My intuition from afar was that MDL addressed this
by the <> vs () distinction. Doing
<LET ((X <+ .Y 3>)) <+ .X 4>>
or some such thing. [I vaguely recall some things that tell me I'm goofing
this up, but it's close enough for you to see what I'm getting at.]
CL has this thought through. I'm sure MDL did, too. But I don't
think you can just change this one aspect of the language and make it
work. Languages are ecologies and you must often redo the entire
ecosystem when you replace one major item in the main food chain with
another.
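Just to make the =X proposal concrete, here is one possible sketch, hosted on
ordinary CL symbols and SYMBOL-VALUE rather than the proposed SYMBOL-CLASS
accessor; the choice of #\= as a macro character is purely illustrative, since
it would steal the name of the standard = function at read time -- which is
itself a small instance of the ecology problem:

(set-macro-character #\=
  (lambda (stream char)
    (declare (ignore char))
    ;; =X reads as (SYMBOL-VALUE X), with no quote in front of the X.
    (list 'symbol-value (read stream t nil t))))

;; Per the proposal, the variable would hold a symbol object:
;;   (defvar x 'some-symbol)
;;   (setf (symbol-value 'some-symbol) 10)
;;   =x   reads as (SYMBOL-VALUE X) and evaluates to 10
;;   x    evaluates to the symbol SOME-SYMBOL itself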
But whether you can do this or not doesn't really get at the heart of the
problem which is, "what looks good". Ultimately, I want to support both
of these things:
[1] (ELIZA '(I AM SAD))
=> (WHY DO YOU SAY I AM SAD)
In this case, I want symbols to look syntactically as simple as
possible so that I can talk to people who don't understand Lisp
and just say "ignore the parens". It is _MUCH_ harder to talk to
a non-programmer and say that
=(ELIZA (LIST =I =AM =SAD))
=> (#S(SYMBOL NAME "WHY") #S(SYMBOL NAME "DO") #S(SYMBOL NAME "YOU") ...)
is something to tolerate. It might work implementationally but I don't
find it evocative. It also doesn't let me see quickly that
#S(SYMBOL NAME "FOO") might _not_ be EQ to another #S(SYMBOL NAME "FOO")
produced by GENSYM. At least in CL, you can do #1=#:FOO in the printer
to clarify both EQ relations among gensyms and _non_-EQ relations between
gensyms and interned symbols. (There's a small sketch of this after [2] below.)
[2] Complex inter-relationships between things constructed in the simplistic
mode illustrated in [1] but where I need to compose symbols from various
packages with conflicting meanings into data structures whose form is
pre-specified. The simplest example is a Lisp program itself, being:
(hardcopy:print (with-output-to-string (str) (cl:print x str)))
Here, there are two meanings of PRINT, one that works on paper and one
on virtual streams, but no confusion results. I can't simply say
((hardcopy print) (with-output-to-string (str) ((cl print) x str)))
because the syntax is pre-defined and that's not valid syntax, and
because that's not my goal syntax because of my desires in [1].
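A couple of sketches of both points. For [1], the printer behavior mentioned
above (the output shown is typical; the exact gensym counters will differ):

(let ((a (gensym "FOO"))
      (b (gensym "FOO"))
      (*print-circle* t))
  (print (list a a b)))
;; prints something like (#1=#:FOO386 #1# #:FOO387) --
;; the two occurrences of A are visibly the same object, and B visibly is not.

For [2], one way the two PRINTs can coexist (the HARDCOPY package and its
PRINT are hypothetical stand-ins):

(defpackage :hardcopy
  (:use :cl)
  (:shadow #:print)
  (:export #:print))

(in-package :hardcopy)

(defun print (object &optional stream)
  ;; Pretend this drives a printer; here it just hands off to CL:PRINT.
  (cl:print object stream))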
> > > Well, I'm not suggesting a change to the language spec. I just want
> > > to not see spurious `name conflict' errors.
> >
> > The errors are not spurious. They happen for a reason. I'm sure you
> > realize this, but I'm not sure you see the significance of realizing this.
>
> Well, when I type
> => foo
> Unbound variable `foo'
>
> => (use-package "FOO")
> Error: symbol 'foo' already exists
>
> This, to me, is a spurious error. I made a single typo, but I have
> to make two trips through the error handler?
But there _is_ a reason for this. When you do
(defun bar (x) (foo x))
(defun foo (x) (+ x 3))
(bar 4)
you get a useful result. That is, when you forward-reference a symbol,
it creates it. It preserves the illusion that all symbols always exist
by demand-creating any symbol upon mention.
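A small demonstration of that demand-creation, assuming a fresh package in
which no FROB symbol exists yet (FROB is just an illustrative name):

(find-symbol "FROB")        ; => NIL, NIL         -- nothing there yet
(read-from-string "frob")   ; merely READing the name interns it
(find-symbol "FROB")        ; => FROB, :INTERNAL  -- it exists now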
I dunno if you use any speech recognition software, but it's often
incumbent upon you to correct it when it gets something wrong,
because if you don't, you're going to get it thinking it has
confirmation it did the right thing. Same deal here.
You said FOO and you didn't immediately do (UNINTERN 'FOO). That means
CL took it as a given that there should be a symbol FOO there. From
the tiny window that INTERN is looking through, there is no difference
between that use of FOO and the use of FOO in the DEFUN of BAR above.
Later, when you (USE-PACKAGE "FOO"), you are not just saying "get a package
FOO" [which I assume from your context has a symbol FOO in it that is
defined as a variable]. You are saying consolidate my uses with the FOO
package's uses. At the level of the package system, FOO has been used, but
the reason for the use is not recorded. It can't tell that you haven't typed
the three forms that I offered above, and it can't tell that FOO is not
important for its identity. So it doesn't want to discard the problem.
In my papers on error handling, such as
http://world.std.com/~pitman/Papers/Exceptional-Situations-1990.html
I make the point that an error is a situation where there are multiple
pending continuations (to use Schemish terminology) and you can't from
available program data select the right one. Advice is required and the
second error break seeks advice. The call to USE-PACKAGE does not know
how substantial your prior use of FOO is and must ask rather than do
the wrong thing some statistical number of times. People might not report
the quiet failure I mention, because people tend to report problems that
land them in the debugger and not things that don't, which is all the more
reason to enter the debugger.
You might think it could do (AND (NOT (BOUNDP X)) (NOT (FBOUNDP X)) ...)
to tell. But this assumes there are a finite number of such tests.
If you believe me when I say that BOUNDP and SYMBOL-VALUE are just accesses
to hash tables provided by the system, but that there are an unlimited
number of things the user might have used this as a key for, then you know
only a full GC could possibly keep you from losing. (And that GC would
probably fail within 3 command lines from the failure since +, ++, +++,
*, **, or *** probably refer to the FOO symbol so it wouldn't get quickly
GC'd without your help.)
Curiously, _sometimes_ the reason you do not do (UNINTERN 'FOO) immediately
after typing the stray FOO is you may not be sure that FOO is something
you want to unintern. Maybe someone _does_ have a pointer and maybe your
doing the unintern will break things. So if you are wary about calling
UNINTERN randomly, why would you want a simple program like USE-PACKAGE
with far less brainpower and visual scope than you have to do it?
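For completeness, the two manual ways out of the conflict, using the package
and symbol names from the example; which one is right is exactly the judgment
call USE-PACKAGE refuses to make for you:

;; 1. The stray FOO really was a typo and carries nothing of value:
(unintern 'foo)       ; reading this form mentions FOO again, but UNINTERN then removes it
(use-package "FOO")   ; nothing left to conflict with

;; 2. Or explicitly prefer the other package's symbol, typo or not:
(shadowing-import (find-symbol "FOO" "FOO"))
(use-package "FOO")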
Erann,
When it comes to programming languages I try not to attach words like evil
to them. As people have pointed out, the package system is in many ways
necessary. When it was decided to have one, the people who put the work into
developing it (I am 99% sure) went through the thought processes,
discussions, and value judgments that this thread is discussing. The package
system is what they developed.
I can think of three quick approaches to learning about packages:
1) Using it, getting familiar with it, and trying it out in real situations,
to get a feel for what the designers had in mind.
2) Trying to develop a package system yourself. With this approach you get
immediate insights into the ideas and problems behind it.
3) Teaching the subject. This forces you into actually learning the
material so you do not fall apart when the first challenging question comes
around.
Most of the time, when I have sat down and seriously applied these methods
to a real-world product, it becomes evident that the designers have thought
about it. It dawns on me that they actually know what they are doing.
Imagine if something you had put your sweat and thinking into was brought up
for comment on a newsgroup, not just once but year after year after year. How
would you feel? This is the kind of pressure that is put on the people who
put CL together. I am sure they would appreciate intelligent questions on
their ideas, or even better, someone to build and improve on them.
It is more respectful to strive to comprehend via the above three methods
(and then to ask questions) than to ask critical questions based on just
curiosity.
Wade
> The significance of the name is not that it has _that_ name, but
> that it has _a_ name; that is, that it _can be named_.
"His name is Robert Paulsen."
You know, in a way, that's very relevant to this discussion. :)
--
-> -/ - Rahul Jain - \- <-
-> -\ http://linux.rice.edu/~rahul -=- mailto:rj...@techie.com /- <-
-> -/ "Structure is nothing if it is all you got. Skeletons spook \- <-
-> -\ people if [they] try to walk around on their own. I really /- <-
-> -/ wonder why XML does not." -- Erik Naggum, comp.lang.lisp \- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
(c)1996-2002, All rights reserved. Disclaimer available upon request.
I think that finding things that look like syntactic tokens
in a file and making notes about them is a fine thing for an editor
to do. In the cases where the code you were editing *was* already
read into the system, the interning of symbols caused no problems and
enabled some really fancy features.
>
> You know, I had a similar problem with Zmail a few years back when I revived
> my old mail files (from the mid 1980's) from backups and it tried to parse
> and intern all the domain names. You may or may not be aware (heh) that
> the host names have changed since then. I got a lot of failed parses for
> missing hosts like MIT-MC.ARPA or even PLANTAIN.SCRC.Symbolics.COM and it
> was a mess. Does this mean the domain name system is wrong? Or does it mean
> that some editing programs that are not actually trying to talk to the net
> yet shouldn't be prematurely doing host lookup just based on the fact that
> they saw some plausible looking text go by? Because I think the latter.
The latter, obviously. But to draw a symbol analogy, suppose that by
attempting to resolve `PLANTAIN.SCRC.SYMBOLICS.COM' it *created* the domain
rather than generating an error if it was not found. Would you still
say the problem is with premature lookup?
[Lots of Zmacs stuff elided]
Well, mumble. We both agree that Zmacs did things wrong if it
tried to intern symbols in packages that were not set up.
But when you load the init file, the reader ought to fail because
there is no package named `java-tools' at the time the reader reads
the init file. Never mind that java-tools:make-all *will* be a
fine thing to invoke *after* we load "java-tools.lsp". (I have verified
this on ACL and Corman.)
> > interning the name "FOO" in package "BAR" would yield 'FOO::BAR, yet
> > in another interning the name "FOO" in package "BAR" would yield
> > 'SYS:BAR. There were degenerate cases where you could end up with
> > a symbol associated with a `non-existent' package.
>
> You mean with package not found errors, like in any package universe that
> was absent a package, no?
Not exactly. Conventionally, if you evaluate this:
(find-package (symbol-package 'foo)) => (some package)
but you could end up with interned symbols with valid packages whose
names could not be found.
I understand this. What the package-system doesn't understand is that
when I mentioned `foo' the first time (and got the unbound variable)
I *didn't* want that symbol. (Yeah, I know it cannot read
my mind.)
> It preserves the illusion that all symbols always exist
> by demand-creating any symbol upon mention.
The illusion is imperfect. When I (use-package ..) I can tell if
a symbol had been created on demand or not: I get errors for those
that were.
>
> You said FOO and you didn't immediately do (UNINTERN 'FOO). That means
> CL took it as a given that there should be a symbol FOO there. From
> the tiny window that INTERN is looking through, there is no difference
> between that use of FOO and the use of FOO in the DEFUN of BAR above.
From the tiny window *I* am looking through, I can't tell if my typo
interned a new symbol or found an existing one. I can't just (UNINTERN 'foo)
without knowing this.
> Curiously, _sometimes_ the reason you do not do (UNINTERN 'FOO) immediately
> after typing the stray FOO is you may not be sure that FOO is something
> you want to unintern. Maybe someone _does_ have a pointer and maybe your
> doing the unintern will break things. So if you are wary about calling
> UNINTERN randomly, why would you want a simple program like USE-PACKAGE
> with far less brainpower and visual scope than you have to do it?
No, I don't want a DWIM. What I do want is something that is a bit
more clever. Consider this behavior: when I make a typo and get an
unbound variable error, why not have the system unintern the symbol
just created if and *only if* it was created by the typo itself.
Or this: When loading a fasl file (with appropriate switches)
the following error appears:
The symbols 'frob', 'tweak', 'twiddle', and 'transmogrify' are
not currently visible in package FOO, but package UTILITY exports
symbols of these names.
1: IMPORT just those symbols from UTILITY.
2: USE-PACKAGE UTILITY.
3: Create new symbols in package FOO.
These two behaviors (and note that I'd like the latter to be a user-controlled
switch so no one *has* to monitor the load process
looking for these things) would greatly improve my relationship with the
package system.
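As a rough sketch of the first behavior (a toy helper, not part of any
implementation, and assuming the input is a single bare symbol name typed at a
read-eval loop):

(defun careful-eval-name (name &optional (package *package*))
  (let* ((pre-existing (find-symbol (string-upcase name) package))
         (symbol (intern (string-upcase name) package)))
    (handler-case (eval symbol)
      (unbound-variable (condition)
        ;; Undo the intern only if this very input created the symbol.
        (unless pre-existing
          (unintern symbol package))
        (error condition)))))

So (careful-eval-name "fooo") still signals the unbound-variable error, but
leaves no stray FOOO behind.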
What an interesting day. I decided to take a usenet break this weekend,
and I got up this morning to find 200+ unread postings in c.l.l. It took
me three hours just to make an initial pass through them to select the
ones that I wanted to go back and study and think about in detail and
maybe respond to. There turned out to be about twenty of those, so I
thought I would consolidate my responses in two posts.
I decided on two posts because there are two classes of responses I want to
write. The first is technical. The second is responding to posts from
people expounding on what I think, and making various kinds of accusations
against me. This is the non-technical response.
I must say that it has been absolutely fascinating watching myself be
discussed in the third person. It's been almost as educational as reading
the technical discussions (from which I have learned a lot already).
Since I happen to believe that I am somewhat of an authority about what I
think, I thought I would weigh in on some points of this discussion.
* Rahul Jain:
> You seem to complain about it more than enough and intentionally
> misunderstand it enough. And as one of the basic principles of Common
> Lisp, a dislike of packages implies a dislike of Lisp and symbolic
> processing itself.
...
> He seems to not like actually using it, then.
As I have said before, I like CL very much. I also like using it. It is
by far my favorite programming language and has been for over twenty
years. CL has been very good to me. In fact, my affection for CL, the
reasons behind it, and why that affection has been somewhat attenuated
recently, is documented in some detail in the post that started the "How I
lost my faith" thread.
Nonetheless, there are things about CL that I don't like, and there are
things about CL that I don't understand. Effecting change in those
things I don't like and achieving understanding about those things I don't
understand has to start with a question that has at least implicit within
it some level of discontent. If everything were perfect there would be
nothing to say. (Likewise if everything were hopeless.)
It is truly ironic hearing people here in c.l.l. tell me and others how
much I dislike CL. At JPL (and Google) my reputation is the exact
opposite. I am the Lisp Freak, the one who thinks Lisp is perfect and the
Answer to Everything. My enthusiasm for Lisp has actually been
detrimental to my career.
The truth is you're all wrong. I'm not a CL fanatic, and I'm not a CL
hater. And if you want to know more about what I think just ask. I'm not
shy about discussing my opinions if I think someone actually cares.
* Rahul Jain:
> He [Erann] claimed that he agreed with someone who said that they [packages]
> should be replaced.
Not so. In fact, I explicitly disclaimed agreement by saying that I merely
recalled this person's opinion, and not necessarily that I agreed with
it. The only opinion of my own that I have expressed about packages is
calling them a "necessary evil", the emphasis in this case being on the
word necessary.
* Rahul Jain:
> > > As a result, [Erann] perhaps complains about things once in a while.
> Once in a while? What else has he done in c.l.l in the entire time
> I've been reading it?
How long have you been reading it? I've been posting on c.l.l. since 1990.
But that aside, could you please cite a few examples of postings of mine
that you consider complaints? I believe that what I've been mostly doing
recently is asking questions.
* Erik Naggum:
> Only those who are against it, express their dislike of it.
This, like much of Erik's rhetoric, is so ridiculous as to barely deserve
response. Nonetheless, people on usenet seem to glom on to lots of
ridiculous ideas, and this particular idea is dangerous if too many people
start to think that Erik might be right about it. So for the record I
express my absolute dissent from this view. This is the rhetoric of the
demagogue. It serves no purpose but to marginalize dissent, which is the
first step on the road to fascism.
* Erik Naggum:
> This arrogance is probably the reason why you never _listen_ to anyone.
Being lectured on arrogance by Erik is so funny that words fail me. Erik,
you should really stop confusing not listening to *you* with not listening
to anyone. They are not the same thing. But in point of fact I listen
even to you, because people I respect keep telling me you have worthwhile
things to say if one takes the time to dig through the crap. So I keep
digging, despite the fact that my personal opinion is that I long since
passed the point of diminishing returns. Such is my respect for some
people's opinions that differ with my own.
* Rahul Jain: (Do we begin to notice a pattern here?)
> Erann never asked questions about what is better,
Yes, that's because I didn't want to start a flame war. What I asked to
start this thread is why the people who like packages like them. This is
because what I am trying to do is not resolve the question of which is
better (I don't think that question can be resolved) but simply to
understand the point of view of people who think differently from me.
* Rahul Jain:
> he complained to us that Python is better than CL,
> and it's our fault that he thought otherwise.
Excuse me? When did I ever say anything of the sort? The closest thing I
can think of is: "For example, my language of choice for doing Web
development now is Python." That was an offhand, almost throwaway comment
at the end of a *very* long post. And note the disclaimer "for doing Web
development." Web development is only a very small part of what I do.
* Nicolas Neuss:
> Your CV looks impressive, but after having this conversation with you
> and reading so much about your problems on c.l.l., I do not believe
> anymore that it tells the truth.
Wow. I've been accused of a lot of things, but this is the first time
anyone has actually accused me of fraud. Nicolas, I will provide you with
documentation for any claim on my CV that you find questionable with the
proviso that if I do so you must agree to apologise publicly for defaming
me. (Actually, you can verify everything on there yourself if you wish,
and you would have been well advised to do so before making your libelous
accusation.)
This really pisses me off much more than I care to express at the moment.
Erann Gat, Ph.D.
Principal Scientist
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, California
g...@jpl.nasa.gov
The views expressed here are my own. I do not speak for JPL.
> > I don't think any serious reduction in expressive
> > power is going on here,
Kent M Pitman <pit...@world.std.com> scripsit:
> Yes, it is. The shift from expressive tasks to implementation tasks
> is the essence of the difference between Lisp and other languages
> for me. Every time you chip away at it, you chip away at Lisp's
> power. I said before that this is not about Turing power, and this
> remark on yours is an appeal to Turing power. Expressive power is
> exactly about concise targeted notation.
I do understand the difference between expressive and computational
power, and I did intend the former. I grant that there is *some*
reduction in expressive power, but I'm not sure that it's a serious
reduction. But your examples have "gotten my juices flowing" and
before I say something more concrete, I have to give it more time to get
my thoughts together.
Thomas
But this is not a problem with Common Lisp, is it? :)
I think making a clean build of the packages with a good defsystem, then
loading them into a virgin image to dump a clean image should have been a
satisfactory solution to your boss.
If anyone has the kind of problem you have described, I think the proper
answer is to teach them to use the implementation's ability to save the
world and dump a new image, not to futz around with weird hacks.
///
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
> A question: what is for you the relation between "L has X",
> and "X can be implemented (with effort, without effort) in L"?
I think I'd say that if X can be implemented without effort in L then L
has X. If X can be implemented with a little effort in L then L doesn't
have X, but it doesn't matter.
E.
So, for example, if safety can be implemented without effort in sex, then
sex has safety. If safety can be implemented with a little effort (say,
wearing a condom) in sex then sex doesn't have safety, but it doesn't matter.
Just wanted to make sure you really meant what you said. ;)
To quote a childhood mantra: "almost only counts in horseshoes and
hand grenades" though in more recent days the expression "and atom bombs"
got added by some kids I knew.
> > > > I usually did:
> > > >
> > > > (load "java-tools.lisp")
> > > > (java-tools:make-all)
> > >
> > > This is obviously not quite kosher in a file.
> >
> > Sure it is. I don't know what you're talking about. I have files that
> > say just exactly that. (Well, different function and package names.)
>
> But when you load the init file, the reader ought to fail because
> there is no package named `java-tools' at the time the reader reads
> the init file. Never mind that java-tools:make-all *will* be a
> fine thing to invoke *after* we load "java-tools.lsp". (I have verified
> this on ACL and Corman.)
This is IMHO not true. The loader must process the forms one by one,
reading each form only when it is about to be evaluated. Otherwise forms
such as (in-package :foo)
would not work in loaded files.
I don't know what you tested, but with the following two files,
everything works as expected, in all CL implementations I've tested,
including ACL, LispWorks, CMUCL and CLISP:
;;; java-tools.lisp
(defpackage :java-tools (:use :CL) (:export #:make-all))
(in-package :java-tools)
(defun make-all ()
  (print 42))
;;; init.lisp
(load "java-tools.lisp")
(java-tools:make-all)
If you want to compile init.lisp without manually loading
java-tools.lisp beforehand, you'd have to wrap the load form with an
appropriate eval-when form, and then even that will work just fine,
including loading a compiled init file into a fresh core, IMHO, though
CMU CL seems to exhibit a bug there.
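For instance, the wrapped init file mentioned above might look like this (same
file names as before), so that COMPILE-FILE of init.lisp also sees the
JAVA-TOOLS package at read time, and loading the compiled file still loads
java-tools first:

;;; init.lisp
(eval-when (:compile-toplevel :load-toplevel :execute)
  (load "java-tools.lisp"))
(java-tools:make-all)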