(defun quote-all (lst)
  (if (not (null lst))
      (cons (string (car lst)) (quote-all (cdr lst)))))
--
do...@sierratel.com
---
Garbage In, Gospel Out
Dowe> Hello, I am a lisp newbie with a slight problem, the following code works
Dowe> fine except that it smashes case when it converts from symbols to strings
Dowe> (I hope my jargon is correct). How do I keep the case information intact?
Dowe> (defun quote-all (lst)
Dowe> (if (not (null lst))
Dowe> (cons (string (car lst))(quote-all (cdr lst)))))
Your problem is that string, given a symbol, returns its name, which
is normally uppercase. You can change the case using, e.g.,
... (write-to-string (car lst) :case :downcase) ...
Incidentally your code can be expressed more succinctly by the
equivalent:
(defun quote-all (lst)
  (mapcar #'string lst))
--
class struggle SDI Rule Psix CIA Noriega assassination Clinton $400
million in gold bullion kibo genetic Saddam Hussein FSF explosion
supercomputer ammunition
> Hello, I am a lisp newbie with a slight problem, the following code
> works fine except that it smashes case when it converts from symbols
> to strings (I hope my jargon is correct). How do I keep the case
> information intact?
>
> (defun quote-all (lst)
> (if (not (null lst))
> (cons (string (car lst))(quote-all (cdr lst)))))
If I understand you, you mean this effect:
> (quote-all '(foo bar baz))
("FOO" "BAR" "BAZ")
That's not a bug in your code. Symbols in Lisp are case-insensitive,
look:
> 'foo
FOO
How that came to be, and whether it was a good idea, is still being
debated by people much more experienced than me. Here are some things
you could explore, depending on what you want to do:
- If you want to process textual information, use strings, not
symbols. A symbol is much more than just some characters (it can have
a value, a function value and a property list, to begin with).
- If you want to use lowercase symbols, you can quote them like this:
> '|fOo|
|fOo|
> (string '|fOo|)
"fOo"
- If you are feeling adventurous, you can make the lisp reader case
sensitive. Read all about it in the HyperSpec:
http://www.xanalys.com/software_tools/reference/HyperSpec/Body/acc_readtable-case.html
The operative word is "adventurous" here. Avoid clobbering the
standard readtable or be surprised at the amount of breakage that
occurs -- including having to type everything in UPPER CASE. Instead,
use something like:
(let ((*readtable* (copy-readtable nil)))
  (setf (readtable-case *readtable*) :preserve)
  (do-something))
And while you're exploring, take a look at the mapping functions.
Your example function could be written as
(defun quote-all (lst)
  (mapcar #'string lst))
Have fun,
Rudi
> ... How do I keep the case information intact?
> (defun quote-all (lst)
> (if (not (null lst))
> (cons (string (car lst))(quote-all (cdr lst)))))
How about
(defun quote-all (lst)
  (if lst
      (cons (symbol-name (first lst)) (quote-all (rest lst)))
      nil))
(quote-all '(|Common| |Lisp| |is| |a| |big| |language|))
or
(defun quote-all (lst)
  (mapcar #'symbol-name lst))
-wb
> do...@krikkit.localdomain (Dowe Keller) writes:
>
> > Hello, I am a lisp newbie with a slight problem, the following code
> > works fine except that it smashes case when it converts from symbols
> > to strings (I hope my jargon is correct). How do I keep the case
> > information intact?
> >
> > (defun quote-all (lst)
> > (if (not (null lst))
> > (cons (string (car lst))(quote-all (cdr lst)))))
>
> If I understand you, you mean this effect:
>
> > (quote-all '(foo bar baz))
> ("FOO" "BAR" "BAZ")
>
> That's not a bug in your code. Symbols in Lisp are case-insensitive,
> look:
>
> > 'foo
> FOO
Errr... this needs a bit of expansion. Symbols in Common LISP are
case-insensitive. Common LISP is the most prevalent LISP in use these
days, but that doesn't make it the only one. Symbols in InterLISP and
Portable Standard LISP, for example, are case sensitive.
> How that came to be and if it's a good idea still is being debated by
> people much more experienced than me.
Allegedly[1], part of the US DoD requirement was that Common LISP
should be usable with a type of terminal which had no shift key, and
could display only upper case. The DoD was very influential in the
beginnings of the Common LISP project.
Like the separation of function and value cells, it's a feature of
Common LISP we're stuck with now, but which I don't think anyone any
longer seriously defends.
[1] Several people have given me this story independently. I can't at
this moment find a reference for it.
--
si...@jasmine.org.uk (Simon Brooke) http://www.jasmine.org.uk/~simon/
'Victories are not solutions.'
;; John Hume, Northern Irish politician, on Radio Scotland 1/2/95
;; Nobel Peace Prize laureate 1998; few have deserved it so much
Actually, several do. Just look at the last few months' postings on this
newsgroup. Every so often a Schemer tries to make the point that Scheme's
"Lisp-1" approach is better than Common Lisp's "Lisp-2" approach and gets
shouted down. As for myself, I've come to like the fact that I can name a
function parameter "list" without having the system go into a tizzy, even
though I am still somewhat dismayed each time I need to use a "funcall".
But, all-in-all, it's just another choice in language design that can be
easily defended either way.
faa
> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevelent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.
Common Lisp symbols *are* case sensitive:
USER(1): (eq 'foo '|foo|)
NIL
However, the Common Lisp reader may canonicalize the case of an
unescaped symbol before interning it. (And the printer may
canonicalize the case before printing it, as well.)
> Like the separation of function and value cells, it's a feature of
> Common LISP we're stuck with now, but which I don't think anyone any
> longer seriously defends.
There are certainly numerous vehement defenders of this.
> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevelent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.
> [ ... ]
> Like the separation of function and value cells, it's a feature of
> Common LISP we're stuck with now, but which I don't think anyone any
> longer seriously defends.
Assuming I haven't misunderstood, I'll defend:
I think case-sensitivity is one of the worst mis-features of a language.
From a human cognitive standpoint, there is no semantic difference between
House, house, HOUSE ... so to design a language that allows tokens with
different capitalization to have different semantics is, imho, insanity.
One of the 327 things I love about CL is that it *is* case-insensitive.
I also like the fact that variable names don't collide with function names.
Given that (foo foo) is (ignoring special forms and macros) semantically
clear, with the first foo being a function and the second foo a value, it
seems to me that forbidding a symbol from being used for both is a waste of
legalese.
> [ ... ]
maybe we need a new newsgroup: comp.lang.common-lisp
Wrong.
| - If you are feeling adventurous, you can make the lisp reader case
| sensitive.
Correct, the Lisp reader is doing the case manipulation.
#:Erik
--
If this is not what you expected, please alter your expectations.
Is there a semantic difference between color and colour? Should
the lisp reader stop the current insane behavior (behaviour) and make
them the same symbol?
If your answer is 'no' then what have you learned about the
responsibility of a language lexer vis-a-vis the semantics of a human
language?
Wrong. Stop confusing the reader with the symbol!
| Like the separation of function and value cells, it's a feature of
| Common LISP we're stuck with now, but which I don't think anyone any
| longer seriously defends.
What!? Are you saying that YOU THINK the existence of function and
value cells in symbols is as stupid as upper-case symbol names? If
so, are you insane, trolling, stupid, or another Scheme bigot?
comp.lang.lisp.other is probably better.
> Rudolf Schlatte <rsch...@ist.tu-graz.ac.at> writes:
>
> > do...@krikkit.localdomain (Dowe Keller) writes:
> >
> > > Hello, I am a lisp newbie with a slight problem, the following code
> > > works fine except that it smashes case when it converts from symbols
> > > to strings (I hope my jargon is correct). How do I keep the case
> > > information intact?
> > >
> > > (defun quote-all (lst)
> > > (if (not (null lst))
> > > (cons (string (car lst))(quote-all (cdr lst)))))
> >
> > If I understand you, you mean this effect:
> >
> > > (quote-all '(foo bar baz))
> > ("FOO" "BAR" "BAZ")
> >
> > That's not a bug in your code. Symbols in Lisp are case-insensitive,
> > look:
> >
> > > 'foo
> > FOO
>
> Errr... this needs a bit of expansion. Symbols in Common LISP are
> case-insensitive. Common LISP is the most prevelent LISP in use these
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.
Actually, I would quibble with this. Symbols in Common Lisp are, in
fact, case sensitive. It is just that the default reader will upcase
all (non-escaped) symbol names before interning the symbol.
That is why (eq 'foo 'FOO) => T
and (eq '|foo| '|FOO|) => NIL
The symbols must be case sensitive for the latter to return NIL. The
fact that the reader is doing the case conversion for you makes it
appear that lisp is case insensitive when in fact it isn't.
The "problem" that the newbie described was occurring during the reading
in of the symbols, so that means that nothing that could be done in the
function QUOTE-ALL could fix it. That is because information was lost
during the reading process.
Now it turns out that you can set the Lisp reader to behave
differently. If you set the readtable case to :preserve, then you tell
the reader not to do the case conversions. One could do this to the
global readtable by
(setf (readtable-case *readtable*) :preserve)
but I would be reluctant to do this for fear of breaking other code that
doesn't expect this. For example, after doing this, the following would
fail:
(defun quote-all (lst)
  (if (not (null lst))
      (cons (string (car lst))
            (quote-all (cdr lst)))))
because the symbol |defun| isn't defined (among others). You would have
to write all of the built-in lisp functions in uppercase:
(DEFUN quote-all (lst)
  (IF (NOT (NULL lst))
      (CONS (STRING (CAR lst))
            (quote-all (CDR lst)))))
Generally when I want to preserve the case, I create a new readtable and
then bind the global readtable around the forms that need to have the
reading done case sensitively. Usually I end up doing something like:
(defvar *case-sensitive-readtable* (copy-readtable nil))
(setf (readtable-case *case-sensitive-readtable*) :preserve)
and then using it in a form like:
(let ((*readtable* *case-sensitive-readtable*))
  (read ...)
  (read-from-string ...)
  ...)
For the example in question, one could do:
(let ((*readtable* *case-sensitive-readtable*))
  (quote-all (read-from-string "(foo bar baz)")))
=> ("foo" "bar" "baz")
Finally, one can ask why symbols are being used instead of strings in
this particular application at all.
--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu
> Finally, one can ask why symbols are being used instead of strings in
> this particular application at all.
it seems to be a lisp tradition to do so. many lisp texts do this,
e.g., peter norvig's PAIP (look at the eliza program).
you make the symbol-name be your string and let the lisp reader
tokenize. you can also hang stuff off the symbol on the value,
function or property-list slots.
the reader case-smash makes sense for symbols used as code, but less
so for symbols used as a kind of ersatz string.
the symbol as string idea doesn't sit well with me either, but perhaps
i am missing something important. if i were doing it, i'd use strings
(or a structure or class containing the string and whatever hangers-on
it needed) and write a string-tokenizer instead of (ab)using the
reader.
can someone with more experience provide some insights?
--
johan kullstam l72t00052
> * Simon Brooke <si...@jasmine.org.uk>
> | Like the separation of function and value cells, it's a feature of
> | Common LISP we're stuck with now, but which I don't think anyone any
> | longer seriously defends.
>
> What!? Are you saying that YOU THINK the existence of function and
> value cells in symbols is as stupid as upper-case symbol names? If
> so, are you insane, trolling, stupid, or another Scheme bigot?
I'm far from being a Scheme bigot; I don't like (and don't often use)
the language. I'm not trolling either. Whether I am insane is a matter
for the psychiatric profession; whether I am stupid is for you to
judge.
I do think that treating functions as different from other data is
'stupid', yes; it leads to all sorts of hacks and kludges and results
in no benefits that I am aware of. It is, in my opinion, historical
baggage left over from implementation details in LISP
1.5. Furthermore, it's clear that many of the most influential people
in the design of Common LISP are now of the same opinion; for example,
Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
this issue, or some of Scott Fahlman's posts on LISP2 and the design
of Dylan on this very group in February and March of 1995.
However, if you have a different opinion, please feel free to argue
it. What benefit is there, beyond having to invoke baroque syntax to
use a lambda expression, and having to do two operations instead of
one at every step in a structure walker?
--
si...@jasmine.org.uk (Simon Brooke) http://www.jasmine.org.uk/~simon/
"The result is a language that... not even its mother could
love. Like the camel, Common Lisp is a horse designed by
committee. Camels do have their uses."
;; Scott Fahlman, 7 March 1995
> the symbol as string idea doesn't sit well with me either, but perhaps
> i am missing something important. if i were doing it, i'd use strings
> (or a structure or class containing the string and whatever hangers-on
> it needed) and write a string-tokenizer instead of (ab)using the
> reader.
>
> can someone with more experience provide some insights?
You can "INTERN" symbols. Interning symbols means that
you put them in a package. Reading FOO and later
reading FOO will give you the same symbol (an object).
Reading "FOO" and later reading "FOO" will give you
two string objects.
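A quick check of that difference (just a sketch; evaluate in any fresh package):

(eq (read-from-string "FOO") (read-from-string "FOO"))
=> T    ; both reads intern and return the same symbol object
(eq (read-from-string "\"FOO\"") (read-from-string "\"FOO\""))
=> NIL  ; each read builds a fresh string object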
--
Rainer Joswig, BU Partner,
ISION Internet AG, Steinhöft 9, 20459 Hamburg, Germany
Tel: +49 40 3070 2950, Fax: +49 40 3070 2999
Email: mailto:rainer...@ision.net WWW: http://www.ision.net/
> Simon Brooke <si...@jasmine.org.uk> writes:
>
> > Errr... this needs a bit of expansion. Symbols in Common LISP are
> > case-insensitive. Common LISP is the most prevelent LISP in use these
> > days, but that doesn't make it the only one. Symbols in InterLISP and
> > Portable Standard LISP, for example, are case sensitive.
>
> Common Lisp symbols *are* case sensitive:
>
> USER(1): (eq 'foo '|foo|)
> NIL
>
> However, the CommonLisp reader may canonicalize the case of an
> unescaped symbol before interning it. (And the printer may
> canonicalize the case before printing it, as well.)
Well, what also happens is that these symbols are INTERNed
in a package. Depending on what INTERNing does you can have
the same symbol or not. The behaviour of Common Lisp's
interning of symbols can be found in the standard docs.
> I do think that treating functions as different from other data is
> 'stupid', yes; it leads to all sorts of hacks and kludges and results
> in no benefits that I am aware of. It is, in my opinion, historical
Functions are not treated any differently from other data in Common
Lisp. Period. There exist different namespaces in Common Lisp for
different constructs, among them are namespaces for types, classes, go
tags, restarts, ..., and finally for the operator position of compound
forms and another one for symbol-macros and variables.
While there have been long ranging and often violent discussions on
the merits and demerits of separating the last two namespaces, I
have as yet not seen the same amount of effort expended on the
unification of the other namespaces. Why are we not seeing enraged
calls for the unification of say the class namespace with the s-m&v
namespace? Why is no one raging about the ugliness of having to use
(find-class 'my-class-name) to get the class object for my-class-name?
Why doesn't my-class-name evaluate to the class object? What about
restarts?
It seems to me that many instantly recognize the utility of different
namespaces in those cases, and implicitly agree that the trade-off
involved isn't very burdensome. Yet when it comes to operator names
and s-m&v names, suddenly we enter holy territory, and no amount of
utility can ever weigh up the cardinal sin of separating those
namespaces. I find this rather remarkable, from a meta-viewpoint...
I won't argue in depth why I find the separation both of high utility
and little downside _in the context of CL_, because I find that others
have done this far more eloquently than I could have done. Take a
look at postings and papers by Kent M. Pitman in this newsgroup and
elsewhere[1].
Regs, Pierre.
Footnotes:
[1] Among others the following message-ids should provide food for
thought:
<sfwwwnf...@world.std.com>
<sfwafcp...@world.std.com>
<sfwiupa...@world.std.com>
--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
are they? what about x\yz or |aBcD|? unless i am grossly mistaken,
those symbols retain case sensitivity, and i also think that the
conversion to upper case is done in the reader.
> days, but that doesn't make it the only one. Symbols in InterLISP and
> Portable Standard LISP, for example, are case sensitive.
>
>> How that came to be and if it's a good idea still is being debated by
>> people much more experienced than me.
>
> Allegedly[1], part of the US DoD requirement was that Common LISP
> should be usable with a type of terminal which had no shift key, and
> could display only upper case. The DoD was very influential in the
> beginnings of the Common LISP project.
available hardware at the time of development is one reason. another
cause that influences the decision about case sensitivity seems to be
the question whether program text should be talked about or whether it
should only be viewed.
> ...
--
Hartmann Schaffer
> > However, the CommonLisp reader may canonicalize the case of an
> > unescaped symbol before interning it. (And the printer may
> > canonicalize the case before printing it, as well.)
>
> Well, what also happens is that these symbols are INTERNed
> in a package. Depending on what INTERNing does you can have
> the same symbol or not. The behaviour of Common Lisp's
> interning of symbols can be found in the standard docs.
I know Rainer knows better, but he has expressed something
sloppily in a way that may someday confuse someone.
INTERN (and FIND-SYMBOL) operate on SYMBOL-NAMEs, which are
STRINGs. INTERN and FIND-SYMBOL never manipulate the case
of characters in these names. Conceptually, they compare
string names as if by EQUAL and are therefore case sensitive
and case preserving.
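For instance (a sketch, assuming "Zebra" and "ZEBRA" are not already
interned in the current package):

(intern "Zebra")       => |Zebra|, NIL        ; name stored exactly as given
(find-symbol "Zebra")  => |Zebra|, :INTERNAL
(find-symbol "ZEBRA")  => NIL, NIL            ; a different string, so no such symbol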
The reader does indeed call INTERN, and the reader does do
various things with case, but all that is the business of the
reader, not INTERN.
The reader normally shifts all lower-case characters in a symbol
name to upper case. There is a settable mode in a READTABLE
which can change this to DOWNCASE, PRESERVE, or INVERT. The
standard reader syntax allows any character (e.g. whitespace
and lower-case) to be escaped so that it loses any special
syntactic meaning and is not case shifted. See backslash and
vertical bar.
The printer normally preserves case on output, although if
*PRINT-ESCAPE* or other controls are true, it may add escapes
necessary so that the printed output could be read back
the same. The actual case behavior of the printer is controlled
by a combination of READTABLE-CASE and *PRINT-CASE* according
to a set of rules that is far too complex to be worth learning,
much less remembering. You can see them in ANS 22.1.3.3.2.
A point often missed by beginners (because it only _very_ rarely
has any effect) is that SYMBOL-NAMEs are not used when Lisp
executes. In particular, comparison of symbols by EQ, EQL,
EQUAL, and EQUALP do not look at the SYMBOL-NAMEs or the
SYMBOL-PACKAGEs of the argument SYMBOLs. SYMBOLs are first-class
objects for which all four equality predicates operate as by EQ.
It is rare CL code that ever calls INTERN or FIND-SYMBOL, and the
only important parts of the implementation that call them are
the reader and printer. SYMBOL-NAMES are not referenced at all
during the execution of random Lisp code, unless that code explicitly
calls INTERN, FIND-SYMBOL, various package functions, or invokes
the reader or printer on SYMBOLs. Informally, all the messy case
manipulations of CL happen at read and print time.
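A few illustrative forms (a sketch; standard readtable and default
printer settings assumed):

(symbol-name 'foo)        => "FOO"   ; the reader upcased the name
(symbol-name '|fOo|)      => "fOo"   ; vertical bars suppress the case shift
(symbol-name 'fo\o)       => "FOo"   ; backslash escapes a single character
(eq 'foo 'FOO)            => T       ; both reads intern the same symbol
(let ((*print-case* :downcase))
  (prin1-to-string 'foo)) => "foo"   ; case is manipulated again at print time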
Bill Ackerman told me the following more than 30 years ago:
Lisp would be the perfect computing language if only it didn't
have I/O.
this seems to be largely a question of your background. in
mathematics, e.g., it is quite common to treat differences in fonts and
capitalization as significant. some people seem to be more visually
oriented, in which case font, capitalization etc. are meaningful
distinctions, while other people seem to be more verbally oriented, in
which case case sensitivity is very difficult to grasp.
> ...
--
Hartmann Schaffer
> > Well, what also happens is that these symbols are INTERNed
> > in a package. Depending on what INTERNing does you can have
> > the same symbol or not. The behaviour of Common Lisp's
> > interning of symbols can be found in the standard docs.
>
> In know Rainer knows better, but he has expressed something
> sloppily in a way that may someday confuse someone.
>
> INTERN (and FIND-SYMBOL) operate on SYMBOL-NAMEs, which are
> STRINGs. INTERN and FIND-SYMBOL never manipulate the case
> of characters in these names. Conceptually, they compare
> string names as if by EQUAL and are therefore case sensitive
> and case preserving.
>
> The reader does indeed call INTERN,
Unless you create non-interned symbols. ;-)
? (eq ':foo ':foo)
T
? (eq '#:foo '#:foo)
NIL
> and the reader does do
> various things with case, but all that is the business of the
> reader, not INTERN.
Yes. Thus was my hint of reading the docs to see what
INTERN is doing.
But if you lean back and look at it more abstractly, you
intern some object into a data structure. Later
you may want to retrieve the object based on some
input. You also may want to iterate in some order
over this data structure. Removing objects may also
be nice.
In this case we have for packages:
- data structure: package
- interning in packages: INTERN
- retrieving from packages: FIND-SYMBOL
- iterating: DO-SYMBOLS, ...
- removing: UNINTERN
- ...
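A quick concrete tour of that protocol (a sketch; the DEMO package is
made up for illustration):

(defpackage #:demo)                               ; the data structure
(intern "THING" '#:demo)                          ; interning
(find-symbol "THING" '#:demo)                     ; retrieving => DEMO::THING, :INTERNAL
(do-symbols (s '#:demo) (print s))                ; iterating
(unintern (find-symbol "THING" '#:demo) '#:demo)  ; removing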
So in some Lisp system it might be possible
to think of an INTERN operation that uses
a different mechanism and/or a different
data structure. In Symbolics Genera, INTERN
is also sensitive to character styles in the strings,
for example. So when you are interning
"FOO" and "FOO" you won't get the same symbol,
given that the strings contain different styles. Yes,
this is ancient and only in Symbolics Genera.
Btw., in CL-HTTP you often find these mechanisms.
An example for URLs:
- data structure: EQUAL hashtable
- interning of URLs: URL:INTERN-URL
- retrieving URLs: URL:INTERN-URL :if-does-not-exist :SOFT
- iterating: URL:MAP-URL-TABLE
- removing: URL:UNINTERN-URL
Hey, even more confusion.
And just _how_ does Common Lisp do this? Having two different value
slots for a symbol is not at all what you describe as stupid. If
you are thinking about the need to use the `function' special form
where (presumably) you would have been happier with regular variable
access, that is _so_ not the issue. You can bind a variable to a
function object any day and funcall it. Try, you may be shocked.
If this is the old "I hate funcall" whine, please say so right away
so I can ignore you completely.
| it leads to all sorts of hacks and kludges and results in no
| benefits that I am aware of.
Really? Assuming that we still talk about symbol-function and
symbol-value, the hysterical need for "hygienic" macros in Scheme is
_not_ caused by their conflated namespaces? Come on, now. Why
exaggerate your case when there is at least one _obvious_ benefit
(and one obvious drawback to conflating the namespaces, namely the
Scheme experiment).
The main human benefits are that you don't have to worry about name
collisions when you define a lexical variable and that you don't
have to look for all sorts of bindings when reading code. The main
compiler benefit is that if you maintain control over what you stuff
into the symbol-function slot, you don't have to test it for a
function object every time you use it.
| It is, in my opinion, historical baggage left over from
| implementation details in LISP 1.5.
And which "implementation details" are you referring to?
| Furthermore, it's clear that many of the most influential people
| in the design of Common LISP are now of the same opinion; for example,
| Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
| this issue, or some of Scott Fahlman's posts on LISP2 and the design
| of Dylan on this very group in February and March of 1995.
Yeah, _sure_ I'll trust a _Dylan_ designer to have useful comments
about Common Lisp. "I hate this, let's go make our _own_ language"
people are so full of agendas both hidden and overt that you can't
take anything they say about the old language seriously.
| However, if you have a different opinion, please feel free to argue
| it. What benefit is there, beyond having to invoke baroque syntax
| to use a lambda expression, and having to do two operations instead
| of one at every step in a structure walker?
And what the fuck do lambda expressions have to do with function and
value cells in symbols?
All of trolling, stupid, and insane, that's my conclusion.
| "The result is a language that... not even its mother could
| love. Like the camel, Common Lisp is a horse designed by
| committee. Camels do have their uses."
| ;; Scott Fahlman, 7 March 1995
If you don't like Common Lisp, Simon, you don't have to suffer it.
There are plenty of other languages out there you can use. Those
who have hated parts of the language have gone elsewhere. Those who
remain actually _like_ the language, even if this is unfathomable to
you. It it also my firm opinion that computer professionals who use
tools they dislike or hate _are_ insane at best. (Worse would be
slavery, criminal incompetence, extremely low self-esteem and other
results of prostitution, or extreme poverty, all of which remove
one's ability to choose rationally among the _generally_ available
options.) One simply does not choose to use something one hates for
whatever reasons. That's why Scott Fahlman went to Dylan, I guess,
but I know that's why I dropped working with SGML, C++, and Perl.
Where I come from case matters. "ls" is a command, "LS" is an error.
I happen not to see any similarity in those two strings (`a' and `A'
are different letters damn-it ;-).
BTW: Even if the language is case insensitive it shouldn't smash case.
BTW version 2: Thanks to everyone for their thoughtful and insightful
answers.
--
do...@sierratel.com
---
This is the theory that Jack built.
This is the flaw that lay in the theory that Jack built.
This is the palpable verbal haze that hid the flaw that lay in...
So complete the list of functions by adding make-symbol. Furrfu.
Now that you bring it up, here's a (style?) question I've been wanting
to ask (still being somewhat only a casual user of CL):
Other than reducing possible confusion while debugging with "macroexpand",
is there any real reason to prefer (gensym "foo") over (make-symbol "foo")
when defining macros that need non-capturing local variables? Both produce
fresh uninterned symbols which can't conflict with (capture) any other symbol.
So in what circumstances is one preferred over the other, or vice-versa?
-Rob
p.s. I think I know why one doesn't use literal uninterned symbols for
such things, e.g.:
(defmacro foo (arg1 arg2 &body body)
  (let ((tmp '#:tmp))   ; BUG!
    ...stuff that uses ,tmp ...))
You only get *one* uninterned symbol when the macro is defined, which
gets used for all possible instances of the macro. So if you had nested
occurrences, in the expansion an inner occurrence of #:tmp could shadow
an outer one. But that can't happen with:
(defmacro foo (arg1 arg2 &body body)
  (let ((tmp (make-symbol "tmp")))
    ...stuff that uses ,tmp ...))
because "make-symbol" gets called for each occurrence.
-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043
> Really? Assuming that we still talk about symbol-function and
> symbol-value, the hysterical need for "hygienic" macros in Scheme is
> _not_ caused by their conflated namespaces?
Please explain what you mean with that. Common Lisp has its separate
namespaces, and even there macros commonly use gensym so as not to
accidentally use variables of their callers. How does this differ
from Scheme's hygienic macros?
(Did you mean that Scheme macros are _forced_ to be hygienic?)
Do you see the same issue with the class namespace, the package
namespace, or any of the other n namespaces that CL has?
--tim
| Common Lisp has its separate namespaces, and even there macros
| commonly use gensym so as not to accidentally use variables of their
| callers. How does this differ from Scheme's hygienic macros? (Did
| you mean that Scheme macros are _forced_ to be hygienic?)
Yes, Scheme has a "high-level" macro mechanism in which hygiene is
enforced. This is the only standardized macro mechanism in Scheme.
Schemers have also realized that it is sometimes necessary to break
hygiene intentionally, so there are various proposals to accomplish
that. It's been too long since I kept track of the Scheme world;
maybe one of these proposals is emerging as standard, or maybe not.
Scheme's hygienic macros are really easy to use, and ought to
eliminate a whole host of bugs, at least in the hands of inexperienced
programmers. I see no reason why they cannot be implemented in CL --
not replacing CL macros of course, but as a supplement. Does anybody
know if this has been tried? Of course, if there is some deeper
reasons why this cannot or should not be done, I'd be thrilled to hear
about them.
--
* Harald Hanche-Olsen <URL:http://www.math.ntnu.no/~hanche/>
- "There arises from a bad and unapt formation of words
a wonderful obstruction to the mind." - Francis Bacon
Well, I use make-symbol exclusively and see no reason for gensym or
gentemp at all. I tend to set *print-circle* to t, anyway.
Interesting.
That means that rather than macro-expanding into something that the
system provides, you generate whatever symbol name you want, right?
--
cbbr...@hex.net - <http://www.hex.net/~cbbrowne/>
Rules of the Evil Overlord #157. "Whenever plans are drawn up that
include a time-table, I'll post-date the completion 3 days after it's
actually scheduled to occur and not worry too much if they get
stolen." <http://www.eviloverlord.com/>
> In article <vgyiue...@alum.mit.edu>, Joe Marshall
> <jmar...@alum.mit.edu> wrote:
>
> > Simon Brooke <si...@jasmine.org.uk> writes:
> >
> > > Errr... this needs a bit of expansion. Symbols in Common LISP are
> > > case-insensitive. Common LISP is the most prevelent LISP in use these
> > > days, but that doesn't make it the only one. Symbols in InterLISP and
> > > Portable Standard LISP, for example, are case sensitive.
> >
> > Common Lisp symbols *are* case sensitive:
> >
> > USER(1): (eq 'foo '|foo|)
> > NIL
> >
> > However, the CommonLisp reader may canonicalize the case of an
> > unescaped symbol before interning it. (And the printer may
> > canonicalize the case before printing it, as well.)
>
> Well, what also happens is that these symbols are INTERNed
> in a package. Depending on what INTERNing does you can have
> the same symbol or not. The behaviour of Common Lisp's
> interning of symbols can be found in the standard docs.
Whether the symbols are interned or not makes no difference to case
sensitivity. Two symbols that differ in case can *never* be EQ,
regardless of whether they are interned or not. (This does *not*
imply that two symbols that have the same symbol name *are*
necessarily EQ.)
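Both points in one quick sketch:

(eq 'foo '|Foo|)                              => NIL  ; names differ in case, never EQ
(eq (make-symbol "FOO") (make-symbol "FOO"))  => NIL  ; same name, two uninterned symbols
(eq 'foo (intern "FOO"))                      => T    ; interning finds the symbol already there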
> I do think that treating functions as different from other data is
> 'stupid', yes;
Separation of namespaces is not about treating functions different from
other data. As data, CL treats functions identically to other data.
However, there can be no mistaking that most of everything that functions do
is (a) access variables, and (b) call functions. As such, having special
syntax for dealing with the "calling" of functions (nothing to do with the
function datatype itself) seems appropriate, just as having special syntax
for dealing with slot or array access in many languages is meaningful and
appropriate. Long experience with natural languages shows that people
prefer to take things that they use a lot and make shorthand notations for
them. That's one reason people like to use macros a lot, btw, because it
allows them to optimize the presentation of common idioms in a manner
appropriate to their frequency of use.
> it leads to all sorts of hacks and kludges and results
> in no benefits that I am aware of.
Hacks and kludges? I don't think so. You're welcome to cite some.
As to benefits, you need only look to the paper you cite below (for which
I was a co-author and for which my contribution was to do the "benefits"
part).
> It is, in my opinion, historical
> baggage left over from implementation details in LISP
> 1.5.
Nonsense. Taken on its face, this kind of conclusion would imply that left
to start anew, with no constraints of compatibility, people would not make
the same choice again. I and many others would readily make the same choice
again and NUMEROUS popular languages do distinguish the namespaces of
functions and variables, not just Lisp. For good reason: People's linguistic
hardware is, as evidenced by the extremely complex mechanisms for context
exhibited in human languages, well optimized for resolving names in different
contexts with only the context as a cue. It would be a waste of already
existing brain wetware, and a pessimization of that wetware's function,
to fail to take advantage of such context information.
> Furthermore, it's clear that many of the most influential people
> in the design of Common LISP are now of the same opinion; for example,
> Richard Gabriel's 1988 paper in _LISP & Symbolic Computation_ on just
> this issue,
My paper, too. And no, it was not on the issue of why the separation is
bad. It was on the PROS AND cons of the situation. That paper's direct
result was that CL decided the weight of the evidence was technically in
favor of keeping the status quo.
Among other things, the paper specifically cites a speed issue that is not
easily resolvable without either two namespaces or an EXTREMELY POWERFUL
inference engine--more powerful than most Scheme implementations promise.
In particular, if you do
(setq x #'foo)
you don't know for sure that X is going to be used as a function. It might
be it will just be used as data. And certainly if you do
(setq x 3)
you don't know that X is *not* going to be used as a function. As a
consequence, when you call X in a Lisp1 (the term I coined in the paper
you cite for the language subfamily that includes languages like Scheme
which have only a single namespace), then you have to check at call time
whether X contains something valid to execute as a function. The only case
in which you can just immediately do a machine-level instruction jump to the
contents of X is where you have done theorem proving enough to know that X
contains good data. In the case of some functions like
(define (foo x y) (x y))
this can be arbitrarily complicated to prove because it means knowing all
callers of FOO, and since neither Scheme nor Lisp is strongly typed, you
rapidly run up against the halting problem proving that (X Y) is a valid
function call at compile time, leaving the call as inefficient as
(defun foo (x y) (funcall x y))
in Common Lisp, except that in this case, Common Lisp calls attention to the
fact that there is something unusual about the call, and some Common Lisp
programmers appreciate that. Indeed, CL also provides the ability to declare
that X is promised to be a function, consequently allowing an efficient
compilation even where theorem proving in Scheme might fail (because Scheme
designers, myself among them but outvoted, have stubbornly resisted providing
a declaration mechanism in Scheme that would allow it to know what the
programmer knows but what cannot easily be detected by general inferencing).
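A minimal sketch of such a declaration (how much it buys you is up to
the implementation, of course):

(defun foo (x y)
  (declare (type function x))  ; promise the compiler X holds a function
  (funcall x y))               ; so the call can skip the runtime check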
It's not just the FUNCALL case, though; in general, in Scheme, any
name whose value is ever assigned free is at risk of being a slow call
unless that call can be demonstrated to always get safe values. And yet,
Scheme can't compile in safety checks at the time of assignment because it
doesn't know you will do a call to the function during the time with that
bad value. Consider:
(block
  (set! x 3)
  (print (+ x 1))
  (set! x car)
  (print (x '(a b c))))
Here it requires a great deal of compiler smarts to jump directly to the
function X efficiently because the compiler needs to be fearful that X will
contain 3 at the time of jump. Only very careful flow analysis shows this
to be safe, and it's more complicated if these two pairs of SET!/PRINT
operations occur in unrelated parts of programs.
By contrast, the CL design means that when you do the assignment (which
most people agree occurs statistically a LOT less often than the calling),
you can do the test then. That is,
(progn
  (setf (symbol-function 'x) 3) ; this can signal an error
  ...)
As a consequence, all references to the symbol function namespace are safe,
and as a consequence of that, all references of the form
(f x)
can be 100% sure that the function cell of f contains something that is truly
safe to jump to. This is a big speed gain, and the speed gain is enjoyed
regardless of whether the compiler has had a chance to completely analyze
all of the flow paths, so there's no chance that a later addition of new code
will violate this contract in being-debugged code.
> or some of Scott Fahlman's posts on LISP2 and the design
> of Dylan on this very group in February and March of 1995.
I didn't read this so have no opinion.
> However, if you have a different opinion, please feel free to argue
> it. What benefit is there, beyond having to invoke baroque syntax to
> use a lambda expression, and having to do two operations instead of
> one at every step in a structure walker?
There are several additional benefits.
One is that statistically, functions are used less often in CL. A
number of people don't like "functional programming" as a style, or
feel nervous about it, and want it "case marked" when it happens so
they can pay attention closer. That benefit doesn't apply to all, but
is a feature to those that it does apply to.
Another benefit is that it allows the correct spelling of variable names.
I notice a lot of Scheme people who spell variables whose english name
is "list" as LST just to avoid an accidental collision with the LIST
operator. CL people don't worry about this. They write variables like LIST
because they don't worry that
(defun foo (list)
  (mapcar #'(lambda (x) (list 'foo x))
          list))
will have some confusion about which list means what. Practical experience
in CL says functions are almost always constant, and so (list x y) is reliable
to write anywhere to mean "make a list of two things". And binding a lambda
variable doesn't change that.
Another benefit is that when you see (list x y) on a screen of a definition
that is not the start of definition, you don't have to scroll back in order
to understand its meaning. It's true that someone could try to FLET LIST,
but as a practical convention, good programmers just don't ever FLET or LABELS
definitions that are globally defined. So when you see a function that looks
like it's doing a certain function call, it generally is. Not so at all in
Scheme.
It's well and good to say that you'd make this arbitrary choice differently
if you were designing the language. But it's not well and good to say there
are no reasons for the language being as it is, nor is it well and good to
ascribe the reasons for the change as being "historical" and "ones that would
obviously be made differently with hindsight". Many choices in the language
are about either predicting a usage pattern, or about accommodating a usage
pattern that will select a certain user group you desire to cater to. CL
sought to cater to a set of people who are largely quite pleased with this
decision and most of which would be gravely offended if you changed it. In
that regard, it made the "right" decision. Not because "right" is a uniquely
determined attribute of the world, but because "right" is a term meaningless
without context. ... Uh, that is to say, I guess, that there are multiple
namespaces in which the term "right" is assigned a value.
I'll close with a favorite quote of mine (wish I knew the source) and the
obligatory analysis of the quote.
There are two kinds of people in the world.
Those who think there are two kinds of people and those who don't.
The war about two namespaces isn't, I think, just about whether or not it's
useful to sometimes distinguish context in functions and variables. In my
opinion, it's about whether there is one and only one right way to think
about programs. Most Lisp2 supporters don't object to the Lisp1 world
existing; they see a multiplicity of options and are happy for the one they
have. But Lisp1'ers tend to be annoyed at Lisp2 more often. And I think it's
not just about namespaces. It's about the fact that other programmers have
a choice of how to think, and that they can be happy in a world where the
axioms are chosen differently. I think that's terribly sad.
We all benefit from multiple points of view. We are robbed when denied them.
We should all be seeking to learn to appreciate others' points of view, not
seeking to teach people to unlearn an appreciation for points of view we've
not taken the trouble to understand.
I've worked enough in Lisp1 languages to know that there are some cool things
about those. Most of them, if I were to take your conversational posture, I
could have attacked as "hacks and kludges", but I choose not to. Each
notation yields interesting felicities which perhaps we should never take
advantage of if we really wanted to be ultra-purist. But we pick specific
notations BECAUSE we want some things to be "accidentally easy", and so what's
the point of then avoiding those things which become so? And, as a corollary,
we end up making some things "accidentally hard" as a result of some of our
choices, too. That doesn't make those choices wrong. It just means life is
full of tough choices, and we do the best we can making the ones we do.
Sure, the y-operator looks cooler in Scheme. But I rarely write y-operators.
And on the other hand, in most of the functions I write, I like seeing the
funcall and the #' on functional items. I find it a useful piece of
documentation highlighting intent. I don't see it as scary or asymmetric.
Trade-offs. It's all just trade-offs.
> Other than reducing possible confusion while debugging with "macroexpand",
> is there any real reason to prefer (gensym "foo") over (make-symbol "foo")
I'm reminded of the joke
"Other than that, Mrs. Lincoln, how did you like the play?"
This is, after all, hardly a minor issue. I think it's the key
reason, since sometimes there may be several calls to the same macro
in the same space yielding visually-otherwise-indistinguishable values
that you couldn't sort out otherwise.
Of course, MAKE-SYMBOL might be chosen in some case where such
debugging wasn't needed, or where it was known that only one such
instance was to be made, or where non-program-readable visual brevity
might be needed.
MAKE-SYMBOL can, of course, also be used in bootstrapping package
configurations where you want to hold a symbol hostage for a little
while before you INTERN it.
And, finally, MAKE-SYMBOL is subprimitive to GENSYM. You need it to
implement GENSYM. So you might as well expose it just in case. In case
someone wants to write a better GENSYM... as people often used to want
back when GENSYM didn't have as many options. And even now sometimes
people want to vary it...
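For instance, a toy GENSYM built on MAKE-SYMBOL (a sketch, not how any
particular implementation does it):

(defvar *my-gensym-counter* 0)

(defun my-gensym (&optional (prefix "G"))
  (make-symbol (format nil "~A~D" prefix (incf *my-gensym-counter*))))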
How can it be any clearer? Scheme decided to conflate the function
and variable namespaces, and approximately 50 milliseconds later,
began whining about Common Lisp's lack of hygienic macros (Scheme
people are always whining about something in Common Lisp), simply
because a variable binding in Scheme screws up function calls, so an
unintentional variable binding colliding with a local function will
make Scheme code using non-hygienic macros very unsafe. Scheme code
uses local functions a lot more than Common Lisp, so you'd think
they'd realize the need to separate the namespaces is even stronger
than in Common Lisp, but noooo, "separate namespaces is stupid" so
they'd rather suffer the complicating consequences of their decision
than fix it.
| Common Lisp has its separate namespaces, and even there macros
| commonly use gensym so as not to accidentally use variables of their
| callers.
That's _so_ irrelevant. Common Lisp macros don't use fresh symbols
in order to avoid accidentally clobbering the functional value of
the symbols it uses. Scheme's hysterical need for hygiene comes
from the very significant danger of clobbering _functional_ values.
(That's why you see Scheme code use silly variable names like "lst"
instead of "list", too -- in case you want to call `list', it'd
better not be some arbitrary list of data.)
| How does this differ from Scheme's hygienic macros?
It doesn't, of course, but if you look only for similarities to what
you already know, you will find very little of interest in your life.
Human beings don't have any problems with the noun "house" naming a
very different concept than the _verb_ "house". Anyone who argued
that "no, no, you can't call your building a `house' because that's
already a verb" should just be shot to help him out of his misery.
You obviously realize that the additional behavior GENSYM provides
is that each generated symbol is visibly different from other
generated symbols. This doesn't matter to the compiler, evaluator, or
other code walkers (which of course only depend on EQness of variable
names) but is helpful to human readers of macroexpanded code.
If it weren't for this, the save-space-at-any-cost luddite faction
would have invented some semistandard alternative to GENSYM that
would give all generated symbols the same (as in EQ) null symbol
name. One could still differentiate these using *PRINT-CIRCLE*, but
this would hardly improve the readability of macroexpanded code.
GENSYM is a rather old function and I don't feel its behavior
is optimal for its typical use. In particular, each time it is
used for macroexpansion the names of the generated variables are
numerically different. This means that when my program throws
into the debugger and the stack backtrace shows gensyms, one cannot
usefully macroexpand the source functions to determine which
generated variable is which. This has slowed me down many times
when debugging.
I wonder if some better conventions could be devised. Suppose
the expansion of DEFMACRO wrapped the expander body in a MACROLET
for some GENSYM-like operator that would communicate the nested
origin of the macro. That way, each macroexpansion (relative to
top level) would produce the "same" macroexpansion, where all
uninterned variable names would always have a reproducible
SYMBOL-NAME, even though the symbols themselves would remain
distinct, as required by hygiene. This would enhance readability
and debuggability of macro-generated code.
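A rough sketch of that idea (an invented helper, not an existing
facility): give the fresh symbols reproducible names within one
expansion while keeping them distinct, uninterned objects.

(defun make-stable-gensyms (prefix count)
  (loop for i below count
        collect (make-symbol (format nil "~A-~D" prefix i))))

(make-stable-gensyms "TMP" 3) => (#:TMP-0 #:TMP-1 #:TMP-2)
;; calling it again yields the same names but fresh, non-EQ symbols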
You don't need two namespaces to handle this efficiently, you can do
it with function cacheing the way MIT Scheme does it. When a free
variable is first invoked as a function, the value is checked to make
sure it is a function, and the entry point is computed. This entry
point is placed in a cache in the caller. The next time the function
is called, it is jumped to directly via the cache.
(Actually, the entry in the cache is *always* directly jumped to.
To invalidate the cache, you replace the entry with a call to a
trampoline that fetches the value, checks it, and updates the cache.)
> By contrast, the CL design means that when you do the assignment (which
> most people agree occurs statistically a LOT less often than the calling),
> you can do the test then.
The cacheing mechanism described above depends on this feature. When
you assign a variable that contains a function, you must invalidate
all cached links. You could be aggressive and relink at this point,
but being lazy works just as well.
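A toy Common Lisp rendition of such a call-site cache (a sketch only,
not MIT Scheme's actual machinery); FETCH is a thunk returning the
current value of the "free variable":

(defun make-call-site (fetch)
  (let ((cache nil))
    (values
     (lambda (&rest args)
       (unless cache                       ; slow path: the "trampoline"
         (let ((value (funcall fetch)))
           (unless (functionp value)
             (error "~S is not a function" value))
           (setf cache value)))
       (apply cache args))                 ; fast path: jump via the cache
     (lambda () (setf cache nil)))))       ; call this when the variable is assigned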
> There are several additional benefits.
>
> One is that statistically, functions are used less often in CL. A
> number of people don't like "functional programming" as a style, or
> feel nervous about it, and want it "case marked" when it happens so
> they can pay attention closer. That benefit doesn't apply to all, but
> is a feature to those that it does apply to.
I don't consider this a benefit. In fact, the special `case marked'
syntax may make people feel that `something weird' is happening. If
you are using a single namespace lisp, you could certainly add some
syntax to mark the places you are invoking a computed function.
> Another benefit is that it allows the correct spelling of variable names.
> I notice a lot of Scheme people who spell variables whose english name
> is "list" as LST just to avoid an accidental collision with the LIST
> operator. CL people don't worry about this. They write variables like LIST
> because they don't worry that
> (defun foo (list)
> (mapcar #'(lambda (x) (list 'foo x))
> list))
> will have some confusion about which list means what. Practical experience
> in CL says functions are almost always constant, and so (list x y) is reliable
> to write anywhere to mean "make a list of two things". And binding a lambda
> variable doesn't change that.
I agree that this is a benefit.
> Another benefit is that when you see (list x y) on a screen of a definition
> that is not the start of definition, you don't have to scroll back in order
> to understand its meaning. It's true that someone could try to FLET LIST,
> but as a practical convention, good programmers just don't ever FLET or LABELS
> definitions that are globally defined. So when you see a function that looks
> like it's doing a certain function call, it generally is. Not so at all in
> Scheme.
Not true. Good Scheme programmers don't shadow global variables any
more capriciously than good lisp programmers.
> Trade-offs. It's all just trade-offs.
Agreed.
No no no. Nein nein nein. Gubbish, rubbish, and false:
You cannot INTERN a SYMBOL. INTERN accepts only a STRING as the
first argument. The result of calling INTERN of a STRING is
either the SYMBOL previously present in the PACKAGE, or else a
newly created SYMBOL INTERNed in the PACKAGE.
In normal usage it doesn't matter when one writes informally and
inaccurately. But when investigating and explicating the
semantics of the language, clarity and precision are paramount.
IMPORT and other operators may cause a SYMBOL to become present
in a PACKAGE.
Caching, trampolines, relinks and lazy invalidation must make
assignments and some function calls much slower. A similar, more
complex mechanism is (or at least should be) in place for CLOS generic
method call optimizations, but I think that's beside the point Kent was
making.
Robert
Joe, I really *like* Scheme, and I write much more of it than CL, but I
have to agree with Kent (and from previous comments, Erik) on this one.
In Scheme you have to be a *lot* more careful to avoid shadowing global
variables, because *all* of the standard functions are global "variables".
Not only can one not safely use "list" as a local variable, you can't use
"string" or "vector" or "min" or "max" -- or for that matter, "quotient",
"remainder", "real-part", "magnitude" or "angle"!! A *lot* of names that
are natural choices for intermediate-result are denied to the programmer,
because Scheme, a Lisp1, chose to give builtin functions those names.
IMHO, the situation would be far worse did not Scheme use the "?" suffix
convention for predicates and the "!" suffix for mutators. OTOH, that hack
denies all such potential variable names to programmers as well, since one
must assume that someday *somebody* might want to name a function with one
of those.
-Rob
p.s. And while we're making comparisons... nah, never mind, don't get me
started about how in Scheme "length" is not generic...
> IMHO, the situation would be far worse did not Scheme use the "?" suffix
> convention for predicates and the "!" suffix for mutators. OTOH, that hack
> denies all such potential variable names to programmers as well, since one
> must assume that someday *somebody* might want to name a function with one
> of those.
FTR Dylan (an OO Scheme) similarly has conventions for class names to avoid
collisions in a Lisp-1:
define class <foo> ( <object> )
end class;

define function doit ( foo :: <foo> )
  format-out( foo );
end;
And <foo> is a constant that evaluates to a class object. There is no
find-class( #"<foo>" ).
__Jason
> Joe Marshall wrote:
> [...]
> > You don't need two namespaces to handle this efficiently, you can do
> > it with function cacheing the way MIT Scheme does it. When a free
> > variable is first invoked as a function, the value is checked to make
> > sure it is a function, and the entry point is computed. This entry
> > point is placed in a cache in the caller. The next time the function
> > is called, it is jumped to directly via the cache.
> >
> > (Actually, the entry in the cache is *always* directly jumped to.
> > To invalidate the cache, you replace the entry with a call to a
> > trampoline that fetches the value, checks it, and updates the cache.)
> [...]
> > When
> > you assign a variable that contains a function, you must invalidate
> > all cached links. You could be aggressive and relink at this point,
> > but being lazy works just as well.
>
> Caching, trampolines, relinks and lazy invalidation must make
> assignments and some function calls much slower.
Assignments are indeed slower, but note that Kent had this to say:
> By contrast, the CL design means that when you do the assignment (which
> most people agree occurs statistically a LOT less often than the calling),
> you can do the [functionp] test then.
He is proposing that when you assign to a function cell, you ensure
that the cell *always* has a functional value (i.e., you can always
jump to the entry point. Again, you would use a trampoline to
indicate an unbound function cell, or a function cell bound to an
object other than a function.)
> Joe Marshall <jmar...@alum.mit.edu> wrote:
> +---------------
> +---------------
>
> Joe, I really *like* Scheme, and I write much more of it than CL, but I
> have to agree with Kent (and from previous comments, Erik) on this one.
>
> In Scheme you have to be a *lot* more careful to avoid shadowing global
> variables, because *all* of the standard functions are global "variables".
> Not only can one not safely use "list" as a local variable, you can't use
> "string" or "vector" or "min" or "max" -- or for that matter, "quotient",
> "remainder", "real-part", "magnitude" or "angle"!! A *lot* of names that
> are natural choices for intermediate-result are denied to the programmer,
> because Scheme, a Lisp1, chose to give builtin functions those names.
I agree that there are too many symbols that have `unfortunate'
names. But you don't find too many scheme programs that bind
`vector-ref', `intern', `+', `expt', `sqrt', `cons-stream',
`load', etc. etc.
I'm not sure that I think that having two namespaces is the
appropriate solution, though.
No, this had nothing to do with the decision. When Common Lisp was
being designed it was technically possible to make the case-sensitive
and case-insensitive people happy. In fact the most widely distributed
Lisp at the time could support both modes. You could write case-
sensitive code and also load in Macsyma, a very large Lisp application
written for a case insensitive Lisp (MacLisp). [by case sensitive
support I mean that the reader has a mode where you can naturally
program in a case-sensitive manner in your environment, and on Unix that
means that you don't have to write everything in all caps, and that
symbol names have normal behavior (i.e. the symbol foo has a print name
of "foo").]
Allegro CL has always supported both modes, although it could have been
done in a much nicer way if the support had been part of the Common Lisp
spec. I tried to convince the committee that case sensitivity was
important for Lisp as it was being used in case-sensitive environments
(e.g. the Unix operating systems where system library names are case
sensitive). Some members didn't believe that it was possible to have a
programming language with case sensitive identifiers (because you
couldn't speak your programs), and others didn't want anyone to have the
ability to write case sensitive lisp code for fear that they may one day
have to read it [i'm serious, these are the actual arguments against
support for a case sensitive lisp].
So rather than design a language that would have made everyone happy,
the committee chose explicitly to exclude the case sensitive group.
Since that time the case sensitive world has grown dramatically (C, C++,
Perl, Java, XML) and interfacing with it is critical for Lisp's survival
(we Lisp programmers can't write everything, we have to link to code
written by outsiders). I've been using a case sensitive lisp for 20
years now and I have to say that it makes interfacing with things in the
Unix and Windows world much easier.