In article <sfwu2ze0wqu....@world.std.com> Kent M Pitman
<pit...@world.std.com> writes:
> THIS SENTENCE IS SYNTACTICALLY PROPER ENGLISH.
> SENTENCES IN ALL-UPPERCASE APPEAR ON BILLBOARDS ALL THE TIME.
> this sentence is not syntactically proper.
This is a good point. Although it is funny that you chose billboards, since I am seeing more and more billboards using all lower case these days. :-)
Don't forget my example of the importance of distinguishing the symbols "A" and "a" in mathematics.
> > Another argument for case sensitive symbols is that it is easier to
> > implement and faster to compute.
> Since symbols in Lisp are case-sensitive, I'm not sure what you mean.
> Symbols are interned and compared by pointer-comparison. The symbol
> FOO can be typed as "foo" or "Foo" or "FoO" or "FOO" (minus the quotes).
> But there are symbols Foo and foo which you can type as "|Foo|" or "|foo|"
> (minus the quotes). The cost of the reader doing case-translation is tiny
> since it's only one more memory cycle to access a case-translated character
> out of an array, and this cost is only incurred when the READ function
> is used, which is not in program execution under normal circumstances.
> Compiled code usually has faster representations for loading which doesn't
> go through read, and executing programs do not typically call READ
> unless storing symbolic data in a probably-inefficient way.
That's all I was referring to: the reading and printing of symbols. We also have to count PRINT here (you're right, that is handy), since an implementation has to figure out that it needs to print the symbol "Foo" as |Foo| but must not print the symbol "FOO" as |FOO|. I agree that the time to implement this and the compute time to read and print are small, but it is just one more thing. When trying to rationally compare case-sensitive versus case-insensitive reading of symbols, we are talking about lots of very subtle things.
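For concreteness, here is a minimal Common Lisp sketch of the reader and printer behavior we are discussing, assuming the standard defaults (READTABLE-CASE of :UPCASE and *PRINT-CASE* of :UPCASE):

```lisp
;; With the default readtable, the reader folds unescaped symbol
;; names to upper case, so these all intern to the same symbol:
(eq 'foo 'Foo)        ; => T
(eq 'foo 'FOO)        ; => T
(symbol-name 'foo)    ; => "FOO"

;; Escaped names are taken literally, giving a distinct symbol:
(eq 'foo '|foo|)      ; => NIL

;; PRINT/PRIN1 must decide when the bars are needed: a name that
;; would not read back to the same symbol without escapes gets
;; them, and a name that would read back fine does not.
(prin1 '|Foo|)        ; prints |Foo|
(prin1 'foo)          ; prints FOO, not |FOO|
```

So the "one more thing" is exactly this round-trip bookkeeping: READ case-translates on the way in, and the printer checks whether a name survives that translation on the way out.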
I can go either way on the default case mode for reading symbols, so I am not ultimately trying to argue for one over the other; I just want to understand how programmers use case in Lisp. But here is a thought experiment to help me answer my original query into these phenomena. Suppose that Lisp had originally been defined with case-sensitive reading of symbols, which eliminates the historical issues, and allow me to assume for the sake of this experiment that being able to distinguish PRINT in prose (something I like, by the way) is not by itself an important enough reason to shape a language spec. Would Lisp programmers today have reasons to ever want the case-insensitive version, and if so, what reasons?