I've been experiencing a very devious bug and I have isolated the problem
to this demonstration code:
(defun test ()
  (let ((merged nil))
    (setq merged '(""))
    (nconc merged '(""))
    ))
Line by line the code is harmless:
clisp --version
GNU CLISP 2.27 (released 2001-07-17) (built on
cyberhq.internal.cyberhqz.com [192.168.0.2])
clisp -q
[1]> (setq merged '(""))
("")
[2]> (nconc merged '(""))
("" "")
[3]> (setq merged '(""))
("")
[4]> (nconc merged '(""))
("" "")
And it even runs correctly the first time:
[1]>
(defun test ()
(let ((merged nil))
(setq merged '(""))
(nconc merged '(""))
))
TEST
[2]> (test)
("" "")
But by the second run it quickly consumes all my computer's resources:
[3]> (test)
killall -9 lisp.run
CMUCL goes into a loop on the first run:
CMU Common Lisp release x86-linux 3.0.12 18d+ 23 May 2002 build 3350
It just outputs row upon row of empty strings:
"" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" ""
nconc is supposed to be a destructive list concatenation. Here's an
example from the HyperSpec:
(setq x '(a b c)) => (A B C)
(setq y '(d e f)) => (D E F)
(nconc x y) => (A B C D E F)
x => (A B C D E F)
All I believe I am doing is concatenating two lists that contain empty
strings, with the result placed in merged.
Regards,
Adam
> (defun test ()
> (let ((merged nil))
> (setq merged '(""))
> (nconc merged '(""))
> ))
You shouldn't modify constant lists.
Presumably what happens is that the compiler notices that the two
instances of '("") are similar, and they are coalesced into one
object. Thus after the first run, this "constant" has become a
circular list.
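[Editorial sketch: the coalescing effect Frode describes can be simulated explicitly. This is not CLISP's actual internals, just a demonstration of what a compiler that coalesces the two similar literals effectively produces.]

```lisp
;; Simulate literal coalescing: make "both" literals the same cons cell,
;; as an implementation is permitted to do with similar constants.
(let* ((literal (list ""))     ; stands in for the coalesced '("")
       (merged  literal))
  (nconc merged literal)       ; splices the one-cons list onto itself
  (eq (cdr merged) merged))    ; => T: the "constant" is now circular
```

Printing such a list without *print-circle* loops forever, which matches CMUCL's endless rows of empty strings.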
--
Frode Vatvedt Fjeld
> Hi all,
>
> I've been experiencing a very devious bug and I have isolated the problem
> to this demonstration code:
>
> (defun test ()
> (let ((merged nil))
> (setq merged '(""))
> (nconc merged '(""))
^^^^^^^^^^^^^^^^^^^^
> ))
>
Destructive operation on a literal! Not allowed. Redo from start.
Christophe
--
Jesus College, Cambridge, CB5 8BL +44 1223 510 299
http://www-jcsu.jesus.cam.ac.uk/~csr21/ (defun pling-dollar
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)
> Adam Warner <use...@consulting.net.nz> writes:
>
>> (defun test ()
>> (let ((merged nil))
>> (setq merged '(""))
>> (nconc merged '(""))
>> ))
>
> You shouldn't modify constant lists.
Those key words turned up this:
http://www.psg.com/~dlamkins/sl/chapter11.html
The advice given is:
If you replace '(1 2 3) (which may be compiled into constant data) with
(list 1 2 3) (which always creates a fresh list at run time) then the
function's behavior will be identical on the first and all subsequent
runs.
Which means this code is safe:
(defun test ()
(let ((merged (list nil)))
(setq merged (list ""))
(nconc merged (list ""))
))
And indeed it is!
Thanks for the tip Frode. I've got a lot of destructive list modifications
to fix.
This is the most obscure aspect of Lisp I have come across to date. Until
now I thought (list "abc") and (quote ("abc")) had equal effect in
specifying a list.
The HyperSpec demolishes that idea: "The consequences are undefined if
literal objects (including quoted objects) are destructively modified."
I destructively modified a quoted object.
Being able to perform undefined operations without the interpreters
complaining seems to be very low level behaviour.
So is this code also invalid:
(defun test ()
(let ((number 0))
(setq number 1)))
Since I destructively modified (setq) a literal (0) ("if the object is a
self-evaluating object, appearing as unquoted data.").
If so, how do I specify that 0 should not be taken to be a constant (in
particular not a fixnum where the results could be undefined if it
overflowed to a bignum).
Regards,
Adam
> This is the most obscure aspect of Lisp I have come across to
> date. Until now I thought (list "abc") and (quote ("abc")) had equal
> effect in specifying a list.
Think of your source code as a list, and the quote as inducing a
hot-wire between this list and the running program. I.e., what the
running (compiled) program sees is the very part of the list that is
your source code.
> So is this code also invalid:
>
> (defun test ()
> (let ((number 0))
> (setq number 1)))
>
> Since I destructively modified (setq) a literal (0) ("if the object
> is a self-evaluating object, appearing as unquoted data.").
No, setq modifies a _binding_ that in this case is named "number". In
fact, there are no operators in Common Lisp with the capacity to
modify numbers, so you need not be concerned about this.
--
Frode Vatvedt Fjeld
The difference is subtle.
(list "abc") creates a new list with one item in it every time
you call it.
'("abc") returns the list that the reader created when it read
the form. (Or, if the file is compiled, what the fasloader created.)
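[Editorial sketch: the identity difference can be observed directly. Whether the quoted version returns the very same object on every call is implementation-dependent, though typical.]

```lisp
(defun quoted-version () '("abc"))        ; returns the literal created at read time
(defun listed-version () (list "abc"))    ; conses a fresh list on every call

(eq (quoted-version) (quoted-version))    ; typically T: the same literal object
(eq (listed-version) (listed-version))    ; NIL: two distinct fresh lists
```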
> The HyperSpec demolishes that idea: "The consequences are undefined if
> literal objects (including quoted objects) are destructively modified."
>
> I destructively modified a quoted object.
>
> Being able to perform undefined operations without the interpreters
> complaining seems to be very low level behaviour.
>
> So is this code also invalid:
>
> (defun test ()
> (let ((number 0))
> (setq number 1)))
>
> Since I destructively modified (setq) a literal (0) ("if the object is a
> self-evaluating object, appearing as unquoted data.").
No, you are not modifying a literal here, you are changing the value
of a variable.
This is only a problem when you have *aggregate* structure appearing
as a literal object in your code. A string, quoted list, vector, array,
etc. Numbers are not aggregate.
(let ((foo "this is a string"))
(setf (char foo 7) #\q)) ;; illegal
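[Editorial note: a safe variant of that example, for comparison. COPY-SEQ returns a fresh, mutable copy of the literal string, so the SETF is legal.]

```lisp
(let ((foo (copy-seq "this is a string")))  ; fresh copy, not the literal
  (setf (char foo 7) #\q)                   ; legal: foo is mutable
  foo)                                      ; => "this isqa string"
```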
There has been quite a lot of discussion on this topic in this newsgroup
very recently.
>So is this code also invalid:
>
>(defun test ()
> (let ((number 0))
> (setq number 1)))
>
>Since I destructively modified (setq) a literal (0) ("if the object is a
>self-evaluating object, appearing as unquoted data.").
This is not the same thing at all. Here, you are changing the binding of the
symbol ``number'' to a different value. You are not changing the integer zero
object itself. There is no way to do that; Lisp has no destructive function
analogous to nconc or rplaca which will modify an integer object in place.
>If so, how do I specify that 0 should not be taken to be a constant (in
Since there are no standard operations by which you can destroy zero,
this isn't a concern at all. It effectively is a constant regardless of
how the underlying storage is managed.
>particular not a fixnum where the results could be undefined if it
>overflowed to a bignum).
Zero can't overflow; it can't do anything, it just exists. An arithmetic
operation can overflow the range of fixnum, an operation such as the addition
of zero and some sufficiently large integer. That operation will then
automatically produce a bignum. Nothing happens to the zero, though.
> "Adam Warner" wrote:
>
>> This is the most obscure aspect of Lisp I have come across to date.
>> Until now I thought (list "abc") and (quote ("abc")) had equal effect
>> in specifying a list.
>
> The difference is subtle.
>
> (list "abc") creates a new list with one item in it every time you call
> it.
>
> '("abc") returns the list that the reader created when it read the
> form. (Or, if the file is compiled, what the fasloader created.)
Thanks for the replies Joe, Frode and Kaz. This subtlety has shaken my faith
in Lisp's strong typing. If constant structures are effectively created by
the compiler then I would expect attempts to destructively modify those
constants would lead to an error condition.
>> The HyperSpec demolishes that idea: "The consequences are undefined if
>> literal objects (including quoted objects) are destructively modified."
>>
>> I destructively modified a quoted object.
>>
>> Being able to perform undefined operations without the interpreters
>> complaining seems to be very low level behaviour.
>>
>> So is this code also invalid:
>>
>> (defun test ()
>> (let ((number 0))
>> (setq number 1)))
>>
>> Since I destructively modified (setq) a literal (0) ("if the object is
>> a self-evaluating object, appearing as unquoted data.").
>
> No, you are not modifying a literal here, you are changing the value of
> a variable.
>
> This is only a problem when you have *aggregate* structure appearing as
> a literal object in your code. A string, quoted list, vector, array,
> etc. Numbers are not aggregate.
>
> (let ((foo "this is a string"))
> (setf (char foo 7) #\q)) ;; illegal
That's the problem. The code isn't illegal, just undefined. No error will
be signalled.
In other words strings, quoted lists, vectors, arrays etc. can all be
immutable or mutable depending upon how they are created. And if you
create them the wrong way, your code will happily perform undefined
operations, with possibly very subtle errors, if you make any destructive
modifications.
Regards,
Adam
> If constant structures are effectively created by
> the compiler then I would expect attempts to destructively modify those
> constants would lead to an error condition.
Why would you expect that?
Paul
Because (a) Lisp is a high level language and (b) I did not give an
explicit instruction to the compiler to optimise the variable assignment
as a constant.
(a)+(b) ==> I am warned when I perform an undefined operation.
Regards,
Adam
> Because (a) Lisp is a high level language and (b) I did not give an
> explicit instruction to the compiler to optimise the variable assignment
> as a constant.
>
> (a)+(b) ==> I am warned when I perform an undefined operation.
But why would you expect that something like '(a b c) would cons
up its list every time you evaluated it, particularly when there's
a simple explicit way to say to do just that?
Paul
I think modifying a quoted list is almost an inevitable error made
when one is first getting used to Lisp. When first encountered it
really seems to be a dreadful trap for the unwary, but once understood
it's fairly easy to avoid. For me the behavior of quoted lists is
evocative of identity; perhaps it's aligned with CL's notions of
equality and equivalence in some subtle way.
I suspect it would impose a performance penalty to carry a "quoted
list" flag with every item in a quoted list (you'd have to record such
a bit with every list item because a given list position doesn't know
if it's the first one or not; it only knows where the next position
is). However, some kind of signal would certainly be nice...
Gregm
Paul my expectations are not on trial here. Am I unique in finding this
subtle? Tell me how many newbies to Lisp expect a variable assignment to
effectively create a constant.
It seems that this distinction has come about to assist compiler
optimisation. How does that create an intuitive expectation?
Python will signal an error when you try and create a slice assignment on
an immutable object. This seems very similar yet the coding
error/undefined operation is ignored by the compiler. I would expect to
quickly learn and understand how to fix the error if the compiler
signalled the error when it was made.
Regards,
Adam
> Paul my expectations are not on trial here.
If you do not wish your expectations to be discussed, then
please do not mention them in this group:
"If constant structures are effectively created by
the compiler then I would expect attempts to
^^^^^^
destructively modify those constants would lead
to an error condition."
> Am I unique in finding this
> subtle? Tell me how many newbies to Lisp expect a variable assignment to
> effectively create a constant.
Why should we care? Avoiding stupid newbie errors isn't
a very good principle around which to design a language.
> It seems that this distinction has come about to assist compiler
> optimisation. How does that create an intuitive expectation?
It comes about to avoid gratuitously bad code in the vast majority
of cases.
> Python will signal an error when you try and create a slice assignment on
> an immutable object.
I am not familiar with Python, but it's not at all clear how you'd
do the same thing on cons cells without either considerable inefficiency
or support from the virtual memory system (read-only pages) or hardware
(ROM).
Paul
First off thank you Greg for a helpful reply that not only validated my
experience but was charitable enough to concede that the signalling of
this type of coding mistake would certainly be nice.
> Adam Warner wrote:
>
>> Paul my expectations are not on trial here.
>
> If you do not wish your expectations to be discussed, then please do not
> mention them in this group:
>
> "If constant structures are effectively created by
> the compiler then I would expect attempts to
> ^^^^^^
> destructively modify those constants would lead to an error
> condition."
Paul it is not possible for me to retroactively change my expectations by
telling me I should not have had them at the time. You could imply that my
expectations were not typical (Greg went so far as to call the mistake
"almost an inevitable error"), yet when given the opportunity you turn out
not to care:
>> Am I unique in finding this subtle? Tell me how many newbies to Lisp
>> expect a variable assignment to effectively create a constant.
>
> Why should we care? Avoiding stupid newbie errors isn't a very good
> principle around which to design a language.
Which neatly avoids the issue that the language doesn't have to be
designed any differently for a compiler to be able to signal an undefined
modification of a variable.
Regards,
Adam
> Which neatly avoids the issue that the language doesn't have to be
> designed any differently for a compiler to be able to signal an undefined
> modification of a variable.
It's not an undefined modification of a variable. The wording
of your sentence shows considerable cluelessness about lisp.
Given that, the lack of warnings from the compiler is the least of
your problems.
Now, a Lisp compiler could, in some cases, deduce that you're
illegally modifying a constant object. That could be useful, but it's
not the same thing as having dynamic detection of constant object
modification. The latter feature would have potentially serious
implications for the implementation (remember, this is Lisp, not
Python; performance is important) and so isn't required by the
standard.
Paul
> Which neatly avoids the issue that the language doesn't have to be
> designed any differently for a compiler to be able to signal an undefined
> modification of a variable.
Why don't you come up with a design for efficiently and reliably
detecting attempts to modify arbitrary constant objects then?
Remember it has to work for any type, in interpreted and compiled
code.
--tim
> > Am I unique in finding this
> > subtle? Tell me how many newbies to Lisp expect a variable assignment to
> > effectively create a constant.
>
> Why should we care? Avoiding stupid newbie errors isn't
> a very good principle around which to design a language.
That's the attitude that got us C++. Personally, I think that the
principle of least surprise (what you call "avoiding stupid newbie
errors") is a fine principle around which to design a language. Adam, you
are right to be unhappy about Lisp's behavior in this regard.
> > Python will signal an error when you try and create a slice assignment on
> > an immutable object.
>
> I am not familiar with Python, but it's not at all clear how you'd
> do the same thing on cons cells without either considerable inefficiency
> or support from the virtual memory system (read-only pages) or hardware
> (ROM).
It's actually not hard at all: just have two kinds of cons cells: mutable
and immutable. CAR and CDR work on both, RPLACA and RPLACD work only on
mutable conses. You'd need a new primitive, ICONS, to generate immutable
conses. The reader would use ICONS instead of CONS.
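[Editorial sketch of Erann's ICONS idea, purely hypothetically: ICONS is not part of Common Lisp, and a real implementation would also need reader and printer support. A read-only structure captures the flavor.]

```lisp
;; Hypothetical immutable cons. The :read-only slots make SETF of the
;; accessors an error, unlike RPLACA/RPLACD on ordinary conses.
(defstruct (icons (:constructor icons (icar icdr)))
  (icar nil :read-only t)
  (icdr nil :read-only t))

(let ((c (icons 'a (icons 'b nil))))
  (list (icons-icar c)                   ; => A
        (icons-icar (icons-icdr c))))    ; => (A B) overall
```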
E.
> Paul my expectations are not on trial here. Am I unique in finding this
> subtle? Tell me how many newbies to Lisp expect a variable assignment to
> effectively create a constant.
Probably none. And that expectation is borne out -- variable
assignment doesn't create a constant.
Are you unique? No; it seems to be a fairly common error, but for no
good reason -- the definition of QUOTE is quite obvious.
> It seems that this distinction has come about to assist compiler
> optimisation. How does that create an intuitive expectation?
It has nothing to do with optimization. It's just doing what you tell
it: (quote (1 2 3)) returns the list that is the cdr of that form,
which is quite obviously created at read time, not later -- how else
could it get into the cdr?
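[Editorial sketch: that the quoted list is literally sub-structure of the (quote ...) form itself can be checked directly.]

```lisp
;; The form (quote (1 2 3)) is itself a list built at read time;
;; evaluating it returns that list's own second element.
(let ((form '(quote (1 2 3))))
  (eq (second form) (eval form)))   ; => T
```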
> Python will signal an error when you try and create a slice assignment on
> an immutable object. This seems very similar yet the coding
No, it's not the same thing at all. You get an error when you try to
do slice assignment on an immutable object in Python because there is
no such functionality -- that's why it's called "immutable": the
language gives you no way to mutate it. Lisp also signals errors if
you try to mutate immutable objects, of course -- the object you're
attempting to apply the mutator to is of the wrong type.
> error/undefined operation is ignored by the compiler. I would expect to
> quickly learn and understand how to fix the error if the compiler
> signalled the error when it was made.
Try the following Python code:
def foo(x=[]):
x.append('')
return x
Call foo() several times, with no argument. Do you get an error?
--
You don't have to agree with me; you can be wrong if you want.
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
> I suspect it would impose a performance penalty to carry a "quoted
> list" flag with every item in a quoted list (you'd have to record such
> a bit with every list item because a given list position doesn't know
> if its the first one or not- it only knows where the next position
> is). However, some kind of signal would certainly be nice...
What's so special about lists? You'd have to carry a "literal" flag
on every object (at least every mutable object). And how would the
flag get set? Lisp just calls READ to read the object; READ can't set
the flag, because it can also be used at load-time or run-time to read
objects that you should be able to mutate...
--
I'm sure the above forms some sort of argument in this debate, but I'm not
sure whether it's for or against.
-- Boris Schaefer
> It's actually not hard at all: just have two kinds of cons cells: mutable
> and immutable. CAR and CDR work on both, RPLACA and RPLACD work only on
> mutable conses. You'd need a new primitive, ICONS, to generate immutable
> conses. The reader would use ICONS instead of CONS.
No, you need two kinds of *everything*.
It's largely this kind of stupidity that stopped me reading cll. I
guess I was right.
--tim
> Try the following Python code:
>
> def foo(x=[]):
> x.append('')
> return x
>
> Call foo() several times, with no argument. Do you get an error?
Nice example:
>>> def foo(x=[]):
... x.append('')
... return x
...
>>> foo()
['']
>>> foo()
['', '']
>>> foo()
['', '', '']
But I would be surprised if the example that corresponds to my code
exhibited the problem:
def test ():
merged=[""]
merged.append("")
return merged
There is no error:
>>> def test ():
... merged=[""]
... merged.append("")
... return merged
...
>>> test()
['', '']
>>> test()
['', '']
>>> test()
['', '']
But you could say that merged=[""] corresponds to the (setq merged
(list "")) syntax so it's not a valid comparison.
Regards,
Adam
> * Erann Gat wrote:
>
> > It's actually not hard at all: just have two kinds of cons cells: mutable
> > and immutable. CAR and CDR work on both, RPLACA and RPLACD work only on
> > mutable conses. You'd need a new primitive, ICONS, to generate immutable
> > conses. The reader would use ICONS instead of CONS.
>
> No, you need two kinds of *everything*.
Yes? So?
> It's largely this kind of stupidity that stopped me reading cll. I
> guess I was right.
The feeling is mutual.
E.
The other day I thought I had found a performance problem with the clisp
interpreter. Only now do I understand what I did wrong:
http://www.geocrawler.com/lists/3/SourceForge/1124/0/8880175/
I ran this code to test the speed of a function. But that didn't raise any
issues by itself:
(time
(loop for i from 1 to 1000 do
(merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789"))))
What happened next is that I found it was too fast so I increased the loop
by a factor of 10. This made the interpreter run an amazing 120 times
slower. Since I was only supplying constant data to a function I
considered it my duty to report the anomaly.
It's a nice example of the same issue. My timing loop should have been
written:
(time
(loop for i from 1 to 1000 do
(merge-list-strings (list "abc" "123") (list "def" "456") (list "ghi" "789"))))
Real time: 0.128145 sec.
Run time: 0.13 sec.
Space: 187568 Bytes
Then the 10000 loop only takes 10 times longer as expected:
Real time: 1.297335 sec.
Run time: 1.3 sec.
Space: 1843568 Bytes
I supplied the function with constant objects. So the function went ahead
and destructively modified constant objects with undefined effect. In this
case it kept increasing the size of the lists each time leading to an
exponential decrease in performance.
Regards,
Adam
> Warning: Only read on if you care about new experiences in Lisp. If
> you cannot tolerate gratuitous descriptions of obvious errors then
> your time is better spent elsewhere. [..]
You should understand that these are new experiences only to those to
whom Lisp is a new experience. I believe the behavior you are
describing is part of the current trade-off between efficiency,
flexibility, usability etc. that is the language design of Common Lisp,
based on quite a few man-years of experience. That's why you'd be well
advised to try to understand these issues rather than complain that
they don't match your expectations.
BTW an implementation is free to implement the detection and
error-reporting of modifying constant objects, but apparently none
have found this to be worthwhile.
--
Frode Vatvedt Fjeld
> The other day I thought I had found a performance problem with the clisp
> interpreter. Only now do I understand what I did wrong:
> http://www.geocrawler.com/lists/3/SourceForge/1124/0/8880175/
> I ran this code to test the speed of a function. But that didn't raise any
> issues by itself:
> (time
> (loop for i from 1 to 1000 do
> (merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789"))))
[...]
> It's a nice example of the same issue. My timing loop should have been
> written:
> (time
> (loop for i from 1 to 1000 do
> (merge-list-strings (list "abc" "123") (list "def" "456") (list "ghi" "789"))))
Really, I think the first form is fine; it's MERGE-LIST-STRING that's
behaving badly -- it shouldn't be modifying its first argument.
--
Oh dear god. In case you weren't aware, "ad hominem" is not latin for
"the user of this technique is a fine debater."
-- Thomas F. Burdick
> On Mon, 10 Jun 2002 18:04:12 +1200, Adam Warner wrote:
>
>> The other day I thought I had found a performance problem with the
>> clisp interpreter. Only now do I understand what I did wrong:
>> http://www.geocrawler.com/lists/3/SourceForge/1124/0/8880175/
>
>> I ran this code to test the speed of a function. But that didn't raise
>> any issues by itself:
>
>> (time
>> (loop for i from 1 to 1000 do
>> (merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789"))))
>
> [...]
>> It's a nice example of the same issue. My timing loop should have been
>> written:
>
>> (time
>> (loop for i from 1 to 1000 do
>> (merge-list-strings (list "abc" "123") (list "def" "456") (list "ghi" "789"))))
>
> Really, I think the first form is fine; it's MERGE-LIST-STRING that's
> behaving badly -- it shouldn't be modifying its first argument.
Are you sure? Note the line (dolist (element (rest args)). I'm only
operating on the _rest_ of the arguments. That was the point of declaring
merged to be (first args). I was able to skip the first list element in
the dolist.
Regards,
Adam
> On Mon, 10 Jun 2002 18:04:12 +1200, Adam Warner wrote:
>
>> The other day I thought I had found a performance problem with the clisp
>> interpreter. Only now do I understand what I did wrong:
>> http://www.geocrawler.com/lists/3/SourceForge/1124/0/8880175/
>
>> I ran this code to test the speed of a function. But that didn't raise any
>> issues by itself:
>
>> (time
>> (loop for i from 1 to 1000 do
>> (merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789"))))
>
> [...]
>> It's a nice example of the same issue. My timing loop should have been
>> written:
>
>> (time
>> (loop for i from 1 to 1000 do
>> (merge-list-strings (list "abc" "123") (list "def" "456") (list "ghi" "789"))))
>
> Really, I think the first form is fine; it's MERGE-LIST-STRING that's
> behaving badly -- it shouldn't be modifying its first argument.
BTW you can see the undefined operation in action by replacing the final
merged with (write merged):
(defun merge-list-strings (&rest args)
  (let ((merged (first args)))
    (loop for index from 0 to (1- (length (first args))) do
      (dolist (element (rest args))
        (setf (nth index merged)
              (concatenate 'string (nth index merged) (nth index element)))))
    (write merged)))
(loop for i from 1 to 10 do
(merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789")))
("abcdefghi" "123456789")
("abcdefghidefghi" "123456789456789")
("abcdefghidefghidefghi" "123456789456789456789")
("abcdefghidefghidefghidefghi" "123456789456789456789456789")
("abcdefghidefghidefghidefghidefghi" "123456789456789456789456789456789")
("abcdefghidefghidefghidefghidefghidefghi" "123456789456789456789456789456789456789")
("abcdefghidefghidefghidefghidefghidefghidefghi" "123456789456789456789456789456789456789456789")
("abcdefghidefghidefghidefghidefghidefghidefghidefghi" "123456789456789456789456789456789456789456789456789")
("abcdefghidefghidefghidefghidefghidefghidefghidefghidefghi" "123456789456789456789456789456789456789456789456789456789")
("abcdefghidefghidefghidefghidefghidefghidefghidefghidefghidefghi" "123456789456789456789456789456789456789456789456789456789456789")
Compare:
(loop for i from 1 to 10 do
(merge-list-strings (list "abc" "123") (list "def" "456") (list "ghi" "789")))
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
("abcdefghi" "123456789")
There's only one difference that makes the code return correct results
this time: I replaced QUOTE notation with LIST notation in the function
call. It's truly subtle.
Regards,
Adam
I disagree. Avoiding things that remain surprising even after
you've understood where the language is coming from is a good
idea. Avoiding surprising naive newbies is not -- that would
mean your language must look like the languages newbies typically
use, so that you conform to their unconscious expectations.
Paul
You're right of course. I'm not arguing for such a flag, having
something like it is appealing from a superficial perspective, but is
likely problematic and expensive to implement. Perhaps a better
approach is to treat (quote) similarly to how the http://www.lisp.org
style guide treats (eval);
"The following operators often abused or misunderstood by
novices. Think twice before using any of these functions.
EVAL. Novices almost always misuse EVAL. When experts use EVAL, they
often would be better off using APPLY, FUNCALL, or
SYMBOL-VALUE. Use of EVAL when defining a macro should set off a
warning bell -- macro definitions are already evaluated during
expansion. See also the answer to question 3-12. The general
rule of thumb about EVAL is: if you think you need to use EVAL,
you're probably wrong."
Gregm
Yes, they are.
| Tell me how many newbies to Lisp expect a variable assignment to
| effectively create a constant.
It is the value that is a constant, not the variable. You let the
variable contain (a pointer to) a constant value. You may modify the
variable as much as you want, but not (components of) the constant value.
| It seems that this distinction has come about to assist compiler
| optimisation.
Only to you.
| How does that create an intuitive expectation?
Good intuition is a result of paying attention and thinking. Ignorant
arrogance very seldom produces good intuition.
| Python will [...]
Sure, and smart people have figured this out. Comparing irrelevant
comparends¹ benefits nobody, however. This is _not_ Python.
| I would expect to quickly learn and understand how to fix the error if
| the compiler signalled the error when it was made.
Common Lisp is generally made for people who (want to) know what they are
doing. It is not an _error_ to do what you have done. On the contrary,
it may be the intended effect and it may be supported in the environment.
-------
¹ this should be a word.
--
In a fight against something, the fight has value, victory has none.
In a fight for something, the fight is a loss, victory merely relief.
70 percent of American adults do not understand the scientific process.
Adam, please note that Erann has "lost his faith in Lisp" and has many
more, many stronger grievances against Common Lisp than you do at this
point. If you want to learn the language, listen to people who like it.
>> Really, I think the first form is fine; it's MERGE-LIST-STRING that's
>> behaving badly -- it shouldn't be modifying its first argument.
> Are you sure? Note the line (dolist (element (rest args)). I'm only
> operating on the _rest_ of the arguments. That was the point of declaring
> merged to be (first args). I was able to skip the first list element in
> the dolist.
That's not where the problem is. The problem is that you're using
SETF of NTH on the result list, but you initialize the result list
from the input, so you're modifying the input. There's nothing
particularly wrong with that, if it's a documented behaviour of the
function -- that's the difference between NREVERSE and REVERSE, DELETE
and REMOVE, etc. -- but if you want to be able to call it on literal
structure or otherwise rely on the arguments not getting smashed, you
have to make the function return new structure.
A simple fix for your code is to just replace
(first args)
in the LET form with
(copy-list (first args))
Also, FWIW,
(loop for index from 0 to (1- (length (first args))) ...)
can be written more perspicuously as
(loop for index from 0 below (length (first args)) do ...)
[But see DOTIMES]
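[Editorial sketch, putting the suggestions together: a corrected version of the poster's merge-list-strings that copies its first argument's spine, so literal inputs are never modified.]

```lisp
(defun merge-list-strings (&rest args)
  ;; COPY-LIST gives a fresh spine; SETF of NTH now mutates only the copy.
  (let ((merged (copy-list (first args))))
    (loop for index from 0 below (length (first args)) do
      (dolist (element (rest args))
        (setf (nth index merged)
              (concatenate 'string (nth index merged) (nth index element)))))
    merged))

(merge-list-strings '("abc" "123") '("def" "456") '("ghi" "789"))
;; => ("abcdefghi" "123456789"), on every call, even with literal arguments
```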
> On Mon, 10 Jun 2002 22:56:48 +1200, Adam Warner wrote:
>
>>> Really, I think the first form is fine; it's MERGE-LIST-STRING that's
>>> behaving badly -- it shouldn't be modifying its first argument.
>
>> Are you sure? Note the line (dolist (element (rest args)). I'm only
>> operating on the _rest_ of the arguments. That was the point of
>> declaring merged to be (first args). I was able to skip the first list
>> element in the dolist.
>
> That's not where the problem is. The problem is that you're using SETF
> of NTH on the result list, but you initialize the result list from the
> input, so you're modifying the input.
...and since the input points to a constant object it should not be
destructively modified. And I also see your point about not smashing the
input arguments.
(snip)
All understood. Thanks for this tip:
> (copy-list (first args))
> Also, FWIW,
>
> (loop for index from 0 to (1- (length (first args))) ...)
>
> can be written more perspicuously as
>
> (loop for index from 0 below (length (first args)) do ...)
...and is a great excuse to use the word `perspicuously.'
> [But see DOTIMES]
OK
Thanks,
Adam
> I supplied the function with constant objects. So the function went ahead
> and destructively modified constant objects with undefined effect. In this
> case it kept increasing the size of the lists each time leading to an
> exponential decrease in performance.
Quadratic, not exponential.
P.
> BTW an implementation is free to implement the detection and
> error-reporting of modifying constant objects, but apparently none
> have found this to be worthwhile.
In the past, Clisp could be compiled with this kind of memory
management. I don't know if modern Clisp is still compilable
in this way.
P.
> On Mon, 10 Jun 2002 12:25:54 +1200, Paul F. Dietz wrote:
>
> > Adam Warner wrote:
> >
> >> Because (a) Lisp is a high level language and (b) I did not give an
> >> explicit instruction to the compiler to optimise the variable
> >> assignment as a constant.
> >>
> >> (a)+(b) ==> I am warned when I perform an undefined operation.
> >
> > But why would you expect that something like '(a b c) would cons up its
> > list every time you evaluated it, particularly when there's a simple
> > explicit way to say to do just that?
>
> Paul my expectations are not on trial here. Am I unique in finding this
> subtle? Tell me how many newbies to Lisp expect a variable assignment to
> effectively create a constant.
it is a subtle issue. i think everyone runs their head against this
wall early in their lisp career. i read the "don't modify constant
structure", figured i knew what i was doing and ran straight into the
same thing. after struggling with it and then asking on the newsgroup
what i was doing wrong, i figured it out.
> It seems that this distinction has come about to assist compiler
> optimisation. How does that create an intuitive expectation?
well, you could have used APPEND instead of NCONC. APPEND doesn't
have this problem since it will cons up a new list. APPEND is the
usual lisp concatenator. NCONC is a hand optimization. it is faster
but also more dangerous.
you should use APPEND in favor of NCONC (at least until you get used
to the NCONC idioms). lisp is good at profiling and tweaking for
speed later in development. it is difficult (for me at least) to
resist the siren song of premature optimization, but you can try.
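to make the point concrete, here is the TEST function from the top of the thread rewritten with APPEND (a minimal sketch of the safe style):

```lisp
;; APPEND copies every argument except the last, so the literal
;; '("") bound to MERGED is never destructively modified and TEST
;; is safe to call any number of times:
(defun test ()
  (let ((merged '("")))
    (setq merged (append merged '("")))
    merged))

;; (test) => ("" "") on every call.
```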
in a better world, maybe you wouldn't have had an explicit NCONC at
all and the compiler would figure out which strategy to use, APPEND or
NCONC backends if you will, from the preceding context. certainly,
many APPENDs could be automatically turned into NCONC by a smart
compiler because the last list is obviously and explicitly fresh. i
wonder if any implementations do so?
lists not obviously fresh could be subject to a declaration which
would then influence optimization. but then, declarations are
somewhat verbose. some things in common-lisp are historical and
therefore a bit inconsistent with newer styles. NCONC is historical
and experienced lispers know how it is used. i expect they like it
this way and would be loathe to change. still, declarations affecting
the performance of APPEND could be added without removing NCONC.
> Python will signal an error when you try and create a slice assignment on
> an immutable object. This seems very similar yet the coding
> error/undefined operation is ignored by the compiler. I would expect to
> quickly learn and understand how to fix the error if the compiler
> signalled the error when it was made.
maybe some lisp implementations will do so if you turn up safety enough.
--
Johan KULLSTAM <kulls...@attbi.com> sysengr
What, precisely, would make you quit whining and get with the program?
Not related to this issue at all, I heard the term "retrophrenology"
today, for the theory that aspects of personality and behavior may be
altered by modifying the shape of the skull.
"clue stick"ology, oh my.
marc
> Adam Warner <use...@consulting.net.nz> writes:
>
> > Warning: Only read on if you care about new experiences in Lisp. If
> > you cannot tolerate gratuitous descriptions of obvious errors then
> > your time is better spent elsewhere. [..]
>
> You should understand that these are new experiences only to those to
> whom Lisp is a new experience. I believe the behavior you are
> describing is part of the current trade-off between efficiency,
> flexibility, usability, etc. that is the language design of Common Lisp,
> based on quite a few man-years of experience. That's why you'd be well
> advised to try to understand these issues rather than complain that
> they don't match your expectations.
This is true. Actually, several people have tried to explain the
problems from an implementation/vendor's point of view, but most
have not succeeded in capturing the philosophy of the
detection-of-the-undefined from the vendor's point of view. Also
missing is the distinction between what the vendor
must-do/is-free-to-do/wants-to-do and what the user
can-assume/cannot-assume, and why these distinctions are
important in a community of multiple CL implementations that
tend to be highly portable.
> BTW an implementation is free to implement the detection and
> error-reporting of modifying constant objects, but apparently none
> have found this to be worthwhile.
It is hard, but not impossible, and certainly worthwhile. See the
description of excl:pure-string
http://www.franz.com/support/documentation/6.1/doc/pages/operators/excl/pure-string.htm
Also, it is always desirable to detect attempts to mutate immutable
objects, even though implementors are not required to do so:
CL-USER(1): (symbol-name 'car)
"CAR"
CL-USER(2): (setf (char * 0) #\D)
Error: Attempt to store into purespace address #xfa36ba04.
[condition type: SIMPLE-ERROR]
Restart actions (select using :continue):
0: Return to Top Level (an "abort" restart).
1: Abort entirely from this process.
[1] CL-USER(3):
--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)
> * Erann Gat
> | Adam, you are right to be unhappy about Lisp's behavior in this regard.
>
> Adam, please note that Erann has "lost his faith in Lisp" and has many
> more, many stronger grievances against Common Lisp than you do at this
> point. If you want to learn the language, listen to people who like it.
And if you want to learn how to improve on the language, listen to your
own intuitions. "Good enough" is the death knell of progress.
E.
P.S. This is getting a bit tiresome, but for the record (again): Erik is
wrong. I have no "grievances" against Common Lisp.
I'm not advocating catering to newbies as an overriding design principle,
just as one factor among many. Avoiding surprises for newbies has some
value. It doesn't have infinite value, but it doesn't, as the original
poster implied, have zero value either.
E.
"Paul F. Dietz" wrote:
>
> Avoiding things that remain surprising even after
> you've understood where the language is coming from is a good
> idea.
I remember how I howled in indignation the first time I deleted a Prolog
rule from the source file, re-ran, discovered the rule still there...
Or howseabout the first time we all tripped over zero and oh not being
interchangeable?
COBOLers just plain learn to look for missing or extra full stops.
The list goes on, and there are good reasons why all those newbie
gotchas should not be fixed.
As for newbie expectations in general, I know I am not the only one to
have noticed early in the learning curve something odd about Lisp:
whenever in the heat of coding I went to the doc to see exactly how some
new (to me) function worked, it always worked /exactly the way I
needed-wanted-hoped/. That always scared me.
:)
--
kenny tilton
clinisys, inc
---------------------------------------------------------------
"Well, I've wrestled with reality for thirty-five years, Doctor,
and I'm happy to state I finally won out over it."
Elwood P. Dowd
> "Paul F. Dietz" wrote:
> >
> > Avoiding things that remain surprising even after
> > you've understood where the language is coming from is a good
> > idea.
>
> I remember how I howled in indignation the first time I deleted a Prolog
> rule from the source file, re-ran, discovered the rule still there...
>
> Or howseabout the first time we all tripped over zero and oh not being
> interchangeable?
>
> COBOLers just plain learn to look for missing or extra full stops.
>
> The list goes on, and there are good reasons why all those newbie
> gotchas should not be fixed.
Really? What are they?
E.
> In article <3D04C1F7...@nyc.rr.com>, Kenny Tilton
> <kti...@nyc.rr.com> wrote:
...
> > I remember how I howled in indignation the first time I deleted a Prolog
> > rule from the source file, re-ran, discovered the rule still there...
> >
> > Or howseabout the first time we all tripped over zero and oh not being
> > interchangeable?
> >
> > COBOLers just plain learn to look for missing or extra full stops.
> >
> > The list goes on, and there are good reasons why all those newbie
> > gotchas should not be fixed.
>
> Really? What are they?
One is that you can't please all of the people all of the time. Every
language feature has tradeoffs, and newbies are the ones that get bit
by the "other" side of the tradeoff most often.
--
--Ed L Cashin | PGP public key:
eca...@uga.edu | http://noserose.net/e/pgp/
If you make something foolproof, it may only be fit for fools. You're only
a newbie for a short time; do you really want to have to use a
dumbed-down language for the rest of your career?
Most of these gotchas are a consequence of other features that are good
ideas. For instance, the rule that you can't modify a literal allows the
compiler to merge duplicate literals into a single instance, which can be a
performance benefit.
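Whether a given implementation actually coalesces two similar literals is easy to probe (the result is implementation-dependent; COMPILE-FILE is permitted, but not required, to merge them):

```lisp
;; Put these two definitions in a file, then COMPILE-FILE and LOAD it:
(defun f () '(a b c))
(defun g () '(a b c))

;; (eq (f) (g)) may then be T -- the two literals have been merged
;; into one object.  That sharing is exactly why destructively
;; modifying the list returned by F could silently change G as well.
```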
--
Barry Margolin, bar...@genuity.net
Genuity, Woburn, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
> It's truly subtle.
imagine you are controlling a behind-the-goal crane camera for a world cup
match. the ball moves around pretty fast so at a crucial moment (a goal) you
happen to be pointing the camera in the wrong direction. luckily, the fixed
cameras in the goal's back corners catch the action so your boss doesn't have
to fire you. where is the subtlety in this system?
thi
> g...@jpl.nasa.gov (Erann Gat) writes:
>
> > In article <3D04C1F7...@nyc.rr.com>, Kenny Tilton
> > <kti...@nyc.rr.com> wrote:
> ...
> > > I remember how I howled in indignation the first time I deleted a Prolog
> > > rule from the source file, re-ran, discovered the rule still there...
> > >
> > > Or howseabout the first time we all tripped over zero and oh not being
> > > interchangeable?
> > >
> > > COBOLers just plain learn to look for missing or extra full stops.
> > >
> > > The list goes on, and there are good reasons why all those newbie
> > > gotchas should not be fixed.
> >
> > Really? What are they?
>
> One is that you can't please all of the people all of the time.
So the reason we should not attempt to improve is that we have no hope of
achieving perfection? Blarg! No wonder the world is such a mess.
> Every
> language feature has tradeoffs, and newbies are the ones that get bit
> by the "other" side of the tradeoff most often.
Obviously. But what I'm asking is: what *are* those tradeoffs,
specifically, for the examples that Kenny cited above?
E.
Erann Gat wrote:
>
> Really? What are they?
Really. You don't know? Which one(s)?
...and this is a basic example of that.
One would like to think that if the compiler is smart enough to fold two
constant instances into the "same" instance, then it might be smart enough
to detect that this "immutable" object is being directly modified.
However there are several caveats.
One, the compiler is not "required", by the Standard, to do this kind of
checking. The Standard defines immutable objects, and the compiler can rely
on immutable objects. But rather than defining the behaviour of immutable
objects, the Standard leaves certain behaviours as undefined.
Having an undefined behavior does not mean that the Standard's authors "don't
know" what will or should happen; it means that they decided it should
be punted back to the implementations. I always read
"implementation dependent" to mean something that we, the lisp programmers,
could use so long as we were aware of the
porting ramifications. I always read "undefined" behavior as something that
typically has bad connotations, and something that should be
avoided. In reality, they're both "implementation" dependent.
Perhaps some inner compiler implementor relies on their own internal
"undefined" behaviour by modifying constant structure for a certain side
effect. Perhaps this reliance has 15K of comments surrounding the hows,
whys, wherefore and circumstances of their reliance on this behavior, but
it's their implementation, and the Standard says "be my guest".
Two, in this specific case, it's a behaviour that Adam will never forget.
Everyone bit by this learns it once and never needs to relearn it. No
different from any other behavior people learn, like the optional arguments
for READ-LINE. Newbies are CONSTANTLY being bitten by READ-LINE, despite the
fact that the standard says exactly what is required. But we don't expect the
compiler to warn about those errors.
You learn that once too.
Three, if there were a more robust Lisp market, then perhaps there would be
motivation to add a warning during an attempt to modify immutable
structures. But, right now I'm not seeing an "Annual Lisp Compiler Shootout"
on the cover any of the magazines that I read. I'm sure that if there were
these shootouts, that this one topic would come up year after year, with
pages and pages of commentary talking about it, rather than, perhaps,
improved standard support, better performance, better GC, or new features
and system facilities. At least, if its effect on this newsgroup is any
indicator, then that would appear to be the main discussion in the reviews.
Finally, the real question, the true question is (and I'm not picking on
Adam): "Why did Adam think that this was even possible?"
The reason is that every book, every example, every post here on USENET,
uses constant lists ALL THE TIME. If you were a new person, learning lisp,
you'd think that to create a list, you used '(), or something similar.
That's what everyone does. Nobody uses (list ...), particularly in "quick"
code.
'(...) is so prevalent, it is easy to think that this is how lists are created.
Nobody coming into the language, learning by EXAMPLE (copying code), would
even question it. You never see the question "Is this how you create lists
in Lisp? '(X Y Z)?". At the beginning level, it would be the same as "How do
I create decimal numbers?".
There's a book on Franz's website, at
http://www.franz.com/resources/educational_resources/cooper.book.pdf, named
"Basic Lisp Techniques".
I quote from it, section 3.1.3:
--quote--
Here is another example of using quote to return a literal expression:
(quote (+ 1 2))
Unlike evaluating a list expression "as is," this will return the literal
list (+ 1 2). ___This
list may now be manipulated as a normal list data structure.___
--end quote--
(Emphasis mine)
Now, we note that it mentions "literal" several times, but then goes on how
it can "be manipulated as a normal list data structure". As we've seen, this
isn't necessarily true.
Is the author, David Cooper, a "complete newbie idiot"? Doesn't Franz know
ANYTHING about the language it's selling? How can they let such BLATANT
misinformation be published publicly??
I'd like to give in to the reasonable assumption that both David and Franz
are less than idiots, but that's my nature.
Mind you, I only picked this book as it was the first one that showed up in
Google. An interesting corollary is "Successful Lisp", which doesn't get into
assembling and tearing lists apart until way deep in the book, and then he
consistently uses (list ...). However, in my casual overview, he doesn't
mention the Gotcha about using constant lists and destructive modifiers. On
the other hand, a user whose sole exposure to lisp was this book would
probably hardly even use a constant list, as very few of the examples use
them.
So, basically, it's not the Standard's fault, IMHO. It's not the
implementation's fault. It's not even Adam's fault. It's whoever Adam looked
to to learn his basics that led him down this path. If nothing else, Adam,
be grateful you've hit it early and did not discover this detail of lisp
after writing several thousand lines of code.
Also be grateful that you've discovered this issue early in a dynamic
environment, and not in a statically loaded environment. In many ways, this
issue is no different than overrunning a local array in C and clobbering
your other variables (perhaps mashing your return stack, or even other
wonders).
However, Adam, this is it. We'll let the buck pass this time, but next time,
no warnings, no flags, no blinking text. Now you know better. If you do it
again, it's your fault.
Best Regards,
Will Hartung
(wi...@msoft.com)
> Most of these gotchas are a consequence of other features that are good
> ideas. For instance, the rule that you can't modify a literal allows the
> compiler to merge duplicate literals into a single instance, which can be a
> performance benefit.
The problem at hand is not the rule that forbids modification of literals,
but the fact that implementations almost universally *do* allow you to
modify literals, and the implementation of this rule is left as a burden
on the programmer. This is no different from the myriad ways in which you
can shoot yourself in the foot in C++ and for which C++ is (rightly IMO)
often criticised by Lispers. If doing (setf (car '(1 2 3)) 4) resulted in
an error (or even a warning) we wouldn't be having this discussion (over
and over and over again).
E.
There *are* compilers that automatically load literals into read-only pages
of virtual memory, so you will get an error.
While runtime checking of many common errors is traditional in Lisp, most
of them are *not* required by the language specification. The ones that
are easy to implement are still done in most cases. For instance, type
checking falls naturally out of type dispatching, which is necessary
because many functions allow different types of arguments.
> Erann Gat wrote:
> >
> > Really? What are they?
>
> Really. You don't know? Which one(s)?
Hm, I've lost my antecedent context. Let's try this again:
* Kenny Tilton:
> I remember how I howled in indignation the first time I deleted a Prolog
> rule from the source file, re-ran, discovered the rule still there...
>
> Or howseabout the first time we all tripped over zero and oh not being
> interchangeable?
>
> COBOLers just plain learn to look for missing or extra full stops.
>
> The list goes on, and there are good reasons why all those newbie
^^^^^^^^^^^^^^^^^^^^^^
> gotchas should not be fixed.
What are those "good reasons"?
What is the good reason that deleting a rule should not cause that rule to
disappear from a Prolog rule base?
What is the good reason that oh and zero should not be interchangeable?
(Truthfully, I've never heard of this being a problem for anyone. It
certainly never was for me.)
What is the good reason that COBOLers have to learn to look for missing or
extra full stops?
E.
Any time you use functions like nconc and (setf (nth ..)) you are
taking a known and often needless risk. You are trading away
predictability for a possible increase in performance. This is like
coercing away a const in C++. You are effectively overriding the
usual protections in the language.
It's not the (list ...) vs (quote ...) that should signal a potential
problem. It's your destructive manipulation of the lists. Whenever
you do something like that, you are no longer being subtle, and
careful consideration of how your input data is being created is
required. You are overriding the usual language protections.
You could have written merge-list-strings non-destructively and avoid
any potential side effects. Such as:
(defun new-merge-list-strings (&rest args)
  (apply #'mapcar (lambda (&rest strs)
                    (apply #'concatenate 'string strs))
         args))
Timing the loop:
(time
 (loop for i from 1 to 10000 do
       (merge-list-strings (list "abc" "123" "ABC")
                           (list "def" "456" "DEF")
                           (list "ghi" "789" "GHI")
                           (list "jkl" "XYZ" "JKL"))))
Compiling first and using clisp I get for merge-list-strings:
Real time: 1.468425 sec.
Run time: 0.51 sec.
Space: 3680072 Bytes
GC: 7, GC time: 0.04 sec.
and for new-merge-list-strings I get:
Real time: 0.7438 sec.
Run time: 0.25 sec.
Space: 3440072 Bytes
GC: 7, GC time: 0.04 sec.
with similar results with CMUCL, so I don't think consing is much
of an issue in your example.
--
Barry
> It's not the (list ...) vs (quote ...) that should signal a potential
> problem. It's your destructive manipulation of the lists. Whenever you
> do something like that, you are no longer being subtle, and careful
> consideration of how your input data is being created is required. You
> are overriding the usual language protections.
Thanks Barry. That's a great summary of the situation. Even though Lisp
allows you or I to perform low level operations that break the language
protections it's quite possible to avoid them (since I now understand what
they are). Or go the other way and introduce something as potentially
dangerous as inline assembly code.
Like Will said I'm grateful that I "hit it early and did not discover this
detail of lisp after writing several thousand lines of code."
Regards,
Adam
While I really do understand and appreciate the sentiment, this is just not
stated correctly. You do not give up *any* predictability by using nconc
and (setf (nth ..)). None at all. The only problem was modifying constant
data. And you do not override any "protections" you are just using more
"nuts and bolts" kinds of tools. Yes you can be very surprised and perhaps
confused by what happens but that is not the same thing as giving up
protections or reliability (like coercing away a const).
However, along these lines, it is good advice to *not* use nconc and (setf
(nth ..)) and rplacd and nreverse etc *unless* you know you really want
those things and you know what you are doing.
But again, the only mistake above is the QUOTE. Some good advice starts with
"are you *sure* you want to (setf nth) that? why?"
--
Coby Beck
(remove #\Space "coby 101 @ bigpond . com")
> whenever in the heat of coding I went to the doc to see exactly how some
> new (to me) function worked, it always worked /exactly the way I
> needed-wanted-hoped/. That always scared me.
I know. I had the same feeling.
Especially when I discovered that I didn't need to
write a special case in a function I was writing because
(reduce #'* '()) => 1
and
(reduce #'+ '()) => 0
I was in heaven! Whoever designed that understood their group theory.
:-)
I assume you know that (*) => 1 and (+) => 0, too, but just in case...
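A quick illustration of why those identities remove the special cases:

```lisp
;; REDUCE of an empty sequence returns the operation's identity
;; element, so empty input needs no special handling:
(reduce #'+ '(1 2 3))  ; => 6
(reduce #'+ '())       ; => 0
(reduce #'* '())       ; => 1

;; The zero-argument calls behave the same way:
(+)                    ; => 0
(*)                    ; => 1
```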
>
> Which neatly avoids the issue that the language doesn't have to be
> designed any differently for a compiler to be able to signal an undefined
> modification of a variable.
How do you feel about the following?
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
snapdragon:~ > cat > stringsmash.c
#include <stdio.h>

int
main(void)
{
    char *str = "This is a constant string.\n";

    /* Smash the constant string. */
    *str = 't';
    printf("The smashed string is %s\n", str);
    return 0;
}
snapdragon:~ > make stringsmash
cc stringsmash.c -o stringsmash
snapdragon:~ > ./stringsmash
Bus error (core dumped)
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
--
Fred Gilham gil...@csl.sri.com
It's not when people notice you're there that they pay attention; it's
when they notice you're *still* there. --- Paul Graham
Define perfection. :-)
reductio ad absurdum
One of the reasons we have so many languages out there is that people
become dissatisfied with what they have and try to make something that
fits their mindset and/or the problem they are trying to solve, with
the problem usually coming in as a secondary requirement and without
much consideration of what was already available.
Was there really a good reason to create Java instead of using one of
the already existing gc'd, platform-independent languages and just adding
an up-to-date GUI library?
More to the point, programmers are fairly inventive in making mistakes,
once you protect against a set of `common' errors, a whole new set
will surface to the light. No doubt with someone crying, "Why don't we
protect newbs from these?" Where do you stop? You cannot protect against
everything. Not quite true: you could end up with SIMPLE:
"SIMPLE is the acronym for Sheer Idiot's Monopurpose Programming Linguistic
Environment. This language, developed at Hanover College for Technological
Misfits, was designed to make it impossible to write code with errors in it.
The statements are, therefore, confined to BEGIN, END and STOP. No matter how
you arrange the statements, you can't make a syntax error."
> > Every
> > language feature has tradeoffs, and newbies are the ones that get bit
> > by the "other" side of the tradeoff most often.
>
> Obivously. But what I'm asking is: what *are* those tradeoffs,
> specifically, for the examples that Kenny cited above?
>
Efficiency, space, time, cost, the usual things that plague everyday life.
Someone has to implement the additional checks. Are you willing to double
or triple what you pay for a compiler that produces slower code just so
some beginners are protected against ???_every_??? common error?
As for most implementations not dumping literals in read-only memory,
Lisp is one of the few languages where literals can be gc'd. It's not
difficult to put together a program that continually creates and releases
literals. Personally I think it sounds a bit silly to keep writing
and deleting from `read-only' memory. Of course, this is not intended
to keep an implementer from doing it, just to point out why one might
choose not to.
The main point to all this is that the language committee/implementer has
to decide where to stop adding additional error checking and there will
always be things to complain about no matter what they decide.
--
Geoff
> What is the good reason that deleting a rule [from a source file, I said] should not cause that rule to
> disappear from a Prolog rule base?
Prolog is like CL: you build things up from a core image interactively
during a session. What if I /moved/ the rule to a diff source file? What
if I duplicated a rule in my saved image for debugging purposes and now
am done with that debugging effort?
C recreates the whole wadge with every edit-compile-link cycle, so it
does not have this problem.
>
> What is the good reason that oh and zero should not be interchangeable?
You want to support keyboard ambiguity? Have the compiler use AI to work
out whether, when I hit oh, I was (a) intending zero or (2) making a
mistake? Sounds like a helluva lot of work to save newbies from learning
(as they must anyway) that compilers are no place for AI.
> (Truthfully, I've never heard of this being a problem for anyone. It
> certainly never was for me.)
A newbie I still know lost all interest in programming after spending a
week trying to figure out why the compiler was giving him an error. None
of the other newbies in the intro to programming lab could figure it out
either. Finally a half-blind comp sci major held a punch card about an
inch from his best eye and identified the oh/zero switch.
A good side-thread might be why compiler errors are so inscrutable.
Should some huge AI effort be made so newbies can learn how to figure
out what they did wrong in spite of the compiler's best efforts to
misdirect them?
>
> What is the good reason that COBOLers have to learn to look for missing or
> extra full stops?
Because the language treats full stops as "end-if", which did not appear
explicitly until COBOL-84 (?). Lotsa bugs come from cutting and pasting
and inadvertently introducing or losing an endif, without creating
illegal syntax. Only via a Pythonian study of the indentation can the
programmer's intent be guessed at. But compilers should not be guessing
at intent.
That way lies madness.
> How do you feel about the following?
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>
> snapdragon:~ > cat > stringsmash.c
> int
> main(void)
> {
> char *str = "This is a constant string.\n";
>
> /* Smash the constant string. */
> *str = 't';
> printf("The smashed string is %s\n", str); return 0;
> }
> snapdragon:~ > make stringsmash
> cc stringsmash.c -o stringsmash
> snapdragon:~ > ./stringsmash
> Bus error (core dumped)
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
I "feel" that many people are looking for ways to mitigate the effect of
these kinds of errors to help increase the overall security of
applications and systems. Stack smashing/buffer overflows are symptoms of
low level language use. No matter how careful people try to be they
eventually slip up and introduce security vulnerabilities or destabilise
the system. While your example appears harmless it might not be if the
smashing string could be modified by the user.
Even a language like C can be handled with a compiler that attempts to
produce safe code. Look at what Microsoft is trying to achieve with Visual
Studio .NET and managed code (where the security threats from some forms
of buffer overruns are minimised).
My comment was a recognition that the ANSI CL standard doesn't prohibit
the safe handling of undefined operations. And I guess your point is that
there's no reason C couldn't be made safer as well with the appropriate
compiler. And I'd agree. Lisp has the additional benefit of interactive
development where errors could be discovered before the code was even
compiled.
Regards,
Adam
> Efficiency, space, time, cost, the usual things that plague everyday
> life. Someone has to implement the additional checks. Are you willing to
> double or triple what you pay for a compiler that produces slower code
> just so some beginners are protected against ???_every_??? common error?
I can see a couple of reasons why people could be willing:
(1) You may be willing for these errors to be picked up by a slower
running interpreter so that by the time you get to the compiled stage the
errors have been removed--allowing the compiled code to run at full speed.
(2) Without any checking for the errors they will not only impact upon
beginners but a number will escape into distributed code and impact upon
third parties.
Regards,
Adam
Because it changes the simple rule that loading a source file is the same
as typing the statements it contains into the interactive listener, which
is a much easier rule to explain than what would be necessary if it deleted
rules automatically. It would also prevent you from having different rules
for the same term in different source files, since loading the second file
would delete the rules that the first file installed.
> a number will escape into distributed code and impact upon
> third parties.
this is a societal bug and cannot be fixed by the language.
thi
This seems like the appropriate way to request such features:
(declare (optimize (debug 3) (safety 3)))
Of course, the clueless newbie will now complain about the performance.
> "Erann Gat" <g...@jpl.nasa.gov> wrote in message
> news:gat-100602...@k-137-79-50-101.jpl.nasa.gov...
> > In article <873cvv3...@cs.uga.edu>, Ed L Cashin <eca...@uga.edu> wrote:
> >
> > So the reason we should not attempt to improve is that we have no hope of
> > achieving perfection? Blarg! No wonder the world is such a mess.
> >
>
> Define perfection. :-)
It was defined by the person to whom I was responding: pleasing all of the
people all of the time.
> The main point to all this is that the language committee/implementer has
> to decide where to stop adding additional error checking and there will
> always be things to complain about no matter what they decide.
Clearly. Nonetheless, that fact alone does not make the complaints
invalid, nor the complainers worthy of contempt.
E.
> Sorry for the weird thread insertion, my NG reader is cranky:
>
> > What is the good reason that deleting a rule [from a source file, I said]
> > should not cause that rule to disappear from a Prolog rule base?
>
> Prolog is like CL, you build things up from a core image interactively
> during a session. What if I /moved/ the rule to a diff source file? What
> if I duplicated a rule in my saved image for debugging purposes and now
> am done with that debugging effort?
>
> C recreates the whole wadge with every edit-compile-link cycle, so it
> does not have this problem.
OK, so now we have two approaches to the problem, neither one of which
seems to be completely satisfactory. We can now do one of two things: we
can say "you just can't please everyone" and leave it at that, or you can
keep thinking about how to improve the situation, and perhaps come up with
the idea of, say, giving the programmer a choice between a "baseline load"
and an "incremental load" or something like that.
> > What is the good reason that oh and zero should not be interchangeable?
>
> You want to support keyboard ambiguity? have the compiler use AI to work
> out whether, when I hit oh, I was (a) intending zero or (2) making a
> mistake? Sounds like a helluva lot of work to save newbies from learning
> (as they must anyway) that compilers are no place for AI.
That would indeed be a helluva lot of work. But maybe there are other
answers? For example, why not merge capital-O and zero into a single
character? Let 1000 and 1OOO both mean (unambiguously) one thousand.
> > (Truthfully, I've never heard of this being a problem for anyone. It
> > certainly never was for me.)
>
> A newbie I still know lost all interest in programming after spending a
> week trying to figure out why the compiler was giving him an error. None
> of the other newbies in the intro to programming lab could figure it out
> either. Finally a half-blind comp sci major held a punch card about an
> inch from his best eye and identified the oh/zero switch.
OK, I stand corrected. So tell me again: what is the good reason to make
this distinction?
> > What is the good reason that COBOLers have to learn to look for missing or
> > extra full stops?
>
> Because the language treats full stops as "end-if", which did not appear
> explicitly until COBOL-84 (?). Lotsa bugs come from cutting and pasting
> and inadvertently introducing or losing an endif, without creating
> illegal syntax. Only via a Pythonian study of the indentation can the
> programmer's intent be guessed at. But compilers should not be guessing
> at intent.
>
> That way lies madness.
Now I'm really confused. That doesn't sound to me like a "good reason" at
all, it sounds simply like a manifestly stupid design decision.
E.
> In article <gat-100602...@k-137-79-50-101.jpl.nasa.gov>,
> Erann Gat <g...@jpl.nasa.gov> wrote:
> >What are those "good reasons"?
> >
> >What is the good reason that deleting a rule should not cause that rule to
> >disappear from a Prolog rule base?
>
> Because it changes the simple rule that loading a source file is the same
> as typing the statements it contains into the interactive listener, which
> is a much easier rule to explain than what would be necessary if it deleted
> rules automatically. It would also prevent you from having different rules
> for the same term in different source files, since loading the second file
> would delete the rules that the first file installed.
This would only be true if you forced yourself to choose a priori between
incremental loading and "cold" loading. But why do you have to choose
between those two options a priori? Why not give the programmer the
option? Imagine:
(load-rulebase foo)
; Warning - rulebase foo does not have a load-mode declaration. Assuming
; incremental mode. If this is not what you want (or you're a newbie and
; you don't understand what this means) refer to <a href=...>the appropriate
; section of the manual</a>
E.
And be darn sure never to use FOO as a symbol just in case someone
twiddles the read-base... (Oh, and don't forget about uppercase I and
lowercase l when you're telling the parser how to do its job.) Or just
use a decent font?
It seems that some people are taking these kinds of gotchas as reasons
to twiddle the languages in question, and others are taking them as
opportunities for useful learning experiences.
paul
there is nothing in the common lisp standard that prevents an
implementation from incorporating all those checks you find useful.
the reason why the standards committee chose to explicitly use the
term "undefined behavior" was to give the implementors a choice to
select the behavior that best matches their objectives and resources.
it seems that most lisp implementors came to the conclusion that
adding these checks without compromising their objectives would tax
them more than the expected benefits would warrant.
for a nice example of what can be done with a lisp style language look
at DrScheme (sorry, erik), an implementation of scheme that was
explicitly designed with teaching in mind. it has several user levels
that you can select on startup (from "total novice" to "advanced
expert"), and the language you are using differs slightly depending on
which level you have selected: they let experts get away with things
that they flag as errors for beginners. this is fine for a teaching
system, but most common lisp implementations seem to be oriented towards
people who are using it for production code, and the trade-off
analysis ... (cf above)
> Even a language like C can be handled with a compiler that attempts to
> produce safe code. Look at what Microsoft is trying to achieve with Visual
> Studio .NET and managed code (where the security threats from some forms
> of buffer overruns are minimised).
that's the implementation. what about the language specs (c#?)? i
can easily imagine that the language specs have been carefully
designed to leave certain behaviors unspecified, in order not to bind
implementations too strictly
hs
--
don't use malice as an explanation when stupidity suffices
So what? You can make the same argument for buffer overflows in C
code. Given the potential (actual, in that case: see bugtraq or
securityfocus or lwn.net or CERT or ...) impact of this happening, I'd
gladly take a technical fix that ameliorated the problem even if it's
not a 100% solution.
"People are stupid" is only really a convincing argument not to change
anything if there's a reasonable way of avoiding having to depend on
code that these stupid people may have written.
-dan
--
http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources
> This seems like the appropriate way to request such features:
>
> (declare (optimize (debug 3) (safety 3)))
>
> Of course, the clueless newbie will now complain about the performance.
The clueless newbie is probably running the code interpreted anyway,
so there's some slack there to negotiate with.
Well, you can't really DECLARE merged to be (first args). You may have
bound the symbol merged to (first args), but establishing that binding
doesn't copy the value of (first args). It just means that there are
now two pointers to the same list structure.
Lisp is quite happy to share list structure, which provides some nice
efficiencies in terms of reducing the amount of storage required. Some
Lisp software systems aggressively exploit this to facilitate better
scalability.
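A minimal sketch of that sharing (hypothetical data, listener-style): binding copies the pointer, not the list, so a mutation through one name is visible through the other.

```lisp
;; MERGED and (FIRST ARGS) name the same cons cells; SETF through
;; one binding is seen through the other because no copy was made.
(let* ((args (list (list "a" "b")))
       (merged (first args)))
  (setf (second merged) "changed")
  (second (first args)))             ; => "changed": structure was shared
```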
>
> Regards,
> Adam
--
Thomas A. Russ, USC/Information Sciences Institute t...@isi.edu
> In article <ae56qc$3tllm$1...@id-105510.news.dfncis.de>,
> Adam Warner <use...@consulting.net.nz> writes:
>> ...
>> I "feel" that many people are looking for ways to mitigate the effect
>> of these kinds of errors to help increase the overall security of
>> applications and systems. Stack smashing/buffer overflows are symptoms
>> of low level language use. No matter how careful people try and be they
eventually slip up and introduce security vulnerabilities or destabilise
>> the system. While your example appears harmless it might not be if the
>> smashing string could be modified by the user.
>
> there is nothing in the common lisp standard that prevents an
> implementation from incorporating all those checks you find useful.
You snipped what I said and then reworded what I wrote! I wrote in that
response: "My comment was a recognition that the ANSI CL standard doesn't
prohibit the safe handling of undefined operations."
<snip>
> for a nice example of what can be done with a lisp style language look
> at DrScheme (sorry, erik), an implementation od scheme that was
> explicitly designed with teaching in mind. it has several user levels
> that you can select on startup (from "total novice" to "advanced
> expert"), and the language you are using differs slightly depending on
> which level you have selected: they let experts get away with things
> that they flag as errors for beginners. this is fine for a teaching
> system,
<snip>
I found it unhelpful as a teaching system and moved to the advanced
settings straight away. The problem with the teaching system is that the
lower settings disable legal language constructs. So if you follow the
language specification some code doesn't work because some functionality
has been disabled.
We are discussing a different situation where checks can be implemented
while still conforming to the ANSI CL specification. A point that you just
made above.
> it seems that most lisp implementors came to the conclusion that adding
> these checks without compromising their objectives would tax them more
> than the expected benefits would warrant.
...or there are just many more important things to implement first, like
achieving full ANSI compatibility before attempting to tackle an issue
like the safe handling of undefined operations.
Regards,
Adam
> >That would indeed be a helluva lot of work. But maybe there are other
> >answers? For example, why not merge capital-O and zero into a single
> >character? Let 1000 and 1OOO both mean (unambiguously) one thousand.
>
> And be darn sure never to use FOO as a symbol just in case someone
> twiddles the read-base...
If you're twiddling read-base then you can't reliably use FOO (or any
other alphanumeric name for that matter) as a symbol even as things stand
today. The reason why is left as an exercise for the reader (pun
intended).
> It seems that some people are taking these kinds of gotchas as reasons
> to twiddle the languages in question, and others are taking them as
> opportunities for useful learning experiences.
I would use the word "improve" rather than "twiddle". And it seems to me
that many people choose neither of the above options.
E.
First of all C is a portable high level assembler of a language; I
like it for what it is. It has some design choices that were made to
help a very simple language become a very popular systems/application
programming language; the key feature is pointers and the direct memory
access that goes with it. Without pointers C is a toy language. The
problem with pointers is that they require the programmer to know a
bunch of rules about memory allocation and management that you must
follow every time or you leak memory and/or dump core. So to be a *good C
programmer* you need to manage your memory well. The problem is that
most people who are coding in C are not good C programmers, so you get
*completely avoidable bugs* that would not have happened if you just
learned & followed the rules.
> securityfocus or lwn.net or CERT or ...) impact of this happening, I'd
> gladly take a technical fix that ameliorated the problem even if it's
> not a 100% solution.
>
> "People are stupid" is only really a convincing argument not to change
> anything if there's a reasonable way of avoiding having to depend on
> code that these stupid people may have written.
>
All people are not stupid, stupid people are stupid. There is a big
difference there. I personally do not want stupid people coding the
software I use. The problem as I see it is there are too many stupid
people in programming and there are no legal penalties for them or their
organization for their repeated stupidity. In civil engineering when
the bridge falls down the engineer goes to jail for negligence, most
of the time, and his/her career is pretty much over anyway. The
company is pretty well screwed also; would you hire them? I also feel
there is a large problem with students cheating their way through
their undergrad degree. I do not think it is so bad in the tier 1
schools (cmu, mit, stanford ...) but where I went cheating was
rampant. The number of unique programs was less than the number of
programs handed in. It was so bad that if a professor really went
after it, it would have ended his/her career there.
marc
> So what? You can make the same argument for buffer overflows in C
> code. Given the potential (actual, in that case: see bugtraq or
> securityfocus or lwn.net or CERT or ...) impact of this happening, I'd
> gladly take a technical fix that ameliorated the problem even if it's
> not a 100% solution.
>
> "People are stupid" is only really a convincing argument not to change
> anything if there's a reasonable way of avoiding having to depend on
> code that these stupid people may have written.
the societal bug i was referring to is the acceptance of software w/o more
rigorous validation than "click here". that bug masks technical bugs (which
is what you are talking about and which certainly can be addressed through
language restrictions, etc).
thi
> All people are not stupid, stupid people are stupid. There is a big
> difference there. I personally do not want stupid people coding the
> software I use. The problem as I see it is there are too many stupid
> people in programing and there is no legal penalties for them or there
> organization for there repeated stupidity.
I realised last night Marc that these "stupid people" are convenient
scapegoats for coding mistakes because they are not "you" nor "I".
If you concede that we are all on a different point in a continuum of
coding competency then even the most talented programmer will eventually
make an elementary mistake. It's just that the probability of the talented
programmer making an elementary mistake is much lower.
But once the elementary mistake happens my point (2) can still apply: The
code may be distributed and adversely impact upon third parties.
Elementary coding mistakes are not only made by stupid people. And open
source development makes this abundantly clear because we all get to see
when an immensely talented programmer makes an elementary mistake (that
may have been picked up earlier if higher level checks were performed).
To continue your falling down bridge analogy: if tools were available that
engineers could employ to minimise the casualties from an unintended
design error, would it be worthwhile for the engineers to shun those tools
because they consider it is other people who are stupid and make design
errors?
While there may be good performance reasons for coding in a more "risky"
or low level environment it is not just "stupid people" that make mistakes
in those environments. You can't dismiss the risks.
I'll rewrite your paragraph by substituting "stupid people" with "people
who make elementary coding mistakes":
---Begin substituted paragraph---
All people do not make elementary coding mistakes, only people who make
elementary coding mistakes are people who make elementary coding mistakes.
There is a big difference there. I personally do not want people who make
elementary coding mistakes coding the software I use. The problem as I
see it is there are too many people making elementary coding mistakes in
programming and there are no legal penalties for them or their
organization for repeatedly making elementary coding mistakes.
---End substituted paragraph---
The paragraph loses some of its force when you are no longer able to
single out "them" (the stupid people) from "us" (everyone who eventually
makes elementary coding mistakes).
Regards,
Adam
Are you saying that there are no stupid people? You got to be
kidding.
>
> If you concede that we are all on a different point in a continuum of
> coding competency then even the most talented programmer will eventually
> make an elementary mistake. It's just that the probability of the talented
> programmer making an elementary mistake is much lower.
>
and when he *tests his code* because he is a competent individual he in
all probability will find the problem.
> But once the elementary mistake happens my point (2) can still apply: The
> code may be distributed and adversely impact upon third parties.
> Elementary coding mistakes are not only made by stupid people. And open
> source development makes this abundantly clear because we all get to see
> when an immensely talented programmer makes an elementary mistake (that
> may have been picked up earlier if higher level checks were performed).
>
So you are advocating some kind of overseer for a programmer, the last
thing I want to see a compiler say is "Well Mr. coder I can not prove
your code works to my satisfaction so go fix it" when I know it
works. Also the restrictions you are proposing would take a lot of
the fun out of programming. One of the things I really like about CL
is that the programmer is expected to master his tools to the degree
that he can be *trusted* to use them properly. And this opens up
whole new areas of fun and profit.
> To continue your falling down bridge analogy: if tools were available that
> engineers could employ to minimise the casualties from an unintended
> design error, would it be worthwhile for the engineers to shun those tools
> because they consider it is other people who are stupid and make design
> errors?
It would depend on the tool. One tool to do that would be a law
stating that no bridge can be over 200 yards long, because we know how
to build those. This is not a good tool to help out is it?
>
> While there may be good performance reasons for coding in a more "risky"
> or low level environment it is not just "stupid people" that make mistakes
> in those environments. You can't dismiss the risks.
If there are good reasons for allowing it then why do you have a
problem with giving people the tools they need to do it? The above
comment does some serious damage to the argument you are trying to
build that we should not allow this useful stuff to be used, because
we might get it wrong. And your implication that we would not be able
to find it and fix it. And the implication is generally wrong.
>
> I'll rewrite your paragraph by substituting "stupid people" with "people
> who make elementary coding mistakes":
>
why are you rewriting what I said, especially since in rewriting it
you basically fucked it up and turned it into nonsense.
> ---Begin substituted paragraph---
>
> All people do not make elementary coding mistakes, only people who make
> elementary coding mistakes are people who make elementary coding mistakes.
> There is a big difference there. I personally do not want people who make
> elementary coding mistakes coding the software I use. The problem as I
> see it is there are too many people making elementary coding mistakes in
> programming and there are no legal penalties for them or their
> organization for repeatedly making elementary coding mistakes.
>
> ---End substituted paragraph---
>
> The paragraph loses some of its force when you are no longer able to
> single out "them" (the stupid people) from "us" (everyone who eventually
> makes elementary coding mistakes).
I believe you are a bright person who has trained himself to be a
fucking idiot. Was all the effort worth it?
marc
>
> Regards,
> Adam
> I believe you are a bright person who has trained himself to be a
> fucking idiot. Was all the effort worth it?
No because my intent was never for you to spit the dummy and embarrass
yourself.
Goodbye Marc Spitzer.
> I assume you know that (*) => 1 and (+) => 0, too, but just in case...
Yes. Indeed, I was trying to explain how cool it was
that + with no arguments yielded zero, but my interlocutor,
coming from a C-like language, couldn't understand how you
could use + without arguments.
:-)
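The zero-argument cases are the operators' identity elements, which is what makes them compose cleanly with APPLY and REDUCE; a small sketch:

```lisp
;; (+) and (*) return the identity elements 0 and 1, so reducing
;; an empty list still yields a sensible answer: with no
;; initial-value, REDUCE calls the function with zero arguments.
(+)                  ; => 0
(*)                  ; => 1
(reduce #'+ '())     ; => 0
(apply #'* '())      ; => 1
```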
But for interpreted code, this problem is even more intractable.
> This would only be true if you forced yourself to choose a priori between
> incremental loading and "cold" loading. But why do you have to choose
> between those two options a priori? Why not give the programmer the
> option? Imagine: [...]
prolog is quite old, and leaves very little to your imagination. :)
there are many reputable prologs you can look at and see what
options they offer for the neophyte and the expert.
fyi, here is quintus prolog's load_files. quintus prolog (like some
other prologs) supports a module system based on predicate modularity
(ie. predicates are identified by their modules as well as name and
arity) so perhaps issues regarding the loading of predicates and so
on are properly addressed by a module system. [further discussion
should go to comp.lang.prolog, where the experts hang out.]
Synopsis:
load_files(+Files) or load_files(+Files, +Options)
Options
if(X)
X=true
(default) always load
X=changed
load file if it is not already loaded or if
it has been changed since it was last loaded
when(X)
X=run_time
(default) The file does not define any
predicates that will be called during
compilation of other files.
X=compile_time
the file only defines predicates that will be
called during compilation of other files; it
does not define any predicates that will be
called when the application is running.
X=both
the file defines some predicates that will
be needed during compilation and some that
will be needed during execution.
load_type(X)
X=compile
compile Prolog source code
X=qof
load QOF code
X=latest
(default) load QOF or compile source, whichever
is newer. The latest option is effective only
if Files are specified without extensions.
must_be_module(X)
X=true
the files are required to be module-files
X=false
(default) the files need not be module-files
imports(X)
X=all
(default) if the file is a module-file, all
exported predicates are imported
X=List
list of predicates to be imported. Note that
if the option imports is present, the option
must_be_module(true) is enforced.
all_dynamic(X)
X=true
load all predicates as dynamic
X=false
(default) load predicates as static unless
they are declared dynamic. Note that the
all_dynamic option has no effect when a QOF
file is loaded. Thus it is not normally useful
to use all_dynamic(true) in conjunction with
load_type(latest), since the file will be
loaded in dynamic mode only if the source
file is more recent than the QOF file.
silent(X)
X=true
loading information is printed as silent
messages (see section G-20 for details).
X=false
(default) loading information is printed as
informational message.
oz
--
bang go the blobs. -- ponder stibbons
> I would think that when people started using versions of C compilers
> that enforced string constancy, they were surprised when suddenly their
> code started giving segmentation faults or bus errors.
Point taken. Thanks Fred.
Regards,
Adam
But hard to avoid since interpreted code can be created at
run-time, reducing the impact of a compile-only solution.
Does anyone write Lisp code, compile it first, then test for
correctness? For me testing almost always comes before compilation,
unless the routine requires the additional speed to work. Newbies
are even more likely to follow the path of least resistance, "I
couldn't get it to work that way, so I cut out these steps and..."
A combination of compiled and interpreted code is a likely end
product; the addition of checks is likely to permeate the system
in both sections to the detriment of its speed and size. How about
two interpreters as well? One for the programmer, chock full of error
protection, one for delivery, lean and mean. No problem, I'm sure
the implementors will absorb the costs of development to present
safer systems for newbies and not pass increases on to the
customers. :-)
Which brings us back to the question of whether the time is better
spent teaching newbies, "Don't do that!" instead.
--
Geoff
Just to note, my example is just modifying a string. It neither
smashes the stack nor overflows the buffer. It's exactly analogous to
modifying a quoted list in lisp.
People used to write code like this. GCC even has a special
backwards-compatibility flag, -fwritable-strings, to allow this kind
of old code to work.
snapdragon:~ > gcc -o stringsmash -fwritable-strings stringsmash.c
snapdragon:~ > ./stringsmash
The smashed string is this is a constant string.
I would think that when people started using versions of C compilers
that enforced string constancy, they were surprised when suddenly
their code started giving segmentation faults or bus errors.
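Back in Lisp terms, the cure for the analogous bug in the thread's TEST function is to allocate fresh structure instead of mutating a literal; a sketch:

```lisp
;; Fresh cons cells via LIST (or COPY-LIST) make NCONC safe: every
;; call builds new structure, so no literal is mutated and no
;; circular list appears on the second run.
(defun test ()
  (let ((merged (list "")))
    (nconc merged (list ""))))   ; => ("" "") on every call
```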
--
Fred Gilham gil...@csl.sri.com
I'm skeptical about attempts to proclaim hell to people that don't
already have the taste of it in their mouths. Hell as the ultimate
loss of relationship, of health, of even sanity, makes sense to
someone who already sees the beginnings of that process taking hold in
his own life. Hell as an imaginary place that is like being sent to
your room for a long, long time, with the heat turned up really high,
doesn't make sense.
Compiled code can be created at runtime too:
(defun foo ()
69)
(compile 'foo)
(compiled-function-p #'foo) => T
> Does anyone write Lisp code, compile it first, then test for
> correctness?
Fairly frequently, yes. I use ilisp's C-c C-c binding for "Compile
defun", and C-c C-l (load file) which asks if you want to compile
first.
> A combination of compiled and interpreted code is a likely end
> product,
Personally (since you're asking), my end products are generally either
saved worlds, or completely compiled files with only the loader
running interpreted.
Just my $0.02,
joelh
> Compiled code can be created at runtime too:
Technically, compiled code can _only_ be created at runtime. :)
--
-> -/ - Rahul Jain - \- <-
-> -\ http://linux.rice.edu/~rahul -=- mailto:rj...@techie.com /- <-
-> -X "Structure is nothing if it is all you got. Skeletons spook X- <-
-> -/ people if [they] try to walk around on their own. I really \- <-
-> -\ wonder why XML does not." -- Erik Naggum, comp.lang.lisp /- <-
|--|--------|--------------|----|-------------|------|---------|-----|-|
(c)1996-2002, All rights reserved. Disclaimer available upon request.
Yes, but then it would be processed by the compiler,
becoming the rule not the exception.
>
> > Does anyone write Lisp code, compile it first, then test for
> > correctness?
>
> Fairly frequently, yes. I use ilisp's C-c C-c binding for "Compile
> defun", and C-c C-l (load file) which asks if you want to compile
> first.
There you go, I figured there'd be at least one...
> > A combination of compiled and interpreted code is a likely end
> > product,
>
> Personally (since you're asking), my end products are generally either
> saved worlds, or completely compiled files with only the loader
> running interpreted.
>
Depends a lot on whether you get a lot, or any, user-generated code,
I suppose. I was thinking of Emacs and various math and cad programs
when I wrote this.
--
Geoff
> "Joel Ray Holveck" <jo...@juniper.net> wrote in message
> news:y7ck7p4...@sindri.juniper.net...
> > > But hard to avoid since interpreted code can be created at
> > > run-time, reducing the impact of a compile-only solution.
> >
> > Compiled code can be created at runtime too:
>
> Yes, but then it would be processed by the compiler,
> becoming the rule not the exception.
The compiler runs at runtime, so it's the rule, NOT the exception.
> > Personally (since you're asking), my end products are generally either
> > saved worlds, or completely compiled files with only the loader
> > running interpreted.
>
> Depends a lot on whether you get a lot, or any, user-generated code,
> I suppose. I was thinking of Emacs and various math and cad programs
> when I wrote this.
I think most of them implement custom lisp interpreters anyway (and
don't compile to machine code because of that, if they even compile to
bytecode). Dunno about Cadence Design's extensibility at runtime, but
considering that it uses ACL, a license to distribute it with the
interpreter is probably prohibitive (but maybe large customers would
pay for this).
> But hard to avoid since interpreted code can be created at
> run-time, reducing the impact of a compile-only solution.
> Does anyone write Lisp code, compile it first, then test for
> correctness? For me testing almost always comes before compilation,
> unless the routine requires the additional speed to work.
In the CMUCL manual, compiled code is recommended even for debugging.
Also the additional checks during compilation are very useful for
finding bugs.
Nicolas.
Arrgh. In this case, error checking was defined to come in during
compilation and there is an unstated but reasonable assumption
that the system is compiled for testing. Generated then compiled
code [(debug 3) (safety 3)] has the error checking neatly tucked
inside it, any interpreted code generated is the *exception* of
interest because it does not. Satisfied?? :-P
--
Geoff
My ILISP has M-C-x bound to compile-defun-lisp, and ever since I
changed it, debugging's been easier.
--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'
> Sure, and smart people have figured this out. Comparing irrelevant
> comparendsš benefits nobody, however. This is _not_ Python.
...
> š this should be a word.
I'd expect "comparand", by analogy with "operand" and "multiplicand"
and so on. (Exercise for the reader: work out why I think "multiplicand"
a better model than "addend".) I think I'd say that it *is* a word,
even though it isn't in any dictionary I possess. It can be formed
using a fairly standard process from an existing word, so it's a word.
I get 973 hits for "comparand" at Google.
--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc
> Not related to this issue at all, I heard the term "retrophrenology"
> today, for the theory that aspects of personality and behavior may be
> altered by modifying the shape of the skull.
Coined by Terry Pratchett, I believe. A wonderful idea. It
probably works somewhat better than ordinary phrenology does,
too.
| Not related to this issue at all, I heard the term "retrophrenology"
| today, for the theory that aspects of personality and behavior may be
| altered by modifying the shape of the skull.
The Mayans treated themselves, or more accurately, their kids, this way.
There are two head shape designs I know of. I have a photo of a drawing of
a child-on-the-press if anybody is interested. My impression was that the
goal was beauty or handsomeness. Does anyone know the achieved behavioral
alteration?
Robert
> Does anyone know the achieved behavioral alteration?
to state the obvious, the dis-inhibition of propagating the practice.
thi
Behavioral alteration wasn't the goal. This was practiced widely in
the Western North American continent, from the land of the Maya all
the way north to the Salish (in Washington State, USA). Its purpose
was chiefly beauty. Europeans concurred that it gave the adults in
tribes that practiced this a more 'regal and imposing' air.
The other common body modification, particularly along the Northwest
Coast tribes, was the 'labret'. A slit was made underneath the lip,
allowing the lip to be pulled forward away from the chin. Behind the
lip and in front of the lower teeth a piece of finely shaped wood was
placed, stretching the lip outwards in front of the chin. Weirdly, a
similar practice (though not quite as dramatic) has been resurrected
amongst some modern youth, in that they pierce the raphe beneath the
lip and wear jewelry there. I'm sure you've seen this at Starbucks or
other coffee shops.
Coming from a tribe that used to practice labret piercing and plugging
(the Tlingit people, if you're wondering), I can't say I particularly
think it's beautiful. I think they should go for chin tattoos instead.
As to behavioral modification through cranial osseorestructuring, I
can attest that this was indeed practiced by Native Americans. The
implement used was called in English a 'war club'. The idea was that
you struck the patient's upper cranial region with this 'war club' in
hopes that they would change their mind about marrying your daughter,
taking your wife as a slave, or stealing fish from your camp.
'james
--
James A. Crippen <ja...@unlambda.com> ,-./-. Anchorage, Alaska,
Lambda Unlimited: Recursion 'R' Us | |/ | USA, 61.20939N, -149.767W
Y = \f.(\x.f(xx)) (\x.f(xx)) | |\ | Earth, Sol System,
Y(F) = F(Y(F)) \_,-_/ Milky Way.
>Erik Naggum wrote:
>
>> Not related to this issue at all, I heard the term "retrophrenology"
>> today, for the theory that aspects of personality and behavior may be
>> altered by modifying the shape of the skull.
>
>Coined by Terry Pratchett, I believe. A wonderful idea. It
>probably works somewhat better than ordinary phrenology does,
>too.
In the limit case, it provides very consistent results...
-- Attaining and helping others attain "Aha!" experiences, as satisfying as
attaining and helping others attain orgasms.