
Destructive Side Effects


David McClain

Nov 17, 2001, 1:13:01 PM

FWIW,

I just noticed that REMF actually destructively modifies a property list and
returns a boolean. But REMOVE erodes sequences without destructively
modifying the source, returning the modified sequence. DELETE, on the other
hand, destructively modifies a sequence and then returns it as well. There
does not appear to be a DELF function in ANSI CL.
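
For concreteness, a quick sketch of the three behaviors (illustrative
values only):

(let ((xs (list 1 2 3 2)))
  (remove 2 xs))     ; => (1 3); XS is still (1 2 3 2)

(let ((xs (list 1 2 3 2)))
  (delete 2 xs))     ; => (1 3); XS may be wrecked, so use the result

(let ((plist (list :a 1 :b 2)))
  (remf plist :a)    ; => T; PLIST is modified in place
  plist)             ; => (:B 2)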

I was a bit shocked at first to find this behavior... I guess I spend too
much time writing SML.

- D.McClain, Tucson, AZ


Erik Naggum

Nov 17, 2001, 1:37:50 PM

* David McClain

| I just noticed that REMF actually destructively modifies a property list
| and returns a boolean. But REMOVE erodes sequences without destructively
| modifying the source, returning the modified sequence. DELETE, on the
| other hand, destructively modifies a sequence and then returns it as well.
| There does not appear to be a DELF function in ANSI CL.

Is it just the name that you are dissatisfied with? What about get and
set, getf and setf? Is that confusing or what? Or are you allergic to
destructive operations? It would probably help to think of the property
list not as a list that you are supposed to use as a list, any more than
you are supposed to look at a package as an object whose internals you
can mess with. That it is implemented as a list is just an historical
accident. So is the name.

| I was a bit shocked at first to find this behavior... I guess I spend too
| much time writing SML.

Yeah, SML should come with a warning label.

///
--
Norway is now run by a priest from the fundamentalist Christian People's
Party, the fifth largest party representing one eighth of the electorate.
--
Carrying a Swiss Army pocket knife in Oslo, Norway, is a criminal offense.

David McClain

Nov 17, 2001, 1:50:45 PM

Actually, I think that programming without side effects has some rather nice
qualities. At first the thought of using immutable data objects, as one
finds in the FP languages, is somewhat dumbfounding... how can anything
useful ever happen if data cannot be modified? But then after working with
it for a while you find that quite a lot can be done, and it can be done
safely.

I, personally, wouldn't like to release dynamically typed code to the field,
unless it were peppered with exception handling clauses and well guarded
against user abuse. On the other hand, I am completely comfortable releasing
FPL code, with the knowledge that it won't suffer unforeseen failures in the
field -- barring failures in the underlying OS....

But I do find Lisp to be the most useful language for rapid prototyping. It
is the ultimate modeling clay language. I guess the Lisp language was
developed over many more years than the more modern FPL's and so there are a
lot of historical quirks about the language.

Actually, I was not taken aback by the naming of functions, but rather that
it forced me to use destructive operations in a routine that simply needed
some elements elided from a list before passing along to legacy code. Lisp
does force one to view the argument lists of functions as lists, and with
keyword parameters, portions of these lists can be viewed as property lists.
I find that convenient.

But sometimes keyword arguments must be removed from argument lists before
handing these args off to older routines that would find the additional
keywords offensive. For this purpose, a destructive operation seemed overly
harsh, but I do concede that it probably produces faster running code.
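
Incidentally, a copying alternative takes only a few lines. REMOVE-KEYWORD
here is a hypothetical helper of my own, not part of ANSI CL:

(defun remove-keyword (indicator plist)
  ;; Return a fresh plist with every INDICATOR/value pair elided.
  (loop for (key value) on plist by #'cddr
        unless (eq key indicator)
          append (list key value)))

;; (remove-keyword :debug '(:x 1 :debug t :y 2)) => (:X 1 :Y 2)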

Destructive operations need to rise higher in one's mind, as functions that
must be used with caution, as they may have unexpected consequences
elsewhere in the overall code of a large program.

- DM

"Erik Naggum" <er...@naggum.net> wrote in message
news:32150110...@naggum.net...

Kent M Pitman

Nov 17, 2001, 2:13:44 PM

"David McClain" <bar...@qwest.net> writes:

> I just noticed that REMF actually destructively modifies a property list and
> returns a boolean. But REMOVE erodes sequences without destructively
> modifying the source, returning the modified sequence. DELETE, on the other
> hand, destructively modifies a sequence and then returns it as well. There
> does not appear to be a DELF function in ANSI CL.

REMF is derived from REMPROP, not from REMOVE.

The name REMPROP has substantial history in CL and it would have been a
gratuitously irritating change for CL to have renamed it.



> I was a bit shocked at first to find this behavior... I guess I spend too
> much time writing SML.

As I recently pointed out in the Slashdot interview, people who want
elegance and consistency over tradition and practical commercial cost
probably want Scheme, not Common Lisp.

If it helps, consider this passage from X3J13/86-020 "X3J13 draft purposes",
a full copy of which you can read at
http://world.std.com/~pitman/CL/x3j13-86-020.html

2. The committee will begin with the language described in _Common
Lisp: The Language_ by Guy L. Steele Jr. (Digital Press, 1984),
which is the current de facto standard for Common Lisp. Whenever
there is a proposal for the standard to differ from _Common Lisp:
The Language_,
==> the committee shall weigh both future costs of
==> adopting (or not adopting) a change and costs of conversion of
==> existing code. Aesthetic criteria shall be a subordinate
==> consideration.

Jochen Schmidt

Nov 17, 2001, 3:31:27 PM

David McClain wrote:

> Actually, I think that programming without side effects has some rather
> nice qualities. At first the thought of using immutable data objects, as
> one finds in the FP languages is somewhat dumbfounding... how can anything
> useful ever happen if data cannot be modified? But then after working with
> it for a while you find that quite a lot can be done, and it can be done
> safely.
>
> I, personally, wouldn't like to release dynamically typed code to the
> field, unless it were peppered with exception handling clauses and well
> guarded against user abuse. On the other hand, I am completely comfortable
> releasing FPL code, with the knowledge that it won't suffer unforeseen
> failures in the field -- barring failures in the underlying OS....

Dynamic typing has nothing to do with having or not having side effects...
Another thing is that dynamic typing allows you to catch _more_ errors by
using finer-grained types that get checked at runtime.
I do not really believe in the idea that a program is ready for the public
just because a statically typed compiler found no errors at compilation.
To write robust software you can _never_ skip testing each piece of
functionality at _runtime_ with an extensive test suite. Since cutting off
testing is never an option for robust software, it is IMHO a better idea
to have good runtime (dynamic) testing/debugging support.
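
For example (a made-up illustration), a runtime check can enforce a range
type that a conventional static checker does not express:

(defun percentage (x)
  (check-type x (real 0 100))   ; a fine-grained range type, checked at runtime
  x)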

> But I do find Lisp to be the most useful language for rapid prototyping.
> It is the ultimate modeling clay language. I guess the Lisp language was
> developed over many more years than the more modern FPL's and so there are
> a lot of historical quirks about the language.
>
> Actually, I was not taken aback by the naming of functions, but rather
> that it forced me to use destructive operations in a routine that simply
> needed some elements elided from a list before passing along to legacy
> code. Lisp does force one to view the argument lists of functions as
> lists, and with keyword parameters, portions of these lists can be viewed
> as property lists. I find that convenient.

Hm... it is not forcing you to use destructive operations...

> But sometimes keyword arguments must be removed from argument lists before
> handing these args off to older routines that would find the additional
> keywords offensive. For this purpose, a destructive operation seemed
> overly harsh, but I do concede that it probably produces faster running
> code.

Nobody forces you to use destructive functions - if you don't like them,
don't use them! Please note too that most of CL's destructive functions have
to be used in a functional way (you have to use the return value). The only
important thing to know when using a destructive function is that no other
object relies on its state from before the modification.

> Destructive operations need to rise higher in one's mind, as functions
> that must be used with caution, as they may have unexpected consequences
> elsewhere in the overall code of a large program.

Again: Nobody forces you to use destructive functions - if you don't like
them or if you feel unable to cope with the consequences then don't use
them!

ciao,
Jochen

--
http://www.dataheaven.de

David McClain

Nov 17, 2001, 2:48:44 PM

No, you are quite correct in everything you stated... I made some very
sweeping generalizations.

Dynamic typing per se does not incur unsafe programs. But what is typical of
most dynamically typed languages, e.g., Lisp, Scheme, NML, RSI/IDL, Matlab?,
is that they allow the creation of conditional branches that go unchecked at
compile time.

They don't even require that the code at compile time be runnable without
errors, e.g., references to undefined functions. You only find out about
misspellings in some of these languages if and when you try to execute the
erroneous branch of code. This has been a source of constant irritation for
me, especially with the RSI/IDL language used for scientific analysis.

Dynamically typed languages also permit potentially unsafe operations on
data whose type couldn't possibly be known at compile time. Unsafe here
means only that a program failure will ensue. For example,

(defun 1+ (x) (+ 1 x))

The implication here is that x should be something numeric to which this
operation can safely be applied, but there is no enforcement of that
condition. If this code were buried deep within a large program, and then
one day a string came along in place of a number, then you would have a
failure in field-released code. You could add type checking to make it
safer.

But when I use FPL to do the same thing,

let one_plus x = 1 + x

I get a signature back that states that this function applies only to type
integer, and all future uses of this function are checked at compile time
for compliance.

But look, no need to get any hackles raised over these issues. Why do
discussions of programming languages have to degenerate into religious wars?
I use all the languages I can get my hands on, in part out of curiosity, and
in part because understanding many languages gives me a lot of different
viewpoints on any given problem. The ways we think are ultimately
constrained by our languages -- both spoken and for programming.

No need to feel attacked. I merely noticed that the naming of REMF was
disjoint from the naming of REMOVE and DELETE, even though the action of
REMF is more akin to DELETE. But Kent has pointed out the reasons for this.
I expected his answers to be what they were.

- DM


"Jochen Schmidt" <j...@dataheaven.de> wrote in message
news:9t6doa$436$1...@rznews2.rrze.uni-erlangen.de...

David McClain

Nov 17, 2001, 2:51:02 PM

> don't use them! Please note too that most of CL's destructive functions have
> to be used in a functional way (you have to use the return value). The only

Actually, this is one of the surprising things about REMF that caught my
attention. You are correct that most often destructive operations in Lisp
operate in a functional manner... except for REMF (and no doubt a few
others...)

- DM

"Jochen Schmidt" <j...@dataheaven.de> wrote in message
news:9t6doa$436$1...@rznews2.rrze.uni-erlangen.de...

Kent M Pitman

Nov 17, 2001, 3:21:53 PM

"David McClain" <bar...@qwest.net> writes:

> They [Lisp & such] don't even require that the code at compile time
> be runnable without errors, e.g., references to undefined functions.

Yes. Not to detract from your point, and not even to suggest you
should change your personal point of view, but you should at least
know that some of us regard this not as a bug but as a MAJOR MAJOR
MAJOR feature of Lisp. I use this ALL THE TIME when debugging. There
is nothing I hate more than the "waterfall" model of programming where
you do a complete implementation before you can test anything.

I OFTEN compile partial programs just so I can run them until the
first undefined function, defining stubs along the way that I will
redefine later, just so I can get the excitement of trying the parts I
have done. In fact, I've complained (to no avail) to the Java
maintainers that they don't let you do this very important thing.
(I'm sure they see the "importance" differently.)

> You only find out about
> misspellings in some of these languages

Um, it's not a "policy" of Lisp that you don't find out about misspellings.
It's merely left to the implementation, which means it's up to the
implementation to weigh the obvious value of providing warnings (NOTE:
NOT ERRORS) if this happens against the cost of doing the extra implementation
work to do this checking. Most significant commercial vendors do non-fatal
diagnostics that tell you about misspellings. Some lower-cost commercial
or freeware implementations might not want this expense right off, and might
prefer to defer it until they have more time or money. Since it does not
affect correct execution, it's not required for conformance. Instead, it's
left to the market to sort out whether an implementation is useful absent
such things, and as far as I know, the answer is: people always wish they
got more free goodies, but this is not a show-stopper for most users. That
suggests to me that we made a correct decision in not REQUIRING the
behavior at language level. HOWEVER, you should not confuse this as meaning
we at the language design level did not value these things. We simply thought
implementors had enough more important concerns on their hands and we didn't
want to burden them with extras.

> if and when you try to execute the
> erroneous branch of code. This has been a source of constant irritation for
> me, especially with the RSI/IDL language used for scientific analysis.

For this, I'd get a different implementation. The language is not the
source of your impediment here. For some of your other issues, the
problem is more complex.

> Dynamically typed languages also permit potentially unsafe operations on
> data whose type couldn't possibly be known at compile time. Unsafe here
> means only that a program failure will ensue. For example,
>
> (defun 1+ (x) (+ 1 x))
>
> The implication here is that x should be something numeric to which this
> operation can safely be applied, but there is no enforcement of that
> condition.

Yes, you are correct that the language does not require this. That is
for efficiency reasons, and is again left to the implementation to
sort out. HOWEVER, just about all implementations that I know of
will, at high safety [see the SAFETY quality under the OPTIMIZE
declaration] do this checking. So in practice this is not a problem
in any implementation I know of unless you specifically notch down
SAFETY, at which point, you're basically telling the compiler "I
already debugged this, go ahead and optimize it as if there is correct
data flow".

> If this code were buried deep within a large program, and then
> one day a string came along in place of a number, then you would have a
> failure in field-released code.

Nothing forces you to ratchet down SAFETY even in production code.
There are cases where I don't.

You can always make a macro that declares low safety lexically for a specific
call to + or whatever that you know you have checked. e.g.,

(defmacro +& (x y)
  `(locally (declare (optimize (debug 0) (safety 0) (space 0)
                               (compilation-speed 0) (speed 3)))
     (the fixnum (+ (the fixnum ,x) (the fixnum ,y)))))

or whatever you like for some case that you are sure is a fast integer
add without overflow, and you can continue to leave safety globally set
to 3 around it for ordinary uses of + that you do not have a special feeling
of confidence about.
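
The global default itself is one line, e.g.:

(declaim (optimize (safety 3)))   ; full runtime checking everywhere by default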

> You could add type checking to make it safer.
>
> But when I use FPL to do the same thing,
>
> let one_plus x = 1 + x
>
> I get a signature back that states that this function applies only to type
> integer, and all future uses of this function are checked at compile time
> for compliance.

Well, that transitively relies on + having the property you want, which
it doesn't in CL. In CL, you have the identical safety issue for + as 1+,
so in a sense, doing

(declaim (inline add1))
(defun add1 (x) (+ x 1))

gives you the identical power that the let gives you in FPL (whatever that
is). The difference is in the base case of the induction, not in the
abstraction capability, at least in this case.

> But look, no need to get any hackles raised over these issues. Why do
> discussions of programming languages have to degenerate into religious wars?

Because, to be honest, people begin with different axioms than one another,
and often find it hard to reach a common point of discussion.

Lisp serves a community that wants a specific set of results that are different
than what you're asking for. Leaning heavily on compile-time checking may
indeed catch some problems, but making those problems "errors" and not just
"friendly warnings" means compiling-in assumptions, and that in turn becomes
a bigger problem later if you need to redefine things, for example.

I like to characterize it as political, rather than religious, disagreement
because something about that makes it feel ever so slightly more malleable.
Religion is often so final. But in the end, I think it's natural and even
probably essential that people cluster to more than one political party and
more than one religion because there is no way you'll get a global community
to agree on a common starting point for design. Best you can do is some
statistical clustering and hope people can gravitate, like dust into planets
in a proto-solar system, into communities of various workable sizes that
largely orbit independently of one another, etc.

Gabe Garza

Nov 17, 2001, 3:30:56 PM

"David McClain" <bar...@qwest.net> writes:

> Dynamic typing per se does not incur unsafe programs. But what is typical of
> most dynamically typed languages, e.g., Lisp, Scheme, NML, RSI/IDL, Matlab?,
> is that they allow the creation of conditional branches that go unchecked at
> compile time.

I wouldn't be so quick to lump Lisp into that group. It's possible
for a Lisp compiler to do extensive typechecking and code analysis
that will pick up some errors: CMUCL in particular is very clever
about this kind of thing. The catch, of course, is that you have to
actually supply the compiler with the necessary information via
declarations. I view this as a *huge* advantage of Lisp, because it
gives you the best of both worlds. You can have the flexibility and
ease of prototyping offered by no declarations, or you can use them
and--with the right compiler--get static type-checking and very
efficient code.
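
For instance (a made-up example; the exact diagnostics vary by
implementation), declarations let a compiler like CMUCL flag a bad call
at compile time:

(defun add-floats (x y)
  (declare (single-float x y))
  (+ x y))

(defun bad-caller ()
  (add-floats 1.0 "two"))   ; draws a compile-time type warning in CMUCL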

> They don't even require that the code at compile time be runnable without
> errors, e.g., references to undefined functions. You only find out about
> misspellings in some of these languages if and when you try to execute the
> erroneous branch of code. This has been a source of constant irritation for
> me, especially with the RSI/IDL language used for scientific analysis.

Once again, this just isn't true with Lisp. Most Lisp compilers will
warn about calls to undefined functions. Given this definition, we see
the following warnings in a few Lisps:

(defun foo (x) (plus x 1))

Allegro 6.0:

CL-USER(3): (compile 'foo)
Warning: While compiling these undefined functions were referenced:
PLUS.

CMUCL 18c:

* (compile 'foo)
In: LAMBDA (X)
(PLUS X 1)
Warning: Undefined function: PLUS

LispWorks:

CL-USER 2 > (compile 'foo)

The following function is undefined:
PLUS which is referenced by FOO

> Dynamically typed languages also permit potentially unsafe operations on
> data whose type couldn't possibly be known at compile time. Unsafe here
> means only that a program failure will ensue. For example,
>
> (defun 1+ (x) (+ 1 x))
>
> The implication here is that x should be something numeric to which this
> operation can safely be applied, but there is no enforcement of that
> condition. If this code were buried deep within a large program, and then
> one day a string came along in place of a number, then you would have a
> failure in field-released code. You could add type checking to make it
> safer.

So either add type checking--it would be trivial to cook up some Lisp
macros that did type checking if you didn't have a compiler that did it
for you, e.g., CMUCL--or do proper exception handling.
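
A minimal sketch of such a macro (hypothetical, not from any library;
required arguments only):

(defmacro defun-checked (name args types &body body)
  ;; DEFUN that inserts a CHECK-TYPE for each argument.
  `(defun ,name ,args
     ,@(mapcar (lambda (arg type) `(check-type ,arg ,type)) args types)
     ,@body))

;; (defun-checked my-1+ (x) (number) (+ 1 x))
;; (my-1+ "oops") now signals a correctable TYPE-ERROR instead of failing
;; mysteriously deep in the arithmetic.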

> But look, no need to get any hackles raised over these issues. Why do
> discussions of programming languages have to degenerate into religious wars?

My hackles are quite flat, but it's not very fair to make (IMHO)
incorrect generalizations about a language (in a newsgroup devoted to
the language, no less) and not expect a response. This discussion is
not even close to a religious war.

Gabe Garza

Kent M Pitman

Nov 17, 2001, 3:31:41 PM

"David McClain" <bar...@qwest.net> writes:

> > don't use them! Please note too that most of CL's destructive functions have
> > to be used in a functional way (you have to use the return value). The only
>
> Actually, this is one of the surprising things about REMF that caught my
> attention. You are correct that most often destructive operations in Lisp
> operate in a functional manner... except for REMF (and no doubt a few
> others...)

Well, from a philosophical point of view, it's generally important
that this BE destructive. The whole point of properties is to
implement a kind of blackboard approach to programming, and the idea
of having a bunch of people working on a shared blackboard with only
copying-assignment available is pretty odd. By reverse induction (if
there is such a thing), all plists would be always empty.

I'm not saying there's never a reason to want to spawn a separate plist,
but I don't think it's the norm.
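
When you do want one, copying first is cheap. A sketch, where SHARED-PLIST
stands in for whatever list you were handed:

(let ((fresh (copy-list shared-plist)))   ; copies the top-level conses
  (remf fresh :scratch)                   ; splice out of the copy only
  fresh)                                  ; SHARED-PLIST itself is untouched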

Similarly, one doesn't have destruction-free hash tables either.

It is true that REMOVE is a possible operation on alists, but in practice
it's very rare for me to use such a thing... You never know how long an
alist will be and it's a good way to get O(n^2) behavior if you go randomly
copying all its elements to do something. We just saw someone here the other
day who was implementing (cdr (assoc 'foo x)) as
(cadar (remove-if-not #'(lambda (key) (eq key 'foo)) x :key #'car))
or some such thing. Saying that people should program in a side-effect-free
way creates situations where people do this kind of thing.

I know there are languages where this kind of thing is done routinely.
MOO does all side-effects to lists (which it implements more like vectors)
by copying. But then, while MOO is very pleasant to program in for small
problems, it is not known for its ability to enable programmers with the
full computational speed of the underlying machine. Lists are only second
class data in MOO, and really have no identity.

And perhaps there are FP languages that secretly do better by doing the dirty
side-effects invisibly so you don't have to see them. Ultimately, assembly
code ALWAYS works by side-effect, after all. Everything above the basic memory
model of the machine that suggests otherwise is an illusion.

Kaz Kylheku

Nov 17, 2001, 4:13:51 PM

In article <mMxJ7.518$ul2....@news.uswest.net>, David McClain wrote:
>FWIW,
>
>I just noticed that REMF actually destructively modifies a property list and
>returns a boolean.

What else is it to do? The property list is global. It could non-destructively
modify the list to make a new one, and then assign the new one in place of
the old: same thing. The structure of the list is not yours, so you don't
care what happens with the cons cells.

When you add or remove a property, that change is published throughout
the whole program; that is implicitly a destructive change. If you
don't like the semantics of property lists, don't use them.
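
A sketch of that global visibility, using symbol plists:

(setf (get 'foo 'color) 'red)   ; every part of the program sees this
(get 'foo 'color)               ; => RED
(remprop 'foo 'color)           ; and now it is gone for every observer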

Kaz Kylheku

Nov 17, 2001, 4:25:54 PM

In article <KjyJ7.723$ul2.1...@news.uswest.net>, David McClain wrote:
>Actually, I think that programming without side effects has some rather nice
>qualities.

So do a lot of people. If you want to program without side effects,
you can do that in Lisp. But you aren't forced to use that discipline;
Lisp is a multi-disciplinary language. That means you can practice your
religious faith in functional programming, but you can't push it onto
other Lisp programmers as the One True Discipline.

>some elements elided from a list before passing along to legacy code. Lisp
>does force one to view the argument lists of functions as lists, and with

Actually, no, it does not. The arguments of functions are normally bound to
lexical variables for you, and are not viewed as lists. The exceptions
are &rest parameters, of course, and the apply function, which converts
a list into parameters.

>keyword parameters, portions of these lists can be viewed as property lists.
>I find that convenient.

They are property lists in the sense that they are pairs of indicators and
values. But they are not the same thing as Lisp property lists, which
are a special language feature.

>But sometimes keyword arguments must be removed from argument lists before
>handing these args off to older routines that would find the additional
>keywords offensive.

There are other ways to do this; for example, if a function does not
specify &allow-other-keys to robustly accept keys that it doesn't
understand, you can force it not to complain about the extra keywords
by specifying the key :allow-other-keys t. Look it up in the HyperSpec.

So you see you don't have to necessarily filter lists for the sake of
bouncing them down to functions that don't like some of the keywords.
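
A sketch, with FUSSY as a made-up example function:

(defun fussy (&key x) x)
(fussy :x 1 :debug t)                       ; error: invalid keyword :DEBUG
(fussy :x 1 :debug t :allow-other-keys t)   ; => 1; the extra key is ignored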

>For this purpose, a destructive operation seemed overly
>harsh, but I do concede that it probably produces faster running code.

There is no destructive operation that is specifically designed for, or
required in conjunction with, keyword parameters.

The only way you can even get at these parameters as a list is if you
specify both &rest and &key. The list you capture with &rest is an
ordinary list, which you can treat destructively or not.
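
For example (BOTH-VIEWS is hypothetical):

(defun both-views (&rest args &key a b)
  (list args a b))

(both-views :a 1 :b 2)   ; => ((:A 1 :B 2) 1 2)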

David McClain

Nov 17, 2001, 4:38:54 PM

> There are other ways to do this; for example, if a function does not
> specify &allow-other-keys to robustly accept keys that it doesn't
> understand, you can force it not to complain about the extra keywords
> by specifying the key :allow-other-keys t. Look it up in the HyperSpec.
>

Hey thanks! I didn't realize that. This kind of response is one of the
reasons I posted in the first place.

- DM


David McClain

Nov 17, 2001, 4:49:38 PM

> Well, that transitively relies on + having the property you want, which
> it doesn't in CL. In CL, you have the identical safety issue for + as 1+,
> so in a sense, doing
>
> (declaim (inline add1))
> (defun add1 (x) (+ x 1))
>
> gives you the identical power that the let gives you in FPL (whatever that
> is). The difference is in the base case of the induction, not in the
> abstraction capability, at least in this case.

Kent, I'm not following you here... How does allowing Lisp to inline the
add1 function give me the same power as FPL with tight typing? I think you
are indicating something very important here, and I would like to understand
it.

- DM


David McClain

Nov 17, 2001, 4:52:24 PM

...and yep! It works like a champ! Thanks a bunch for that insight!

- DM

"David McClain" <bar...@qwest.net> wrote in message
news:nNAJ7.524$dc3.3...@news.uswest.net...

Kaz Kylheku

Nov 17, 2001, 4:53:20 PM

In article <9azJ7.898$ul2.1...@news.uswest.net>, David McClain wrote:
>No, you are quite correct in everything you stated... I made some very
>sweeping generalizations.
>
>Dynamic typing per se does not incur unsafe programs. But what is typical of
>most dynamically typed languages, e.g., Lisp, Scheme, NML, RSI/IDL, Matlab?,
>is that they allow the creation of conditional branches that go unchecked at
>compile time.

But, on the other hand, what is typical of most statically typed
languages?

Segmentation fault
(core dumped)

Really, there is no ideal. The idea of checking a program for misuse of
values before running it is basically sound. However, it leads to restrictions
which programmers circumvent; and once the circumvention takes place, there
is no more safety net. The results of a type mismatch are unpredictable.

When no circumvention takes place, static checking does provide a lot of
increased confidence that the program is correct (but not absolute confidence).

Don't discard the idea of statically checking a dynamically typed
program. Just because the language hasn't pushed complexity onto the
programmer in order to make static type checking *easy* doesn't mean
that such checking is *impossible*. There is a lot that can be done.

>Dynamically typed languages also permit potentially unsafe operations on
>data whose type couldn't possibly be known at compile time. Unsafe here
>means only that a program failure will ensue. For example,
>
>(defun 1+ (x) (+ 1 x))

Statically typed languages which don't provide escape hatches for
similar abuse have all died; in terms of popularity and widespread use,
static languages are represented by C and C++, which have escape
hatches. These languages are popular partly because of their
permissiveness, which allows the programmer to defeat the type system
when it gets in the way; and in real world programming, programmers do
find that it does get in the way.

A dynamic language can provide a proper error signal when such an
error happens, and that signal can be handled.

E.g. try this in your Common Lisp system:

(handler-case
    (+ 3 "4")
  (error (x) (format t "An error has been caught: ~a~%" x)))

A dynamic language isn't just a static language with checking removed;
that would be weak typing (representative example: BCPL). There is
type information associated with the values being manipulated; that is
latent typing.

>But look, no need to get any hackles raised over these issues. Why do
>discussions of programming languages have to degenerate into religious wars?

Because there is often a good mixture of fanatics and clueless people
in these debates who don't properly understand the languages they
are discussing. Few people enter into these discussions with clean
motives, like trying to increase their understanding, or promote proper
understanding in others.

Also keep in mind that just because people find faults with your reasoning
or your comprehension of the facts doesn't mean that the debate has
become a religious war.

Erik Naggum

Nov 17, 2001, 5:02:16 PM

* David McClain

| Actually, I think that programming without side effects has some rather
| nice qualities.

You are never programming without side effects, you just like the ones
you use. This applies to all programming languages and all issues. Just
because you have a different set of side effects that you like than other
people, does really not give you the right to proclaim that you do not
have any side effects. This annoying expression "without side effects"
is just for show and marketing: The only program that has no side effects
is the program that is never even written. Geez, this is that stupid
tail-call "elimination" thread all over again.

| I, personally, wouldn't like to release dynamically typed code to the
| field, unless it were peppered with exception handling clauses and well
| guarded against user abuse.

What compelled you to share your personal choices with us? Why post
anything about your personal choices at all? Do you think anyone cares?
Do you welcome random noise-makers who walk into your SML newsgroup and
proclaim that I would not like to release SML code in the field? What
kind of worthless assholes would you consider people who did that?

Now, please be smart enough to respect that people make different sets of
choices and that some consider this nonsense about "without side effects"
to be an offensive marketing lie. If you are not smart enough to respect
or indeed figure this out, get the hell out of here before you go postal on
us with a stupid line of self-defense, like so many others who think it
is perfectly OK to post "why I think Lisp is insufficient" articles.

| Actually, I was not taken aback by the naming of functions, but rather that
| it forced me to use destructive operations in a routine that simply needed
| some elements elided from a list before passing along to legacy code.

More understanding and less fixation on style would be more beneficial
than telling people _that_ you were "taken aback". Explain why, explain
how you got to accept the gospel of fewer visible side effects. What you
_feel_ is completely irrelevant to everybody else in a technical forum.

| Lisp does force one to view the argument lists of functions as lists, and
| with keyword parameters, portions of these lists can be viewed as
| property lists. I find that convenient.

I find it massively misguided. It has also proved unproductive for you
since you ended up with a counter-productive conclusion and only offered
you an opportunity to air your arrogance about side effects. Quit that.

| But sometimes keyword arguments must be removed from argument lists
| before handing these args off to older routines that would find the
| additional keywords offensive.

Well, if you knew Common Lisp, you would know that you can always add the
keyword argument :allow-other-keys and its value t to the argument list
of a function that accepts keywords. People who are unable to deal with
the language they actually program in, pining for whatever programming
language they learned first or whatever their problem is, always ignore
the fact that people much smarter than them _have_ solved their problems
for them already. I find this tremendously annoying. It is not the
ignorance that I find so annoying, but the stupid arrogance of people who
think they have reached the plateau in life where no more information or
knowledge is necessary before they pass judgment on the world in general,
when they know things so well that they do not need to ask for guidance or
help or more information before passing their judgment, getting all huffy
and puffy with their stupid concerns that are only _their_mistakes_.

If you had been smart enough to ask us what you should do if you wanted
to pass an argument list obtained with &rest to another function that did
not accept as many keywords, you would have received an intelligent
answer specific to your problem, and you would have avoided all the silly
crap you guessed were problems, but which are not.

| Destructive operations need to rise higher in one's mind, as functions
| that must be used with caution, as they may have unexpected consequences
| elsewhere in the overall code of a large program.

It also helps to learn how to write higher quality software without being
so snotty about your _inability_ to write code in the language you use.

One of the most annoying side effects of languages that promote some
usually silly design idea is that people who use them think they are so
superior because of this _one_ feature. Next thing you know, some people
will think they are superior because their skin is paler than many other
people. That would never happen, would it? People can be stupid about
the superiority of their programming language, but skin color? No way.

David McClain

Nov 17, 2001, 5:04:49 PM

> Don't discard the idea of statically checking a dynamically typed
> program. Just because the language hasn't pushed complexity onto the
> programmer in order to make static type checking *easy* doesn't mean
> that such checking is *impossible*. There is a lot that can be done.

How can you say this about pushing complexity onto the programmer when I
compare this

(defmacro +& (x y)
  `(locally (declare (optimize (debug 0) (safety 0) (space 0)
                               (compilation-speed 0) (speed 3)))
     (the fixnum (+ (the fixnum ,x) (the fixnum ,y)))))

against this

let plus_amper x y = x + y

???

As for my pushing (?) ideas about FPL here, I guess everyone missed the
earlier line where I said...

> But I do find Lisp to be the most useful language for rapid prototyping. It
> is the ultimate modeling clay language.

BTW, I am one of those few you mention without ulterior motives. I like
computer languages; I have made them a major part of my professional life. I
have no particular axe to grind...

Cheers,

- DM


"Kaz Kylheku" <k...@ashi.footprints.net> wrote in message
news:k%AJ7.46612$Ud.22...@news1.rdc1.bc.home.com...

David McClain

Nov 17, 2001, 5:12:01 PM

I beg your pardon, Sir!

Actually, I didn't realize that I needed to ask the question that ultimately
got answered here by Kaz, and by yourself below. Like so many programming
failures, this was a failure of anticipation. But I am
ultimately glad to have received his answer.

For your part, you give similar information, but you don't really know me
well enough to address me in this manner.

- DM

"Erik Naggum" <er...@naggum.net> wrote in message

news:32150233...@naggum.net...

Kent M Pitman

Nov 17, 2001, 5:10:35 PM

"David McClain" <bar...@qwest.net> writes:

Well, _if_ + was going to give you static typechecking, then
an inline function that uses + would also give it.

Normally, a function call is an intentionally-opaque boundary.
You can make it transparent by using INLINE.

You seem to be suggesting in the case of let in FPL that it will
propagate type outward. This propagation aspect may be useful to
some (perhaps, but not necessarily, many/most) people or in some
(perhaps again, but not necessarily, many/most) situations, especially
those who don't want to redefine their functions. The whole C/C++ community
depends on this, for example. So does the whole FP community, apparently.
(An odd alliance.)

However, I personally regard this as an abstraction barrier, not an
abstraction feature, because it encourages a kind of "prejudice" about
what you expect a function to do based on what you last saw it doing,
effectively keeping the function from changing/growing. In effect, while
I use macros a lot, I see them as kind of an abstraction sham. They appear
to hide implementation, but the compiler does not compile the abstraction,
it compiles the implementation. So at runtime, they have all the
redefinability properties (i.e., "very little") of C/C++ unless you manually
structure them to do better than their nature would have them do by default.
If you redefine one, you must redefine its callers.

I regard it as a MAJOR step in the evolution of languages that the
function call boundary was created. Its whole purpose in life, IMO,
was to hide what was inside. Once you don't know what's inside, then,
it seems odd to make assumptions about what the function WON'T do (which
is what a type error is) when you aren't willing to make assumptions about
what it WILL do. They seem part and parcel of the same thing to me.
Either both appropriate (as in macros, where you can see by expanding both
what the macro will and won't do) or both inappropriate (as in the opacity
with which I view functions).

Inline functions seek to be a kind of hybrid. An opaque function barrier
can be turned transparent on demand if the compiler was given notice that
the function should be inlineable prior to compiling the function. After
that, notinline lexically turns off the transparency and inline lexically
turns it on.
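
In code, that dance looks like this (ADD1 is a stand-in example):

(declaim (inline add1))
(defun add1 (x) (+ x 1))

(defun hot-path (x)
  (add1 x))                      ; the compiler may open-code ADD1 here

(defun cool-path (x)
  (declare (notinline add1))     ; lexically restore the opaque call
  (add1 x))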

But the whole idea of making a language based on type propagation is, to
my personal eyes, the same as making a language based on macros.
Implementations are exposed for optimization at the price that the
redefinition will be painful, involving a lot of bookkeeping at minimum,
and often not being possible at all because many find the bookkeeping needed
to unroll a type propagation assumption in a heavily optimized system
unbearable. Better to just say, as C/C++ do, "all bets are off" if you
need to redefine something after you've compiled based on static assumptions.

Sometimes we make these kinds of optimizations in Lisp at the
implementation-level when block compiling for application delivery, but
it's important to understand that this is different than doing it at
the language-level semantics level. I suppose we could introduce sealing
into the language if we wanted to, and be like Dylan, but having seen the
Dylan experiment played out, I'm not so sure this was as much of a total
win as it seemed like at the time. It seems to me it left Dylan a lot more
fragile than people had expected it to. Its name notwithstanding, I think
Dylan is a lot more static (even if its static nature is somewhat adjustable)
than Lisp. Whether you view that as positive or negative, of course, is
personal preference.

I've tried to present this in a way that emphasizes what I like/value in how
Lisp behaves without suggesting the other points of view are not legitimate.
Surely people can value other things. I'm just not sure such value systems
are compatible with the higher level goals of Lisp. They cut to the core of
what it means to call Lisp a dynamic language and we've pushed this a lot to
the very brink in Lisp, but I'm not sure how far we can go beyond where we are
without losing some of the properties we seek to retain. If anything, some
of the type declarations one can now do threaten Lisp's dynamicity, but we
tolerate them out of a conceptual inconsistency born of pragmatism...

David McClain

Nov 17, 2001, 5:19:57 PM

> However, I personally regard this as an abstraction barrier, not an
> abstraction feature, because it encourages a kind of "prejudice" about
> what you expect a function to do based on what you last saw it doing,
> effectively keeping the function from changing/growing. In effect, while

Yes, you are correct about this kind of last-look inferencing in FPL. And
yes, I find that it does inhibit generalization of code.

In some ways, using FPL like SML and OCaml is a bit like going backward in
history to a former time where, when one needs to accommodate growth in user
defined data types, one must find and replace all the code that accesses
these kinds of data. I find that quite irritating.

This is one of the beauties of CLOS. I can factor behaviors into a hierarchy
of classes, and even use mixin classes -- a notion which seems unique to
CLOS among OO languages.

But thank you for your elucidation on my original question. You have some
very interesting viewpoints about programming languages, and I will ponder
these myself.

- DM

"Kent M Pitman" <pit...@world.std.com> wrote in message
news:sfwg07d...@shell01.TheWorld.com...

Kent M Pitman

Nov 17, 2001, 5:23:32 PM

Erik Naggum <er...@naggum.net> writes:

> | But sometimes keyword arguments must be removed from argument lists
> | before handing these args off to older routines that would find the
> | additional keywords offensive.
>
> Well, if you knew Common Lisp, you would know that you can always add the
> keyword argument :allow-other-keys and its value t to the argument list
> of a function that accepts keywords.

Heh. Not "always". I'd give you "sometimes" or probably even "usually".

In fairness to David, we should admit this only works when the
function you're passing through to is known not to use the keywords you want
to pass as extras. If it DOES use such keywords, but for different reasons
than the containing function was using them, you really do have to remove
them.

In some such situations, I *think* you can shadow (add more keywords
in front) the offensive keyargs instead of removing them. (I'd have
to double-check. Duplicated keyargs are definitely permitted and the
leftmost one is used, but I can't remember if the ones that are
shadowed have to be compliant with the declared argtypes even though
they will not be used.)
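
A sketch of the leftmost-one-wins rule (SUB is a made-up function):

(defun sub (&key (start 0))
  start)

(apply #'sub :start 10 '(:start 99))   ; => 10; the prepended :START shadows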

The LispM had a macro called something like si:with-rem-keywords or some
such that did the removal operation (on the stack, etc.)

This whole problem, btw, is partly an artifact of our use of :keywords
(mashing keywords into the same argspace) instead of regular packaged symbols.
Probably overall I still believe that was a good trade, but it does have its
expressional costs and keyword collision between competing "subcontracting"
functions is among them.

Erik Naggum

Nov 17, 2001, 6:10:01 PM

* "David McClain" <bar...@qwest.net>

| Actually, this is one of the surprising things about REMF that caught my
| attention. You are correct that most often destructive operations in
| Lisp operate in a functional manner... except for REMF (and no doubt a
| few others...)

Why this _completely_ irrelevant hangup? If properties were implemented
with a hashtable, would you complain that remhash does not return a new
hashtable instead of modifying the one you passed, too? Oh, no, now you
will complain about remhash. What have I _done_? Now Common Lisp will
crumble because property lists _and_ hashtables are mutable objects.

Have you noticed that when you intern a symbol in a package, you do not
get a new package? Or that when you read a character off a stream, you
do not get a new stream? Or that when you write a character to a stream,
which modifies a file on disk, you do not get a new disk?

That you think REMF is "destructive" is only because you are looking at
things at too low a level for your own good and abuse the implementation
detail that it is a list, which is again because you have not quite
figured out how to get what you want from the language. Even in your
"side-effect-freeness" you actually modify memory cells in your computer,
you know, but you would most probably argue that this is irrelevant to
the side-effect-freeness nature of your programming language, right?
Well, side-effect-free is actually _nothing_ but perspective.

You only have a problem understanding how (Common) Lisp works. Once you
get that understanding, you will either be the type who gripes about some
feature you do not like for the rest of your life, or you will be the
type who can make a more informed choice about which language you want to
spend your time on, and then leave the rest alone. I really wonder what
it is with Common Lisp that makes people first discover the language and
then start to make up so many stupid problems with it just because they
have some serious personality problems. I mean, _none_ of the standard
conditionals are usable by some dude, and now remf is "destructive" and
that by itself ticks some other troll off. Sheesh.

Thomas F. Burdick

Nov 17, 2001, 8:07:32 PM

"David McClain" <bar...@qwest.net> writes:

> > Don't discard the idea of statically checking a dynamically typed
> > program. Just because the language hasn't pushed complexity onto the
> > programmer in order to make static type checking *easy* doesn't mean
> > that such checking is *impossible*. There is a lot that can be done.
>
> How can you say this about pushing complexity onto the programmer when I
> compare this
>
> (defmacro +& (x y)
>   `(locally (declare (optimize (debug 0) (safety 0) (space 0)
>                                (compilation-speed 0) (speed 3)))
>      (the fixnum (+ (the fixnum ,x) (the fixnum ,y)))))
>
> against this
>
> let plus_amper x y = x + y
>
> ???

The language doesn't push this complexity onto the programmer. In the
vast majority of my code, I don't use type knowledge to optimize
things at compile time. The language does allow you to deal with
complexity, if you want to. And of course, if you're dealing with
optimization, you're in implementation-specific areas. A more
realistic way to get the guaranteed fixnum addition would be:

(defmacro with-inner-loop (&body forms)
  `(locally (declare (optimize (speed 3) (debug 0) (safety 0) (space 0)
                               (compilation-speed 0)))
     ,@forms))

(defun foo (x y)
  (declare (fixnum x y)
           (values fixnum))
  ;; ... lots of stuff here ...
  (with-inner-loop
    (+ x y)))

Ta-da! I get the ADD instruction. But this is totally optional; I
could have just written (defun foo (x y) ... (+ x y)), with only a
polynomial increase in complexity of the code generated [ :-) ].

A good implementation (*cough* the Python compiler *cough*) allows
pretty sparse type declarations. And, uh, by the way, your FPL
example (is that really the language name?!) isn't the same as Kent's
macro. Unless it's possible for the FPL equivalent of (> x (1+ x)) to
return the equivalent of NIL (i.e., unless FPL doesn't have bignums),
there's a difference between fixnums and integers. If I replace all
the "fixnum"s with "integer" in the above, I get the following
warnings at compile-time:

In: LAMBDA (X Y)
(+ X Y)
Note: Forced to do GENERIC-+ (cost 10).
Unable to do inline fixnum arithmetic (cost 2) because:
The first argument is a INTEGER, not a FIXNUM.
The second argument is a INTEGER, not a FIXNUM.
The result is a INTEGER, not a FIXNUM.
Unable to do inline (unsigned-byte 32) arithmetic (cost 5) because:
The first argument is a INTEGER, not a (UNSIGNED-BYTE 32).
The second argument is a INTEGER, not a (UNSIGNED-BYTE 32).
The result is a INTEGER, not a (UNSIGNED-BYTE 32).
etc.

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Thomas F. Burdick

Nov 17, 2001, 8:17:54 PM

Erik Naggum <er...@naggum.net> writes:

> You only have a problem understanding how (Common) Lisp works. Once you
> get that understanding, you will either be the type who gripes about some
> feature you do not like for the rest of your life, or you will be the
> type who can make a more informed choice about which language you want to
> spend your time on, and then leave the rest alone. I really wonder what
> it is with Common Lisp that makes people first discover the language and
> then start to make up so many stupid problems with it just because they
> have some serious personality problems. I mean, _none_ of the standard
> conditionals are usable by some dude, and now remf is "destructive" and
> that by itself ticks some other troll off. Sheesh.

I think you're projecting here. It's totally unfair to someone who's
flirting with Lisp, to compare him to someone who's been using it for
a Really Long Time. It's not like he's coming here trying to make us
see the light on how horrid remf is because it's destructive, he was
noting his discomfort with its destructiveness. Which is a good
thing. Much better than continuing to be uncomfortable with the
language, because if you publicly note your discomfort with something
you don't fully understand, you're likely to get an education on why
it's that way, and, if it's considered a good thing, why. Which is
exactly what he's getting here. I don't think McClain's problems with
remf are analogous to Fodorado's problems with if and cond. The one's
trolling[*] for enlightenment, the other's just trolling.

[*] I hate very much that in usenet-speak, "trolling" has to be
"trolling for flames". I learned about trolling when trolling for
salmon, which are a lot more like heaven than hell. Among people who
fish, it's a word with such nice associations...

David McClain

Nov 17, 2001, 8:26:39 PM

> pretty sparse type declarations. And, uh, by the way, your FPL
> example (is that really the language name?!) isn't the same as Kent's
> macro. Unless it's possible for the FPL equivalent of (> x (1+ x)) to
> return the equivalent of NIL (i.e., unless FPL doesn't have bignums),
> there's a difference between fixnums and integers. If I replace all

Yes, you are absolutely correct about this incredible generality made
available by Lisp. FPL's don't ordinarily provide this kind of generality,
although Haskell might, with its type classes. I'm not expert enough in
Haskell to speak for it though.

Furthermore, my example is not a macro in the sense of Lisp macros. It is
only a single function to which all references are directed. Some languages,
like OCaml, may find it useful to inline-compile these references, but that
is not guaranteed. That choice is up to the compiler. In many cases, if I
understand the situation correctly, the placement of such simple functions
inside of higher-order modules, called Functors, may prohibit this inlining.

No -- FPL is Functional Programming Language -- a generic term. Specific
instances of FPL that I use are OCaml, SML/97, SML/NJ, MOSML, NML, Haskell,
Clean, Erlang, and a few others. Even Mathematica has some of the
characteristics of FPL, though I would not ordinarily classify it as such...

But at any rate, your points are well taken. It is truly difficult to
compare such apples and oranges as Lisp and some arbitrary FPL. I appreciate
the tremendous power of Lisp -- that's why I use it so much myself.

- DM

"Thomas F. Burdick" <t...@apocalypse.OCF.Berkeley.EDU> wrote in message
news:xcvwv0o...@apocalypse.OCF.Berkeley.EDU...

Jochen Schmidt

Nov 17, 2001, 9:36:32 PM

Kaz Kylheku wrote:

I can't help it; this reminds me too much of:

"Doctor, Doctor it hurts if I do this..."

;-)

David McClain

Nov 17, 2001, 8:45:01 PM

He, he! I have to chuckle here... My "flirting" with Lisp has been on and
off for the past 15 years. I have used :allow-other-keys many times, but I
only did it in function and method declarations -- never as a function
client. So you pick up something new every day if you try...

But compared to some of you guys, I am a lightweight. I won't be surprised,
either, if I start getting surprised about things I already knew at one
time, as I get older...

For the really heavy duty Lisp stuff, I work with a Lisp Guru... It'll be
interesting to quiz him and get an honest reply on whether he knew about
this feature or not.

At any rate, I appreciate your kindness. I find it perplexing that a
language with such expressive power and freedom can spawn such venom from
its proponents. On the contrary, whenever I write Lisp, I feel exhilaration
and freedom. It causes my endorphins to release, I guess...

Cheers,

- DM

"Thomas F. Burdick" <t...@apocalypse.OCF.Berkeley.EDU> wrote in message

news:xcvsnbc...@apocalypse.OCF.Berkeley.EDU...

Thomas F. Burdick

Nov 17, 2001, 9:12:54 PM

"David McClain" <bar...@qwest.net> writes:

> He, he! I have to chuckle here... My "flirting" with Lisp has been on and
> off for the past 15 years. I have used :allow-other-keys many times, but I
> only did it in function and method declarations -- never as a function
> client. So you pick up something new every day if you try...

Well, I have no idea how much you use Lisp, but it sounded like a
longtime, superficial use, which I think "flirt" captures quite well :-).
I certainly use the word to describe my relationship with Pascal,
which I've been pretty constantly making light use of for over 10
years now (I keep trying to stop, but then I'll come across employment
programming an old DOS/TurboPascal system, and, well, I'm not gonna
say no).

> But compared to some of you guys, I am a lightweight. I won't be surprised,
> either, if I start getting surprised about things I already knew at one
> time, as I get older...
>
> For the really heavy duty Lisp stuff, I work with a Lisp Guru... It'll be
> interesting to quiz him and get an honest reply on whether he knew about
> this feature or not.
>
> At any rate, I appreciate your kindness. I find it perplexing that a
> language with such expressive power and freedom can spawn such venom from
> its proponents. On the contrary, whenever I write Lisp, I feel exhilaration
> and freedom. It causes my endorphines to release, I guess...

Heh. I don't think it's Lisp as much as usenet that causes fuses to
shorten. God knows I get a short fuse if I don't kill Foderado's
threads. But it is kind of funny that you don't come across outraged
Perl hackers that often ... maybe they just get so used to being
wronged, having to deal with that horror of a language all day long,
that they stop fighting back?

Erik Naggum

Nov 17, 2001, 10:48:29 PM

* Thomas F. Burdick

| Heh. I don't think it's Lisp as much as usenet that causes fuses to
| shorten. God knows I get a short fuse if I don't kill Foderado's
| threads. But it is kind of funny that you don't come across outraged
| Perl hackers that often ... maybe they just get so used to being wronged,
| having to deal with that horror of a language all day long, that they
| stop fighting back?

There is also a difference between the proverbial bull in a china shop
and a bull loose on a garbage dump. You would spend rather more energy
getting rid of the former before it damaged something valuable than you
would the latter. You might even tolerate more "collateral damage" if
what you protect is really beautiful than if it is butt ugly.

It takes close to a genius to make a serious improvement on Common Lisp,
but if you can open doors and cross streets without help, you could
improve on Perl and make modules that help people open doors and cross
streets who cannot do it on their own. Perl was created to solve those
problems that were caused by real and artificial stupidity -- file
formats created by morons, log files that were almost completely useless,
idiotic incompatibilities between programs, etc, whereas Common Lisp grew
out of the problems of artificial intelligence. Coming from opposite
corners of the problem space, it is no wonder that the solutions are at
opposite ends of several axes, including the smartness of their users.

Perl hackers do not get angry with stupidity and ignorance -- they thrive
on it. More stupidity, more code, more job security. Common Lisp
hackers keep looking for the elegant solution. More genius, less code,
more real, labor-saving solutions. Consequently, the Perl community
should _welcome_ morons because each moron would bring something valuable
to the community: The more morons keep churning out stinking, rotten Perl
code, the more popular books on Perl become, the more slightly smarter
Perl hackers can take over from the morons, the more "market" there is
for people who can fix moronic Perl code, etc. Perl is the embodiment of
a vicious circle if there ever was one. Bringing Perl attitudes to Lisp
is not a good thing, but people all over the industry are brought up to
be incredibly _unthinking_ these days. The moral obligation to be
intelligent is not exactly appreciated. Perl and C++ thrive on the lack
of desire to solve the right problem, and _anybody_ can solve the wrong
problem. Common Lisp grew out of the need to solve the right problem and
consequently people who want to solve the wrong problem in Common Lisp
only piss people off because it wastes everybody's time.

Marco Antoniotti

unread,
Nov 18, 2001, 9:43:00 AM11/18/01
to

"David McClain" <bar...@qwest.net> writes:

> > Don't discard the idea of statically checking a dynamically typed
> > program. Just because the language hasn't pushed complexity onto the
> > programmer in order to make static type checking *easy* doesn't mean
> > that such checking is *impossible*. There is a lot that can be done.
>
> How can you say this about pushing complexity onto the programmer when I
> compare this
>
> (defmacro +& (x y)
>   `(locally (declare (optimize (debug 0) (safety 0) (space 0)
>                                (compilation-speed 0) (speed 3)))
>      (the fixnum (+ (the fixnum ,x) (the fixnum ,y)))))
>
> against this
>
> let plus_amper x y = x + y
>

Just as an aside. OCaml does the following

# let plus_amper x y = x + y;;
val plus_amper : int -> int -> int = <fun>


ML (in the amazing Poplog system) does

fun plus_amper x y = x + y;

Error: cannot determine a type for overloaded identifier
val + : 'A * 'A -> 'A


I.e., in a sense, ML is more correct and faithful to its own guts.
The OCaml version works only because the operators are partitioned.
To get float operations you need to use

# let times x y = x *. y;;
val times : float -> float -> float = <fun>


Note the `*.'
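
For contrast, Common Lisp's + stays generic over the whole numeric
tower, and the fixnum-only behaviour of the quoted +& has to be asked
for with declarations. A minimal sketch (function names invented):

(defun plus-generic (x y)
  (+ x y))                ; works on fixnums, bignums, ratios, floats...

(defun plus-fixnum (x y)
  (declare (fixnum x y)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ x y)))   ; a good compiler can open-code a machine add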

Cheers

--
Marco Antoniotti ========================================================
NYU Courant Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://bioinformatics.cat.nyu.edu
"Hello New York! We'll do what we can!"
Bill Murray in `Ghostbusters'.

Samir Sekkat

unread,
Nov 18, 2001, 10:36:12 AM11/18/01
to
In article <sfwg07d...@shell01.TheWorld.com>, pit...@world.std.com
says...

> However, I personally regard this as an abstraction barrier, not an
> abstraction feature, because it encourages a kind of "prejudice" about
> what you expect a function to do based on what you last saw it doing,
> effectively keeping the function from changing/growing. In effect, while
> I use macros a lot, I see them as kind of an abstraction sham. They appear
> to hide implementation, but the compiler does not compile the abstraction,
> it compiles the implementation. So at runtime, they have all the
> redefinability properties (i.e., "very little") of C/C++ unless you manually
> structure them to do better than their nature would have them do by default.
> If you redefine one, you must redefine its callers.

Dear Kent,

I always wondered why a propagation feature for macros (which would
redefine all callers) was not in the CL standard.

This would mean of course that the system would have to keep track of
the source code, and perhaps this is too difficult because there are so
many different ways to define functions/methods.

Was this a deliberate design choice of CL? Are there any difficulties I
do not see?

Thanks in advance
Samir

Kent M Pitman

unread,
Nov 18, 2001, 1:49:52 PM11/18/01
to
Samir Sekkat <sse...@gmx.de> writes:

Well, there's the space issue. Storing a lot of source code "just in
case" is a lot to ask. Remember in the design of CL, we were entering
into an era of unbounded (read: greater than 256K) address space but we
still weren't comfortable with that. So we still behaved like we wanted
to be space conservative.

And there's the security issue. HyperCard stored source code and it
was said to be easy for people to steal each other's HyperTalk. I
certainly never felt like a simple copyright notice was going to
protect me from piracy. Binaries make it a little easier to prove you
have the original sources; one can, of course, disassemble and
reconstruct but turning that into manageable source is another matter.

Plus there's probably a synchronization issue. If you do on-demand
recompilation after a single redefinition, how many times will you
recompile a file that needs 6 macros redefined, and will it even be
correct to do so (if all 6 need to be redefined in lockstep in order
to allow it). Demand recompilation is a messy issue, I think. We do
it in CLOS for methods, but a lot of that is because most method
combination is just doing template plugging and is not really
doing the fancy recompilation that would happen with function
definitions.
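
The underlying staleness is easy to demonstrate at a listener. A
minimal sketch (names invented):

(defmacro twice (x) `(* 2 ,x))
(defun f (n) (twice n))
(compile 'f)                   ; TWICE's expansion is now baked into F
(defmacro twice (x) `(* 3 ,x))
(f 5)                          ; => 10, not 15 -- F keeps the old expansion
(compile 'f)
(f 5)                          ; => 15, but only after recompiling the caller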

I don't know that I ever heard anyone discuss this issue consciously during
CL design in particular, though I think I heard people discuss it out of band.
I never saw a working version of it, and we did try to stick to things
that were extant and not just pie in the sky at the ANSI level. CL itself
pre-dated ANSI, of course, but it was that much closer to the small-address-space
era in 1980 when it began and 1984 when it was published than ANSI CL, which was
designed in the 1986-1992 timeframe under more generous address spaces but more
conservative design principles.

Tim Bradshaw

unread,
Nov 19, 2001, 6:11:22 AM11/19/01
to
Erik Naggum <er...@naggum.net> wrote in message news:<32150273...@naggum.net>...

> Or that when you write a character to a stream,
> which modifies a file on disk, you do not get a new disk?

You are not yet Enlightened. If you return to the monastery of the
cult of Eff-Pee[1] and study and fast for long enough then one day you
will realise that, in fact, you do get a new disk. As I type this
message I am filled with wonder at the incessant creation of new
screens for each character I type[2]. It's cold up here on this
mountain-top, but understanding this eternal truth makes it all worth
while.

--tim

[1] Something is going to censor this for rude words, isn't it?
[2] More so since it's quite an expensive-looking LCD screen.

Nils Kassube

unread,
Nov 18, 2001, 4:19:39 PM11/18/01
to
"David McClain" <bar...@qwest.net> writes:

> They don't even require that the code at compile time be runnable without
> errors, e.g., references to undefined functions. You only find out about
> misspellings in some of these languages if and when you try to execute the
> erroneous branch of code. This has been a source of constant irritation for

This is what unit testing is for. Yes, I learnt it the hard way.
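
For what it's worth, most CL compilers do warn about calls to undefined
functions at COMPILE-FILE time, and the check is easy to automate in a
build or test script. A sketch (file name invented; exactly what gets
reported varies by implementation):

;; Fail the build if the file does not compile cleanly.
(multiple-value-bind (output warnings-p failure-p)
    (compile-file "app.lisp")
  (declare (ignore output))
  (when (or warnings-p failure-p)
    (error "app.lisp did not compile cleanly")))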

Samir Sekkat

unread,
Nov 20, 2001, 2:45:43 AM11/20/01
to
In article <sfw4rnr...@shell01.TheWorld.com>, pit...@world.std.com
says...

> Well, there's the space issue. Storing a lot of source code "just in
> case" is a lot to ask. Remember in the design of CL, we were entering
> into an era of unbounded (read: greater than 256K) address space but we
> still weren't comfortable with that. So we still behaved like we wanted
> to be space conservative.

OK, this is an important point.

> And there's the security issue. HyperCard stored source code and it
> was said to be easy for people to steal each other's HyperTalk. I
> certainly never felt like a simple copyright notice was going to
> protect me from piracy. Binaries make it a little easier to prove you
> have the original sources; one can, of course, disassemble and
> reconstruct but turning that into manageable source is another matter.

To circumvent this issue, one could have two areas, a private one and a
public one. The private area would have the macro-propagation feature
and would store source code. After propagation, if needed, the code could
be scrambled and transferred into the public area.

Does this make sense? Is anyone doing something like that?

> Plus there's probably a synchronization issue. If you do on-demand
> recompilation after a single redefinition, how many times will you
> recompile a file that needs 6 macros redefined, and will it even be
> correct to do so (if all 6 need to be redefined in lockstep in order
> to allow it). Demand recompilation is a messy issue, I think. We do
> it in CLOS for methods, but a lot of that is because most method
> combination is just doing template plugging and is nto really
> necessarily doing fancy recompilation like would happen with function
> definitions.

I was thinking more of leaving files aside and storing the code in
pieces in the CL system. Some CL systems already have the dependency
information available. We would then have a dependency graph with which
to propagate the change correctly.

Of course, it would mean that one would have to avoid side-effects in
the compilation process, writing the code chunks in a "functional
manner".

The background of my question is the following:

At the moment my code dependencies are defined with the defsystem
facility. I have a core system and a secondary system. The secondary
system is dependent on the core system, and the compilation of the
secondary system takes a few minutes.

So, each time I change a macro in the core system, most of the core is
recompiled without any need. This is not a real problem, because this
compilation is very fast.

But then, the defsystem thinks that the secondary system has to be
recompiled.

So I have to manually decide if the secondary system has to be
recompiled.

At the moment this is not a real issue, but if I release the technology
to customers, I would love to have a way to ensure consistency but avoid
lengthy and unneeded compilation time (the secondary system may in the
future take a _long_ time to compile).
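
In later ASDF-style notation, the layout described is roughly this
(system and file names invented):

;; Touching core's macros.lisp invalidates utils.lisp and,
;; transitively, all of :secondary -- whether or not any
;; expansion actually changed.
(asdf:defsystem :core
  :components ((:file "macros")
               (:file "utils" :depends-on ("macros"))))

(asdf:defsystem :secondary
  :depends-on (:core)
  :components ((:file "app")))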

Anyway, thanks for your answer, Kent.
Samir

Kent M Pitman

unread,
Nov 20, 2001, 3:55:54 AM11/20/01
to
Samir Sekkat <sse...@gmx.de> writes:

> The backround of my question is following:
>
> At the moment my code dependencies are defined with the defsystem
> facility. I have a core system and a secondary system. The secondary
> system is dependant of the core system and the compilation of the
> secondary system take a few minutes.
>
> So, each time I change a macro in the core system, most of the core is
> recompiled whithout any need. This is not a real problem, because this
> compilation is very fast.
>
> But then, the defsystem thinks that the secondary system has to be
> recompiled.

Yes, but remember that you also don't know if there were toplevel uses
of the macro that have come and gone. Consider the code:

;---------- a.lisp ----------
(defmacro deffoo (x) `(setf (get ',x 'foo) t))

;---------- b.lisp ----------
(deffoo alpha)
(deffoo beta)
...

Suppose you redefine DEFFOO. Now it's
(defmacro deffoo (x) `(setf (get ',x 'fu) t))
You're thinking that all uses are in code, but some may have been toplevel,
as in b.lisp above. You can't keep a record of all prior evaluations, so
you pretty much have to reexecute the code that was declared as a dependent.

There might be ways to carve this up into smaller dependency points but
you still can't just have the compiler notice a use and recompile it.
(It would be easy for me to write a macro that relies for its correctness
on being compiled in sequence in its source file and can't be compiled
out of context.)
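
For instance, a sketch of such an order-dependent macro (hypothetical):

;; NEXT-ID's expansion depends on compile-time state accumulated
;; earlier in the same file, so its uses cannot be recompiled out
;; of source order without changing the resulting code.
(eval-when (:compile-toplevel :load-toplevel :execute)
  (defvar *id-counter* 0))

(defmacro next-id ()
  (incf *id-counter*))           ; each use site expands to a fresh constant

(defun first-id () (next-id))    ; expands to 1
(defun second-id () (next-id))   ; expands to 2 -- only if compiled after
                                 ; FIRST-ID, in source order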

> So I have to manually decide if the secondary system has to be
> recompiled.

Yes, that's right. And I don't see an easy way around this.

Although I think there is a working paper on this issue that was
written by Richard Robbins (under supervision of Dick Waters). I
don't know if you'll be able to find a copy online, since the stupid
official policy of the MIT AI Lab is that working papers are not
intended for reference in the literature. (I'd rather this were determined
by the literature than pre-ordained by the Ivory Tower, since
I've seen worthless "Memos" and very useful "Working Papers".)

I don't know if BUILD (the tool described in the memo) is the answer to
any major problems, but it's yet one more point of view on this complex
issue. Btw, if you haven't seen my related paper
http://world.std.com/~pitman/Papers/Large-Systems.html
you might find it useful.

> At the moment this is not a real issue, but if I release the technology
> to customers, I would love to have a way to insure consistency but avoid
> lengthy and unneeded compilation time (the secondary system may take in
> the future a _long_ time to compile)

I certainly appreciate the problem. I'm just not sure the solutions are
as easy as they seem.

Btw, in 1984, I got talked by some guys at the Open University in the UK
into helping them get started on their KEATS (knowledge elicitation) project.
What I ended up writing, I realized after the fact, was an early hypertext
editor. (The paper for that isn't webbed, but is on my list.) It was
intended to help you slice and dice a long paper into paragraphs and then
re-chunk the knowledge into other shapes, ultimately yielding prolog rules
with a textual pedigree back to source text. But as a side use, I noticed
that if you break up source text into individual functions rather than
thinking of them as files, and you make each little chunk carry compilation
and execution dependency information, it can be quite useful; especially so
if you can re-arrange the text in various ways that violate this order when
editing without disturbing the possibility of viewing and processing according
to one of these declared orders. I think solutions like this that integrate
the editing process with the compilation process have a better hope of
succeeding than isolated solutions that involve just program text that
has not been maintained in a way that really acknowledges relationships of
text parts, and that tries to piggy-back too much information on linear
file arrangement. (Sorry if that's too much information in too short a
space. Just trying to get you thinking; not really to detail the entire
issue.)

> Anyway, thanks for your answer, Kent.
> Samir

Sure.

Tim Bradshaw

unread,
Nov 20, 2001, 8:50:08 AM11/20/01
to
Samir Sekkat <sse...@gmx.de> wrote in message news:<MPG.1662fe488...@news.t-online.de>...

>
> So, each time I change a macro in the core system, most of the core is
> recompiled whithout any need. This is not a real problem, because this
> compilation is very fast.
>

You can often get around this by factoring your system into chunks
with slightly more complex dependencies. I'm not sure you can do this
in your case though, or not easily.

> At the moment this is not a real issue, but if I release the technology
> to customers, I would love to have a way to ensure consistency but avoid
> lengthy and unneeded compilation time (the secondary system may in the
> future take a _long_ time to compile).

What is `long'? If they are used to large C++ systems they will be
delighted with anything that will build in less than a day.

--tim

Samir Sekkat

unread,
Nov 20, 2001, 12:35:26 PM11/20/01
to
In article <fbc0f5d1.0111...@posting.google.com>,
tfb+g...@tfeb.org says...

> You can often get around this by factoring your system into chunks
> with slightly more complex dependencies. I'm not sure you can do this
> in your case though, or not easily.

I already have complex dependencies, and I have macros used across layer
boundaries :-(

> > At the moment this is not a real issue, but if I release the technology
> > to customers, I would love to have a way to ensure consistency but avoid
> > lengthy and unneeded compilation time (the secondary system may in the
> > future take a _long_ time to compile).
>
> What is `long'? If they are used to large C++ systems they will be
> delighted with anything that will build in less than a day.

I agree, but if I ever get very large clients, the secondary system
might compile in hours.

I already have the virus of the-right-thing and sometimes have to get
back to earth :-)

Thanks anyway for your comments.

Samir

Samir Sekkat

unread,
Nov 20, 2001, 12:35:24 PM11/20/01
to
In article <sfw6685...@shell01.TheWorld.com>, pit...@world.std.com
says...

> > So I have to manually decide if the secondary system has to be
> > recompiled.
>
> Yes, that's right. And I don't see an easy way around this.
>

I think that the right path is to categorise the top-level calls and
code-chunks in a certain way (perhaps manually) to help the compiler to
build the dependency graph.
Your paper on large systems looks interesting...

I will look into those issues, including the top-level calls, more
deeply and come back then :-)

Samir
