
CLOS: read only slots?


Milan Zamazal

Sep 7, 1999
Is there any good way to say in CL that some slot defined in `defclass'
is read only (e.g. something like :read-only in `defstruct')?

Thanks for any advice.

Milan Zamazal

Rainer Joswig

Sep 7, 1999

If you are using accessors, you can specify
:ACCESSOR, :READER and/or :WRITER.

If you are using SLOT-VALUE - well, there are exotic
solutions: when you have a MOP and the function
SLOT-VALUE-USING-CLASS, you could write a method
for (SETF SLOT-VALUE-USING-CLASS) specialized on a new
metaclass ...
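
A minimal sketch of the accessor route (class and slot names are
illustrative, not from the original posts):

  (defclass point ()
    ((x :initarg :x :reader point-x)    ; :READER only, no SETF accessor
     (y :initarg :y :reader point-y)))

  (let ((p (make-instance 'point :x 1 :y 2)))
    (point-x p)                   ; => 1
    ;; (setf (point-x p) 10)      ; error: no (SETF POINT-X) is defined
    (setf (slot-value p 'x) 10))  ; still possible, as discussed below

Note that this only removes the convenient way to write the slot;
SLOT-VALUE can still set it, which is what the MOP-based ideas later in
the thread try to address.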

Tim Bradshaw

Sep 7, 1999
* Milan Zamazal wrote:
> Is there any good way to say in CL that some slot defined in `defclass'
> is read only (e.g. something like :read-only in `defstruct')?

You can define just a reader, which means that you'd have to use
SLOT-VALUE to write it, which is pretty detectable. If you do MOP
stuff you can probably prevent SLOT-VALUE from updating it, but I
suspect this would have bad consequences (one of the things that
people who support the AMOP MOP tend to say not to do is ever define
methods on SLOT-VALUE-USING-CLASS).

--tim

Craig Brozefsky

Sep 7, 1999
Tim Bradshaw <t...@tfeb.org> writes:

> You can define just a reader, which means that you'd have to use
> SLOT-VALUE to write it, which is pretty detectable. If you do MOP
> stuff you can probably prevent SLOT-VALUE from updating it, but I
> suspect this would have bad consequences (one of the things that
> people who support the AMOP MOP tend to say not to do is ever define
> methods on SLOT-VALUE-USING-CLASS).

Hmm, provided that you are careful not to get stuck in loops, I see no
reason why one would not want to defmethod on SLOT-VALUE-USING-CLASS.
PLOB! and some of my code for GUI programming do this. Could you
elaborate on why it is not a good thing to do this?

--
Craig Brozefsky <cr...@red-bean.com>
Free Scheme/Lisp Software http://www.red-bean.com/~craig
"riot shields. voodoo economics. its just business. cattle
prods and the IMF." - Radiohead, OK Computer, Electioneering

Tim Bradshaw

Sep 8, 1999
* Craig Brozefsky wrote:

> Hmm, provided that you are careful not to get stuck in loops, I see no
> reason why one would not want to defmethod on SLOT-VALUE-USING-CLASS.
> PLOB! and some of my code for GUI programming do this. Could you
> elaborate on why it is not a good thing to do this?

Probably an implementor could better answer this, but my understanding
is that so long as there are no (user) methods on this, then the
system can simply not call it and optimize SLOT-VALUE into something
really fast (like array access, basically). The moment there are
methods you have to start actually doing the protocol and slot-access
performance goes down the tubes.

However it may be that the problems can be contained so that you can
still get decent performance on instances of classes with non-contaged
metaclasses or something. It also might depend on how heroic the
implementation is willing to be (is it willing to recompile lots of
methods on the fly for instance).

--tim

Harley Davis

Sep 8, 1999
Tim Bradshaw <t...@tfeb.org> wrote in message
news:ey3d7vt...@lostwithiel.tfeb.org...

This brings up a general problem with Common Lisp, and especially with its
MOP: Lack of simple performance predictability. When a seemingly innocent
change like adding a method on a documented generic function can change slot
access performance so profoundly, the user is going to be alarmed. There
are similar issues throughout Common Lisp - for instance, some combinations
of keyword parameters may be recognized and optimized by the compiler, and
so are not any slower than using positional parameters, while other
combinations may require list consing and analysis. To make things worse,
there is no really good tradition of standard implementational tricks,
except pretty pessimistic ones, so users have to spend a lot of time to
acquire wisdom about the various implementations and how they work.
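
To make the keyword-parameter point concrete, here is a hedged sketch
(the function is made up; whether the keyword processing is compiled
away depends entirely on the implementation):

  (defun make-point (&key (x 0) (y 0))
    (cons x y))

  (make-point :x 1 :y 2)           ; constant keywords at the call site:
                                   ; some compilers turn this into the
                                   ; equivalent of a positional call
  (let ((args (list :x 1 :y 2)))
    (apply #'make-point args))     ; keywords known only at run time: the
                                   ; full keyword-parsing protocol must run

Both calls return (1 . 2); the point is only that their cost can differ
in ways that are hard to predict without knowing the implementation.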

This adds to the impression that Common Lisp is slow. Actually, it is not
slow, just unpredictably efficient for beginning and intermediate users.

I believe that future work on Lisp should address this concern and make it
possible to have a simple, predictable efficiency model if the goal is to
enhance acceptability among new users of the language.

-- Harley

Barry Margolin

Sep 8, 1999
In article <37d6e2ea$0$2...@newsreader.alink.net>,

Harley Davis <nospam...@nospam.museprime.com> wrote:
>I believe that future work on Lisp should address this concern and make it
>possible to have a simple, predictable efficiency model if the goal is to
>enhance acceptability among new users of the language.

This sounds like the worst case of premature optimization I've ever heard
of.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Rainer Joswig

Sep 9, 1999
In article <37d6e2ea$0$2...@newsreader.alink.net>, "Harley Davis" <nospam...@nospam.museprime.com> wrote:

> This brings up a general problem with Common Lisp, and especially with its
> MOP: Lack of simple performance predictability.

See:

Henry G. Baker, CLOStrophobia: Its Etiology and Treatment

Espen Vestre

Sep 9, 1999
"Harley Davis" <nospam...@nospam.museprime.com> writes:

> This adds to the impression that Common Lisp is slow. Actually, it is not
> slow, just unpredictably efficient for beginning and intermediate users.

Now that Java has made it into university introductory courses,
Common Lisp will be the language that *surprises* beginning users
with its unbelievable efficiency :-)

--
(espen)

Marco Antoniotti

Sep 9, 1999

"Harley Davis" <nospam...@nospam.museprime.com> writes:

> Tim Bradshaw <t...@tfeb.org> wrote in message
> news:ey3d7vt...@lostwithiel.tfeb.org...
> > * Craig Brozefsky wrote:
> >
> > > Hmm, provided that you are careful not to get stuck in loops, I see no
> > > reason why one would not want to defmethod on SLOT-VALUE-USING-CLASS.
> > > PLOB! and some of my code for GUI programming do this. Could you
> > > elaborate on why it is not a good thing to do this?
> >
> > Probably an implementor could better answer this, but my understanding
> > is that so long as there are no (user) methods on this, then the
> > system can simply not call it and optimize SLOT-VALUE into something
> > really fast (like array access, basically). The moment there are
> > methods you have to start actually doing the protocol and slot-access
> > performance goes down the tubes.
> >
> > However it may be that the problems can be contained so that you can
> > still get decent performance on instances of classes with non-contaged
> > metaclasses or something. It also might depend on how heroic the
> > implementation is willing to be (is it willing to recompile lots of
> > methods on the fly for instance).
>

> This brings up a general problem with Common Lisp, and especially with its

> MOP: Lack of simple performance predictability. When a seemingly innocent
> change like adding a method on a documented generic function can change slot
> access performance so profoundly, the user is going to be alarmed.

Apart from the comments you make later on, why would adding a method to a
slot accessor be less "predictable" than adding a 'getX' method
to a Java class or a 'virtual getX' member function to a C++ class?

> There
> are similar issues throughout Common Lisp - for instance, some combinations
> of keyword parameters may be recognized and optimized by the compiler, and
> so are not any slower than using positional parameters, while other
> combinations may require list consing and analysis. To make things worse,
> there is no really good tradition of standard implementational tricks,
> except pretty pessimistic ones, so users have to spend a lot of time to
> acquire wisdom about the various implementations and how they work.
>

> This adds to the impression that Common Lisp is slow. Actually, it is not
> slow, just unpredictably efficient for beginning and intermediate users.
>

> I believe that future work on Lisp should address this concern and make it
> possible to have a simple, predictable efficiency model if the goal is to
> enhance acceptability among new users of the language.

This might be a worthwhile thing to have. I'm just not sure how much
it would really impact developers. Maybe it would just be nice to
have developers publicize the "efficiency model" of certain
features. Which ones would make more sense than others, I don't know.

Cheers

--
Marco Antoniotti ===========================================
PARADES, Via San Pantaleo 66, I-00186 Rome, ITALY
tel. +39 - 06 68 10 03 17, fax. +39 - 06 68 80 79 26
http://www.parades.rm.cnr.it/~marcoxa

Fernando Mato Mira

Sep 9, 1999
Harley Davis wrote:

> I believe that future work on Lisp should address this concern and make it
> possible to have a simple, predictable efficiency model if the goal is to
> enhance acceptability among new users of the language.

mop-based compiler.

Erik Naggum

Sep 9, 1999
* Harley Davis

| I believe that future work on Lisp should address this concern and make
| it possible to have a simple, predictable efficiency model if the goal is
| to enhance acceptability among new users of the language.

there is an important difference between reducing unacceptability to new
users and enhancing acceptability, and I'd say the two are not on the
same axis: you can do the latter without succeeding in the former.

simple, predictable efficiency models have the very obvious drawback that
they aren't useful in a complex world. that's why C programs (C has a
simple, predictable efficiency model in my view -- don't know about
yours) are frequently efficient only at the lowest and local levels and
dramatically inefficient globally. Common Lisp programs are often
globally efficient and inefficient at the local level, because of this.
that's why you can frequently improve the performance of a Common Lisp
program greatly by tweaking a few key functions discovered by profiling.

however, I'd settle for a predictable efficiency model. it doesn't have
to be simple.

#:Erik
--
it's election time in Norway. explains everything, doesn't it?

Tim Bradshaw

Sep 9, 1999
* Harley Davis wrote:

> This brings up a general problem with Common Lisp, and especially with its
> MOP: Lack of simple performance predictability. When a seemingly innocent
> change like adding a method on a documented generic function can change slot
> access performance so profoundly, the user is going to be alarmed.

(Ignoring the trivial point that a MOP isn't part of CL, which I don't
want to labour as it's beside the point).

It seems to me that if you have a language that allows you to step in
and control very fundamental operations, like the access to components
of its data types, in a pervasive way, then you should probably not
expect the performance to be particularly predictable if you do that.

I think the solution to that problem is to make sure that, when you
teach the language, people develop a good enough model to realise that
anything that lets you control slot access is likely to be on the
critical path. This is just another case of the things you already
have to do, like making sure they really *understand* that access to
lists is linear in length. My jaundiced view of CS teaching is that
it often fails to do things like this (not just for Lisp).

The keyword-argument problem is a better example, because it's less
explicable. On the other hand, I've never actually been conscious of
it, so, at least for the implementations I use, I think any
performance problems it induces must not be extreme (or I've been very
lucky).

> I believe that future work on Lisp should address this concern and make it
> possible to have a simple, predictable efficiency model if the goal is to
> enhance acceptability among new users of the language.

It seems to me that if you do that, you end up with languages like C
-- which is very predictable, if sometimes painful to use compared to
Lisp. I'd rather address the problem by better teaching.

--tim

Tim Bradshaw

Sep 9, 1999
* Marco Antoniotti wrote:
> Apart from the comments you make later on, why would adding a method to a
> slot accessor be less "predictable" than adding a 'getX' method
> to a Java class or a 'virtual getX' member function to a C++ class?

It wasn't adding a method to a slot accessor, it was controlling *all*
slot access, which I think is a different thing. I think it's more
like pervasively overloading -> in C++. Or is that what getX methods
do?

I think a much better example of the kind of problem is if something
like this was a performance problem:

(defclass foo ()
  ((blob :reader foo-blob
         ...)))

(defmethod foo-blob :after ((foo foo))
  ...)

-- And in fact, on Genera, it *was* a serious performance problem to
define aux methods on accessor GFs, and that was really mysterious
until you worked out that you needed to declare FOO-BLOB notinline.
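
For the record, the workaround looks something like this (FOO-BLOB as in
the made-up example above; only needed where accessor calls are inlined
that aggressively):

  (declaim (notinline foo-blob))   ; force calls to go through the generic
                                   ; function, so the :after method runs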

--tim

Marco Antoniotti

Sep 9, 1999

Tim Bradshaw <t...@tfeb.org> writes:

> * Marco Antoniotti wrote:
> > Apart from the comments you make later on, why would adding a method to a
> > slot accessor be less "predictable" than adding a 'getX' method
> > to a Java class or a 'virtual getX' member function to a C++ class?
>
> It wasn't adding a method to a slot accessor, it was controlling *all*
> slot access, which I think is a different thing. I think it's more
> like pervasively overloading -> in C++. Or is that what getX methods
> do?

Java (and C++) style pretty much encourages you to have 'getX' and
'setX' methods, so that you can do wrapper tricks.

So, defining a :reader, :writer, or :accessor shouldn't really change
much w.r.t. these cases. I.e. it is "as predictable as".

Pervasive overloading of '->', is a different story, which I would
stick with a pervasive adding of methods to SLOT-VALUE.

> I think a much better example of the kind of problem is if something
> like this was a performance problem:
>
> (defclass foo ()
> ((blob :reader foo-blob
> ...)))
>
> (defmethod foo-blob :after ((foo foo))
> ...)
>
> -- And in fact, on Genera, it *was* a serious performance problem to
> define aux methods on accessor GFs, and that was really mysterious
> until you worked out that you needed to declare FOO-BLOB notinline.

That is a quirk indeed. But IMHO it is a problem of an "unchecked
optimization", not of the overall design of the language.

Marco Antoniotti

Sep 10, 1999

Marco Antoniotti <mar...@copernico.parades.rm.cnr.it> writes:

> Tim Bradshaw <t...@tfeb.org> writes:
>
> > * Marco Antoniotti wrote:
> > > Apart from the comments you make later on, why would adding a method to a
> > > slot accessor be less "predictable" than adding a 'getX' method
> > > to a Java class or a 'virtual getX' member function to a C++ class?
> >
> > It wasn't adding a method to a slot accessor, it was controlling *all*
> > slot access, which I think is a different thing. I think it's more
> > like pervasively overloading -> in C++. Or is that what getX methods
> > do?
>
> Java (and C++) style pretty much encourages you to have 'getX' and
> 'setX' methods, so that you can do wrapper tricks.
>
> So, defining a :reader, :writer, or :accessor shouldn't really change
> much w.r.t. these cases. I.e. it is "as predictable as".
>
> Pervasive overloading of '->', is a different story, which I would
> stick with a pervasive adding of methods to SLOT-VALUE.

OOOOPS. I really slipped on this. SLOT-VALUE is not a GF. That tells
you how much I use it.

Rainer Joswig

Sep 10, 1999

> > Pervasive overloading of '->', is a different story, which I would
> > stick with a pervasive adding of methods to SLOT-VALUE.
>
> OOOOPS. I really slipped on this. SLOT-VALUE is not a GF. That tells
> you how much I use it.

If you have a MOP, then you could specialize
SLOT-VALUE-USING-CLASS, which is a generic function.
SLOT-VALUE is supposed to call SLOT-VALUE-USING-CLASS.
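
A hedged sketch of that idea, assuming an AMOP-style MOP whose symbols
live in a package nicknamed MOP here (the actual package name and the
details vary between implementations; class and slot names are made up):

  (defclass read-only-class (standard-class) ())

  ;; Allow the new metaclass to be mixed with ordinary standard classes.
  (defmethod mop:validate-superclass
      ((class read-only-class) (super standard-class))
    t)

  ;; Permit the first write (initialization), signal an error afterwards.
  (defmethod (setf mop:slot-value-using-class) :before
      (new-value (class read-only-class) object slotd)
    (declare (ignore new-value))
    (when (mop:slot-boundp-using-class class object slotd)
      (error "Slot ~S is read-only." (mop:slot-definition-name slotd))))

  (defclass immutable-point ()
    ((x :initarg :x :accessor point-x))
    (:metaclass read-only-class))

With that metaclass, writes through the accessor and through SLOT-VALUE
should both end up in the method above once the slot has a value (modulo
implementation-specific accessor optimizations), at the price of the
slot-access performance issues discussed earlier in the thread.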

Howard R. Stearns

Sep 10, 1999
Fernando Mato Mira wrote:

>
> Harley Davis wrote:
>
> > I believe that future work on Lisp should address this concern and make it
> > possible to have a simple, predictable efficiency model if the goal is to
> > enhance acceptability among new users of the language.
>
> mop-based compiler.

Could you elaborate on what you mean by a mop-based compiler?

I have long taken it on intuition that the following features would be
sufficient to produce efficient applications in practice, but I have
seen no analysis or other evidence to back me up:

1. A sealing protocol neither more nor less than that of Dylan.

2. A compiler capable of inlining everything, including effective
methods and individual methods therein.

3. A documented set of platform-independent optimization controls. For
example, a classification of the various kinds of hints one might want
would form a basis for further documenting exactly when they are or are
not utilized, and when they are implied by speed/safety, etc.

4. A compiler-mop, by which I mean that where the clos-mop is an
extendible protocol for controlling how CLOS operates, a compiler-mop is
an extendible protocol for controlling how the compiler operates. For
example, if the compiler-mop has defined generic functions that operate
on different kinds of forms, environments and continuations, one could
define one's own, say, continuation class and have the compiler make use
of some optimization you provide specialized for this class.

Can anyone cite anything to suggest that these are sufficient to produce
the "Sufficiently Smart Compiler"?

Robert Monfera

Sep 10, 1999
Hi Howard,

"Howard R. Stearns" wrote:

> Can anyone cite anything to suggest that these are sufficient to produce
> the "Sufficiently Smart Compiler"?

Sufficiently smart for what objectives?

Anyway:

- A type inference engine?

- Dynamic recompiling similar to Hot Spot? (E.g., having an initial
small footprint bytecode-image, which, with the help of maintained
run-time statistics and declared constraints, compiles functions and
methods into native code with the appropriate inlining. It's not unlike
a cost based database engine.)

Regards
Robert

Howard R. Stearns

Sep 10, 1999
Robert Monfera wrote:
>
> Hi Howard,
>
> "Howard R. Stearns" wrote:
>
> > Can anyone cite anything to suggest that these are sufficient to produce
> > the "Sufficiently Smart Compiler"?
>
> Sufficiently smart for what objectives?
>
> Anyway:
>
> - A type inference engine?

I like to distinguish between making use of the declarations given,
including nested declarations referring to the same actual binding, vs.
using data-flow analysis to infer type definitions (e.g. transferring
type declarations from one variable to the next after an assignment).

I think the former is certainly necessary to achieve efficient code and
predictable performance, and to my mind this is just one tiny part of
the much broader umbrella of the sealing protocol.

I'm not certain that the latter is necessary for performance, and it might
get in the way of predictability. That's not to say that it might not
be convenient....
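
A tiny illustration of that distinction (made-up code): honoring the
declarations as written means using both the outer and the nested,
narrower declaration for the same binding,

  (defun f (x)
    (declare (type integer x))
    (if (plusp x)
        (locally (declare (type (integer 1) x))  ; narrower declaration,
          (isqrt x))                             ; same binding
        0))

whereas deducing that X must be (INTEGER 1) in the first branch from the
PLUSP test alone, with no declaration, would be data-flow analysis.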

>
> - Dynamic recompiling similar to Hot Spot? (E.g., having an initial
> small footprint bytecode-image, which, with the help of maintained
> run-time statistics and declared constraints, compiles functions and
> methods into native code with the appropriate inlining. It's not unlike
> a cost based database engine.)

Can you tell me more about this? Would it be relevant to say that I
assume (again, without evidence) that doing compile-time cost analysis
of whether or not to inline something may be convenient, but I'm not
convinced that it is necessary in practice to achieve the goal of
efficient commercial code with predictable performance.

>
> Regards
> Robert

I guess I view things such that if someone wants efficiency and
predictability to be locally no worse than C (whatever that might REALLY
mean), then all that is necessary is to require, accept, and make use of
the same kinds of declarations as C. The inlining is necessary to bring
these benefits to CLOS, and gives us a boost over C as well, but it's
still a programmer that decides when and where it makes sense.

Harley Davis

Sep 10, 1999

Erik Naggum <er...@naggum.no> wrote in message
news:31458599...@naggum.no...
> * Harley Davis

> | I believe that future work on Lisp should address this concern and make
> | it possible to have a simple, predictable efficiency model if the goal is
> | to enhance acceptability among new users of the language.
>
> there is an important difference between reducing inacceptability to new
> users and enhancing acceptability, and I'd say the two are not on the
> same axis: you can do the latter without succeeding in the former.

Good point. Clearly simply increasing function call (or similar) efficiency
will not attract the masses, but it will help ensure that they don't leave
the fold once enticed by whatever other strategy might work. (And I think
the only successful strategy these days to promote new languages is
large-scale, loss-leading corporate sponsorship, similar to Sun's with Java
and Microsoft's with C++ and Basic, where first-degree money-losing
promotion of a language is intended to have secondary benefits for the
corporation via platform application availability. So the next chance for
Lisp will be the next generation popular software platform, probably in
another 5 years. Better have something ready to dust off by then, guys!)

> simple, predictable efficiency models have the very obvious drawback that
> they aren't useful in a complex world. that's why C programs (C has a
> simple, predictable efficiency model in my view -- don't know about
> yours) are frequently efficient only at the lowest and local levels and
> dramatically inefficient globally. Common Lisp programs are often
> globally efficient and inefficient at the local level, because of this.
> that's why you can frequently improve the performance of a Common Lisp
> program greatly by tweaking a few key functions discovered by profiling.

C is pretty much the epitome of simple, predictable efficiency models.
However, the language and its standard libraries have obvious (to this
community, anyway), huge, gaping flaws for large system work.

Common Lisp programs could be both locally and globally efficient if the
language continued to provide the same kind of high-level libraries it does
today (and more!) and also had a predictable efficiency model for
micro-optimizations.

> however, I'd settle for a predictable efficiency model. it doesn't have
> to be simple.

Simplicity isn't important to you but it is important to beginning and
intermediate users, I would wager.

-- Harley


Erik Naggum

Sep 11, 1999
* Harley Davis

| (And I think the only successful strategy these days to promote new
| languages is large-scale, loss-leading corporate sponsorship, similar to
| Sun's with Java and Microsoft's with C++ and Basic, where first-degree
| money-losing promotion of a language is intended to have secondary
| benefits for the corporation via platform application availability. So
| the next chance for Lisp will be the next generation popular software
| platform, probably in another 5 years. Better have something ready to
| dust off by then, guys!)

two immediate thoughts: (1) is the money spent on languages this way
actually recovered, or are they marketing vehicles and thus merely
marketing costs for something else? (2) does unlimited source access
help recover the money spent or just help promote the language (and
depending on question 1, helpful to that something else)?

| C is pretty much the epitome of simple, predictable efficiency models.

:


| Simplicity isn't important to you but it is important to beginning and
| intermediate users, I would wager.

OK, so if we set the baseline for both simple and predictable at C, I
would say that we don't need it _that_ simple. for instance, there are
no basic operations in C that are worse than O(1). in Common Lisp, = is
O(n), but /= is O(n²). in C, (lo <= x && x <= hi) requires computing x
twice if it isn't known to be unchanging, and it's fast, because the
types are known, but in CL, it may surprise people used to the C mindset
that (<= lo x hi) is usually a function call away, but (TYPEP x '(INTEGER
lo hi)) is fast and inlined (when lo and hi are constants, obviously). I
think the latter is simple, too, but it is clearly predicated on having
grokked the dynamic typing concept. does this make it less simple, or
only simple in non-C terms?
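
a small worked example of the /= point (pure standard CL, nothing
implementation-specific):

  ;; = with n arguments needs n-1 comparisons; /= must compare every pair.
  (=  1 1 1 1)   ; => T   (three comparisons)
  (/= 1 2 3 4)   ; => T   (six pairwise comparisons; no two are equal)
  (/= 1 2 1)     ; => NIL (the first and third arguments are equal)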

let's make it as simple as possible, but no simpler. (incidentally, is
this a Feynmanism, or was it Einstein?)

Fernando Mato Mira

Sep 11, 1999
"Howard R. Stearns" wrote:

> Fernando Mato Mira wrote:
> >
> > Harley Davis wrote:
> >

> > > I believe that future work on Lisp should address this concern and make it
> > > possible to have a simple, predictable efficiency model if the goal is to
> > > enhance acceptability among new users of the language.
> >

> > mop-based compiler.
>
> Could you elaborate on what you mean by a mop-based compiler?
>

> 4. A compiler-mop, by which I mean that where the clos-mop is an
> extendible protocol for controlling how CLOS operates, a compiler-mop is
> an extendible protocol for controlling how the compiler operates. For
> example, if the compiler-mop has defined generic functions that operate
> on different kinds of forms, environments and continuations, one could
> define one's own, say, continuation class and have the compiler make use
> of some optimization you provide specialized for this class.

This.

Marco Antoniotti

Sep 11, 1999

Robert Monfera <mon...@fisec.com> writes:

> Hi Howard,
>
> "Howard R. Stearns" wrote:
>
> > Can anyone cite anything to suggest that these are sufficient to produce
> > the "Sufficiently Smart Compiler"?
>
> Sufficiently smart for what objectives?
>
> Anyway:
>
> - A type inference engine?

There are difficulties wrt type inference in CL. CMUCL gets pretty
close to it, though.

> - Dynamic recompiling similar to Hot Spot? (E.g., having an initial
> small footprint bytecode-image, which, with the help of maintained
> run-time statistics and declared constraints, compiles functions and
> methods into native code with the appropriate inlining. It's not unlike
> a cost based database engine.)

You are assuming a byte code version hanging around. This is a
problem for Java but not necessarily for CL in general. Since there
is no notion of CLVM, there is no notion of "bytecode" in the language
(the notion is there in several "implementations", but that is another
story: CLisp compiles to bytecodes and CMUCL has *also* a byte code
compiler). All in all the Hotspot stuff does not apply much to CL,
unless somebody is willing to define a CLVM.

Robert Monfera

Sep 11, 1999
Marco Antoniotti wrote:

> (the notion is there in several "implementations", but that is another
> story: CLisp compiles to bytecodes and CMUCL has *also* a byte code
> compiler).

Maybe some of the commercial lisps also use bytecodes when in
interpreter mode?

> All in all the Hotspot stuff does not apply much to CL,
> unless somebody is willing to define a CLVM.

The main idea that looks conformant with the Lisp philosophy is that the
implementation is free to optimize programs as it sees fit, which does
not exclude run-time optimizations. The way an application works on your
particular data could give a lot of information to the compiler, which
in turn could optimize memory vs. safety vs. speed issues (e.g.,
bytecodes allow smaller images). This is analogous to the suggestion
from experts on this NG about having to profile to allow effective
optimizations (it is something like the automated version of it). I do
not know specifics about Hot Spot, but the idea of run-time compiler
activities sounded very lispy.

Regards
Robert

Bjørn Remseth

Sep 12, 1999

jos...@lavielle.com (Rainer Joswig) writes:

>
> In article <37d6e2ea$0$2...@newsreader.alink.net>, "Harley Davis" <nospam...@nospam.museprime.com> wrote:
>
> > This brings up a general problem with Common Lisp, and especially with its
> > MOP: Lack of simple performance predictability.
>

> See:
>
> Henry G. Baker, CLOStrophobia : Its Etiology and Treatment

ftp://ftp.netcom.com/pub/hb/hbaker/CLOStrophobia.html
--
(Rmz)

Bj\o rn Remseth !Institutt for Informatikk !Net: r...@ifi.uio.no
Phone:+47 91341332!Universitetet i Oslo, Norway !ICBM: N595625E104337

Marco Antoniotti

Sep 12, 1999

Robert Monfera <mon...@fisec.com> writes:

> Marco Antoniotti wrote:
>
> > (the notion is there in several "implementations", but that is another
> > story: CLisp compiles to bytecodes and CMUCL has *also* a byte code
> > compiler).
>
> Maybe some of the commercial lisps also use bytecodes when in
> interpreter mode?

Maybe they do. I have not cared for it in any particular way. But
even if they do, the point is that there is *no* standard bytecode
format for CL (I don't even know whether there is a standard for Emacs
Lisp - though the byte codes for Emacs and Xemacs may be as close as
they can be).

> > All in all the Hotspot stuff does not apply much to CL,
> > unless somebody is willing to define a CLVM.
>
> The main idea that looks conformant with the Lisp philosophy is that the
> implementation is free to optimize programs as they see fit, which does
> not exclude run-tyme optimizations.

> The way an application works on your
> particular data could give a lot of information to the compiler, which
> in turn could optimize memory vs. safety vs. speed issues (e.g.,
> bytecodes allow smaller images).

Yes. But they require a virtual machine: you are trading off. Note
that up to this point the size of the JVM has been one of the things
scaring away managers (and developers as well) from this language for
"small footprint" applications.

> This is analogous to the suggestion
> from experts on this NG about having to profile to allow effective
> optimizations - it is something like the automated version of it). I do
> not know specifics about Hot Spot, but the idea of run-time compiler
> activities sounded very lispy.

They are. But I believe many are already in place without you
actually knowing, CLOS methods being one example of "run-time
compilation" (details apart). Also, CMUCL does not really have an
interpreter. It is closer to a JIT compiler than you may think.

The bottom line is Java *needs* Hotspot. CL may not. There are other
things CL needs and some of the discussion on "COMPILER MOP" intrigues
me. All in all, I believe that a common (standardized) set of
"optimization" notes (to be used in declarations) would be a good
thing. Implementations may be allowed to ignore them, but at least
they would be there to be used if needed. How difficult this would be
is another story.

Gareth McCaughan

Sep 12, 1999
Erik Naggum wrote:

> but in CL, it may surprise people used to the C mindset
> that (<= lo x hi) is usually a function call away, but (TYPEP x '(INTEGER
> lo hi)) is fast and inlined (when lo and hi are constants, obviously).

Neat hack. (But if X is already known to be an integer, I'd hope
that a good compiler would produce the same code for both.
<clickety-clack> Hmm. It turns out that

(defun foo (x)
  (declare (fixnum x))
  (<= 123 x 456))

and its Naggum transform [:-)] compile to identical code under
CMU CL, but if I replace FIXNUM with INTEGER then using TYPEP
produces much shorter code. (I haven't timed either version.)
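
Spelled out, the "Naggum transform" of FOO is presumably something like
this (reconstructed from Erik's post, not Gareth's actual code):

  (defun foo-transformed (x)
    (declare (fixnum x))
    (typep x '(integer 123 456)))

and the INTEGER-declared variants just swap FIXNUM for INTEGER in both
definitions.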

> let's make it as simple as possible, but no simpler. (incidentally, is
> this a Feynmanism, or was it Einstein?)

Einstein.

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Howard R. Stearns

Sep 13, 1999

I haven't worked on it for a while, and I'm not sure when I'm going to
get back to it. However, I've put some of my working notes up at
http://www.elwood.com/eclipse/papers/compiler-mop/walk.htm.

These notes are only of interest to someone who is SERIOUSLY thinking
about these issues and would like to read through random text looking to
spark some ideas.

Harley Davis

Sep 13, 1999

Erik Naggum <er...@naggum.no> wrote in message
news:31460343...@naggum.no...

> * Harley Davis
> | (And I think the only successful strategy these days to promote new
> | languages is large-scale, loss-leading corporate sponsorship, similar to
> | Sun's with Java and Microsoft's with C++ and Basic, where first-degree
> | money-losing promotion of a language is intended to have secondary
> | benefits for the corporation via platform application availability. So
> | the next chance for Lisp will be the next generation popular software
> | platform, probably in another 5 years. Better have something ready to
> | dust off by then, guys!)
>
> two immediate thoughts: (1) is the money spent on languages this way
> actually recovered, or are they marketing vehicles and thus merely
> marketing costs for something else?

Basic corporate behavior pretty much dictates cost accounting and P/L
reporting for individual product lines. Languages have to be cheap since
there are usually free or nearly free alternatives that programmers will
turn to. Also, programmers are cheap bastards, usually because they don't
have much budgetary control in their organizations. The only way around the
requirement for a product to make money is to have the highest possible
level of corporate sponsorship - basically, the CEO of the company supplying
the programming product has to buy into the idea, and for the CEO to buy
into it, there better be a pretty strong incentive. In the case of Java,
McNealy jumped on an opportunity to unseat Microsoft as the only platform
worth programming for. Microsoft needed to further enhance its platform,
and the best way to do it was with a development environment that enshrined
the Windows API and made it as easy as possible for programmers to start
developing Windows apps, and as hard as possible for them to stop doing so.
Hence Visual C++ with wizards and so on, combined with a deadly complicated
API that required a substantial investment in time to learn and so made
developers who did so loath to give up the advantage by switching. But the
strategy engendered little love among the developers forced into this crappy
situation, especially those who had known something better from Lisp, Unix,
and other more programmer-friendly cultures. Hence Java's easy acceptance
by these programmers.

To succeed on the same scale as the popular languages, Lisp needs to enter
into similar dynamics, and to do so, it needs to justify itself in a
large-scale strategic platform move. The issue of whether a Lisp
programming product can make money becomes secondary. The fact is, we don't
know how much money Microsoft makes or loses off of Visual C++ and Basic, or
how much Sun loses (this is a given, IMHO) off of Java directly.

Over time, companies will always try to squeeze every possible penny of
revenue out of all corporate assets, so you get things like upgrade
subscriptions, developer's clubs and conferences, "Lite" and "Pro" versions,
etc. Whether this makes the products marginally profitable, I can't say,
but it at least makes it possible for managers of Developer Product
Divisions to show that they are doing their best to add to the company's
bottom line, and thus make it possible for them to continue their career
progression.

> (2) does unlimited source access
> help recover the money spent or just help promote the language (and
> depending on question 1, helpful to that something else)?

Honestly, I have no idea.

-- Harley

Robert Monfera

Sep 15, 1999
Marco Antoniotti wrote:
...

> even if they do, the point is that there is *no* standard bytecode
> format for CL
...
It does not have to be standard at all to be an effective optimization -
probably it's a point in response to something that has not been said or
implied. CL optimizations, and especially the internal representation of
code, are not standardized.
...

> Yes. But they require a virtual machine: you are trading off.

Yes, it is expected that run-time optimizations have their overhead:
retaining a bytecode interpreter, which is probably even smaller than
retaining eval, plus of course the optimization logic. By the way, the
use of bytecodes is not even required to implement runtime
optimizations, it's just a possibility (which may compress less
frequently used functions dramatically).
...
> [CMUCL] is closer to a JIT compiler than you may think.
...
It's about run-time optimizations based on statistics rather than JIT or
effective method caching. No one has said in this thread that Java is
cool because it has bytecodes, JIT or Hot Spot - we are talking about
CL.

Some of the choices in the evolution and standardization of CL are
evidence that runtime optimizations were never meant to be excluded.
For example, Kent once explained to someone who wanted to know how
hash-tables are implemented that the implementor is free to represent
hash tables as (for example) lists or a-lists, as that may give a size or
performance advantage when there are only a few elements. Some amount of
run-time-based optimization is probably in effect already.

My original mail was in response to Howard's list, and after he
clarified its scope, it is clear that runtime optimizations are not at
the level of optimization primitives, but rather build on them.
Because of CL's dynamic nature, it should be even easier to implement
this optimization layer on CL than on Java.

Regards
Robert
