
Macro question (bizarre)


Jonathan BAILLEUL

Mar 17, 2000, 3:00:00 AM
Hello.

I want to make a macro returning a sequence of instructions, like:

(setf a 1)
(list a 1 2)

and so on. The tricky point is that I want the generated code to be
several lists, and not a list of lists. I want this sequence to be
evaluated and I can't use progn. (the aim is to generate package
definitions).

In a general way, is it possible? How?...
(I presume no but I still hope).

thanks for your help...

--
----------------------------------------------
Jonathan BAILLEUL (bail...@emi.u-bordeaux.fr)
Maitrise Informatique, Universite Bordeaux I

Will Deakin

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL wrote:
>The tricky point is that I want the generated code to be several
>lists, and not a list of lists.
I'm not sure what you are trying to do, but if you want to return a
number of things and not as lists of lists, have you thought about
using VALUES or VALUES-LIST[1]?

Best Regards,

:) will

[1] I'm sure you have, it's just that I'm really not sure what you want
to do :(
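
For reference, a minimal sketch of the two operators (the function names here are made up for illustration):

```lisp
;; VALUES returns several objects without wrapping them in a list:
(defun two-forms ()
  (values '(setf a 1) '(list a 1 2)))

;; VALUES-LIST spreads an existing list out into multiple values:
(defun two-forms* ()
  (values-list (list '(setf a 1) '(list a 1 2))))

;; Callers take the values apart with MULTIPLE-VALUE-BIND:
(multiple-value-bind (first second) (two-forms)
  (list first second))
;; => ((SETF A 1) (LIST A 1 2))
```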



Hrvoje Niksic

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:

> I want to make a macro returning a sequence of instructions, like:
>
> (setf a 1)
> (list a 1 2)
>
> and so on. The tricky point is that I want the generated code to be
> several lists, and not a list of lists. I want this sequence to be
> evaluated and I can't use progn.

Why can't you use progn? progn is designed exactly for that purpose.
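
For the record, a tiny sketch of the standard pattern (the macro name is invented): the macro expands into a single PROGN form, and every sub-form of the PROGN is evaluated in order, which is usually all "returning a sequence of instructions" requires:

```lisp
(defmacro set-then-list (var val &rest extras)
  `(progn
     (setf ,var ,val)
     (list ,var ,@extras)))

;; (set-then-list a 1 2) macroexpands to:
;; (PROGN (SETF A 1) (LIST A 2))
```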

Espen Vestre

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:

> I want to make a macro returning a sequence of instructions, like:
>
> (setf a 1)
> (list a 1 2)
>
> and so on. The tricky point is that I want the generated code to be
> several lists, and not a list of lists. I want this sequence to be

> evaluated and I can't use progn. (the aim is to generate package
> definitions).

The question is perhaps how you want this code to be merged into
the rest of your code. Maybe the ,@ splicing syntax inside backquote
is what you're looking for?
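
A sketch of the ,@ idea (the helper and macro names are hypothetical): the sub-forms are computed as a list and then spliced into a surrounding PROGN, so the expansion contains several forms rather than one nested list. At top level, PROGN processes its sub-forms as top-level forms, though the read-time problem discussed elsewhere in this thread still applies: the whole macro call is read before the new package exists.

```lisp
(defun package-forms (name exports)
  "Build the list of forms that define package NAME (hypothetical helper)."
  (list `(defpackage ,name
           (:use "COMMON-LISP")
           (:export ,@exports))
        `(in-package ,name)))

(defmacro define-matrix-package (name &rest exports)
  ;; ,@ splices the computed list of forms into the PROGN.
  `(progn ,@(package-forms name exports)))
```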

--
(espen)

Pierre R. Mai

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:

> Hello.


>
> I want to make a macro returning a sequence of instructions, like:
>
> (setf a 1)
> (list a 1 2)
>
> and so on. The tricky point is that I want the generated code to be
> several lists, and not a list of lists. I want this sequence to be
> evaluated and I can't use progn. (the aim is to generate package
> definitions).

Even if it were possible, you'd still have the same problems, since
you are confusing macro-expansion and read-times. Read-time (even for
macro-expanded code) always comes before macro-expansion and
evaluation times...

> In a general way, is it possible? How?...

No, it isn't possible. And from previous postings on this topic it
still seems to me that what you are trying to achieve is not what you
should really be wanting to achieve, anyway IMHO, especially with evil
package hackery...

Maybe you should try to describe in clear and general (i.e. not
implementation concerned) terms what you wanted to achieve in the
first place. Start at the very beginning (i.e. long before packages,
macros and package hackery comes into the picture)...

Regs, Pierre.

--
Pierre Mai <pm...@acm.org> PGP and GPG keys at your nearest Keyserver
"One smaller motivation which, in part, stems from altruism is Microsoft-
bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]

Jonathan BAILLEUL

Mar 17, 2000, 3:00:00 AM


Okay.


Starting from the beginning: I have completed a small package (see the
sample below) for matrix manipulation.
It can handle virtually any kind of matrix, since three constants (matrix
dimensions, element type) are set and used throughout the package.
The provided functions can be used this way:
(mat:make-null)
, mat being the package nickname.

Suppose I need to handle several matrix-types at the same time:
-3*3, integers
-4*4, double-float

You understand that I am puzzled, because I need to modify my source file
by hand to set the constant values. Thus, I cannot handle both types at
the same time. I know that handling specialized types in Lisp seems
bizarre, but it is needed to speed up execution in a 3D project (CMU CL).

My original idea was to use an 'on-demand package generator (parameters)'
that adds a new package definition to the environment. This way, I also
have the ability to make extra changes in the generated package.

I think the standard approach would be to create a struct with an array
and dimension specifications. That way, each matrix could be treated
differently by the same generic code. As you can imagine, I cannot choose
this convenient way.

I have other reasons to choose my current way. Imagine, for example,
that a call to make-identity returns the raw precalculated identity
matrix: you avoid unnecessary computation and save time. This example
is trivial, of course, but the time gains are more noticeable when it
comes to multiplying matrices: eliminating dotimes really speeds up the
process (no tests at each step), and specializing calculations for a
given set of matrix types is really rewarding (in a 3D transformations
context). These 'static' calculations are performed by macros, of course.
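
The expansion-time precomputation described above can be sketched as follows; the 3x3 double-float case is an assumption for the example:

```lisp
(defmacro identity-3x3 ()
  "Expand into a literal 3x3 double-float identity matrix,
computed once at macro-expansion time."
  (let ((m (make-array '(3 3) :element-type 'double-float
                       :initial-element 0.0d0)))
    (dotimes (i 3)
      (setf (aref m i i) 1.0d0))
    ;; Arrays are self-evaluating, so M itself is the expansion.
    ;; Callers must treat the result as a literal and not modify it.
    m))
```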

The idea is thus to generate and use a package by means of a macro.
Notice that, of course, these concepts are relatively new to me.


I would be happy to benefit from your experience, and I will consider
every suggestion: the aim is not only to 'make it work', but to learn a
new programming language.

--
----------------------------------------------
Jonathan BAILLEUL (bail...@emi.u-bordeaux.fr)
Maitrise Informatique, Universite Bordeaux I


>>>PS: the sample code:

(defpackage "GLOS-MATRIX"
  (:use "COMMON-LISP")
  (:export "MAKE-NULL" "MAKE-IDENTITY" "MAKE-GIVEN"
           "SLOT" "INDEX-LIST" "ROW" "COLUMN"
           "MULTIPLY" "DISPLAY")
  (:nicknames "MAT")
  (:documentation "Package providing generic matrix facilities"))

(in-package "GLOS-MATRIX")

;;; Constants

(defconstant +slots-type+ 'double-float
  "unique type of the variables stored in the matrix slots")

(defconstant +number-rows+ 4
  "number of rows in the current matrix type")

(defconstant +number-columns+ 4
  "number of columns in the current matrix type")

(defun make-null ()
  "create a matrix with null slots"
  (make-array (* +number-rows+ +number-columns+)
              :element-type +slots-type+))

Fernando D. Mato Mira

Mar 17, 2000, 3:00:00 AM
1. Why are you using 1-dimensional arrays?
2. Why are you using constants? make-null should take as parameters either the
dimensions, or a matrix belonging to the same space.

--
Fernando D. Mato Mira
Real-Time SW Eng & Networking
Advanced Systems Engineering Division
CSEM
Jaquet-Droz 1 email: matomira AT acm DOT org
CH-2007 Neuchatel tel: +41 (32) 720-5157
Switzerland FAX: +41 (32) 720-5720

www.csem.ch www.vrai.com ligwww.epfl.ch/matomira.html


Tim Bradshaw

Mar 17, 2000, 3:00:00 AM
* Hrvoje Niksic wrote:

> Why can't you use progn? progn is designed exactly for that purpose.

Because he wants the forms to be read one at a time, since he's
frobbing the package halfway through.

I think the answer is `no, you can't do this, you should take another
approach'.

--tim

Jon S Anthony

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL wrote:

>
> Pierre R. Mai wrote:
> >
> > Maybe you should try to describe in clear and general (i.e. not
> > implementation concerned) terms what you wanted to achieve in the
> > first place. Start at the very beginning (i.e. long before packages,
> > macros and package hackery comes into the picture)...
> >
> > Regs, Pierre.
> >
...
> Okay.
...

You're trying to use packages as some sort of template. That's not
what they are for - they are for name space organization.

...<code>

Since you seem to want to tie together the element type and specific
dimensions and such for each kind of array, it seems that something as
simple as the following would do what you want:


(defmacro defmatrix-thingy (name (&key type dimensions))
  (let ((namestg (symbol-name name))
        (init-val (case type
                    ((double-float) 0.0D0)
                    ((single-float) 0.0S0))))
    `(progn
       (defun ,(intern (concatenate 'string "MAKE-" namestg)) ()
         (make-array ',dimensions :element-type ',type
                     :initial-element ,init-val))

       (defun ,(intern (concatenate 'string namestg "-MULT")) (m1 m2)
         (declare (type (array ,type ,dimensions) m1 m2)
                  (optimize (speed 3) (safety 1)))
         ;; code
         1)

       (defun ,(intern (concatenate 'string namestg "-ADD")) (m1 m2)
         (declare (type (array ,type ,dimensions) m1 m2))
         ;; code
         2)

       ;; Etc.
       )))


You could also use the "the" special operator in the function
definitions for further optimization opportunities.


(defmatrix-thingy dbl-flt-3X3 (:type double-float :dimensions (3 3)))

(let ((m1 (make-dbl-flt-3X3))
      (m2 (make-dbl-flt-3X3))
      (m3 (make-dbl-flt-3X3)))
  ... ; set the values and such
  (dbl-flt-3X3-mult m1 (dbl-flt-3X3-add m2 m3)))


There are many variations that you could do. If you need some scoped
"private" information (maybe an identity matrix or whatever), just
replace the progn with a let and the appropriate information.


/Jon

--
Jon Anthony
Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
"Nightmares - Ha! The way my life's been going lately,
Who'd notice?" -- Londo Mollari

Tom Breton

Mar 17, 2000, 3:00:00 AM
Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:

> Hello.
>
> I want to make a macro returning a sequence of instructions, like:
>
> (setf a 1)
> (list a 1 2)
>
> and so on. The tricky point is that I want the generated code to be
> several lists, and not a list of lists. I want this sequence to be
> evaluated and I can't use progn. (the aim is to generate package
> definitions).

No, because that's not what a macro does. A macro is just a function
that's "called funny" - its arguments are not evaluated, its return
value is. It isn't always the mechanism to deal with meta-code.

ISTM two things: What you want is an ordinary function. And your
caller is going to have to do some of the work, such as pulling apart
a list of lists into - well, however you were going to represent
"several lists" as not a list of lists.

--
Tom Breton, http://world.std.com/~tob
Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
Rethink some Lisp features, http://world.std.com/~tob/rethink-lisp/index.html

Tom Breton

Mar 18, 2000, 3:00:00 AM
Will Deakin <aniso...@my-deja.com> writes:

> Jonathan BAILLEUL wrote:
> >The tricky point is that I want the generated code to be several
> >lists, and not a list of lists.

> I'm not sure what you are trying to do, but if you want to return a
> number of things and not as lists of lists, have you thought about
> using VALUES or VALUES-LIST[1]?
>
> Best Regards,
>
> :) will
>
> [1] I'm sure you have, it's just that I'm really not sure what you want
> to do :(

It's almost certainly not. Pardon my opinion, but multiple values are
basically for when you have code set in stone... and then suddenly you
have to change it, and you can't possibly rewrite all the callers.
You return values, so that it looks the same way to the original
callers, but your newer functions that know about the values can get
at them. IOW, values is basically a backwards-compatibility thing.

When that's *not* the case, simply returning a list is better in every
way that springs to mind, IMO.

Erik Naggum

Mar 18, 2000, 3:00:00 AM
* Tom Breton <t...@world.std.com>

| Pardon my opinion, but multiple values are basically for when you have
| code set in stone...

noted as opinion.

| IOW, values is basically a backwards-compatibility thing.

it can definitely be used for this purpose, and may even have great value
used this way, but I must admit that I never thought of values like that.

values to me is a mechanism that removes the burden of agreeing on the
aggregate form of the returned values. I guess this has to be explained
in terms of how other languages deal with the same issue: multiple values
are often expressed in C by passing a pointer to a structure in the
caller's memory to be filled in by the callee. returning a structure is
_still_ not kosher in the C world, and incompatibilities exist in how it
is done. this affects how people return more than one value from their
functions. in some cases, the caller needs to pass multiple pointers as
arguments to be written to. Ada has a clean way to do this: with in and
out arguments, the latter of which act just like multiple-value-setq.

| When that's *not* the case, simply returning a list is better in every
| way that springs to mind, IMO.

since consing and destructuring both have very significant costs, I'd
rate this as an insufficiency of things that spring to mind.
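
To make the cost concrete: returning a list allocates fresh cons cells on every call, while returning multiple values typically does not allocate at all (FLOOR is used here because it already returns two values):

```lisp
;; Conses two cells per call:
(defun div-as-list (x y)
  (list (floor x y) (mod x y)))

;; Conses nothing; the two results come back as values:
(defun div-as-values (x y)
  (floor x y))

(multiple-value-bind (q r) (div-as-values 17 5)
  (list q r))   ; => (3 2)
```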

#:Erik

Tim Bradshaw

Mar 18, 2000, 3:00:00 AM
* Bulent Murtezaoglu wrote:

> On first reading I thought this was a horrible hack, then I thought it was
> very cute, now I don't know what to think. Could I impose on you for an
> example?

I don't have a standalone one I'm afraid, but this gives the kind of
idea. Apologies for the undefined and C-influenced type names...

(defun ensure-string (string length)
  (declare (type (or string null) string)
           ;; strings have only a 16bit length field.
           (type ushort length))
  ;; good type inferencing compilers may elide this...
  (unless (<= length #xffff)
    (error "String too long"))
  (or (and string
           (cond ((= (length string) length)
                  string)
                 ((and (>= (array-total-size string) length)
                       (array-has-fill-pointer-p string))
                  (setf (fill-pointer string) length)
                  string)
                 (t nil)))
      ;; it might be better to round up here
      (make-array (max length *buffer-string-size*)
                  :element-type 'uchar
                  :fill-pointer length)))

(defun map-thingy-chunks (fn thingy &optional nbuf vbuf)
  ;; FN is called with 2 strings.  FN should assume that these
  ;; strings have dynamic extent and must copy them if it wishes
  ;; to keep them (in fact it's usually called with the same string over
  ;; and over but with different contents).
  (loop with next = 0
        while next
        do
        (multiple-value-setq (next nbuf vbuf)
          (next-in thingy next nbuf vbuf))
        (funcall fn nbuf vbuf)
        finally (return (values nbuf vbuf))))


(defun next-in (start nbuf vbuf)
  ...
  (let ((nb (ensure-string nbuf ...))
        (vb (ensure-string vbuf ...))
        ...
        (next ...))
    ...
    (setf (subseq nb 0) ...
          (subseq vb 0) ...)
    (values next nb vb)))

Bulent Murtezaoglu

Mar 18, 2000, 3:00:00 AM

TimB> ... So what I do is arrange life so that the
TimB> functions take some optional arguments which are buffers to
TimB> use (sizes &c are checked, allocation is done in big chunks
TimB> with a fill-pointer if need be), and pass back these buffers
TimB> as additional values. ...

On first reading I thought this was a horrible hack, then I thought it was
very cute, now I don't know what to think. Could I impose on you for an
example?

Thanks.

BM

Tim Bradshaw

Mar 18, 2000, 3:00:00 AM
* Tom Breton wrote:

> It's almost certainly not. Pardon my opinion, but multiple values are
> basically for when you have code set in stone... and then suddenly you
> have to change it, and you can't possibly rewrite all the callers.
> You return values, so that it looks the same way to the original
> callers, but your newer functions that know about the values can get

> at them. IOW, values is basically a backwards-compatibility thing.

I don't use them this way, though I can see that that is a reasonable
use for them.

I use them in two places --

1. Where I have a function which needs to
return several logical things and I want to avoid consing a composite
object, especially if I'm intending to immediately take it to
bits. For instance I have functions which take an (UNSIGNED-BYTE 32)
and return 2 or 4 (UNSIGNED-BYTE 8)s which represent it in a byte
stream. This thing is called a lot, and I really don't want to cons a
list or a vector and then take it to pieces. I tend to do this quite
a lot and have developed a (doubtless appalling!) style of writing
rather big functions with a lot of little local functions inside which
pass information around using multiple values if need be.
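
That first use might look like the following sketch (a reconstruction, not the actual code; big-endian octet order is an assumption):

```lisp
(defun u32-octets (n)
  "Split an (UNSIGNED-BYTE 32) into four octets, most significant
first, returned as multiple values with no consing."
  (declare (type (unsigned-byte 32) n))
  (values (ldb (byte 8 24) n)
          (ldb (byte 8 16) n)
          (ldb (byte 8 8) n)
          (ldb (byte 8 0) n)))

(multiple-value-bind (b0 b1 b2 b3) (u32-octets #x12345678)
  (list b0 b1 b2 b3))   ; => (18 52 86 120), i.e. #x12 #x34 #x56 #x78
```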

2. Where I have a function which has some useful additional
information to pass back to the caller but where `naive' callers might
not want to use this. This is kind of close to your use I think,
except it isn't backwards-compatibility, it's by design. Another
gratuitous example: I have some code which needs to allocate temporary
buffers. Stack allocating them would be good but not everyone does
it, and those that do often have (reasonable) restrictions I can't
live with (size known at compile time for instance). So what I do is
arrange life so that the functions take some optional arguments which
are buffers to use (sizes &c are checked, allocation is done in big
chunks with a fill-pointer if need be), and pass back these buffers as
additional values. Then callers who don't care can just use the
functions, but those who do care can catch the buffers and pass them
back in for later use (at the cost of having to manage the reentrancy
issues). This turns out to be a gloriously successful way of doing it
in terms of performance -- I suspect it is *better* than stack
allocation -- all made possible by multiple values and fill-pointers.

--tim

Duane Rettig

Mar 18, 2000, 3:00:00 AM
Tom Breton <t...@world.std.com> writes:

> Will Deakin <aniso...@my-deja.com> writes:
>
> > Jonathan BAILLEUL wrote:
> > >The tricky point is that I want the generated code to be several
> > >lists, and not a list of lists.
> > I'm not sure what you are trying to do, but if you want to return a
> > number of things and not as lists of lists, have you thought about
> > using VALUES or VALUES-LIST[1]?
> >
> > Best Regards,
> >
> > :) will
> >
> > [1] I'm sure you have, it's just that I'm really not sure what you want
> > to do :(
>

> It's almost certainly not. Pardon my opinion, but multiple values are
> basically for when you have code set in stone... and then suddenly you
> have to change it, and you can't possibly rewrite all the callers.

This is a valid use, but it is too harsh to restrict the usage of
values to only this.

I view multiple values returned as closely analogous to optional
arguments; in fact, probably all that would be necessary to give
multiple values more similarity to optionals would be to enhance
multiple-value-bind to take default and supplied-p parameters (but
I am not advocating this). So whatever uses you can think of for
optional arguments on the input side can almost certainly be
thought of for multiple values on the return side.
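
The analogy is already visible in standard behaviour: MULTIPLE-VALUE-BIND treats missing values like optionals that default to NIL.

```lisp
(defun one-value () 42)

;; Extra binding slots default to NIL, just as an unsupplied
;; &optional argument does:
(multiple-value-bind (a b) (one-value)
  (list a b))   ; => (42 NIL)
```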

> You return values, so that it looks the same way to the original
> callers, but your newer functions that know about the values can get
> at them. IOW, values is basically a backwards-compatibility thing.

And optionals can also be used in the same way; leave the earlier
arguments alone, and add and use new optionals as needed without
disturbing older code. However, I would be hard-pressed to offer
backward-compatibility as the only (or even basic) use for optionals.

Perhaps the problem you have with multiple-values is a philosophical
one; is it the case that you believe in a pure functional programming
environment in which a function has precisely one value?

> When that's *not* the case, simply returning a list is better in every
> way that springs to mind, IMO.

Returning a list always requires consing; returning multiple values
(mostly) does not. Therefore, I tend to favor returning multiple
values over consing up a new list, unless there are other factors
that sway the balance the other way.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Tim Bradshaw

Mar 18, 2000, 3:00:00 AM
* Bulent Murtezaoglu wrote:

> On first reading I thought this was a horrible hack, then I thought it was
> very cute, now I don't know what to think. Could I impose on you for an
> example?

Incidentally, I think this *is* pretty much a hack, witness the fact
that the function getting mapped over the buffers has to know that it
needs to copy them if it wants to store them (in a hashtable say) (and
the real code is full of C-like caveats like this). But in more
general code the multiple-values / optional argument trick means that
unless you explicitly use these extra values and pass them back in --
in which case you're assumed to know what you're doing -- things just
work as you expect. So it's a way of providing a very efficient
interface and a less-efficient but safe one from the same set of
functions.

And I gleefully admit to obsessing about performance & consing
probably unjustly here. However in degenerate cases (where all I'm
doing is I/O, so it's really an unfair test) the buffer-reuse trick
gets you a several-fold performance increase. I suspect this is
because of better caching behaviour -- my buffers are a few K
typically, and probably fit in a cache very close to the CPU, whereas
if I keep allocating new ones then it has to flush the old one out to
memory (it doesn't know it's scrap). At least this is my *guess* -- I
haven't measured how much time is going to the GC but I doubt it
accounts for the difference, anywhere near.

--tim

Tom Breton

Mar 18, 2000, 3:00:00 AM
Erik Naggum <er...@naggum.no> writes:

> * Tom Breton <t...@world.std.com>


> | Pardon my opinion, but multiple values are basically for when you have
> | code set in stone...
>

> noted as opinion.


>
> | IOW, values is basically a backwards-compatibility thing.
>

> it can definitely be used for this purpose, and may even have great value
> used this way, but I must admit that I never thought of values like that.
>
> values to me is a mechanism that removes the burden of agreeing on the
> aggregate form of the returned values.

OK, that I can agree with, but ISTM it mostly recapitulates the
backwards-compatibility issue. For callers that are constructed at
about the same time as the callee, values doesn't help them agree.


> I guess this has to be explained
> in terms of how other languages deal with the same issue: multiple values
> are often expressed in C by passing a pointer to a structure in the
> caller's memory to be filled in by the callee. returning a structure is
> _still_ not kosher in the C world, and incompatibilities exist in how it
> is done. this affects how people return more than one value from their
> functions. in some cases, the caller needs to pass multiple pointers as
> arguments to be written to. Ada has a clean way to do this: with in and
> out arguments, the latter of which act just like multiple-value-setq.

This is kind of a different issue, how easy it is to destructure a
returned list. Granted, both progv and destructuring bind try to do
much more than multiple-value-bind does. That's an issue of
optimizing progv and destructuring-bind during compilation.

> | When that's *not* the case, simply returning a list is better in every
> | way that springs to mind, IMO.
>

> since consing and destructuring both have very significant costs, I'd
> rate this is an insufficiency of things that spring to mind.

Well, I'm not going to tell you "optimize last"; you surely already
know that. I do want to note that, like most optimizations, 90%+ of
the time you don't care and shouldn't arrange your code to accommodate
it. The rest of the time, you should do it last.

I actually have some ideas about how Lisp as a language could make
that particular transition easier, but none are particularly important
so I won't go into them here.

So in sum, I grant that values can do a little more than I said, on
the optimization front, but IMO optimization is not something that one
should think about when coding.

Tom Breton

Mar 18, 2000, 3:00:00 AM
Tim Bradshaw <t...@cley.com> writes:
[snippy]

OK, that basically touched the same points as Erik's, and my answer is
that I grant that values can do a little more than I said, on the
optimization front, but optimization should be a last resort.

Tom Breton

Mar 18, 2000, 3:00:00 AM
Duane Rettig <du...@franz.com> writes:

> Tom Breton <t...@world.std.com> writes:
>
> > Will Deakin <aniso...@my-deja.com> writes:
> >
> > > Jonathan BAILLEUL wrote:
> > > >The tricky point is that I want the generated code to be several
> > > >lists, and not a list of lists.
> > > I'm not sure what you are trying to do, but if you want to return a
> > > number of things and not as lists of lists, have you thought about
> > > using VALUES or VALUES-LIST[1]?
> > >
> > > Best Regards,
> > >
> > > :) will
> > >
> > > [1] I'm sure you have, it's just that I'm really not sure what you want
> > > to do :(
> >
> > It's almost certainly not. Pardon my opinion, but multiple values are
> > basically for when you have code set in stone... and then suddenly you
> > have to change it, and you can't possibly rewrite all the callers.
>
> This is a valid use, but it is too harsh to restrict the usage of
> values to only this.
>
> I view multiple-values-returned as closely analagous to optional
> arguments;

OK, I grant that there's the possibility of simplifying the call to a
function that returns values that often aren't used. OTOH, that
entails that you are destructuring the return value, so you can do
essentially the same with destructuring-bind and lists.

Except in the case where you just let it silently degrade to a
singleton return. But that's not strongly analogous to optionals.
That gives you a single alternate interface which saves you the
trouble of writing three letters and a pair of parens `(car )', but
forces you to pay attention to exactly when the return degrades to a
singleton.

> in fact, probably all that would be necessary to give
> multiple-values more similarity to optionals would be to enhance
> multiple-value-bind to take default and supplied-p parameters (but
> I am not advocating this).

ISTM you'd also have to make it a little easier to use keywords. They
can be used now, IIRC by declaring them in an ftype.

If one were to do that - I think it'd be only a small gain - ISTM it
would be better to have a single destructuring statement that covers
all of multiple-value-bind, destructuring-bind, with-slots, and
with-struct-slots (which is something I wrote to let defstructs have
with-slots too)

> So whatever uses you can think of for
> optional arguments on the input side, can almost certainly be
> thought of for muliple values on the reurn side.

But also for lists on the return side. And with `apply' there's 4-way
symmetry.

>
> Perhaps the problem you have with multiple-values is a philosophical
> one; is it the case that you believe in a pure functional programming
> environment in which a function has precisely one value?

<digression>
"Believe in" is a very baggy term. I like the idea for some things.
The past few days, I've gotten into SML for the first time, and the
pure functional style has strong merits. But AFAICT it doesn't
allow you to manipulate calls anywhere near as easily as Lisp does.

My philosophy, which you made the fatal mistake of asking about }:),
is "Maximum expressiveness, they're called programming *languages* for
a reason. More expressive than you think I mean. Way more.", and
"Let me write code close to the problem domain", and "I want to think
about stuff *once*, write it down, and be done thinking about that
part of it" and "Underneath it all, make the pieces simple,
indivisible, atomic." Of course these goals are in conflict.

</digression>

But I don't think there's a conflict wrt multiple values, because
destructuring is as expressive (and more) than multiple-value-bind.
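
The equivalence being claimed, as a minimal sketch:

```lisp
;; Taking apart a returned list:
(destructuring-bind (q r) (list 3 2)
  (list q r))   ; => (3 2)

;; The same shape with multiple values:
(multiple-value-bind (q r) (values 3 2)
  (list q r))   ; => (3 2)
```

DESTRUCTURING-BIND additionally supports &optional, &key, and nested patterns, which is the "and more" part of the claim.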


> > When that's *not* the case, simply returning a list is better in every
> > way that springs to mind, IMO.
>

> Returning a list always requires consing; returning multiple values
> (mostly) does not.

Which is an optimization issue. I've a few ideas that ISTM would make
it easier to do that particular optimization, but they're not
important so I won't go into them unless there's interest.

> Therefore, I tend to favor returning multiple
> values over consing up a new list, unless there are other factors
> that sway the balance the other way.

ISTM the fact that values degrade to singletons, and the fact that they
require some attention to detail to see exactly where they degrade (e.g.
prog1 does, progn doesn't), sways the balance the other way for code
in general. As I say, for optimization ya do whatever ya gotta, but
ya leave it till last.

Pierre R. Mai

Mar 19, 2000, 3:00:00 AM
Bulent Murtezaoglu <b...@acm.org> writes:

> TimB> ... So what I do is arrange life so that the
> TimB> functions take some optional arguments which are buffers to
> TimB> use (sizes &c are checked, allocation is done in big chunks
> TimB> with a fill-pointer if need be), and pass back these buffers
> TimB> as additional values. ...
>

> On first reading I thought this was a horrible hack, then I thought it was
> very cute, now I don't know what to think. Could I impose on you for an
> example?

Just to give another data point on this: I often do the same thing
in (possibly) performance-critical code. And I don't think this
technique is restricted to buffers; it can be used reasonably for
all kinds of costly result and auxiliary data structures (in this
way it is not dependent on multiple values in principle). What it
gives you is the ability to offer both a simple, non-destructive but
inefficient interface and a more complex, destructive but efficient
interface, taking into account many caller profiles, without code
duplication. Heavily touched-up example:

(defun make-empty-consumer-plan (&optional (size 1000))
  (make-hash-table :test #'eql :size size))

;; Note: a defgeneric lambda list may not specify defaults for
;; &optional parameters; the defaults go on the methods.
(defgeneric fill-consumer-materials-plan
    (consumer end-date &optional result))

(defmethod fill-consumer-materials-plan
    ((consumer simple-consumer) end-date &optional (result nil))
  (unless result
    (setq result (make-empty-consumer-plan)))
  (map-production-plan
   #'(lambda (product count start-date cycle-time)
       (declare (ignore start-date cycle-time))
       (loop for (material amount) in (product-bom product)
             do (incf (gethash material result 0) (* amount count))))
   (consumer-production-plan consumer)
   end-date))

This has the added benefit that this caller will have to do much less
work, and much less consing:

(defmethod material-requirements-forecast
    ((provider simple-provider) end-date)
  (let ((result (make-empty-consumer-plan)))
    (dolist (consumer (provider-consumers provider) result)
      (fill-consumer-materials-plan consumer end-date result))))

There are plenty of other examples where this approach is useful.
E.g. consider an iteration construct that allows mapping a function
over the tuples (lists) provided by some abstract iteration
interface (e.g. relational database backends via an FFI). For large
result sets and many tuples, you don't want to cons up a new argument
list for each tuple, so you want to reuse a single argument list.
OTOH in other contexts, you might not want to care about setting up
your own argument list and all that stuff, so you let the iterator
cons up fresh ones as needed:

;; Again, the &optional default and supplied-p go on the method,
;; not in the defgeneric lambda list.
(defgeneric database-result-set-fetch-next-tuple
    (result-set &optional result))

(defmethod database-result-set-fetch-next-tuple
    ((result-set oracle-large-result-set) &optional (result nil result-p))
  #+PARANOID
  (when result-p
    (assert (= (length result) (tuple-length result-set)) (result)
            "Provided result list must match tuple-length."))
  (unless result-p
    (setq result (make-list (tuple-length result-set))))
  ;; Fill result list, etc.
  result)

In general, if this is documented well, I don't consider it an ugly
hack, and indeed this kind of code seems to be a common idiom in
performance critical interfaces in CL.
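To make the calling convention concrete, here is a hedged sketch of both sides of such a dual interface, reusing the fetch function sketched above; PROCESS-TUPLE and N-TUPLES are invented placeholders, not part of any real database FFI:

```lisp
;; Sketch only: assumes the DATABASE-RESULT-SET-FETCH-NEXT-TUPLE
;; generic function above; PROCESS-TUPLE and N-TUPLES are made up.

;; Convenient interface: a fresh tuple list is consed per call.
(process-tuple (database-result-set-fetch-next-tuple result-set))

;; Efficient interface: one buffer list is reused across the loop,
;; so large result sets don't cons one list per tuple.
(let ((buffer (make-list (tuple-length result-set))))
  (dotimes (i n-tuples)
    (process-tuple
     (database-result-set-fetch-next-tuple result-set buffer))))
```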

Pierre R. Mai

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:

> You understand that I am puzzled because I need to modify my source file
> by hand to set constant values. Then, I cannot handle both types at a
> time. I know that in lisp, handling specialized types seems bizarre but
> it is needed for speeding the execution on a 3d project (CMU).

Always be wary of interface changes that are only justified by the
need for speed. While there are cases where this is justification
enough, there are many more where you should seek out other means of
achieving the speed-up you need, without sacrificing a clear interface
design.

If I understand you correctly, then what you want to achieve is a
combination of code-specialization, code reuse and compile-time
optimizations. There are a couple of approaches you should check out:

- Do a class-based design, put the specialized code into generic
functions specialized on the types of matrices in question (or their
type-names, via eql-specializers, for "constants" like
matrix-null-element). Solves the specialization problem neatly,
but might be prohibitive because of GF-calling overhead
(type-dispatch at run-time, instead of compile-time), and possibly
other factors. While it seems unlikely (I'd wager that your program
will be computation bound, and not call bound), you'd need to do
further profiling to decide this. This approach in itself doesn't
solve the code-reuse problem, but it can be combined with the
macro-based solution, to give you that. It also doesn't solve the
compile-time issue completely, since you still do the dispatch at
runtime. Again I don't think this will be a terrible problem, and
there are ways around this, in combination with the other two
approaches (possibly doing some hacks).
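For illustration, a minimal sketch of the eql-specializer idea for type-dependent "constants"; the type names and the zero elements here are invented for the example, not taken from the original code:

```lisp
;; A generic function plays the role of a per-type "constant":
;; dispatch happens on the type name via an EQL specializer.
(defgeneric matrix-null-element (type-name))

(defmethod matrix-null-element ((type-name (eql '3x3-double-matrix)))
  0.0d0)

(defmethod matrix-null-element ((type-name (eql '3x3-single-matrix)))
  0.0s0)

;; (matrix-null-element '3x3-double-matrix) evaluates to 0.0d0
```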

- Write macros that produce the specialized code, as you do now, but
instead of using packages as the namespace, use either different
function names, or use the namespace generic functions provide (in
combination with the approach above). Packages are really the wrong
way to think about this, since in the context of CL packages are
about packaging up complete libraries and applications, quite unlike
packages in e.g. Ada. You can do the on-demand facility much more
cleanly with macros in combination with either GFs and/or
compiler-macros.

- Use compiler-macros. This will give you the ability to provide nice,
normal interfaces, yet optimize the hell out of code at compile-time.
This approach might be combined with both approaches above. Since
environment access hasn't been standardized yet, you will have to
live with some restrictions on this: Either make it easy for portable
code to recognize optimization opportunities, at the expense of a
nice interface, or resort to implementation specific environment
access. If you only want to work on CMUCL anyway, you might also
want to look at CMUCL's own optimizer framework, which can be
extended quite easily, and which will give you _much_ leverage.

Compiler-macros also offer you cross-function optimization
opportunities, which your hand-dispatch approach doesn't offer:
Especially with concatenated matrix operations, it might be
beneficial to do many of them in one step (reduced iteration
overhead, possibly reduced need for temporaries, better
cache-locality, etc.).

Here's some demo code for compiler-macros and CMUCL's optimization
interface (IR1-transforms and type-inference):

First a portable example of combining operations (note that these kind
of compiler-macros are best written post-profiling, to identify the
combinations that are used in hot-spots, since providing a complete
set of optimizers will mean exponential growth):

(define-compiler-macro matrix-addition (&whole whole matrix-a matrix-b)
(cond
((and (consp matrix-a)
(eq (car matrix-a) 'matrix-scalar-multiplication))
`(%matrix-scalar-add-mult ,(cadr matrix-a) ,(caddr matrix-a) ,matrix-b))
;; Etc. Don't forget symmetric cases...
(t
;; We can't optimize, leave generic
whole)))

Again, writing macros to write these compiler-macros might be a good
idea...

In addition you might want to optimize on types of matrices known at
compile-time. This is implementation specific, since you'll want to
dispatch on the type of parameters, information that isn't available
via a standardized interface. So, since we'll be implementation
dependent anyway, we might as well use CMUCL's optimizer interface,
which lies below compiler-macros (you're really extending CMUCL's
compiler with this):

(deftransform matrix-addition ((matrix-a matrix-b)
(3x3-matrix 3x3-matrix))
`(let ((a (matrix-array matrix-a))
(b (matrix-array matrix-b)))
,@(loop for x from 0 below 9
collect `(incf (row-major-aref a ,x)
(row-major-aref b ,x)))
matrix-a))

The above open-codes and unrolls in-place addition.

One possible solution would be to combine all of the three approaches
above: Write macros that will provide both methods and internal
functions which are specialized to the given dimensions. Provide
generic implementations for non-specialized dimensions. Provide
compiler-macros that combine concatenated operations. Provide
optimizers that call your internal functions, or possibly open-code
them for calls to your GFs that have discernible types. Use type
declarations in your programs to trigger these optimizations.

Or maybe you can get away with a lot less. This will depend on the
information you can glean from profiling your code. In optimization,
profiling isn't everything, but without profiling everything else is
nothing ;)

Will Deakin

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
Tom Breton wrote in message ...

>It's almost certainly not. Pardon my opinion, but multiple values are
>basically for when you have code set in stone... When that's *not*

>the case, simply returning a list is better in every
>way that springs to mind, IMO.


We will have to disagree: I think there are valid uses for multiple values other
than the one you describe. However, this irked me (which is my problem) until I
remembered this:

"Hogen lived alone in a small temple in the country. One day four travelling
monks appeared and asked if they could make a fire in his yard.

Whilst building the fire, Hogen heard them arguing about objectivity and
subjectivity. He joined them and said: `There is a big stone. Do you consider it
to be inside or outside your mind?'

One monk replied: `Everything is an objectification of the mind, so I would say
the stone is inside my mind.'

`Your head must feel very heavy,' observed Hogen, 'if you are carrying a rock
like that in your head.'"

;) will

Erik Naggum

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
* Tom Breton <t...@world.std.com>

| So in sum, I grant that values can do a little more than I said, on
| the optimiztion front, but IMO optimization is not something that one
| should think about when coding.

people who profess aggressively _not_ to care about optimizing tend to
work just as hard _pessimizing_ their code as those who profess to care
about optimizing tend to work on optimizing theirs. the right balance is
the one that results from deep insight into the costs of all operations
involved, and the simple concept of avoiding wanton abuse of resources.
in my opinion, those who argue against optimization on the basis of a
desire to squander resources, are lazy in a very destructive sense.

#:Erik

Erik Naggum

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
* Tom Breton <t...@world.std.com>

| OK, that basically touched the same points as Erik's, and my answer is
| that I grant that values can do a little more than I said, on the
| optimization front, but optimization should be a last resort.

but what kind of code do you have before you "resort" to optimization?
is this really a counter-argument to being _economical_?

#:Erik

Tom Breton

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
Erik Naggum <er...@naggum.no> writes:

> * Tom Breton <t...@world.std.com>
> | So in sum, I grant that values can do a little more than I said, on
> | the optimization front, but IMO optimization is not something that one
> | should think about when coding.
>
> people who profess aggressively _not_ to care about optimizing tend to
> work just as hard _pessimizing_ their code as those who profess to care
> about optimizing tend to work on optimizing theirs.

I've never known anyone who worked at pessimizing their code. I
certainly don't.

> the right balance is
> the one that results from deep insight into the costs of all operations
> involved, and the simple concept of avoiding wanton abuse of resources.
> in my opinion, those who argue against optimization on the basis of a
> desire to squander resources, are lazy in a very destructive sense.

These are all things that should be deferred until they are needed,
IMO. One of the best things I learned as a programmer was *not* to
grab at everything that looked like an opportunity to optimize. When
the urge strikes me - and it still does - I write comments indicating
that an optimization is possible here, and that usually takes care of
the optimization-urge and I can get back to doing useful work. I find
I rarely actually want the optimization enuff to be worth doing later.

Tom Breton

unread,
Mar 19, 2000, 3:00:00 AM3/19/00
to
Erik Naggum <er...@naggum.no> writes:

> * Tom Breton <t...@world.std.com>


> | OK, that basically touched the same points as Erik's, and my answer is
> | that I grant that values can do a little more than I said, on the
> | optimization front, but optimization should be a last resort.
>
> but what kind of code do you have before you "resort" to optimization?

Clean. (Was that rhetorical?)

> is this really a counter-argument to being _economical_?

I would not call that economical. You're spending the more important
resource, coding time/effort/attention, handling the multiple values.
"Can I use them here, or have they degraded?" "Why did my code stop
working? Could this prog1 be the problem? It couldn't possibly be,
the logic's exactly the same. (pull hair out while poring over code)"

Pierre R. Mai

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Tom Breton <t...@world.std.com> writes:

> These are all things that should be deferred until they are needed,
> IMO. One of the best things I learned as a programmer was *not* to
> grab at everything that looked like an opportunity to optimize. When
> the urge strikes me - and it still does - I write comments indicating
> that an optimization is possible here, and that usually takes care of
> the optimization-urge and I can get back to doing useful work. I find
> I rarely actually want the optimization enuff to be worth doing later.

There is a difference between optimizing code, and not doing
_needlessly_ inefficient or misleading things. Using multiple values
instead of lists where there is clearly no advantage to using lists
(or even small inconveniences) falls into the second category, IMHO.

Programming in a programming language is about using the most specific
and best suited abstractions available to you, within the bounds of
practicability and your knowledge of the problem. When a more
suitable construct is obvious and trivial to implement it should be
used right from the start. Optimization for speed or other criteria
sets in when the more suitable is either non-obvious, or non-trivial
to implement, or both.

Multiple values and lists are in different parts of the design space
for related but distinct problem descriptions. They offer different
trade-offs and are intended to express different intentions. Just
take the difference in identity: Lists have an identity of their own,
whereas multiple values don't. Lists are "persistent" aggregates,
whereas multiple values are not. Therefore they express different
intents.
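A small sketch of that identity difference (the EXTREMA-* names are invented for the example):

```lisp
;; A list return value is a first-class object the caller may keep:
(defun extrema-as-list (seq)
  (list (reduce #'min seq) (reduce #'max seq)))

;; Multiple values are fleeting: no aggregate object is consed, and
;; a caller in a single-value context just sees the primary value.
(defun extrema-as-values (seq)
  (values (reduce #'min seq) (reduce #'max seq)))

(multiple-value-bind (lo hi) (extrema-as-values '(3 1 4 1 5))
  (list lo hi))                                  ; => (1 5)
(first (list (extrema-as-values '(3 1 4 1 5)))) ; => 1, rest discarded
```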

The only arguments I have seen against providing multiple values in
the language seem to come because of concerns for what could be termed
"Language Minimalism". Suffice it to say that I don't believe in LM
as an overriding principle in language design.

Pierre R. Mai

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Tom Breton <t...@world.std.com> writes:

> I would not call that economical. You're spending the more important
> resource, coding time/effort/attention, handling the multiple values.
> "Can I use them here, or have they degraded?" "Why did my code stop
> working? Could this prog1 be the problem? It couldn't possibly be,
> the logic's exactly the same. (pull hair out while poring over code)"

You seem to be encountering problems using multiple values that I have
never encountered, and not for lack of using MVs. Also finding out
about multiple-value-prog1 is a one time investment only. Maybe 5
minutes, at most. How much is 5 minutes in the life of a programmer?

Erik Naggum

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
* Tom Breton <t...@world.std.com>

| I would not call that economical. You're spending the more important
| resource, coding time/effort/attention, handling the multiple values.
| "Can I use them here, or have they degraded?" "Why did my code stop
| working? Could this prog1 be the problem? It couldn't possibly be,
| the logic's exactly the same. (pull hair out while poring over code)"

I'm sensing fire, burned children, and fear. being economical can either
be conscious and achieved by thinking, as in the unskilled programmer who
makes the best choices late, or automated and achieved by emotion and gut
feeling, as in the highly skilled who makes the best choices early. you
seem to think that writing efficient, economical code is something you do
on a conscious basis after you have done something "clean" that is stupid
and uneconomical according to the resource expenditure measures that you
attempt to introduce later. this is the incompetent's credo, and I for
one do not subscribe to it.

#:Erik

Erik Naggum

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
* Tom Breton <t...@world.std.com>

| One of the best things I learned as a programmer was *not* to grab at
| everything that looked like an opportunity to optimize.

I'm sure that's true for you. one of the best things I learned as a
novice programmer was that no matter what I choose to do, it has impact
on performance, and if I want to waste as little time as possible going
back over code and rewrite it, it would help me a lot to understand as
much as possible both about the hardware and the language used. the more
factors I can deal with at the same time, the less time I spend fixing
up code that ignored one or more factors that turned out to be important.
the only way to deal with a whole bunch of factors at once is to work so
much with them they migrate into your automated, emotional response
system, just like people walk and type and talk without actually thinking
about _any_ of the specific physical operations involved.

| I find I rarely actually want the optimization enuff to be worth doing
| later.

of course not. that's the problem with doing optimization too late.

premature optimization can hurt you, and real bad at that. that doesn't
mean thinking in terms of optimization is bad. it means that you don't
do optimization stuff that is likely to hurt you, and you spend a fair
amount of your working life acquiring the experience that tells you that
some things may hurt a little, yet need to be done, anyway. avoiding all
kinds of hurt is a very good way never to acquire _any_ experience.

tell you what. I recently got a position where I'm about to hire a
sysadm and some programmers. I'm looking for people who are good at what
they do, obviously, and to me, that has always meant an interest in stuff
that happens "below" whatever you're "supposed" to be doing, as in
_caring_. a sysadm who doesn't care about hardware is obviously going to
run into a situation one day where his non-caring will impact everyone
else badly. it is less obvious with a programmer, but I have decided
that I'm not going to hire Common Lisp programmers who don't want to know
how to read the disassembled code of a compiled function. neither will I
let anyone who is unwilling to read RFCs to understand TCP/IP and other
networking fundamentals work on network protocol implementations, even
high-level ones. likewise, if someone told me that he'd always optimize
late, I'd assume that he'd be a really bad programmer whose brain didn't
work well or fast enough that he could deal with economy of expression
and algorithm and resource use at the same time, and would have to take
care of each one at a time. just as you don't disassemble code all the
time, and certainly don't think about IP packets and network issues all
the time, the fact that you care about it means you don't ignore it, and
not ignoring it means not overlooking something that can save you days in
debugging, weeks in design, and months in customer relations.

I'm not sure what you're trying to tell us, except that I get this really
bad feeling you're defending a programming style that optimized very much
for you not being hurt again. if you optimize so heavily for that, I'm
sure you'll appreciate that other people may optimize for other factors
with at least as much effort. the result for you is reinforced efficacy
in dealing with programming problems by removing a whole slew of issues.
the result for those who have automatized their optimization of code is
reinforced efficacy in dealing with programming problems by removing a
whole slew of issues. the net result, however, is that you all feel
good, but their code also runs much more efficiently in the same time.
now, I want programmers who feel good about themselves, but I'm not going
to pay more than half the money for one that doesn't write good code.

#:Erik

Tom Breton

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
pm...@acm.org (Pierre R. Mai) writes:

> Tom Breton <t...@world.std.com> writes:
>
> > I would not call that economical. You're spending the more important
> > resource, coding time/effort/attention, handling the multiple values.
> > "Can I use them here, or have they degraded?" "Why did my code stop
> > working? Could this prog1 be the problem? It couldn't possibly be,
> > the logic's exactly the same. (pull hair out while poring over code)"
>

> You seem to be encountering problems using multiple values that I have
> never encountered, and not for lack of using MVs. Also finding out
> about multiple-value-prog1 is a one time investment only. Maybe 5
> minutes, at most. How much is 5 minutes in the life of a programmer?

OTOH, figuring out every time whether you want prog1 or
multiple-value-prog1 is a lot more minutes. That's part of the cost
of using multiple values. It's one more thing to keep track of.

Rob Warnock

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Tom Breton <t...@world.std.com> wrote:
+---------------

| IOW, values is basically a backwards-compatibility thing.
| When that's *not* the case, simply returning a list is better in every

| way that springs to mind, IMO.
+---------------

Actually, a third option -- which I find myself using a lot more often
than *either* multiple values or lists -- is to return a structure:

- It's relatively cheap to construct;

- It's cheaper than a list to access elements other than the first; and

- The slot names give you an element of self-documentation that's missing
with either lists or MVs (that is, on the "values" side of MVs -- the
using or destructuring side is of course quite self-documenting).

But like the other two, it's not always appropriate. So use the one that is.
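A hedged sketch of this third option; all names here are invented for illustration:

```lisp
;; The slot names document the return value at every call site.
(defstruct parse-result
  value          ; the parsed object
  next-position  ; index where parsing stopped
  warnings)      ; list of warning strings, possibly empty

(defun parse-first-char (string)
  ;; Toy body, just enough to show the shape of the interface.
  (make-parse-result :value (char string 0)
                     :next-position 1
                     :warnings '()))

(let ((r (parse-first-char "abc")))
  (list (parse-result-value r)
        (parse-result-next-position r)))   ; => (#\a 1)
```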


-Rob

p.s. In Schemes that don't have structures (there are still a few), I
sometimes return small vectors. That's yet another option in CL, too.

-----
Rob Warnock, 41L-955 rp...@sgi.com
Applied Networking http://reality.sgi.com/rpw3/
Silicon Graphics, Inc. Phone: 650-933-1673
1600 Amphitheatre Pkwy. PP-ASEL-IA
Mountain View, CA 94043

Tim Bradshaw

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
* Tom Breton wrote:

> I've never known anyone who worked at pessimizing their code. I
> certainly don't.

You haven't spent enough time looking at typical academic lisp code
then (:-).

--tim

Tim Bradshaw

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
* Tom Breton wrote:

> OTOH, figuring out every time whether you want prog1 or
> multiple-value-prog1 is a lot more minutes. That's part of the cost
> of using multiple values. It's one more things to keep track of.

That's silly. Just use MULTIPLE-VALUE-PROG1 all the time, and treat
the `I definitely only want one value' case specially, the same as you
do for functions &c.
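For concreteness, a small sketch showing that MULTIPLE-VALUE-PROG1 degrades gracefully to PROG1 for single-valued forms, so defaulting to it costs nothing:

```lisp
(defun two-values () (values 1 2))

(prog1 (two-values) 'cleanup)                 ; => 1 (second value lost)
(multiple-value-prog1 (two-values) 'cleanup)  ; => 1, 2 (both preserved)

;; For a single-valued first form the two are indistinguishable:
(multiple-value-prog1 (+ 1 2) 'cleanup)       ; => 3
```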

--tim

Erik Naggum

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
* Duane Rettig

| Returning a list always requires consing; returning multiple values
| (mostly) does not.

* Tom Breton <t...@world.std.com>


| Which is an optimization issue.

I think this is getting closer to the point. "optimization" to you
obviously means anything that involves thinking about resource usage.
such is in my view an obviously invalid and useless abstraction, and does
a major disservice to the work involved in writing good software.

however, I do find it rather curious that you optimize your writing to
the point of being unreadable with random four-letter abbreviations.

#:Erik

Jonathan BAILLEUL

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Fernando D. Mato Mira wrote:
>
> 1. Why are you using 1-dimensional arrays?

Well, I was told to devise my matrix package so. In fact, my accessors
have two arguments (line,column), and conversion is done on-the-fly.
I presume it is faster to use simple-arrays, but, to be honest, I did
not compute the speed gains.
> 2. Why are you using constants? make-null should take as parameters either the
> dimensions, or a matrix belonging to the same space.

I understand. But the idea is to have minimal code for the specific
computations I need. I understand it really clashes with standard Lisp
program design, but I repeat that the purpose is really specific and
speed-demanding.
It does not mean I accept writing unreadable code: for now I can create
a package on demand whose code is automatically "optimized" during
creation. I just have some problems doing it properly and understanding
the concepts involved.

>
> --
> Fernando D. Mato Mira
> Real-Time SW Eng & Networking
> Advanced Systems Engineering Division
> CSEM
> Jaquet-Droz 1 email: matomira AT acm DOT org
> CH-2007 Neuchatel tel: +41 (32) 720-5157
> Switzerland FAX: +41 (32) 720-5720
>
> www.csem.ch www.vrai.com ligwww.epfl.ch/matomira.html


--
----------------------------------------------
Jonathan BAILLEUL (bail...@emi.u-bordeaux.fr)
Maitrise Informatique, Universite Bordeaux I

Pierre R. Mai

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Tom Breton <t...@world.std.com> writes:

> OTOH, figuring out every time whether you want prog1 or
> multiple-value-prog1 is a lot more minutes. That's part of the cost
> of using multiple values. It's one more things to keep track of.

No, it is not one more thing to keep track of. The moment you write a
prog1 form (m-v-prog1 or normal prog1), you are certain that you want
to return values from the first form. Since you are certain that you
want to do this, you'd better know what those values are, or how would
you know that you wanted to return them. Since you already know what
those values are, you already know whether you only care about the
first value, or all values, or indeed only specific ones. So you use
the prog1 form suited for the job, or other constructs to extract the
values you want. I don't expend additional mental effort on this, so
why should anyone else reasonably skilled in the language?

Of course if you use multiple-values to pass around what should be
opaque tuples, you'll have to spend additional mental effort, but then
you are using multiple values for things that they were never intended
for. Use lists or probably better yet structures or objects to pass
opaque, non-fleeting tuples around.

As I said: Multiple values, lists, vectors, structures and objects
are all different ways of handling tuples of data, with different
properties, costs and intents. Use each of them wisely, and things
will be good. Misuse them, or eliminate one of them from the
language, and things will not be good. Thus he spoke and wandered off
into the desert... ;-)===

William Deakin

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Erik wrote:

> premature optimization can hurt you, and real bad at that.

I suggest what you want is optimal optimisation ;)

> I want programmers who feel good about themselves, but I'm not going to pay
> more than half the money for one that doesn't write good code.

I agree with this, and think that these are wise words

:) will


Jonathan BAILLEUL

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Pierre R. Mai wrote:
>
> Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:
>
> > You understand that I am puzzled because I need to modify my source file
> > by hand to set constant values. Then, I cannot handle both types at a
> > time. I know that in lisp, handling specialized types seems bizarre but
> > it is needed for speeding the execution on a 3d project (CMU).
>
> Always be wary of interface changes that are only justified by the
> need for speed. While there are cases where this is justification
> enough, there are many more where you should seek out other means of
> achieving the speed-up you need, without sacrificing a clear interface
> design.
>
> If I understand you correctly, then what you want to achieve is a
> combination of code-specialization, code reuse and compile-time
> optimizations. There are a couple of approaches you should check out:

Yes...

>
> - Write macros that produce the specialized code, as you do now, but
> instead of using packages as the namespace, use either different
> function names,

it would be a bit heavy...

> or use the namespace generic functions provide (in
> combination with the approach above). Packages are really the wrong
> way to think about this, since in the context of CL packages are
> about packaging up complete libraries and applications, quite unlike
> packages in e.g. Ada. You can do the on-demand facility much more
> cleanly with macros in combination with either GFs and/or
> compiler-macros.

OK.

>
> - Use compiler-macros. This will give you the ability to provide nice,
> normal interfaces, yet optimize the hell out of code at compile-time.

[...]


> Or maybe you can get away with a lot less. This will depend on the
> information you can glean from profiling your code. In optimization,
> profiling isn't everything, but without profiling everything else is
> nothing ;)
>
> Regs, Pierre.

All right. I am going to study these last points which seem to go in the
way I intended to pursue my work. For now, I think I really have to
improve my knowledge on macros (I will use "On Lisp").
Thank you very much for your valuable advice and your patience.

Now, I just succeeded in "making a package on demand". This works, but is
quite ugly and unreadable. I will look at this more closely with my teacher
tomorrow, so I think things may become clearer soon...

Jonathan BAILLEUL

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Jon S Anthony wrote:
>
> Jonathan BAILLEUL wrote:
> >
> > Pierre R. Mai wrote:
> > >
> > > Maybe you should try to describe in clear and general (i.e. not
> > > implementation concerned) terms what you wanted to achieve in the
> > > first place. Start at the very beginning (i.e. long before packages,
> > > macros and package hackery comes into the picture)...
> > >
> > > Regs, Pierre.
> > >
> ...
> > Okay.
> ...
>
> You're trying to use packages as some sort of template. That's not
> what they are for - they are for name space organization.
>

That was the idea, I admit. I see it is not appropriate, unfortunately.

> ...<code>
>
> Since you seem to want to tie together the element type and specific
> dimensions and such for each kind of array, it seems that something as
> simple as the following would do what you want:
>
> (defmacro defmatrix-thingy (name (&key type dimensions))
> (let ((namestg (symbol-name name))
> (init-val (case type
> ((double-float) 0.0D0)
> ((single-float) 0.0S0))))
> `(progn
> (defun ,(intern (concatenate 'string "MAKE-" namestg)) ()
> (make-array ',dimensions :element-type ',type
> :initial-element ,init-val))
>
> (defun ,(intern (concatenate 'string namestg "-MULT")) (m1 m2)
> (declare (type (array ,type ,dimensions) m1 m2)
> (optimize (speed 3) (safety 1)))
> ;; code
> 1)
>
> (defun ,(intern (concatenate 'string namestg "-ADD")) (m1 m2)
> (declare (type (array ,type ,dimensions) m1 m2))
> ;; code
> 2)
>
> ;; Etc.
> )))
>
> You could also use the "the" special operator in the function
> definitions for further optimization opportunities.
>
> (defmatrix-thingy dbl-flt-3X3 (:type double-float :dimensions (3 3)))
>
> (let ((m1 (make-dbl-flt-3X3))
> (m2 (make-dbl-flt-3X3))
> (m3 (make-dbl-flt-3X3)))
> ...; set the values and such
> (dbl-flt-3X3-mult m1 (dbl-flt-3X3-add m2 m3)))
>
> There are many variations that you could do. If you need some scoped
> "private" information (maybe an identity matrix or whatever) just
> replace the progn with a let and the apprpriate information.
>
> /Jon
>
> --
> Jon Anthony
> Synquiry Technologies, Ltd. Belmont, MA 02478, 617.484.3383
> "Nightmares - Ha! The way my life's been going lately,
> Who'd notice?" -- Londo Mollari

Thank you. I'm going to study your proposition.

Jonathan BAILLEUL

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Tom Breton wrote:
>
> Jonathan BAILLEUL <bail...@emi.u-bordeaux.fr> writes:
>
> > Hello.
> >
> > I want to make a macro returning a sequence of instructions, like:
> >
> > (setf a 1)
> > (list a 1 2)
> >
> > and so on. The tricky point is that I want the generated code to be
> > several lists, and not a list of lists. I want this sequence to be
> > evaluated and I can't use progn. (the aim is to generate package
> > definitions).
>
> No, because that's not what a macro does. A macro is just a function
> that's "called funny" - its arguments are not evaluated, its return
> value is. It isn't always the mechanism to deal with meta-code.
>
> ISTM two things: What you want is an ordinary function. And your
> caller is going to have to do some of the work, such as pulling apart
> a list of lists into - well, however you were going to represent
> "several lists" as not a list of lists.

>
> --
> Tom Breton, http://world.std.com/~tob
> Not using "gh" since 1997. http://world.std.com/~tob/ugh-free.html
> Rethink some Lisp features, http://world.std.com/~tob/rethink-lisp/index.html

Ok.
The purpose of the thread is now obsolete: I understood that most
package operations happen at the reader level ("when the file is read
from disk", I understood).
The particular multiple-value problem seems not to be easy to solve,
since what I wanted to do was not logical.

Anyway, thank you for your help.

Lyman S. Taylor

unread,
Mar 20, 2000, 3:00:00 AM3/20/00
to
Jonathan BAILLEUL wrote:
>
> Jon S Anthony wrote:
> >
....

> >
> > You're trying to use packages as some sort of template. That's not
> > what they are for - they are for name space organization.
> >
>
> That was the idea, I admit. I see it is not appropriate, unfortunately.
>

You could use a variation of Jon's code, packages, and macros.
However, that would only serve to

For that approach I'd use two macros: one to specify the interface
and another for the "definitions".

Something along the lines of

(defmacro defmatrix-thingy-interface ( name nickname )
`(defpackage ,name
......
(:export .. standard set of names ... )
))


(defmacro instantiate-defmatrix-thingy ( name ( type dimensions ) )

`(progn
(deftype ,name ... )
(defun ,(intern "NEW-MATRIX") .... )

(defun ,(intern "MADD") ...)
(defun ,(intern "MMULT") ...)
))


Then in the file that loads your system you'd have

load.lisp

(in-package "COMMON-LISP-USER" )

(defmatrix-thingy-interface double-float-3x4 df3x4 )
(load "df3x4")


In another file

df3x4.lisp

(in-package "DOUBLE-FLOAT-3X4" )

(common-lisp-user::instantiate-defmatrix-thingy double-float-3x4 ( double-float (3 4 )) )

[ Admittedly this would make for an extremely short file, but lays the groundwork for any
specialized additions you'd want to make to this "type" and its interface later. ]

In dealing with packages I always have fewer hassles over the long term if packages are defined
(make-package or defpackage) before you start using them (in-package).
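[To make the two-macro idea concrete, here is a minimal variant that
compiles as written. The exported set, the array representation, and
the MADD body are my assumptions, since the original bodies are
elided; the key mechanical point is that DEFUN needs a literal symbol,
so the generic names are interned at macroexpansion time.]

```lisp
;; A minimal, compilable sketch of the two-macro idea.

(defmacro defmatrix-thingy-interface (name nickname)
  `(defpackage ,name
     (:nicknames ,nickname)
     (:use :common-lisp)
     (:export "NEW-MATRIX" "MADD")))

(defmacro instantiate-defmatrix-thingy (name (type dimensions))
  ;; Intern the generic names into the target package at expansion
  ;; time; the interface form must have been evaluated first.
  (let ((pkg (or (find-package (string name))
                 (error "No previous interface definition for ~A" name))))
    `(progn
       (defun ,(intern "NEW-MATRIX" pkg) (&optional (initial 0))
         (make-array ',dimensions :element-type ',type
                     :initial-element (coerce initial ',type)))
       (defun ,(intern "MADD" pkg) (m1 m2)
         (let ((out (make-array ',dimensions :element-type ',type)))
           (dotimes (i (array-total-size out) out)
             (setf (row-major-aref out i)
                   (+ (row-major-aref m1 i) (row-major-aref m2 i)))))))))

;; Usage (names assumed, as above):
;;   (defmatrix-thingy-interface double-float-3x4 df3x4)
;;   (instantiate-defmatrix-thingy double-float-3x4 (double-float (3 4)))
;;   (df3x4:madd (df3x4:new-matrix 1) (df3x4:new-matrix 2))
```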


You could alter the instantiation macro to check whether the target package exists and bounce in
and out of it.

(defmacro instantiate-defmatrix-thingy ( name ( type dimensions ) )
(let ((name-str (symbol-name name)))
(unless (find-package name-str) (error "No previous interface def"))
`(progn
(in-package ,name-str)
(deftype ,name ... )
(defun ,(intern "NEW-MATRIX" name-str) () ... )

(defun ,(intern "MADD" name-str) (m1 m2)
(declare (type ,name m1 m2))
...)
(defun ,(intern "MMULT" name-str) ...)
....
(in-package ,(package-name *package*))
)))

> > (defmacro defmatrix-thingy (name (&key type dimensions))

[ I'd make the type and dimensions non-optional. Computations with the
NIL type on a 0x0 matrix can't possibly be interesting. :-)
Either that or supply some default arguments. ]


The major differences between the two are
1. Having to do some preparation to construct the
namespaces.
2. The number of hyphens versus double colons (or single
colons, since these names are exported; or no prefix at
all, in the presence of a package use declaration).


(let ((m1 (df3x3::new-matrix))
(m2 (df3x4::new-matrix))
(m3 (sf2x2::new-matrix))
(m4 (sf2x2::new-matrix))
....
(df3x3::mmult m1 m2 )
....
(sf2x2::madd m3 m4 )
....
)

In other words, more name mangling in a single namespace
versus more generic names in multiple namespaces.

3. If you are only going to deal with one type of matrix you can
just import that interface and drop the package prefixes.


Lyman

Tom Breton

Mar 20, 2000
Erik Naggum <er...@naggum.no> writes:

> * Duane Rettig
> | Returning a list always requires consing; returning multiple values
> | (mostly) does not.
>
> * Tom Breton <t...@world.std.com>
> | Which is an optimization issue.
>
> I think this is getting closer to the point. "optimization" to you
> obviously means anything that involves thinking about resource
> usage.

Yes, or more exactly, thinking about efficient usage.

> such is in my view an obviously invalid and useless abstraction, and does
> a major disservice to the work involved in writing good software.

Well, we disagree.

> however, I do find it rather curious that you optimize your writing to
> the point of being unreadable with random four-letter abbreviations.

WTF? IIRC, 4LA's have been used on Usenet for a long time. YMMV. }:)

Seriously, I'm not against optimization in principle. I'm against
premature optimization. I assure you, there was no possibility of
sending this message out "and after it's tested and it works, then
making it fast".
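[Duane's consing point can be made concrete at the REPL; a small
sketch, FLOOR being the classic multiple-value case:]

```lisp
;; FLOOR returns two values with (mostly) no consing; the
;; list-returning equivalent must allocate a fresh cons per call.

(defun div-mod-values (a b)
  (floor a b))                        ; quotient and remainder as values

(defun div-mod-list (a b)
  (multiple-value-list (floor a b)))  ; conses a two-element list

;; (div-mod-values 17 5)        => 3, 2
;; (div-mod-list 17 5)          => (3 2)
```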

Tom Breton

Mar 20, 2000
Tim Bradshaw <t...@cley.com> writes:

> * Tom Breton wrote:
>
> > OTOH, figuring out every time whether you want prog1 or
> > multiple-value-prog1 is a lot more minutes. That's part of the cost
> > of using multiple values. It's one more things to keep track of.
>

> That's silly. Just use MULTIPLE-VALUE-PROG1 all the time, and treat
> the `I definitely only want one value' case specially, the same as you
> do for functions &c.

ISTM that's even more problematic. But the discussion's gotten old
and I spent all my attention answering Erik.
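[For reference, the semantic difference being argued over, as a
sketch:]

```lisp
;; PROG1 returns only the primary value of its first form;
;; MULTIPLE-VALUE-PROG1 preserves all of them.

(prog1 (floor 17 5) 'ignored)                 ; => 3        (remainder lost)
(multiple-value-prog1 (floor 17 5) 'ignored)  ; => 3, 2
```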

Tom Breton

Mar 20, 2000
Erik Naggum <er...@naggum.no> writes:


> the only way to deal with a whole bunch of factors at once is to work so
> much with them they migrate into your automated, emotional response
> system, just like people walk and type and talk without actually thinking
> about _any_ of the specific physical operations involved.

Another way is to divide and conquer. Doing two things at once does
neither thing well.

> | I find I rarely actually want the optimization enuff to be worth doing
> | later.
>
> of course not. that's the problem with doing optimization too late.

That's the blessing of it. In the clear lite of another day, I almost
always realize I actually didn't need the optimization, or even want
it very much. That is especially true for low-level optimizations, as
opposed to using better algorithms.

> high-level ones. likewise, if someone told me that he'd always optimize
> late, I'd assume that he'd be a really bad programmer whose brain didn't
> work or fast well enough that he could deal with economy of expression
> and algorithm and resource use at the same time, and would have to take
> care of each one at a time.

Here I've got to disagree with you. IME (*) it's better, whenever
possible, to "divide and conquer" attention tasks such as programming.

For example, I'm more than happy to spend X amount of time building a
utility for some auxiliary programming task, and then do the main
task Y with no further attention to the auxiliary task except to
"push the button that makes it go", even if the combined tasks take
exactly X+Y time. I make fewer mistakes and see solutions faster,
because I'm not frequently shifting context.

So IMO making it correct first, then making it fast enuff, is easier
on my brain even if it were to take exactly as much time.

(*) Yes, it's another TLA. It means "In my experience". TLA means
"three letter acronym", a term I'm abusing slitely here.

> just as you don't disassemble code all the
> time, and certainly don't think about IP packets and network issues all
> the time, the fact that you care about it means you don't ignore it, and
> not ignoring it means not overlooking something that can save you days in
> debugging, weeks in design, and months in customer relations.
>
> I'm not sure what you're trying to tell us,

But it's the usual message. I'd have thaut you'd be yawning with
boredom, hearing "make it correct first, then make it fast enuff" for
the hundredth time. I mean, a few messages ago I was sure I was going
to get "(Yawn) Do optimization last? So tell us something we don't
know" from all quarters.

> except that I get this really
> bad feeling you're defending a programming style that optimized very much
> for you not being hurt again. if you optimize so heavily for that, I'm
> sure you'll appreciate that other people may optimize for other factors
> with at least as much effort.

Very well, live and let live.

Tom Breton

Mar 20, 2000
pm...@acm.org (Pierre R. Mai) writes:

> Tom Breton <t...@world.std.com> writes:
>
> > OTOH, figuring out every time whether you want prog1 or
> > multiple-value-prog1 is a lot more minutes. That's part of the cost
> > of using multiple values. It's one more things to keep track of.
>

> No, it is not one more thing to keep track of.

Obviously it is.

Tim Bradshaw

Mar 20, 2000
* Tom Breton wrote:

> ISTM that's even more problematic. But the discussion's gotten old
> and I spent all my attention answering Erik.

How convenient.

Duane Rettig

Mar 20, 2000
Tom Breton <t...@world.std.com> writes:

> But it's the usual message. I'd have thaut you'd be yawning with
> boredom, hearing "make it correct first, then make it fast enuff"

But this is _not_ the message that I hear from (and say to) this
newsgroup. The message that I believe is the correct one is "make
it correct first, and then make it fast." This differs subtly
from your version of the message in that your version allows situations
in which you will create software that is functionally accurate but
does not meet performance requirements (i.e. it is not "correct").
Whenever writing a program you must always be aware of the minimal
requirements for performance (if speed simply doesn't matter at all,
then that _is_ the statement of the requirement) before you do your
design. Otherwise, you stand a chance of failing to meet the
requirements of your customer, and thus might not get paid for your
work...

> for
> the hundredth time. I mean, a few messages ago I was sure I was going
> to get "(Yawn) Do optimization last? So tell us something we don't
> know" from all quarters.

It is true that any performance requirements above and beyond the
minimal speed requirements for the system are gravy, and should
be saved for last.

--
Duane Rettig Franz Inc. http://www.franz.com/ (www)
1995 University Ave Suite 275 Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253 du...@Franz.COM (internet)

Tom Breton

Mar 21, 2000
Duane Rettig <du...@franz.com> writes:

> Tom Breton <t...@world.std.com> writes:
>
> > But it's the usual message. I'd have thaut you'd be yawning with
> > boredom, hearing "make it correct first, then make it fast enuff"
>
> But this is _not_ the message that I hear from (and say to) this
> newsgroup. The message that I believe is the correct one is "make
> it correct first, and then make it fast."

There is a difference, and ISTM my message is the proper one. As you
yourself point out, making it fast beyond requirements is gravy.

> This differs subtly
> from your version of the message in that your version allows situations
> in which you will create software that is functionally accurate but
> does not meet performance requirements (i.e. it is not "correct").

Well, no. It allows situations where you have a non-final version
that is functionally correct but isn't fast enuff, as opposed to
having no intermediate version at all. To me the superiority is
clear, but YMMV.

Erik Naggum

Mar 21, 2000
* Tom Breton <t...@world.std.com>

| Doing two things at once does neither thing well.

this is your personal experience. it is not mine, which is that the more
we _think_ about at the same time, the less we need to do over later.

| (*) Yes, it's another TLA. It means "In my experience". TLA means
| "three letter acronym", a term I'm abusing slitely here.

you just spent two whole lines explaining your "optimized" writing style!
I'm sorry, Tom, but I find this positively hilarious, to the point where
it has removed any credibility to your ridiculous points about optimizing.

#:Erik

Gareth McCaughan

Mar 22, 2000
Erik Naggum wrote:

> * Tom Breton <t...@world.std.com>


> | (*) Yes, it's another TLA. It means "In my experience". TLA means
> | "three letter acronym", a term I'm abusing slitely here.
>
> you just spent two whole lines explaining your "optimized" writing style!
> I'm sorry, Tom, but I find this positively hilarious, to the point where
> it has removed any credibility to your ridiculous points about optimizing.

To judge from his .sig, he's not optimising for shortness
but for minimum number of "gh"es. (This seems just as silly
to me as I'm sure it does to you...)

--
Gareth McCaughan Gareth.M...@pobox.com
sig under construction

Tom Breton

Mar 23, 2000

> Erik Naggum wrote:
>
> > * Tom Breton <t...@world.std.com>
> > | (*) Yes, it's another TLA. It means "In my experience". TLA means
> > | "three letter acronym", a term I'm abusing slitely here.
> >
> > you just spent two whole lines explaining your "optimized" writing style!
> > I'm sorry, Tom, but I find this positively hilarious, to the point where
> > it has removed any credibility to your ridiculous points about optimizing.

Wow, you're getting really defensive, putting words in my mouth,
attacking me personally for explaining something you specifically
asked about, and the rest. Well, I didn't mean to cause you such
emotional distress by teaching you remedial programming. Guess I must
have accidentally struck a nerve. Really sorry, hope you recover.

Courageous

Mar 23, 2000
Jonathan BAILLEUL wrote:
>
> Hello.
>
> I want to make a macro returning a sequence of instructions, like:
>
> (setf a 1)
> (list a 1 2)
>
> and so on. The tricky point is that I want the generated code to be
> several lists, and not a list of lists.

I'm not sure if I'm following this, but I suspect that the answer
involves the ,@ form.
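[For what it's worth, the usual ,@ idiom looks like the sketch below;
it still splices everything into a single PROGN — which the original
poster wanted to avoid — but it is as close as a macro's single return
value can get, and top-level PROGN forms are processed as if their
subforms were top level:]

```lisp
;; Splice a list of forms into one PROGN: the macro returns a single
;; form, but ,@ lets that form contain arbitrarily many subforms.

(defmacro def-several (&rest forms)
  `(progn ,@forms))

;; (macroexpand-1 '(def-several (setf a 1) (list a 1 2)))
;; => (PROGN (SETF A 1) (LIST A 1 2))
```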


C/

Erik Naggum

Mar 23, 2000
* Tom Breton <t...@world.std.com>

| Wow, you're getting really defensive, putting words in my mouth,
| attacking me personally for explaining something you specifically
| asked about, and the rest. Well, I didn't mean to cause you such
| emotional distress by teaching you remedial programming. Guess I must
| have accidentally struck a nerve. Really sorry, hope you recover.

you think you have been teaching me remedial programming?
that's, like, amazing. you really _are_ quite nuts.

#:Erik

Robert Monfera

Mar 25, 2000

"Pierre R. Mai" wrote:

> Compiler-macros also offer you cross-function optimization
> opportunities, which your hand-dispatch approach doesn't offer:
> Especially with concatenated matrix operations, it might be
> beneficial to do many of them in one step (reduced iteration
> overhead, possibly reduced need for temporaries, better
> cache-locality, etc.).

This is one of the optimizations the SERIES package does pretty well: a
number of independently defined functions will translate into one big
loop without the user knowing (if certain constraints are met, except
for a few weird things like having to recompile callees after a function
change).

Robert

Fernando D. Mato Mira

Mar 25, 2000
Robert Monfera wrote:

> loop without the user knowing (if certain constraints are met, except
> for a few weird things like having to recompile callees after a function
> change).

Well, if you wrapped the callees of series functions in development in

(compiler-let ((*optimize-series-expressions* nil))

)

you wouldn't have to recompile them.


Anyway, the really important constraint is that what you are trying to
compute can be pipelined. If everything doesn't operate in lockstep,
the expression will be split into independently optimizable subgraphs
with buffering between them. And you'll get a warning about it, so you
can see if you can rewrite the expression so that full-graph
optimization is possible; otherwise you can switch to crafting your
own LOOP or whatever.

Optimizable series functions can only be preorder. That's why no
equivalent of reverse is provided. Although someone could still
enhance array scanning functions to optionally operate in postorder.
Which takes me to a question that has been haunting me for the last week or
so:

WHY NO STANDARD DOUBLY-LINKED LIST FACILITIES IN CL?

[Now for a little draft]

Let's just give the cell and its constructor some name for now, say DONS.

A DONS _extends_ the layout of a CONS, that is the C**R functions operate
the same way on DONSes.

The new field is called CBR (for `back')

The main design issue to settle is whether RPLACD and (SETF CDR) also
affect the sibling CBR. There are good arguments for both; I think I
prefer no. One way or the other, the alternative versions are
useful/needed.
RPLACB and (SETF CBR) should be defined in an analogous way.

etc.
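[A minimal defstruct-based sketch of the draft above. The names follow
the proposal; everything else — the BOA constructor, the list builder —
is my assumption:]

```lisp
;; A doubly-linked cell: CONS-like, with an extra CBR slot pointing
;; back at the previous cell.

(defstruct (dons (:constructor dons (car cdr &optional cbr)))
  (car nil)
  (cdr nil)
  (cbr nil))

(defun dons-list (&rest items)
  ;; Build a doubly-linked chain from ITEMS and return its head.
  (let ((head nil) (prev nil))
    (dolist (item items head)
      (let ((cell (dons item nil prev)))
        (if prev
            (setf (dons-cdr prev) cell)
            (setf head cell))
        (setf prev cell)))))

;; (dons-car (dons-cdr (dons-list 1 2 3)))            => 2
;; (dons-car (dons-cbr (dons-cdr (dons-list 1 2 3)))) => 1
```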

Fernando D. Mato Mira

Mar 25, 2000
Man, I hate Netscape for posting/emailing.

Tom Breton

Mar 26, 2000
"Fernando D. Mato Mira" <mato...@iname.com> writes:

>
> Which takes me to a question that has been haunting me for the last week or
> so:
>
> WHY NO STANDARD DOUBLY-LINKED LIST FACILITIES IN CL?

Serious question? Because making more complex graphs into basic types
would force expression-walking to always have to watch out for those
types. Here "basic type" means a type that's available even when
you've loaded nothing.

I can see a standardized library supporting a `dons' class and
list-like operations on it, but I can't see it as a basic thing.
