
Allocating on the stack and *only* on the stack


Fernando Rodríguez

Dec 11, 2000, 11:07:23 AM

How can I make *absolutely* sure that lisp allocates memory on the stack?
It's a class that uses some windows resources that *must* be freed when
leaving the function body where the class is used.

In general, how can you free memory "manually", when you can't wait for the
gc?

TIA


//-----------------------------------------------
// Fernando Rodriguez Romero
//
// frr at mindless dot com
//------------------------------------------------

Raymond Wiker

Dec 11, 2000, 11:09:52 AM

Fernando Rodríguez <spa...@must.die> writes:

> How can I make *absolutely* sure that lisp allocates memory on the stack?
> It's a class that uses some windows resources that *must* be freed when
> leaving the function body where the class is used.

unwind-protect, perhaps?
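For example, a minimal sketch of that shape (the function names here are
made up; substitute whatever FFI wrappers you actually have for acquiring
and releasing the Windows resource):

(let ((handle (acquire-windows-resource)))
  (unwind-protect
      (use-the-resource handle)           ; body that uses the resource
    (release-windows-resource handle)))   ; always runs, error or not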

> In general, how can you free memory "manually", when you can't wait for the
> gc?

--
Raymond Wiker
Raymon...@fast.no

Joe Marshall

Dec 11, 2000, 11:58:16 AM

Fernando Rodríguez <spa...@must.die> writes:

> How can I make *absolutely* sure that lisp allocates memory on the stack?
> It's a class that uses some windows resources that *must* be freed when
> leaving the function body where the class is used.

Why? What happens if you don't free the memory when you leave the
function body?

> In general, how can you free memory "manually", when you can't wait for the
> gc?

In general, you can't. You can supply hints to the compiler to
arrange for some things to be `stack allocated' but the compiler is
free to ignore these.



Tim Bradshaw

Dec 11, 2000, 12:45:33 PM

Fernando Rodríguez <spa...@must.die> writes:

> How can I make *absolutely* sure that lisp allocates memory on the stack?
> It's a class that uses some windows resources that *must* be freed when
> leaving the function body where the class is used.
>

You can't. But what you *can* do is what you probably really want to
do which is to make sure that some action happens when you leave a
`block' -- that action can be whatever you need to say to windows to
free the resources. I think this is one of those cases which Erik (?)
described really well as the difference between memory and object
management.

--tim


Erik Naggum

Dec 11, 2000, 1:45:45 PM

* Fernando Rodríguez <spa...@must.die>

| How can I make *absolutely* sure that lisp allocates memory on the
| stack?

By calling a stack-allocating function yourself, presumably, but this
is such an odd request that I doubt that you have asked for what you
really need.

| It's a class that uses some windows resources that *must* be freed
| when leaving the function body where the class is used.

What does "free" mean? Objects on the stack are not _freed_ at all.
The bytes of memory that made them up are still there until they are
overwritten with something else. If you have any references to that
data when you leave the function, you're in _deep_ trouble. So what I
think you must mean is that you want to guarantee that there are no
references to your data. I can't see why that is a problem that does
not have an obvious solution: Just set the variables/slots/whatever that
hold any such references to some other value, or nil, or whatever,
and you no longer have any way of finding the data.

| In general, how can you free memory "manually", when you can't wait
| for the gc?

Well, describe what it means to free it, and I'm sure it can be done.

#:Erik
--
"When you are having a bad day and it seems like everybody is trying
to piss you off, remember that it takes 42 muscles to produce a
frown, but only 4 muscles to work the trigger of a good sniper rifle."
-- Unknown

Fernando Rodríguez

Dec 12, 2000, 8:27:27 AM


Yes, that's what I need, a sort of "destructor", but that is called
immediately when an instance of my class gets out of scope. I could do this
explicitly, but I guess there must be a way to automate it.

Fernando Rodríguez

Dec 12, 2000, 8:27:28 AM

On 11 Dec 2000 18:45:45 +0000, Erik Naggum <er...@naggum.net> wrote:


>| In general, how can you free memory "manually", when you can't wait
>| for the gc?
>
> Well, describe what it means to free it, and I'm sure it can be done.

What I really need is what Tim said in a previous post: an action to be
performed when a class instance gets out of scope. It cannot be delayed until
the GC starts. I could do this explicitly (and this is the solution
recommended by MS for Java, so I guess there must be some better way ;-), but
it seems ugly and error prone.

Maybe this is a newbie's mistake, but I still don't feel safe to let the GC
handle "resource management" (not just memory management): file handles,
brushes, db connections, etc... that must be released asap because they are
scarce or that must be released in a precise order (that's the specific
problem I'm having).

Tim Bradshaw

Dec 12, 2000, 8:48:45 AM

Fernando Rodríguez <spa...@must.die> writes:

> Yes, that's what I need, a sort of "destructor", but that is called
> immediately when an instance of my class gets out of scope. I could do this
> explicitly, but I guess there must be a way to automate it.
>

OK, that's quite easy. If you just want it to happen in one place you
can do something like:

(unwind-protect
    (progn
      ... code in the body ...)
  ... code that must be run on exit ...)

Or, assuming that the operation to get the thing is ALLOCATE-CRUN
and the operation to destroy it is DESTROY-CRUN, you can write
something like this. (I am assuming that ALLOCATE-CRUN succeeds or
signals an error, and so on.)

(let ((secret nil))
  (unwind-protect
      (let ((crun (setf secret (allocate-crun))))
        ... use crun ...)
    (when secret (destroy-crun secret))))

If you do this more than once you probably want to write a macro so
you have some syntax to make it easy to see what is going on:

(defmacro with-crun ((var) &body body)
  (let ((sv (gensym)))
    `(let ((,sv nil))
       (unwind-protect
           (let ((,var (setf ,sv (allocate-crun))))
             ,@body)
         (when ,sv (destroy-crun ,sv))))))

(with-crun (x)
... use X ...)

You are correct that it's inappropriate to let the GC do this kind of
resource management: there are plenty of cases where you need to
*know* when some kind of `finished-with-x' action happens, or that
such actions happen in a given order, and the GC can generally not
offer those kinds of promises. However Lisp offers plenty of ways of
ensuring that they happen, and probably better ways than many
languages, as it decouples them from the low-level issue of memory
management.

--tim

Lars Lundbäck

Dec 12, 2000, 9:11:00 AM

Fernando Rodríguez wrote:
>
> On 11 Dec 2000 18:45:45 +0000, Erik Naggum <er...@naggum.net> wrote:
>
> >| In general, how can you free memory "manually", when you can't wait
> >| for the gc?
> >
> > Well, describe what it means to free it, and I'm sure it can be done.
>
> What I really need is what Tim said in a previous post: an action to be
> performed when a class instance gets out of scope. It cannot be delayed until
> the GC starts. I could do this explicitly (and this is the solution
> recommended by MS for Java, so I guess there must be some better way ;-), but
> it seems ugly and error prone.
>
> Maybe this is a newbie's mistake, but I still don't feel safe to let the GC
> handle "resource management" (not just memory management): file handles,
> brushes, db connections, etc... that must be released asap because they are
> scarce or that must be released in a precise order (that's the specific
> problem I'm having).
>

I don't see how you can have the GC handle such resources, unless they
have really been _allocated_ by the Lisp runtime. This problem is old,
really. Your Lisp probably only passes your requests, no more.

The integration with e.g. the graphics (windows) environment is often
shallow, so that the resources for any handles you get via Lisp are
allocated elsewhere, and must be manually "freed". As Erik says, it can
be done, as there is (should be) a "release a resource" function
corresponding to the one you used to get the resource. But you must
build this kind of resource management into your application, since the
raw Lisp has no way to do it for you automatically.

Regards, Lars

Marco Antoniotti

Dec 12, 2000, 9:28:52 AM


Fernando Rodríguez <spa...@must.die> writes:

> On 11 Dec 2000 18:45:45 +0000, Erik Naggum <er...@naggum.net> wrote:
>
>
> >| In general, how can you free memory "manually", when you can't wait
> >| for the gc?
> >
> > Well, describe what it means to free it, and I'm sure it can be done.
>
> What I really need is what Tim said in a previous post: an action to be
> performed when a class instance gets out of scope. It cannot be delayed until
> the GC starts. I could do this explicitly (and this is the solution
> recommended by MS for Java, so I guess there must be some better way ;-), but

^^^^^^^^^^^


> it seems ugly and error prone.

This is a signal that something is fishy. :)

> Maybe this is a newbie's mistake, but I still don't feel safe to let the GC
> handle "resource management" (not just memory management): file handles,
> brushes, db connections, etc... that must be released asap because they are
> scarce or that must be released in a precise order (that's the specific
> problem I'm having).

A motivating example would help.

I really believe that given the current abundance of memory the GC is
the best you can do unless you are working in an "embedded" framework,
where the engineers still think that adding an extra 0.5USD of memory
to their chips would cause failure in the market (therefore forcing
longer development and debugging times mostly due to memory management
issues, in spite of all the "time-to-market" hoopla.)

"C programmers think memory management is too important to leave it to
the system. Lisp programmers think memory management is too important
to leave it to the programmer." (I believe this is a quote from
Stroustrup, though I do not know whether it is his own).

Anyway, this is CLL :) And C/C++ programmers (including the designers
of COM/OLE at MS) then go on to reinvent the wheel with reference
counting. :)

Cheers


--
Marco Antoniotti =============================================================
NYU Bioinformatics Group tel. +1 - 212 - 998 3488
719 Broadway 12th Floor fax +1 - 212 - 995 4122
New York, NY 10003, USA http://galt.mrl.nyu.edu/valis
Like DNA, such a language [Lisp] does not go out of style.
Paul Graham, ANSI Common Lisp

Tim Bradshaw

Dec 12, 2000, 9:51:49 AM

Tim Bradshaw <t...@tfeb.org> writes:

>
> (with-crun (x)
> ... use X ...)
>

I forgot to add the final generalisation to this -- if you define a
generic DESTROY-THING function then you can make these WITH-x macros
work for any kind of thing you might want to have this control over --
you need to have some extra arguments to specify the type of thing you
want &c.
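For instance, one way to do that generalisation is to pass the allocation
form itself to the macro and dispatch the cleanup through a generic
function (all the names below are invented for illustration):

(defgeneric destroy-thing (thing)
  (:documentation "Release whatever external resource THING holds."))

(defmacro with-thing ((var allocation-form) &body body)
  (let ((sv (gensym)))
    `(let ((,sv nil))
       (unwind-protect
           (let ((,var (setf ,sv ,allocation-form)))
             ,@body)
         (when ,sv (destroy-thing ,sv))))))

;; e.g. (with-thing (brush (allocate-brush :color :red))
;;        (draw-with brush))

Each resource type then just needs an allocator and a DESTROY-THING method.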

--tim

Tim Bradshaw

Dec 12, 2000, 10:00:52 AM

Marco Antoniotti <mar...@cs.nyu.edu> writes:

>
> I really believe that given the current abundance of memory the GC is
> the best you can do unless you are working in an "embedded" framework,
> where the engineers still think that adding an extra 0.5USD of memory
> to their chips would cause failure in the market (therefore forcing
> longer development and debugging times mostly due to memory management
> issues, in spite of all the "time-to-market" hoopla.)
>

But this isn't anything to do with memory management. It's just
trivial to think of cases where resources absolutely must
unconditionally be freed when you exit some extent. Think of locks or
resources on a remote machine, -- you *really* don't want to leave it
up to the GC to free that kind of thing. The problem is that C++
people confuse this kind of thing with storage management, because C++
confuses this kind of resource management with storage management.
And Lisp people *also* confuse it with storage management and since GC
does largely solve that problem assume that these other problems must
be solved too.

--tim

Erik Naggum

Dec 12, 2000, 9:54:07 AM

* Fernando Rodríguez <spa...@must.die>

| What I really need is what Tim said in a previous post: an action to
| be performed when a class instance gets out of scope.

Oh, I see. I should have realized that when C++ and Java programmers
talk about objects that are "freed", freeing _memory_ is nowhere near
what they really talk about, they talk about running destructors, and
I got a little hung up in terminology and your need to allocate on the
stack and talk about the GC, which confused me. Sorry about that.

One does these things with macros that utilize unwind-protect to run
cleanup-forms in Lisp, and Tim's answer was right on the mark. Use a
macro around the form that allocates, binds to a variable, and cleans
up, but don't worry about the memory.

| It cannot be delayed until the GC starts.

Well, what happens at GC time is that memory that is not referenced by
anything can be reused for new objects. This is strictly about
freeing _memory_ and has absolutely nothing to do with destructors.
The objects that GC sees are live objects, not dead ones.

There is an exception to this, of course, called finalization. It is
code triggered after the GC discovers that there is only one reference
to the object, and that is from the finalization list. Most Common Lisp
implementations also have "weak references", references that are
changed to nil when they are the last remaining reference to an object.
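As an illustration, the finalization half looks roughly like this in
CMUCL (EXT:FINALIZE is CMUCL's spelling; MAKE-HANDLE-WRAPPER, HANDLE-RAW,
GET-BRUSH and RELEASE-BRUSH are made-up names):

(defun make-finalized-brush ()
  (let* ((wrapper (make-handle-wrapper :raw (get-brush)))
         ;; Capture only the raw handle: a closure over WRAPPER itself
         ;; would keep it reachable and the finalizer would never run.
         (raw (handle-raw wrapper)))
    (ext:finalize wrapper (lambda () (release-brush raw)))
    wrapper))

Weak pointers are spelled EXT:MAKE-WEAK-POINTER and EXT:WEAK-POINTER-VALUE
in the same implementation; other Lisps have their own names for both.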

| I could do this explicitly (and this is the solution recommended by MS
| for Java, so I guess there must be some better way ;-), but it seems
| ugly and error prone.

Yup, it would be in a language without macros and unwind-protect.

| Maybe this is a newbie's mistake, but I still don't feel safe to let
| the GC handle "resource management" (not just memory management): file
| handles, brushes, db connections, etc... that must be released asap
| because they are scarce or that must be released in a precise order
| (that's the specific problem I'm having).

No, don't just feel unsafe, feel _scared_! You're quite right that
doing resource management with GC is wrong. It's a pity that C++ and
Java conflated the management of memory and other resources, as memory
essentially is free and everything else is not, leading to radically
different allocation, deallocation, and management strategies for the
two kinds of resources. Oh, it just occurred to me that maybe these
languages are really based in the assumption that memory is _also_
scarce. Geez, _that's_ a flashback to the 50's! On with your white
lab coats and roll in the decks of cards with the Java program from
the browser request made last week. Crank up the power to that extra
bank of 16K core memory and start the espresso program on the coffee
system, too. It's gonna be a _long_ night. This code forgot to
deallocate!

unwind-protect is your friend here, but most people find that using it
gives your code a low-level feel. Let fancy macros like with-open-file
do the whole resource management cycle for you, from opening the
stream to guaranteeing that it is closed when you leave. You don't
even see the memory management involved, of course. That's the bonus.
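For example:

(with-open-file (out "results.txt" :direction :output
                     :if-exists :supersede)
  (format out "~&done~%"))

The stream is guaranteed to be closed on any exit from the body, normal
or abnormal; the open/unwind-protect/close plumbing is entirely hidden
inside the macro.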

Dr Nick Levine

Dec 12, 2000, 11:24:20 AM

>> Crank up the power to that extra bank of 16K core memory

I thought it was on a revolving drum of some sort...?

-n

Tim Bradshaw

Dec 12, 2000, 12:14:40 PM

Dr Nick Levine <n.le...@anglia.ac.uk> writes:

> I thought it was on a revolving drum of some sort...?

mercury delay lines.

Erik Naggum

Dec 12, 2000, 11:25:35 AM

* Tim Bradshaw <t...@tfeb.org>

| Think of locks or
| resources on a remote machine, -- you *really* don't want to leave it
| up to the GC to free that kind of thing.

I think it is important to realize that GC doesn't _free_ memory or
objects. A transitive, active verb here implies something that exists
and that something is done to it. Not so. Unreferenced memory does
not exist as far as (access to) objects is concerned. The reason it
is not usable for new allocations is that even the allocator doesn't
know it exists. GC makes such unreferenced memory usable, again, but
at this point, we have an essentially uninitialized resource that
cannot possibly keep track of what it was last used for. Let's call
this _abandoned_ objects and memory.

This is in sharp contrast to "freeing" an object, an active operation
on an object that still exists qua object. When the free function
puts a chunk of bytes back on the free list used by malloc, the memory
never _ceases_ to be referenced. It is in fact an _error_ for memory
to be abandoned in such a system! (It's 3:30 AM. Do you know where
your memory is? My God, it must be hard to live in the C++ mindset,
parenting _memory_ as if it were a scarce resource. Sheesh!)

I suppose that if you think only in terms of referenced memory and
of memory that always has a structure, it will be very hard to switch to
thinking in terms of unreferenced memory and allowing yourself to let go
of memory, since that used to be a source of serious problems and
outright pain and suffering during development. If you aren't aware
of what hurts, you're likely to fear something else that appears to
coincide with it. (Such as people who can't be bothered to figure out
the difference between posting to comp.lang.lisp and being stupid.)

| And Lisp people *also* confuse it with storage management and since GC
| does largely solve that problem assume that these other problems must
| be solved too.

Let's find a way to help Lisp people overcome that confusion, since GC
doesn't solve any other problem than how to reclaim abandoned memory.
If something else happens at GC time, I think it needs to be spelled
out that there is a (very significant) convenience factor in doing it
at that time, not an intrinsic relationship between the reclamation of
abandoned memory and the more complex ways through which objects may
be discovered to have been abandoned after all the next time around.

Example: You don't have to reclaim abandoned memory to find that some
object has no other references than that from the finalization list,
it's just massively more efficient to mark the old objects you copy in
a copying garbage collector so you can walk over the list of objects
to finalize and see if any of them are still unmarked, at which
point you call the finalization code associated with those objects.

Raymond Toy

Dec 12, 2000, 12:18:41 PM

>>>>> "Marco" == Marco Antoniotti <mar...@cs.nyu.edu> writes:

Marco> I really believe that given the current abundance of memory the GC is
Marco> the best you can do unless you are working in an "embedded" framework,
Marco> where the engineers still think that adding an extra 0.5USD of memory
Marco> to their chips would cause failure in the market (therefore forcing
Marco> longer development and debugging times mostly due to memory management
Marco> issues, in spite of all the "time-to-market" hoopla.)

But if you sell 10 million of the chips next year, that's $5 million
extra. That might have paid for all of the development costs and then
some. And if you sell that many the year after, that's pure profit.

It has to be carefully balanced with the time it takes to develop that
software to save that memory, of course. That's the trick.

Ray

Knut Arild Erstad

Dec 12, 2000, 12:48:22 PM

[Erik Naggum]
:
: Oh, I see. I should have realized that when C++ and Java programmers
: talk about objects that are "freed", freeing _memory_ is nowhere near
: what they really talk about, they talk about running destructors, and
: I got a little hung up in terminology and your need to allocate on the
: stack and talk about the GC, which confused me. Sorry about that.

Actually, this _shouldn't_ be a problem for Java programmers; there are no
destructors in Java. Unfortunately, a lot of Java programmers (even book
authors) seem to be using finalizers as destructors.

Java also has a try/finally clause that works like unwind-protect, but of
course, you can't use macros to hide the details.

--
"And these bumbling old men spent their time kicking away the pillars of
the world, and they'd nothing to replace them with but uncertainty. And
they were /proud/ of this?" -- Terry Pratchett, Small Gods

Tim Bradshaw

Dec 12, 2000, 2:14:51 PM

knute...@ii.uib.no (Knut Arild Erstad) writes:

>
> Actually, this _shouldn't_ be a problem for Java programmers; there are no
> destructors in Java. Unfortunately, a lot of Java programmers (even book
> authors) seem to be using finalizers as destructors.

For generational GCs with finalizers what's the typical relation
between finalization & tenuring? It looks to me like you could argue
that you should never tenure anything that has a finalizer, because if
you do, it might `never' get run.

--tim

Michael Livshin

Dec 12, 2000, 2:36:43 PM

Tim Bradshaw <t...@tfeb.org> writes:

> For generational GCs with finalizers what's the typical relation
> between finalization & tenuring? It looks to me like you could argue
> that you should never tenure anything that has a finalizer, because if
> you do, it might `never' get run.

it's actually an interesting and much-discussed (on the GC mailing
list) topic.

some things people said there that seem correct:

. finalizers should never be used as the _only_ way to return some
scarce resource to the system; there always should be an explicit
mechanism to return the resource.

. there's no such thing as timely finalization -- you should never
expect it. this is a corollary from the previous and the next
point.

. things get even more interesting when the finalization system
releases objects in topological order (which is a nice safety
feature, as you can be assured that you can do things on some
complex unreachable object and not be afraid that some parts of it
are already finalized). apart from the problem of cycles (which
doesn't seem to occur very often, and cycles can usually be easily
detected (by the GC) and broken (by the user, after seeing a
warning)), this also has the effect of postponing finalization of
depended-on objects until some future GC invocation, which, of
course, has interesting interactions with tenuring.

that said, I don't think you could really argue that some finalizers
will "never" run. I don't know of any real-world GC-backed system
that relies on ephemeral collection exclusively. usually the full GC
gets run periodically, too.

--
(only legal replies to this address are accepted)

You question the worthiness of my code? I should kill you where you
stand!
-- Klingon Programmer

Marco Antoniotti

Dec 12, 2000, 3:39:20 PM


Tim Bradshaw <t...@tfeb.org> writes:

These are good points indeed. This is why there are WITH-OPEN-FILE
and friends in the language and why whenever you have processes you
also have WITH-LOCK. However, apart from the nice notion of
DESTROY-INSTANCE (just to make up another name), I still need to see
how the original poster really needs something other than the GC.

Marco Antoniotti

Dec 12, 2000, 3:42:20 PM


Raymond Toy <t...@rtp.ericsson.se> writes:

I have heard this argument many times. My experience is that this is
not taken into account and there is no balancing act going on.

Of course, this is just my 0.5USD opinion. You can have 10 millions
of 'em :)

Raymond Toy

Dec 12, 2000, 4:09:22 PM

>>>>> "Marco" == Marco Antoniotti <mar...@cs.nyu.edu> writes:

Marco> Raymond Toy <t...@rtp.ericsson.se> writes:

>> It has to be carefully balanced with the time it takes to develop that
>> software to save that memory, of course. That's the trick.

Marco> I have heard this argument many times. My experience is that this is
Marco> not taken into account and there is no balancing act going on.

I have a feeling that what happens is that the amount of memory on the
chip is decided very early because it takes a long time to lay out the
chip, fab it, test it, etc. At that point, it seems like a reasonable
thing to do because the SW effort to support it doesn't look so big.
By the time the chip has arrived, the SW requirements have ballooned
so what was easy is now very hard. And then the product is late, you
only sell 100,000 instead of 10 million, and you lose your shirt and
then some on a product that nobody wants anymore.... :-(


Marco> Of course, this is just my 0.5USD opinion. You can have 10 millions
Marco> of 'em :)

Shall I send you my account number at my Swiss bank? :-)

Ray

P.S. "Lisp" There. It's now on-topic. :-)

Erik Naggum

Dec 12, 2000, 3:43:16 PM

* Knut Arild Erstad

| Actually, this _shouldn't_ be a problem for Java programmers; there
| are no destructors in Java. Unfortunately, a lot of Java programmers
| (even book authors) seem to be using finalizers as destructors.

You're right! I had actually missed that, and have been thinking in
destructor terms myself. Thanks! Goes to show how little I use them.

| Java also has a try/finally clause that works like unwind-protect, but
| of course, you can't use macros to hide the details.

I think the ugliness of such verbose forms deters people from doing
the right thing. Proper handling of exceptional situations in Java
is also too verbose for my comfort. Maybe I'm just spoiled.

Tim Bradshaw

Dec 13, 2000, 5:37:38 AM

Michael Livshin <mliv...@yahoo.com> writes:

>
> that said, I don't think you could really argue that some finalizers
> will "never" run. I don't know of any real-world GC-backed system
> that relies on ephemeral collection exclusively. usually the full GC
> gets run periodically, too.
>

My `never' was intentionally in quotes. `tomorrow' is the same as
`never' for many purposes...

--tim

Lars Lundbäck

Dec 13, 2000, 7:23:08 AM

Marco Antoniotti wrote:

>
> Tim Bradshaw <t...@tfeb.org> writes:
>
> >
> > But this isn't anything to do with memory management. It's just
> > trivial to think of cases where resources absolutely must
> > unconditionally be freed when you exit some extent. Think of locks or
> > resources on a remote machine, -- you *really* don't want to leave it
> > up to the GC to free that kind of thing. The problem is that C++
> > people confuse this kind of thing with storage management, because C++
> > confuses this kind of resource management with storage management.
> > And Lisp people *also* confuse it with storage management and since GC
> > does largely solve that problem assume that these other problems must
> > be solved too.
>
> These are good points indeed. This is why there are WITH-OPEN-FILE
> and friends in the language and why whenever you have processes you
> also have WITH-LOCK. However, apart from the nice notion of
> DESTROY-INSTANCE (just to make up another name), I still need to see
> how the original poster really needs something other than the GC.
>

It's just like Tim says. This is about object management, specifically
about objects outside the reach of the Lisp GC. Fernando's question was:

"How can I make *absolutelly* sure that lisp allocates memory

on the stack? It's a class that uses some windows resources that
*must* be freed when leaving the function body where the class is
used."

And like I said earlier, the integration of Lisp and environment
resources is often shallow. Fernando doesn't say which Lisp he is using,
and _how_ he has gotten the resource and the class.

But since he mentions "brushes etc", I'd guess that he is running
something on top of MS Windows. Anyway, such resources are usually kept
in a limited pool which is managed outside Lisp. Unless the Lisp garbage
collector has been designed to explicitly make a "releasing" call (eg,
to the windowing system), nothing happens. The resource is still
reserved when the reference to the "brush" has vanished.

I used to get into this kind of thing some 15 years ago, and never found
an acceptable solution. That is why I respond here, because the problem
is about marrying environment resources, often graphic, with Lisp
programs. It is simply shit, excuse me, when you want to build applications
that go beyond "Hello world, click the mouse on me". I made extensions
to GCL, fiddled with Franz Lisp (not Allegro), connected to graphics
systems (we used vector-graphics screens at that time, for CAD), both
inlined and as separate processes, etcetera.

I tried Garnet years ago, and have a vague memory that one didn't have
to worry about dried-up brushes. Garnet uses X11 of course, and the
terminology there is the "graphics context" (commonly GC), but these are
also limited. I recently tried the Allegro Lite Windows graphics just
for fun. It seems that the level has been raised a bit, but it's still
MS Windows graphics.

This other dratted yes-you-did-no-I-did-not thread we have currently,
originated with questions about Tcl-Tk, I think. That system, and any
other which is merely attached beside Lisp, and not integrated well
inside, makes for "non-Lisp" thinking. So I think we should place our
hopes on whatever results you CLIM-guys can bring forward.

Lars

Lars Lundbäck

Dec 13, 2000, 7:47:48 AM

Lars Lundbäck wrote:
>
> This other dratted yes-you-did-no-I-did-not thread we have currently,
> ....

That word "other" was unfortunate. Please disregard it; I am certainly
not implying that this thread is, too.

Marco Antoniotti

Dec 13, 2000, 9:13:03 AM


"Lars Lundbäck" <lars.l...@era.ericsson.se> writes:

> Marco Antoniotti wrote:
> >
> > Tim Bradshaw <t...@tfeb.org> writes:
> >
> > >
> > > But this isn't anything to do with memory management. It's just
> > > trivial to think of cases where resources absolutely must
> > > unconditionally be freed when you exit some extent. Think of locks or
> > > resources on a remote machine, -- you *really* don't want to leave it
> > > up to the GC to free that kind of thing. The problem is that C++
> > > people confuse this kind of thing with storage management, because C++
> > > confuses this kind of resource management with storage management.
> > > And Lisp people *also* confuse it with storage management and since GC
> > > does largely solve that problem assume that these other problems must
> > > be solved too.
> >
> > These are good points indeed. This is why there are WITH-OPEN-FILE
> > and friends in the language and why whenever you have processes you
> > also have WITH-LOCK. However, apart from the nice notion of
> > DESTROY-INSTANCE (just to make up another name), I still need to see
> > how the original poster really needs something other than the GC.
> >
>
> It's just like Tim says. This is about object management, specifically
> about objects outside the reach of the Lisp GC. Fernando's question was:
>
> "How can I make *absolutely* sure that lisp allocates memory
> on the stack? It's a class that uses some windows resources that
> *must* be freed when leaving the function body where the class is
> used."

I understand the points that Tim makes, but as long as you do not tell
me what the "resources that must be freed" are, I am still entitled to
ask: why? :)

> And like I said earlier, the integration of Lisp and environment
> resources is often shallow. Fernando doesn't say which Lisp he is using,
> and _how_ he has gotten the resource and the class.
>
> But since he mentions "brushes etc", I'd guess that he is running
> something on top of MS Windows. Anyway, such resources are usually kept
> in a limited pool which is managed outside Lisp. Unless the Lisp garbage
> collector has been designed to explicitly make a "releasing" call (eg,
> to the windowing system), nothing happens. The resource is still
> reserved when the reference to the "brush" has vanished.

That is legitimate. And writing your own WITH-BRUSH macro is the
right thing to do.
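A sketch of such a macro, assuming CREATE-SOLID-BRUSH and DELETE-OBJECT
are the names of your FFI wrappers around the corresponding GDI calls:

(defmacro with-brush ((var color) &body body)
  ;; Same shape as the WITH-CRUN macro posted earlier in the thread.
  (let ((sv (gensym)))
    `(let ((,sv nil))
       (unwind-protect
           (let ((,var (setf ,sv (create-solid-brush ,color))))
             ,@body)
         (when ,sv (delete-object ,sv))))))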

As for the rest of what you say, ..., well, it does have some merit.

Marco Antoniotti

Dec 13, 2000, 9:16:37 AM


Raymond Toy <t...@rtp.ericsson.se> writes:

> >>>>> "Marco" == Marco Antoniotti <mar...@cs.nyu.edu> writes:
>
> Marco> Raymond Toy <t...@rtp.ericsson.se> writes:
>
> >> It has to be carefully balanced with the time it takes to develop that
> >> software to save that memory, of course. That's the trick.
>
> Marco> I have heard this argument many times. My experience is that this is
> Marco> not taken into account and there is no balancing act going on.
>
> I have a feeling that what happens is that the amount of memory on the
> chip is decided very early because it takes a long time to layout the
> chip, fab it, test it, etc. At that point, it seems like a reasonable
> thing to do because the SW effort to support it doesn't look so big.

Exactly. And also because of the "0.5 x 10 million" argument.

> By the time the chip has arrived, the SW requirements have ballooned
> so what was easy is now very hard. And then the product is late, you
> only sell 100,000 instead of 10 million, and you lose your shirt and
> then some on a product that nobody wants anymore.... :-(
>
>
> Marco> Of course, this is just my 0.5USD opinion. You can have 10 millions
> Marco> of 'em :)
>
> Shall I send you my account number at my Swiss bank? :-)

I knew Swiss banks took money from all over the world. :) I did not
know they took opinions as well. :)

Knut Arild Erstad

Dec 13, 2000, 11:14:13 AM

[Tim Bradshaw]
:
: For generational GCs with finalizers what's the typical relation
: between finalization & tenuring? It looks to me like you could argue
: that you should never tenure anything that has a finalizer, because if
: you do, it might `never' get run.

I am not sure what you mean by tenuring in this context, but IMHO you
should avoid writing a finalizer that _has_ to be run. Finalizers should
be used to free memory allocated in a different language and not much else
(maybe debugging and profiling).

I know that closing files is often used as an example of finalization, and
this makes some sense: if a file descriptor object is being GCed while the
file is still open, of course you should close it before the object is
lost. But IMO it is bad style to wait for the GC to close files, in fact
I would want the finalizer to issue a warning if it has to close the file.

Will Deakin

Dec 13, 2000, 11:35:26 AM

Knut Arild Erstad wrote:

> Tim Bradshaw wrote:
> > It looks to me like you could argue
> > that you should never tenure anything that has a finalizer, because
> > if you do, it might `never' get run.
> I am not sure what you mean by tenuring in this context...

IIRC some generational GCs never try to reclaim objects in the oldest
generation. These objects can then be said to be `tenured.'

:)w

Lars Lundbäck

Dec 13, 2000, 12:01:50 PM

Marco Antoniotti wrote:
>
>
> I understand the points that Tim makes, but as long as you do not tell
> me what are the "resources that must be freed", I am still entitled to
> ask: why? :)

You are indeed. We are entitled to say many things in this ng. :))

As to why, some sample snips e.g. from the Win32 Programmer's Guide may
illuminate us.

"Because only a limited number of common device contexts exist in the
window manager's heap, an application should release these device
contexts after it has finished drawing."

>
> That is legitimate. And writing your own WITH-BRUSH macro is the
> right thing to do.

Sure, and other methods have been suggested, which may or may not be
better in a particular case. The point is, I think, that an explicit
action is needed.


> As for the rest of what you say, ..., well, it does have some merit.
>

Mature men accept all praise, however slight.

Many cheers returned,

Lars

Tim Bradshaw

Dec 13, 2000, 12:13:17 PM

knute...@ii.uib.no (Knut Arild Erstad) writes:

>
> I am not sure what you mean by tenuring in this context, but IMHO you
> should avoid writing a finalizer that _has_ to be run. Finalizers should
> be used to free memory allocated in a different language and not much else
> (maybe debugging and profiling).

Something is `tenured' if it's got to the point where the generational
GC is no longer considering it at all, but you have to do some more
thorough GC to actually look at it, often called a `dynamic' or
`global' GC. It's quite possible to run many generational systems in
a mode where the dynamic level of GC happens seldom or never (that's
`never' if you want to be precise...). This term may actually be
specific to Allegro, although it's a good one I think.

What I was really trying to do in my article was to point out that a
GC with finalization support really isn't suitable for some of the
things that Java people (and, perhaps, Lisp people) seem to be using
it for.

If you want the finalization to happen reasonably promptly, then you
can't tenure anything that has a finalizer. If you use finalization a
lot this may mean that a *lot* of things will never be tenured, and
thus the `fast' generational GCs will become slow because there's a
lot of live data.

If you are willing to tenure things with finalizers, then it might
take hours to free resources. You gave the example of closing open
file descriptors which I think is pretty good -- if a file descriptor
is closed by a finalizer, then any unwritten data may not be flushed
until that finalizer runs, which might (if the descriptor is tenured)
take a very long time, and this could be pretty undesirable.

I think, perhaps, that Java gets away with this because most
implementations have only a very rudimentary GC. However at least
some of Sun's Java systems (hotspot?) advertise themselves as having
a generational GC, which will show up this problem much more.

Really, the underlying point I'm trying to make is that GC is a good
way of managing one particular resource: memory. It is not
necessarily a good way of managing other resources, and relying on GC
to do this is in general not a good thing, although there are
obviously cases where it can be used perfectly well. Fortunately Lisp
provides such linguistic flexibility that it's quite possible to
invent application-specific ways of managing other resources which
take a lot of the pain away, while still avoiding the trap of
confusing it with memory management (as C++ does with destructors &c).

--tim

Erik Naggum

Dec 13, 2000, 12:35:39 PM

* Knut Arild Erstad

| I know that closing files is often used as an example of finalization,
| and this makes some sense: if a file descriptor object is being GCed
| while the file is still open, of course you should close it before the
| object is lost. But IMO it is bad style to wait for the GC to close
| files, in fact I would want the finalizer to issue a warning if it has
| to close the file.

I think all (scarce) resources should have finalizers associated with
their allocation request, and that having to relinquish them in the
finalization code should cause a warning, preferably with a clue as to
when and where the resource was requested. This both for safety and
debugging purposes.
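A sketch of that convention (OPEN-THING, CLOSE-THING, MAKE-THING-WRAPPER,
THING-STATE and REGISTER-FINALIZER are all made-up names; use whatever
finalization hook your implementation provides):

(defun acquire-thing (where)
  (let* ((state (list (open-thing)))   ; shared cell: (raw-handle), or (nil) once released
         (wrapper (make-thing-wrapper :state state)))
    ;; The finalizer is only a backstop; the normal path is RELEASE-THING
    ;; below, or a WITH- macro built on top of it.  If the finalizer has
    ;; to do the work, complain and say where the resource came from.
    (register-finalizer wrapper
      (lambda ()
        (when (car state)
          (warn "Resource acquired at ~A was never released." where)
          (close-thing (car state))
          (setf (car state) nil))))
    wrapper))

(defun release-thing (wrapper)
  (let ((state (thing-state wrapper)))
    (when (car state)
      (close-thing (car state))
      (setf (car state) nil))))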

The main reason not to use finalization for scarce resources, and files
are a very good example, is that you circumvent error handling in case
anything goes seriously wrong. File systems are known to run full no
matter how big they are. Network connections may disappear under your
feet. Et cetera. Interactive intervention with hung threads or even
processes may cause unwind clauses not to be run, and it is unlikely that
what failed to complete will fare any better in the finalization code.

This leads me to believe that the finalization code should be system-
supplied and written to protect the integrity of the system more than
the application that failed to protect itself, leading to the obvious
conclusion that really important resources need system support, with all
the attendant machinery. This is probably not a new idea, so I'm heading
for the Lisp machine manuals ...

#:Erik
--
The United States of America, soon a Bush league world power. Yeee-haw!

cbbr...@hex.net

Dec 14, 2000, 9:22:08 PM

>>>>> "Marco" == Marco Antoniotti <mar...@cs.nyu.edu> writes:
Marco> Raymond Toy <t...@rtp.ericsson.se> writes:
>> >>>>> "Marco" == Marco Antoniotti <mar...@cs.nyu.edu> writes:
Marco> I really believe that given the current abundance of memory
Marco> the GC is the best you can do unless you are working in an
Marco> "embedded" framework, where the engineers still think that
Marco> adding an extra 0.5USD of memory to their chips would cause
Marco> failure in the market (therefore forcing longer development
Marco> and debugging times mostly due to memory management issues,
Marco> in spite of all the "time-to-market" hoopla.)

>> But if you sell 10 million of the chips next year, that's $5
>> million extra. That might have paid for all of the development
>> costs and then some. And if you sell that many the year after,
>> that's pure profit.

>> It has to be carefully balanced with the time it takes to develop
>> that software to save that memory, of course. That's the trick.

Marco> I have heard this argument many times. My experience is that
Marco> this is not taken into account and there is no balancing act
Marco> going on.

There's another confounding effect, which is the consideration that
this is all a risky business.

There _might_ be ten million chips sold next year, and that _might_
result in $0.50/chip of extra profit.

On the other hand, there might _not_ be ten million chips next year,
or, more likely, if AMD comes out with a competitive model, Intel
might need to _drop_ the price of the chip, which means that it is in
no way clear that there is an "extra" $0.50 in it for Intel.

The effect you suggest may be _possible_; the problem is that there
are so many possible confounding effects that it is nicely hidden. It
is more in Intel's interests to deploy a new model than it is for them
to "do it right the first time." [And after all, there are so many
architectures that Intel fabs, it might as well only be Intel that is
out there :-).]

--
(reverse (concatenate 'string "ac.notelrac.teneerf@" "454aa"))
<http://www.ntlug.org/~cbbrowne/>
"...Roxanne falls in love with Christian, a chevalier in Cyrano's
regiment who hasn't got the brains God gave an eclair..."
-- reviewer on NPR
