
Autolisp Memory Issues


Peter Tobey
Dec 11, 1998
I've basically ignored memory issues vis-a-vis Autolisp since we got
past having to configure Extended Lisp & include a (vmon) at the start
of large routines.

Well, I seem to have come up against a memory ceiling with the project
I'm working on, & I can't seem to find much in the way of help sorting
things out. I've read what little there is in the official docs. I know
that a node is supposed to be 12 bytes, & that nodes & string space
share what's known as the "heap". On the other hand, the docs say the
default segment size is 514 nodes (6168 bytes), but in a virgin AutoCAD
session (no lisp loaded) my system says 256 nodes. The docs also give an
example of setting the segment size (alloc 10000) which is unattainable
on my system. It seems the maximum value I can get is 5460 - a most
curious number (* 2 2 3 5 7 13).

I'm building a number of fairly large lists of lists, each in the same
way - in fact using similar arguments to the same subroutine. The
process flies along as I would expect, until at some point it slows to a
crawl. Using (mem) before & after this point I can see that up to the
critical point there are incremental "collections" (1 or 2 at a time) &
if I've set up a large pool of nodes to start with, this value has been
steadily declining. After the breaking point, there have been a large
number of collections (80+) & if I leave the management to Autolisp, the
nodes total has been expanded by nearly tenfold.

Exploring this problem prompted me to take a look at changes to (mem)
from loading various things - my ACAD.LSP (5500 nodes), BONUS.LSP (28300
nodes), etc. The basic project application consumes about 45K nodes on
loading, but this balloons to nearly 600K when I first run it. The
particular function I'm seeing the problem with brings the total node
count to anywhere between 1.7 & 3.7 million when it completes, depending
on whether or not & how I handle any manual adjustments beforehand.

So I have some questions that I hope someone here can shed light on:

1) With regard to the results of (mem):

a) Am I correct in identifying the difference between the "Nodes" and
"Free nodes" values as the total nodes used? I've tried to equate the
sizes of lists or functions with the nodes required to store or load
them, but the numbers seem to be all over the place, leading me to be
*very* suspect of the output from (mem).

b) At first the "Nodes" value equals "Segments" times "Allocate", but
over time this does not hold. What *is* the relationship here? What
makes the "Segments" value occasionally decrease?

c) Is there any way to capture these memory values programmatically? Are
there any 3rd-party tools for examining Autolisp's memory usage?

2) Is there an absolute limit on heap space available to Autolisp? Are
there any configuration settings that affect this?

I've managed to send my hard drive into some serious thrashing by
pushing the node count toward 10 million - clearly "virtual function
paging" kicks in at some point.

3) How can I determine an optimum combination of the (alloc) and
(expand) functions, if I know I'm going to seriously push the envelope?

I know the conventional advice is to leave it to Autolisp, but clearly
that doesn't always cut it. What is the functional difference between
(alloc 5000) (expand 25) and (alloc 256) (expand 500)?

4) What are some strategies for minimizing node consumption?

I'm already scrupulous about keeping all variables local, with the
exception of a couple of key shared lists.

I know that using symbol names of 6 or fewer characters conserves string
space in the heap, & clearly big lists are big consumers of nodes.

Tony T. recently railed against converting selection sets to lists, for
just this reason. I have to plead guilty to using this method (& within
the subject routine), but so far I can see only a small (< 5%)
difference in node consumption between otherwise identical routines
using each method (then again, my metrics are admittedly unreliable).
I'll probably be reverting to the (ssname) construct, but I'm hoping to
find more substantive ways to economize.

5) How does Vital Lisp deal with memory? Does it maintain its own heap
space, & is it managed with the same functions, or is this all a moot
point when a project is zipped up into a VILL ARX?

Thanks in advance to any who have read this far. Please do others a
favor & respond (if you can <g>) to my points by number, or simply point
me toward sources of information, rather than quoting back the whole
mess.

-peter

--
o----------------------(Net-Map)------------------------o
| Facilities Data Capture & Management in AutoCAD |
| Peter B. Tobey - pbt...@ix.netcom.com - 503-658-4054 |
o-------------------------------------------------------o
-|-
Sometimes you feel like a hunter-gatherer, sometimes you don't...

kamal boutora
Dec 11, 1998
Very interesting question. I knew that AutoCAD R10's lisp had limitations on
memory, but I didn't know about other versions.

>
> 4) What are some strategies for minimizing node consumption?
>
* Try a garbage collection (gc) after manipulating your lists.
I always add a (gc) when entering and when exiting functions that consume
a lot of memory.

* functions which are called to perform actions, not to do an evaluation,
should return nil, instead of the last evaluated expression.

* Free your ssgets.

* If your code is very large, consider splitting it into many small files
that are loaded on demand, and freed after use by setting the
declared symbols to nil (see the sketch after this list).

* If you're familiar with C/C++, consider writing a program in that
language just for managing your data.
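
A minimal sketch of the "load on demand, free after use" idea (the file name and function symbols below are made up):

(defun c:runreport ( / )
  (load "report.lsp")          ; bring the module in only when needed
  (report-main)                ; do the work
  (setq report-main   nil      ; release the defun bodies so their
        report-helper nil)     ; nodes become eligible for reclaiming
  (gc)                         ; sweep them right away, per the advice above
  (princ))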

Hope this will help.
--
kamal boutora

Peter Tobey
Dec 11, 1998
Kamal;

Thanks for the reply.

<< functions which are called to perform actions, not to do an
evaluation, should return nil, instead of the last evaluated expression
>>

I'll take a look at this. I'm generally careful about what subroutines
return - usually nil unless I need a value returned.

<< Free your ssgets >>

Again, I'm careful about localizing variables, including those I use for
selection sets, & I mostly use the same name for these.

<< If your code is very large, consider splitting it in many little
files, that should be loaded on need, and freed after usage by setting
the declared symbols to nil >>

This is a large chunk of code, but it consists of about 300 separate
files, so what you suggest is quite doable. I'll think seriously about
working out some sort of conditional load mechanism.
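
For example, a stub along these lines would defer loading until first use (the command, file, and function names here are invented):

(defun c:bigcmd ()
  (if (not bigcmd-main)        ; real code not loaded yet?
      (load "bigcmd.lsp"))     ; pull it in on first use only
  (bigcmd-main))               ; hand off to the freshly loaded function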

<< If you're familiar with c/c++ consider writing a program in this
language just for managing your data >>

This may be the *real* ceiling I've hit...

Ken Krupa
Dec 12, 1998
This is something I have not paid much attention to and don't understand.
If the return of the function is not being stored, does it really make a
difference in how much memory is used? I would have thought any return would
just "evaporate". Care to explain?

Ken Krupa

kamal boutora <syn...@IST.CERIST.DZ> wrote
> * functions which are called to perform actions, not to do an evaluation,
> should return nil, instead of the last evaluated expression.


kamal boutora
Dec 12, 1998
This is an assumption I made based on my knowledge of how compilers and
interpreters work (it's hard to test, as we can't access AutoLISP's
memory-management mechanism).
As you said, the result "evaporates", but before evaporating it is stored
somewhere -- at the top of the evaluation stack. So I used to free it
before ending my functions, which I thought allowed proper cleaning of
unused memory.

But after your post I ran several tests, and noticed that there is no
difference. So my response is: you're right, and thanks for correcting me.

Any addendum from the gurus ?

Regards.
--
kamal boutora

Ken Krupa <k...@sksoftware.com> wrote in article
<01be25e5$f33dba80$42e0accf@toshiba>...

David Stein
Dec 12, 1998
If, by chance, you're using NT as your operating system, I would strongly
suggest you use the system monitor to watch memory use while running
some test code within AutoCAD or Mech. Desktop. This can be very useful for
pinpointing where in a piece of code any sudden jumps in memory usage
occur. It is often best to include debug print-outs to help determine where
the code is in its execution while watching the resource meter in NT.
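
For instance, a small marker function like this (just a sketch) prints a label plus the (mem) statistics at the command line, so the monitor readings can be matched to a spot in the code:

(defun dbg (label)
  (princ (strcat "\n*** " label " ***"))
  (mem)                        ; dump AutoLISP's own node/segment statistics too
  (princ))

;; e.g. sprinkle calls like (dbg "before building block list") through the code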

David Stein
dst...@visi.net

Peter Tobey
Dec 12, 1998
David;

Thanks, I hadn't thought of that. I have a dual-boot (actually triple)
setup, although I mostly live in '95. I'll check it out.

jda...@my-dejanews.com
Dec 13, 1998
AutoLISP doesn't give as much control over memory (at least directly)
as your post suggests. Your worries are more for a C++ programmer,
not AutoLISP. But all of the problems you describe are quite real --
an annoying thing about AutoLISP. A strange irony: the state of MN
'requires' no-fault auto insurance, then puts no caps on insurance rates.

Anyway, obviously the AutoCAD memory manager is 'supposed' to deal with
memory allocation and paging, but these days it doesn't keep up with the
larger applications people are writing. I think the folks at Autodesk never
expected us CAD jockeys to be writing anything intensive nor that AutoLISP
would be anything more than a "toy". They are wrong, but as usual too
ignorant and self-absorbed to have figured it out yet.

With the very few tools at your disposal, you're resigned to playing
a 'cat and mouse' style of programming. Here are some rules I use ...

When dealing with potentially large amounts of list data internally:
- Always set a variable to NIL *immediately* when it is no longer needed
or has been assigned to another variable name. In other words, don't just
rely on the idea that memory will clear after the interpreter has exited the
top-level function where the variable is localized; most often the data
is still taking up node space until those memory slots are needed and
reclaimed by the AutoCAD memory manager, which won't be until
all available memory is filled up and AutoCAD starts paging out
memory. In a sense, by not actively clearing memory, you're
setting yourself up for an overload.
- After setting large data-list variables to nil, follow
this with a garbage collection (gc) to clear up the node space
immediately; don't wait for the AutoCAD memory manager
to do it (see above, and the sketch after this list).
- Track the size of your data lists and (alloc) more memory
proportionally as needed, then un-(alloc) when it's not needed.
Setting up a function that acts as a memory balloon, inflating (alloc) when
memory use is high and deflating it when it's low, will take some
extra time, but makes a difference.
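
A bare-bones sketch of the first two rules (BUILD-BIG-LIST and PROCESS stand in for your own functions):

(setq biglist (build-big-list ss))   ; large intermediate result
(process biglist)                    ; use it while it's needed...
(setq biglist nil)                   ; ...then drop the reference at once
(gc)                                 ; and reclaim the nodes now, not whenever
                                     ; the memory manager gets around to it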

This is about all you're limited to, since you will inevitably
be wrestling with the AutoCAD memory manager anyway; that's why
I use the term 'cat and mouse' -- more like 'CAD and mouse'.

It would be great, and would solve a lot of problems, if Autodesk would
give us the ability to turn that damn memory manager off and provide
a few extra functions for dynamically allocating memory within
AutoLISP. All that memory manager does is run the till while it
sits back and gets fat off the profits. Meanwhile your code is
working harder and harder.


--
Jesse Danes
Senior Draftsman / AutoCAD Admin
Honeywell, Home & Building Controls
Golden Valley, MN


Vladimir Nesterovsky
Dec 14, 1998, to pbt...@ix.netcom.com
Peter -- as I've written numerous times
in this and other NGs, set the (alloc)
value as high as possible. Don't do any
manual (gc)s and, most certainly, never
decrease the (alloc) value. One thing you
can do is preallocate the memory needed by
your application in advance, via an (expand NN)
statement.

The (alloc) value controls how big the chunks are
by which memory will be expanded, when needed. IOW
it defines the segment size, in nodes. Since a
node is 12 bytes, 5460 nodes mean roughly 64K-sized
segments -- a pretty logical value, don't you
think?

The key point here is that every time AutoLISP
runs out of memory, it performs a (gc), checks
whether it got the needed memory and then, if not, performs
(expand 1) -- which allocates 1 segment -- i.e. the
number of nodes specified by (alloc)'s value.

If it still needs some memory, it will
do the (gc)-check-(expand 1) cycle over again.
So obviously, the larger (alloc)'s value,
the smaller the number of (gc) calls performed
by the system. And this is a KEY FACTOR --
every (gc) takes up a noticeable amount of time.

So don't do any (gc)s yourself; it's useless
and will just take time. The system will perform
(gc) whenever it sees fit. To immediately
NIL out variables when they're not
needed is, rather, good advice, since their
memory becomes eligible for freeing and reuse
immediately after that point. Do that especially
for big and complex nested lists.

Never decrease the (alloc) value, as it will just
increase the number of (gc)s called by the system,
and of course won't perform any "un-allocating",
which is just not possible in AutoLISP. AutoLISP
won't ever give up memory it uses; it will
only try to reorganize and reuse it.

The only thing you can do is preallocate some
memory with an (expand) statement. You can
experiment with the number of segments to
preallocate, watching the System Monitor
(which works well for me in Win95 and 98
for just such things).

I didn't find the (mem) function's output to be
reliable under r14. I could make sense of it
under r12, but never for r14. One thing that pretty
much sums it all up is, as you noticed, the number
of (gc) calls performed so far. So watch
this -- and the amount of allocated memory on your
computer, through SysMon. This will give you a clue
as to how your program is doing, memory-wise
(and of course, listen to your HD thrashing... ;-)) )

As for your question,
>> what's the difference between
>> (alloc 5000) (expand 25) and (alloc 256) (expand 500),
I think you can see now that it's the number of (GC)s
that will be performed behind the scenes by the system
_after_ the preallocated memory pool is exhausted.
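
In rough figures (a sketch based on the description above, assuming 12 bytes per node):

(alloc 5000) (expand 25)   ; 25 x 5000 = 125,000 nodes now (~1.5 MB);
                           ; each later automatic expansion adds 5000 nodes
(alloc 256)  (expand 500)  ; 500 x 256 = 128,000 nodes now (~1.5 MB);
                           ; each later expansion adds only 256 nodes, so far
                           ; more (gc) cycles once the pool is used up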

In short, I can say I find the AutoLISP implementation
in r14 outstanding in terms of speed and efficiency.

Whenever I had problems with memory I could pinpoint
the code responsible and fix it -- and in general
every time it was some unnecessarily big and complex
list that has been kept unnecessarily in memory.
So IOW it was _my_ bug. :-))

Look out for MAPCARs -- it's often possible
to do the same thing by iteration. A good example
is the SELSET-LIST versus SELSET-FOREACH issue.
Consider calling (mapcar 'entget (selset-list sset)) --
there is a *huge waste of memory* in here!

Whenever I was sure my lists were 2-3 thousand items
or less, I could safely use the LIST approach; but if
I couldn't be sure the lists wouldn't get bigger than that,
I had to re-write it all to use SELSET-FOREACH:

(defun SELSET-FOREACH (fun selset / i)
  (repeat (setq i (fix (sslength selset)))
    (fun (ssname selset (setq i (1- i))))))

I'm using the function unquoted, which gives me another
10% speed improvement, as there is no need to build
intermediate argument lists just to APPLY the FUNction
on them (of course it is more efficient still to just code
the index and ssname manually).
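
A usage sketch (PRINT-LAYER is just an example worker function, and the call assumes the selection isn't empty):

(defun print-layer (e)                      ; example per-entity worker
  (princ (strcat "\n" (cdr (assoc 8 (entget e))))))

(selset-foreach print-layer (ssget "X"))    ; function passed unquoted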

So check your code and re-think it to eliminate unnecessary
lists; always set them to NIL immediately after they're
no longer needed; set (alloc) to the maximum value possible;
and call (expand NN) once when loading your program.

Check the routines you're using. Recursive routines will be
very hard on memory, for one thing. Make them iterative.
Try to find out which routine is the problem, and improve _it_.
Post it here, if you would like, so we could look into it.
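
For example (just a sketch), here is the same selection-set-to-list job written both ways; the WHILE version keeps no pending calls on the stack:

(defun collect-r (ss i)                 ; recursive: one pending call per entity
  (if (< i (sslength ss))               ; call as (collect-r ss 0)
      (cons (ssname ss i) (collect-r ss (1+ i)))))

(defun collect-i (ss / i lst)           ; iterative: constant stack depth
  (setq i (fix (sslength ss)))
  (while (> i 0)
    (setq i   (1- i)
          lst (cons (ssname ss i) lst)))
  lst)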

If you're absolutely sure you've done all your functions
in the most efficient way and still have problems,
move some application data into C/ADS; ADS has its own
memory management, and doesn't have any garbage collections,
so you'll be the one and only who's in charge there.
Make your own API for using this data from LISP without
exporting these huge lists to AutoLISP, since exporting them
will also go thru the same (ALLOC)-(GC)-(EXPAND) cycle,
when adding these huge lists to LISP memory.

Tell us more specifics about your app, if you like. :-)

As for Vital LISP -- surprisingly, I found too many times
that the same code ran faster in AutoLISP than in
VillRTS, even with maximum optimizations. :-(
I guess it's too heavy on DEFUNs and LAMBDAs and so on.


Cheers,

In article <3670D6...@ix.netcom.com>, pbt...@ix.netcom.com wrote:
>
> I'm building a number of fairly large lists of lists, each in the same
> way - in fact using similar arguments to the same subroutine. The
> process flies along as I would expect, until at some point it slows to a
> crawl. Using (mem) before & after this point I can see that up to the
> critical point there are incremental "collections" (1 or 2 at a time) &
> if I've set up a large pool of nodes to start with, this value has been
> steadily declining. After the breaking point, there have been a large
> number of collections (80+) & if I leave the management to Autolisp, the
> nodes total has been expanded by nearly tenfold.

--
Live long and prosper :)
Vlad <vne...@netvision.net.il>
http://www.netvision.net.il/php/vnestr/

Vladimir Nesterovsky
Dec 14, 1998
Peter Tobey wrote:
> BTW, I finally got through to the NG, but didn't see
> your message posted there.

That's because for some reason I couldn't get to the adesk
ng-server last night, so I posted the article via Dejanews,
which, apparently, didn't get through here. :-( Interestingly
enough, I saw an article there by Jesse Danes that isn't present
here on the server. By some coincidence that article contained
a strong critique (which I disagree with) of AutoLISP's performance.
As I said, I generally consider AutoLISP's performance excellent,
but that statement would have made more sense in a context where
it really reads as a reference to Jesse's post.

Back to the issue though -- after re-reading your reply (which I
got via e-mail :)), I realize there's one more thing you can
do, and it seems like the real culprit in your case: the
issue of returning values from functions versus altering
the global (outer) variables' values in subroutines.

You wrote:

> Vlad;
> << NIL out the variables when they're not needed ... especially for big
> and complex nested lists >>
>
> Now here comes more puzzling behavior. After I posted the original
> message I kept looking at what I was doing & realized I was returning
> several nested lists to my calling function, binding both the whole list
> & one sublist to symbols, & then (needlessly) accumulating them both.
> Like you say "my own bug". So I changed it to bind the return to the
> same symbol name each time & nil it after extracting the sub-parts. I
> was also using the ss-list mechanism within the subroutine, so I changed
> it back to ssname/index, & reran the whole thing. It *still* bogged down
> in the same place.

There! It seems that you are returning some big lists from
a subroutine just to take them apart and re-assign them to other
symbols in the calling function. This is redundant, and may
be the cause of your whole problem. You can just alter the
outer vars from inside your subroutine, eliminating any need
to return anything, let alone to set the result to nil
later. :-)

It's like back when I studied CS at university, using
some Fortran dialect, and couldn't grasp the difference
between a FUNCTION and a SUBROUTINE in Fortran. The standard
answer to this question was: functions return values,
whilst subroutines do not. But more importantly, the subroutine
has complete access to the caller's variables -- the
same situation as we have in AutoLISP. :-)

Every subroutine is free to alter some caller's symbol value,
as long as it's not shadowed by another local symbol with
the same name. In more CS-like parlance, functions are executed
to get their values, and subroutines for their side effects,
which include the alteration of some global (at the point of
the subroutine's execution) symbol value, among other things.

Consider this example. Let's say you have this subroutine
that will add some more info into the list you're building:

(defun afunction (alist aninfo)
  (cons aninfo alist))
;; ((because, as I hope you know, you should never use 'append
;;   when building lists -- just 'cons, and 'reverse later if you must))

and you're using this in an outer routine as
(defun foo ( / lst inf )
  ;; ... for each ELT in some source data:
  ;; .....  set INF from ELT
  (setq lst (afunction lst inf))
  ;; ...
  )

It's much more efficient to alter LST's value
directly:

(defun directly-alter (aninfo)
  (setq lst (cons aninfo lst)))   ; where LST is an outer variable

or even

(defun pure-subroutine ()
  (setq lst (cons inf lst)))

Of course the latter approach defeats the principles of structured
programming; we had to create a very specific, non-generic
function that is not a good candidate for code reuse; but what could
we do if the efficiency of our program is defeated otherwise ?? :-)

We could choose to pass the symbols quoted into the subroutine, like

(defun alter-quoted (q_alist q_aninfo)
  (set q_alist (cons (eval q_aninfo) (eval q_alist))))

but here we risk interference between symbols that happen to share
a name (i.e., if you have some outer var named Q_ALIST, it
can't be used with this routine, as the routine uses the same symbol
name in a different context). That is so because AutoLISP is a
dynamically scoped language and can't have lexical variables, as I
noted in some of my recent posts (to the amusement of some).

So another piece of advice would be: use global vars to keep
your global values. Of course you can make those vars
local in the calling routine, but they will be global
in the context of your inner subroutines, which will
alter those globals _directly_.

> In a nutshell: I'm making selection sets of blocks (based on a list of
> names), or polylines (based on a list of layer names), then making lists
> which have 4 or 5 key pieces of data for each object - insertion point &
> rotation or vertex list, certain small bits of xdata (that my
> application put there in the first place), layer name, & ename - *not* a
> full (entget) association list. In the (typical) test case, none of
> these lists is more than 500 objects, nor are there many multi-vertex
> plines (none of any length).

> << In short, I can say I find the AutoLISP implementation in r14
> outstanding in terms of speed and efficiency. >>
>

> I agree, which is why I found this situation so surprising.


>
> << Whenever I had problems with memory I could pinpoint the code
> responsible and fix it -- and in general every time it was some
> unnecessarily big and complex list that has been kept unnecessarily in
> memory. So IOW it was _my_ bug. :-)) >>
>

> This is also what I suspect here, but I'll just have to keep looking.

Again, one thing I forgot to mention -- make the big lists global
and alter them directly inside your subroutines, returning just NIL.
Another thing: whenever you get something that you're going to
use more than once, put it into some variable instead of
recalculating it and rebuilding the lists again.
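
(A trivial sketch of that last point; E stands for some entity name already in hand:)

(setq elst (entget e))              ; fetch the association list once...
(setq lay  (cdr (assoc 8  elst))    ; ...then reuse it for every field,
      pt   (cdr (assoc 10 elst)))   ; rather than calling (entget e) again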

Concerning the preallocation of memory -- of course you need
to call (expand) only once, when your program is loading,
right?

> << it is more efficient to just code the index and ssname manually >>
>

> Thanks for the confirmation - this is what I generally do (or *did*
> until I started converting everything to the ss-list mechanism,
> ala Vlad <g>).

500 objects is a very small figure, so you might as well
use the SELSET-LIST approach (ala Vlad :-)), although I prefer
using SELSET-FOREACH. I use SELSET-LIST only when I want to
apply my (magic) GETK function to the list, like
(GETK '(0 8 10) (mapcar 'edlgetent (SELSET-LIST selection)))
You can find these functions on my web pages.

But it is strange that you have problems with only 500 blocks.
How much memory do you have on that PC, 16M? ;-)) The reason
might be in some of the functions you're using to deal with
those lists...

Good luck, :-))

Vladimir Nesterovsky
Dec 14, 1998
Something got cleared up for me: Jesse's article
wasn't here for the same reason my original
post wasn't here: it was posted through DejaNews.
Too bad. :-((((( I think some Autodesk
administrator for these NGs should do
something about the DejaNews connectivity.
It's a technical issue that should have been
resolved a long time ago.

I'm reposting Jesse's article here so everyone
can see it. Hope Jesse doesn't mind. :-)

On Mon, 14 Dec 1998 17:18:51 GMT, vne...@netvision.net.il (Vladimir
Nesterovsky) wrote:

> Interestingly
>enough, I saw an article there by Jesse Danes that isn't present
>here on the server. By some coincidence that article contained
>a strong critique (which I disagree with) of AutoLISP's performance.
>As I said, I generally consider AutoLISP's performance excellent,
>but that statement would have made more sense in a context where
>it really reads as a reference to Jesse's post.

Live long and prosper :)
Vlad <vne...@netvision.net.il>
http://www.netvision.net.il/php/vnestr/


Peter Tobey
Dec 14, 1998
Vlad;

Thanks once again for your thought-provoking replies. I'll have to work
through some of this & get back to you. You pretty well quoted the
pertinent bits of my email to you, so I won't take up space with another
copy here. Thanks also for posting Jesse Danes' message. Like you, I was
unable to access the NGs all yesterday.

Vladimir Nesterovsky
Dec 14, 1998
A little correction. This specific example I used,

>(defun afunction (alist aninfo)
> (cons aninfo alist))
>

> (defun foo ( / lst inf )
> ... foreach elt in lst
> ..... set inf from elt
> (setq lst (afunction lst inf))
> ... )

may still be efficient enough, since (afunction)
only uses (cons); but were it to use (list),
as in your case,
(defun your-case ( a b )
  (list a b))

(defun your-foo ( / x y u )
  (setq u (your-case a-value b-value)  ; A-VALUE / B-VALUE stand for whatever is packaged
        x (car u)
        y (cadr u)
        u nil))

will indeed be inefficient, since it
needlessly constructs the list it returns.

jda...@my-dejanews.com
Dec 15, 1998
In article <368249ed...@adesknews.autodesk.com>,

vne...@netvision.net.il (Vladimir Nesterovsky) wrote:
> Something got cleared for me: Jessy's article
> wasn't here by the same reason as my original
> post wasn't here: it was posted thru DejaNews.

Funny thing is, now I'm seeing double, there are two
of every post since then. This is an Autodesk news
server right? Maybe they will send me another copy
of AutoCAD too.

Vlad, that's OK if you disagree. That's what the NG
is after all for, although we seem to have agreed
on some things. Since Autodesk inherited Vital LISP,
though, I have had nothing but problems, many of
them memory related. You see, about 97% of everything
I do is in the Visual LISP environment, not native AutoLISP.
After reading your newspost, and thinking about some other
problems I was having, it got me to thinking ...
AutoCAD's native AutoLISP is handled by AutoCAD and its
memory managed by the same. For a long time I assumed
that memory management for Visual LISP was also handled
by the same AutoCAD engine. It's becoming more apparent,
however, that it is not. It seems that since they are
running AutoLISP code in entirely separate realms
(runtimes), memory is also handled separately
within each runtime. I'm not an ADS programmer,
but I think it would be safe to say Visual LISP
is being handled by the same engine that handles ADS
memory use. Setting (alloc) in Visual LISP seems not
to affect the native AutoLISP (alloc) value, and likewise.


In any event, I suggest the (gc) garbage cleanup since
it clears unused node space immediately, instead of
waiting for AutoCAD to do it when it decides it wants
more memory -- which is usually when all currently available
memory is filled up. This follows setting a variable
to nil when it's no longer in use. The idea is to try to
avoid a huge chunk of memory being reclaimed all at once,
by clearing it in smaller bits as your code moves along.
Hopefully, the appearance to the end user is relatively smooth,
rather than hitting certain points where the program slows
to a crawl while it's reclaiming all that unused node space.
Here is a rough example I used in my code; this function
is called every time one of the data lists (names enclosed
in <> are placeholders) is modified:

(defun application-alloc-mem ()
  (if (< ct_data                             ; CT_DATA assumed initialized to 0 at load
         (setq ct_data
               (* 2 (apply '+ (mapcar 'length
                                      (list <data-list1>   ; wrap the lists so MAPCAR
                                            <data-list2>   ; gets each list's length
                                            <data-list3>
                                            <data-list4>))))))
      (alloc ct_data)
      (gc))) ; end defun

If the combined length of the global data lists in the
program is greater than it was at the previous modification,
(alloc) is called -- allowing roughly 24 bytes (2 nodes) per average
string value. If less, a (gc) is called to clear up the unused
node space, where it's understood that variables were
set to nil accordingly elsewhere in the code. Likewise,
each time any subroutine exits, its local variables become nil;
however, their memory allocation is not reclaimed. The periodic
call of this function and (gc) takes care of that
as well. My fear with not being able to un-(alloc) memory is devoting
too much of it to AutoLISP, thus stealing from AutoCAD's allotted
memory for routine drafting operations long after the program
has been closed. However, in terms of Visual LISP, I don't
think this is the case: using (alloc) in VL seems simply to
use more RAM, not to steal from memory allotted for use by AutoCAD.

It seems to have made a big difference. However, one disadvantage
for me is that I am working on a fairly good machine -- a Dell
Pentium GXPro 200 with 64 MB RAM -- that has very few memory-related
problems; not really the boat anchor I would like to be testing
memory-hungry code on. But the access violations
I was getting in Visual LISP have been reduced to virtually none,
in this specific instance anyway.

A little off the subject: is it just me, or have you noticed
a sudden appearance of a lot of runtime errors with Visual LISP since
Autodesk inherited it from Basis Software? I never used to have
these problems with Vital LISP, and I'm using the same code in
both. But now I'm forced to port everything over to Visual LISP,
since Vital LISP is no longer compatible with R14.0.

0 new messages