
Memory Release Problems?


Paul E. Lehmann

Jul 3, 2002, 6:03:35 PM
Is there a problem with Tcl/Tk releasing memory in the Windows version? I
saw this on the FAQ and was wondering if it has been fixed.


David Gravereaux

Jul 3, 2002, 9:08:36 PM
"Paul E. Lehmann" <no_spam_...@fred.net> wrote:

>Is there a problem with Tcl/Tk releasing memory in the Windows version? I
>saw this on the FAQ and was wondering if it has been fixed.
>

Tcl uses HeapAlloc, HeapRealloc, and HeapFree as the native memory calls. The
Tcl allocator that never frees is no longer used. This is in 8.4, but I'm not
sure about 8.3.4.
--
David Gravereaux <davy...@pobox.com>
Tomasoft Engineering, Hayward, CA
[species: human; planet: earth,milkyway,alpha sector]

Georgios Petasis

Jul 4, 2002, 5:07:53 AM

"David Gravereaux" <davy...@pobox.com> wrote in message
news:qh77iu83hiu9jfdb4...@4ax.com...

> "Paul E. Lehmann" <no_spam_...@fred.net> wrote:
>
> >Is there a problem with Tcl/Tk releasing memory in the Windows version? I
> >saw this on the FAQ and was wondering if it has been fixed.
> >
>
> Tcl uses HeapAlloc, HeapRealloc and HeapFree as the native memory calls. The
> Tcl allocator that never frees is no longer used. This is in 8.4, but I'm not
> sure about 8.3.4

Yes, but I suppose that the problem where memory allocated for storing
objects is never freed still remains (as on all platforms).

George


David Gravereaux

Jul 4, 2002, 6:30:56 AM
"Georgios Petasis" <pet...@iit.demokritos.gr> wrote:

I could have sworn `#define USE_TCLALLOC 0` is set there somewhere to turn it
all off. I remember a note about it in the ChangeLog.

To Paul, to stop the 'never free' problem just add `-DUSE_TCLALLOC=0` when
compiling.

Paul E. Lehmann

Jul 4, 2002, 11:59:49 AM

"David Gravereaux" <davy...@pobox.com> wrote in message

> To Paul, to stop the 'never free' problem just add `-DUSE_TCLALLOC=0` when
> compiling.
> --

Is it only a problem when compiling? Pardon my ignorance, I am a newbie. I
thought Tcl/Tk was just an interpreted language. I don't intend to compile
anything. Is the same also true for the Linux version? I just want to fool
around and write some simple stuff, not shrink-wrapped programs.


David Gravereaux

Jul 4, 2002, 4:08:35 PM
"Paul E. Lehmann" <no_spam_...@fred.net> wrote:

USE_TCLALLOC is a configuration option when building the core itself. I was
grep'ping for it yesterday in tclWinPort.h, but didn't find it, then realized
later that undefined macros are always zero. I always forget that. So there's
no need to add that switch as it's already present by its absence. I don't know
if the Linux version is using the "never free" Tcl allocator or not.

Petasis George

Jul 8, 2002, 1:09:52 AM
David Gravereaux wrote:
>
> "Paul E. Lehmann" <no_spam_...@fred.net> wrote:
>
> >
> >"David Gravereaux" <davy...@pobox.com> wrote in message
> >
> >> To Paul, to stop the 'never free' problem just add `-DUSE_TCLALLOC=0` when
> >> compiling.
> >> --
> >
> >Is it only a problem when compiling? Pardon my ignorance, I am a newbie. I
> >thought TCL/TK was just an interpreted language. I don't intend on
> >compiling anything. Is the same also true for the Linux version? I just
> >want to fool around and write some simple stuff, not shrink wrap type
> >programs.
> >
>
> USE_TCLALLOC is a configuration option when building the core itself. I was
> grep'ping for it yesterday in tclWinPort.h, but didn't find it, then realized
> later that undefined macros are always zero. I always forget that. So there's
> no need to add that switch as it's already present by its absence. I don't know
> if the linux version is using the "never free" tcl allocator or not.


The fact that Tcl objects are not freed by Tcl is not related to the
underlying memory allocator used. When Tcl needs new objects, it allocates
memory space to hold a fixed number (now 100) of objects (this is the only
place where the allocator is involved). This memory will never be released by
Tcl, as there is no safe way to tell whether all the objects in it are
unused...

George

Paul E. Lehmann

Jul 8, 2002, 3:08:32 AM

"Petasis George" <pet...@iit.demokritos.gr> wrote in message
news:3D291EA0...@iit.demokritos.gr...

> The fact that Tcl objects are not freed by Tcl is not related to the
> underlying memory allocator used. When Tcl needs new objects, it allocates
> memory space to hold a fixed number (now 100) of objects (this is the only
> place where the allocator is involved). This memory will never be released
> by Tcl, as there is no safe way to tell whether all the objects in it are
> unused...
>
> George

I thought of spending a lot of time learning Tcl/Tk, but if there are
memory problems, I can't see spending a lot of time on a language that has a
major flaw. What am I missing here? I am not trying to bash the language,
but I don't want to create programs that hog memory and never release it.


Helmut Giese

Jul 8, 2002, 6:25:50 AM
>but I don't want to create programs that hog memory and never release.
Nobody *hogs* memory (we're talking a couple of KB here) - and of course
this memory is released when the app terminates. Incidentally,
virtually any bigger app shows this behaviour, that is, acquiring some
central memory and never explicitly releasing it, only implicitly
at program termination.
HTH
Helmut

David Gravereaux

Jul 8, 2002, 6:59:28 AM
"Paul E. Lehmann" <no_spam_...@fred.net> wrote:

>I can't see spending a lot of time on a language that has a
>major flaw. What am I missing here?

You're taking this conversation too seriously. Go learn Tcl. What George is
referring to, about a Tcl_Obj never being free'd, is outside my understanding.

Tcl_Obj *obj;

obj = Tcl_NewStringObj("hi there", -1);
Tcl_DecrRefCount(obj);

Whether "marked free for reuse" is different from free() (which is HeapFree()
on Windows with msvcrt), or actually free'd and returned to the system by
VirtualFree(), seems to be an off-track discussion from the intent of your
question.

David Gravereaux

Jul 8, 2002, 8:37:37 AM
David Gravereaux <davy...@pobox.com> wrote:

>"Paul E. Lehmann" <no_spam_...@fred.net> wrote:
>
>>I can't see spending a lot of time on a language that has a
>>major flaw. What am I missing here?
>
>You're taking this conversation too seriously. Go learn tcl.

After just finishing some exhaustive leak testing on tclhttpd, I can say that
Tcl leaks absolutely NOWHERE except for some outer-rim stuff found in the
msvcrt. But a 27-byte per-page hit on dynamic .tml files is NOTHING, and it
isn't Tcl that's leaking.

There was a small coding mistake in one of the samples, and a small issue in
handling environment variables, and the patches are in the system.

Honestly, Tcl leaks NOWHERE. IIRC, the FAQ statement about memory on Windows
was referring to the old MSVC++ 4.0 compiler and run-time, where free() itself
didn't work. Or maybe it was the forced `#define USE_TCLALLOC 1` that now isn't
needed and is set to zero by its absence in 8.4. Or maybe you're referring to
how Tcl can mark items as free for reuse but doesn't actually call free() on
them, to speed things up with Tcl_NewObj().

It looks as if defining PURIFY for the core compile will do away with the
Tcl_Obj free-list caching, if you're interested. I don't know what else it
does.

Jeffrey Hobbs

Jul 8, 2002, 11:47:39 AM
"Paul E. Lehmann" wrote:
> "Petasis George" <pet...@iit.demokritos.gr> wrote in message
> news:3D291EA0...@iit.demokritos.gr...
> > The fact that tcl objects are not freed by tcl is not related to the
> > underline
> > memory allocator used. When tcl needs new objects, it allocates memory
...

> I thought of spending a lot of time learning TCL / TK but if there are
> memory problems, I can't see spending a lot of time on a language that has a
> major flaw. What am I missing here? I am not trying to bash the language
> but I don't want to create programs that hog memory and never release.

You're missing that this is a basic facet of many languages, and not a
problem. Perl behaves in much the same way in order to be remotely efficient.
Pool allocation is a common concept, and isn't "wasting your memory".
Furthermore, the FAQ for Windows needs updating, because its statement about
high-water-mark memory allocation never freeing memory is no longer accurate
with the latest core releases.

I would use Tcl first, do some tests and see if you actually have any
problems before worrying about memory usage.

--
Jeff Hobbs The Tcl Guy
Senior Developer http://www.ActiveState.com/
Tcl Support and Productivity Solutions

Darren New

Jul 8, 2002, 11:57:59 AM
Jeffrey Hobbs wrote:
> You're missing that this is a basic facet of many languages, and not a
> problem.

Including malloc() and free() in C from just a few years ago. It wasn't
until relatively recently that sbrk() was actually ever called with a
negative value from C's libraries under UNIX.

--
Darren New
San Diego, CA, USA (PST). Cryptokeys on demand.
** http://home.san.rr.com/dnew/DNResume.html **
** http://images.fbrtech.com/dnew/ **

Proud to live in a country where "fashionable"
is denim, "stone-washed" faded, with rips.

Petasis George

Jul 9, 2002, 4:35:20 AM

I never said that Tcl is leaking memory. Only that once objects are
allocated, they won't be freed until the program exits. This is actually a
design decision, and it is there to make things much faster.

When you run even a small Tcl program, Tcl uses objects internally to hold
everything, from your script and your variables to your procedures. When
Tcl allocates space for storing objects, it allocates the space needed for
storing 100 objects. *All* unused objects are placed on a pool of objects
that are waiting to be used. When a new object is needed, Tcl gets an object
from this pool of unused objects. If this pool gets empty, 100 new objects
are allocated and placed in the pool.

What I was talking about is that when a used object is no longer wanted, it
is not freed. Instead it is placed back in the pool, in order to be re-used.
There are two reasons for this: first, constantly freeing/allocating memory
amounts as small as the size of a Tcl object will fragment the memory of
your machine if the allocator is not good. Second, a lot of time will be
spent on this, as allocating memory is a slow process on many systems. All
of this is done under the assumption that once you needed a number of
objects, you are going to need them again. And this is true in 99.99% of
situations. There are of course some situations where this is not desirable
(e.g. when writing daemons), but you can avoid the problems if you design
the application right.

I thought that you were seeing memory problems, and that is why I mentioned
a potential problem. But now I see that you simply read a FAQ :-)
Well, there are no memory leaks in Tcl. Even what you read was not a memory
leak. Tcl had its reasons for not releasing the memory, mainly because
the memory allocator on earlier versions of Windows was
quite slow. So Tcl simply allocated a piece of memory and
placed a second allocator on top of the native system one.
Believe it or not, this solution was much faster. But this is
not the case any more...

George

Volker Hetzer

Jul 9, 2002, 7:01:06 AM

"Petasis George" <pet...@iit.demokritos.gr> wrote in message news:3D2AA048...@iit.demokritos.gr...

> "Paul E. Lehmann" wrote:
> What I was talking about is that when a used object is no longer wanted, it
> is not freed. Instead it is placed back in the pool, in order to be re-used.
> There are two reasons for this: constantly freeing/allocating memory amounts
> as small as the size of a Tcl object will fragment the memory of your
> machine, if the allocator is not good. Second, a lot of time will be spent
> on this, as allocating memory is a slow process on many systems. All these
> are made under the assumption that once you needed a number of objects
> you are going to need them again. And this is true in 99.99% of the
> situations.
> There are of course some situations where this is not desirable
> (e.g. when writing daemons) but you can avoid the problems if you design
> the application right.
So, how do I design a daemon right?

Greetings!
Volker


Michael Schlenker

Jul 9, 2002, 9:18:58 AM

Does this mean that the Obj refcounting isn't working?

Does the "never released" mean that a program which uses very many objects in
one phase of its execution, say a thousand times more than in normal
operation, keeps using the maximum memory for its whole runtime?

Michael Schlenker

Donal K. Fellows

Jul 9, 2002, 9:25:52 AM
Volker Hetzer wrote:
> So, how do I design a daemon right?

You need a bounded object pool, which Tcl does not currently implement. I've
got a *partial* implementation (i.e. not finished, not portable, and not
debugged) of a high-performance bounded object pool available at:
http://www.cs.man.ac.uk/~fellowsd/tcl/patches/objalloc.patch

Let me know if you manage to get it working enough to run the Tcl test suite
without crashes! :^)

Donal.
--
Donal K. Fellows http://www.cs.man.ac.uk/~fellowsd/ fell...@cs.man.ac.uk
-- There are worse futures than burning in hell. Imagine aeons filled with
rewriting of your apps as WinN**X API will mutate through eternity...
-- Alexander Nosenko <n...@titul.ru>

Jeff Hobbs

Jul 9, 2002, 3:12:07 PM
Michael Schlenker wrote:
> Does this mean, that the Obj refcounting isn't working???

No.

> Does the "never released" mean a program that uses very many objects in
> one phase of it's execution, say a thousand times more than in normal
> operation, keeps using the maximum memory for it's runtime?

Yes, but only for the object containers, not all the memory that was
used by the objects, although the latter depends a bit on the OS malloc
implementation. Such high-water-mark allocators are common, and unlikely
to affect 99.95% of users.

--
Jeff Hobbs The Tcl Guy
Senior Developer http://www.ActiveState.com/
Tcl Support and Productivity Solutions

Join us in Sept. for Tcl'2002: http://www.tcl.tk/community/tcl2002/

George A. Howlett

Jul 9, 2002, 11:42:16 PM
Petasis George <pet...@iit.demokritos.gr> wrote:

> I thought that you were seeing memory problems, and that is why I
> mentioned a potential problem. But now I saw that you simply read a
> FAQ :-) Well, no memory leaks in tcl. Even what you read was not a
> memory leak. Tcl had its reason for not releasing the memory,
> mainly because the memory allocator on earlier versions of windows
> was quite slow. So, tcl simply allocated a piece of memory and
> placed a second allocator on top of the system native one. Believe
> it or not, this solution was much faster. But this is not the case
> any more...

Without having really profiled this, I think it's reasonable to expect that
the Tcl_Obj pool allocator is still many times faster than malloc and
free, regardless of the system.

The pool allocator calls malloc only once for each 100 Tcl_Obj
requests. The simple bookkeeping that TclNewObj performs is both
inlined and much faster than a function call to malloc. Since
Tcl_Objs aren't released back to the heap, freeing is just a matter of
chaining them to a free list. There's no requirement to merge freed
segments, since all requests to the pool are the same size. Pool
allocation may also provide better locality for Tcl_Objs.

Having said all that, I agree that this isn't always the best
approach. It's okay when you have a couple hundred or so Tcl_Objs in
use. But if you start using Tcl_Objs as a basis for bigger data
structures (it's easy to use 150,000 in a BLT tree, for example) it can
stop being such a good idea. Even if you release all the Tcl_Objs
(refCount == 0), the memory will always be earmarked for Tcl_Objs,
regardless of whether you create any more Tcl_Objs. The VM system will
typically page out unused Tcl_Objs.

I had an idea that I haven't had time to play with yet. Let's say we
pre-allocate a pool of 1000 Tcl_Objs (I'm guessing at the number but
it shouldn't be too hard to figure out what a typical working set is).

Any Tcl_Obj allocated beyond that amount would be malloc'ed off the
heap. [We can also add calls to the Tcl C API that indicate that
Tcl_Obj requests should specifically come off the heap]. To test
whether an object is pool or heap allocated, you just have to look at
its address. If it's greater than the last address in the pool, it's
on the heap.

This is a little tricky (hacky) because 1) we're trying to be binary
compatible with the current Tcl_Obj structure definition and 2) it
depends upon the memory allocator and the organization of the heap.
If the pool is malloc'ed, then one has to be careful that new Tcl_Obj
allocations are addressed higher. If the pool is statically
allocated, then one has to make sure that the heap is located after
static data storage.

Right now what's really needed is a better analysis of the
costs/benefits of the Tcl_Obj pool allocator. If switching to
malloc/free results in a small degradation (<2%) of overall
performance, then we can question why we still bother optimizing
Tcl_Obj allocations.

--gah

Jeffrey Hobbs

Jul 10, 2002, 1:36:14 AM
"George A. Howlett" wrote:
> Petasis George <pet...@iit.demokritos.gr> wrote:
...

> > memory leak. Tcl had its reason for not releasing the memory,
> > mainly because the memory allocator on earlier versions of windows
> > was quite slow. So, tcl simply allocated a piece of memory and
> > placed a second allocator on top of the system native one. Believe
> > it or not, this solution was much faster. But this is not the case
> > any more...
>
> Without having really profiled this, I think it's reasonable that the
> Tcl_Obj pool allocator is still many times faster than malloc and
> free, regardless of the system.

I have profiled it, and it is a noticeable speed difference (don't
have the numbers right here, but I believe it was in the 15% range).

> I had an idea that I haven't had time to play with yet. Let's say we
> pre-allocate a pool of 1000 Tcl_Objs (I'm guessing at the number but
> it shouldn't be too hard to figure out what a typical working set is).
>
> Any Tcl_Obj allocated beyond that amount would be malloc'ed off the
> heap. [We can also add calls to the Tcl C API that indicate that

...

Donal Fellows has been toying with a similar idea, but hasn't yet
perfected it.

Donal K. Fellows

Jul 10, 2002, 10:45:07 AM
"George A. Howlett" wrote:
> I had an idea that I haven't had time to play with yet. Let's say we
> pre-allocate a pool of 1000 Tcl_Objs (I'm guessing at the number but
> it shouldn't be too hard to figure out what a typical working set is).

If we were serious about this, we'd measure instead of guessing. And make it
something that can be specified in an argument to the configure script.
Possibly also provide a pool mode that does the measurement automatically, so
you can just build a special version of Tcl and get data spat out that lets you
work out what the pool size should be.

> Any Tcl_Obj allocated beyond that amount would be malloc'ed off the
> heap. [We can also add calls to the Tcl C API that indicate that
> Tcl_Obj requests should specifically come off the heap]. To test
> whether an object is pool or heap allocated, you just have to look at
> its address. If it's greater than the last address in the pool, it's
> on the heap.
>
> This is a little tricky (hacky) because 1) we're trying to be binary
> compatible with the current Tcl_Obj structure definition and 2) it
> depends upon the memory allocator and the organization of the heap.
> It the pool is malloc'ed, then one has to be careful that new Tcl_Obj
> allocations are addressed higher. If the pool is statically
> allocated, then one has to make sure that the heap is located after
> static data storage.

So use two comparisons to check if you are before the start or after the end of
the pool (which must be contiguous, but that's not a practical problem.) The
really tricky bit is that this scheme now has a magic value to be tuned; the
base pool size.

A hackier way to do this is to allocate pages of memory directly from the OS's
VM subsystem, and then pack each one with as many objects as we can. Now,
because our basic memory unit is guaranteed to be page-aligned, we can (a) put a
structure at the start of it to hold pool management info[*], (b) find that
structure from the address of the object just using fairly basic address
arithmetic (i.e. mask off the low bits), (c) when the page no longer has any
objs in use in it, we can dispose of it. To prevent thrashing of the VM, it's
probably a good idea to only throw a page away when you have another completely
free page about (otherwise a small alloc-dealloc loop could be surprisingly and
annoyingly slow.) The biggest advantage of this scheme is that it should both
be largely self-tuning, and faster than relying on normal malloc(). And some
libc's don't release memory back to the OS on free(), whereas this scheme should
sidestep that mess.

I've never had the time to finish getting an implementation of this into a fully
fit state.

Donal.
[* I think there's just enough room on 32-bit systems for that structure plus
170 Tcl_Objs in a single 4kB page, while keeping everything 8-byte aligned.
I don't remember the figures for 64-bit platforms. ]

-- A large ASCII art .sig is the Usenet equivalent of walking around in
public with your asscrack sticking out past the top of your sweatpants.
You be the judge. -- Mike Hoye <mh...@prince.carleton.ca>

Volker Hetzer

Jul 10, 2002, 11:29:14 AM

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D2AE460...@cs.man.ac.uk...

> Volker Hetzer wrote:
> > So, how do I design a daemon right?
>
> You need a bounded object pool, which Tcl does not currently implement. I've
> got a *partial* implementation (i.e. not finished, not portable, and not
> debugged) of a high-performance bounded object pool available at:
> http://www.cs.man.ac.uk/~fellowsd/tcl/patches/objalloc.patch
>
> Let me know if you manage to get it working enough to run the Tcl test suite
> without crashes! :^)
A tall order...

Greetings!
Volker


Volker Hetzer

Jul 10, 2002, 11:29:34 AM

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D2C4873...@cs.man.ac.uk...

> A hackier way to do this is to allocate pages of memory directly from the OS's
> VM subsystem, and then pack each one with as many objects as we can. Now,
> because our basic memory unit is guaranteed to be page-aligned, we can (a) put a
> structure at the start of it to hold pool management info[*], (b) find that
> structure from the address of the object just using fairly basic address
> arithmetic (i.e. mask off the low bits), (c) when the page no longer has any
> objs in use in it, we can dispose of it. To prevent thrashing of the VM, it's
> probably a good idea to only throw a page away when you have another completely
> free page about (otherwise a small alloc-dealloc loop could be surprisingly and
> annoyingly slow.) The biggest advantage of this scheme is that it should both
> be largely self-tuning, and faster than relying on normal malloc(). And some
> libc's don't release memory back to the OS on free(), whereas this scheme should
> sidestep that mess.
Since Tcl_Obj's have the nice property that they can be copied, why not provide
some kind of garbage collection command that compacts all the objects and then
gives back the unused segments to the OS?


Greetings!
Volker


Joe English

Jul 10, 2002, 12:27:52 PM
George A. Howlett wrote:
>
> [ Re: the Tcl_Obj pool allocator ]

>
>Having said all that, I agree that this isn't always the best
>approach. It's okay when you have a couple hundred or so Tcl_Objs in
>use. But if you start using Tcl_Objs as a basis for bigger data
>structures (it's easy to use 150,000 in a BLT tree for example) it can
>stop being such a good idea. Even if you release all the Tcl_Objs
>(refCount == 0), the memory will always be earmarked for Tcl_Objs
>regardless of whether you create any more Tcl_Objs.

It's a near-certainty that the program *will* create
more Tcl_Objs :-) It might not reach the same high-water
mark again of course.

Using malloc()/free() instead of the custom pool-allocator might
not make a difference at any rate, except for incurring extra
time and space costs from malloc() overhead. If the system
allocator is based on ptmalloc (used in Linux) or any other
descendant of Doug Lea's malloc (IRIX when using -lmalloc,
probably others), then the memory will only be recycled for
other Tcl_Objs or Tcl_Obj-sized structures anyway. In these
implementations, blocks smaller than M_MAXFAST bytes are kept
in "fastbins" and never coalesced. (I suspect that any memory
allocator optimized for C++ usage patterns will work pretty
much the same way.)

>Right now what's really needed is a better analysis of the
>costs/benefits of the Tcl_Obj pool allocator. If switching to
>malloc/free results in a small degradation (<2%) of overall
>performance, then we can question why we still bother optimizing
>Tcl_Obj allocations.

Be sure to profile it on all supported platforms too.
This might result in a small degradation on Linux, where malloc()
is tuned for speed when allocating small blocks, but do much worse
on systems whose malloc()s do more aggressive coalescing.


--Joe English

jeng...@flightlab.com

George A. Howlett

Jul 10, 2002, 3:08:50 PM
Donal K. Fellows <fell...@cs.man.ac.uk> wrote:

> If we were serious about this, we'd measure instead of guessing.
> And make it something that can be specified in an argument to the
> configure script. Possibly also provide a pool mode that does the
> measurement automatically, so you can just build a special version
> of Tcl and get data spat out that lets you work out what the pool
> size should be.

Is pool allocation worth it? Does it give a big enough bang to demand
its use? What percentage of an overall application's performance is
given to Tcl_Obj allocation and deallocation, and is it enough to
warrant a fast pool allocator? The trade-off again is that if you
simply malloc and free Tcl_Objs you can reuse their memory for other
types of things.

On the other hand, I'd like to see examples where the allocation
method performs badly. My own anecdotal data is that I only get
bitten when I try to deallocate lots of Tcl_Objs. This could simply
be because I'm touching all the Tcl_Objs at once (to decrement their
reference counts).

All the other approaches are complicated in various ways. Before
going down that road I'd like to understand what problem I'm solving
first.

--gah

Jeff Hobbs

Jul 10, 2002, 4:19:49 PM
"George A. Howlett" wrote:
> Donal K. Fellows <fell...@cs.man.ac.uk> wrote:
> > If we were serious about this, we'd measure instead of guessing.
...

> Is pool allocation worth it? Does it give a big enough bang to demand
> its use? What percentage of an overall application's performance is
> given to Tcl_Obj allocation and deallocation, and is it enough to
> warrant a fast pool allocator? The trade-off again is that if you

OK, I decided to do these numbers again to be exact, and here they are:

Benchmark 1:8.4b1 /home/jeffh/install/linux-ix86/bin/tclsh8.4-single
bbccdeeefghkllmmmmpprrssstuvw 00:07:09 elapsed
Benchmark 2:8.4b1 /home/jeffh/install/linux-ix86/bin/tclsh8.4
bbccdeeefghkllmmmmpprrssstuvw 00:06:27 elapsed

So that's 429 secs vs. 387 secs, or about a 10% difference in favor of
pool allocation. This is over the somewhat elaborate set of cases in
tclbench. Things like regexps aren't much affected, as they don't use
Tcl_Obj's except on the in/out. List operations certainly get hit, and
you can basically say that the more intensive the use of Tcl_Obj's was
in an operation, the faster it went when using pool allocation.

I'd be glad to pass on my full results to interested parties, but at
500 tclbench cases now, it makes for too much to post here.

--
Jeff Hobbs The Tcl Guy
Senior Developer http://www.ActiveState.com/
Tcl Support and Productivity Solutions

George A. Howlett

Jul 11, 2002, 3:05:48 AM
Jeff Hobbs <Je...@activestate.com> wrote:

> OK, I decided to do these numbers again to be exact, and here they are:

> Benchmark 1:8.4b1 /home/jeffh/install/linux-ix86/bin/tclsh8.4-single
> bbccdeeefghkllmmmmpprrssstuvw 00:07:09 elapsed
> Benchmark 2:8.4b1 /home/jeffh/install/linux-ix86/bin/tclsh8.4
> bbccdeeefghkllmmmmpprrssstuvw 00:06:27 elapsed

> So that's 429 secs vs. 387 secs, or about a 10% difference in favor of
> pool allocation. This is over the somewhat elaborate set of cases in
> tclbench. Things like regexps aren't much affected, as they don't use
> Tcl_Obj's except on the in/out. List operations certainly get hit, and
> you can basically say that the more intensive the use of Tcl_Obj's was
> in an operation, the faster it went when using pool allocation.

> I'd be glad to pass on my full results to interested parties, but at
> 500 tclbench cases now, it makes for too much to post here.

Thanks Jeff. That's pretty interesting. I would have thought the
margin would be bigger, since the Tcl test bench should exercise
allocation/deallocation of Tcl_Objs a lot more than, let's say, a Tk
application.

I'm interested in how tclsh8.4-single was compiled. Was this on Windows?
Unix? For example, if you simply define TCL_MEM_DEBUG (rather than
editing the TclNewObj macro), you also turn on all the bookkeeping for
the simple Tcl memory debugger. That's not really an apples-to-apples
test. Also, the Win32 memory allocator is infamous for its pokiness.
How do the numbers compare under Unix?

--gah

Donal K. Fellows

Jul 11, 2002, 4:32:08 AM
Volker Hetzer wrote:
> Since Tcl_Obj's have the nice property that they can be copied, why not
> provide some kind of garbage collection command that compacts all the
> objects and then gives back the unused segments to the OS?

Do you want to write the code that tracks down all pointers to a particular
object? Or perhaps we should indirect everything through a separate table?
(Compacting GC is harder than simple GC, and Tcl doesn't really implement
either, though the refcounting scheme for objects is an approximation to the
latter.)

Donal.

"A while back I started adding movie reviews to the SEE ALSO sections of some
documentation I wrote. [...] As far as I know, those movie reviews are still
in the product. :-)" -- Bryan Oakley

Volker Hetzer

Jul 11, 2002, 5:01:29 AM

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D2D4288...@cs.man.ac.uk...

> Volker Hetzer wrote:
> > Since Tcl_Obj's have the nice property that they can be copied, why not
> > provide some kind of garbage collection command that compacts all the
> > objects and then gives back the unused segments to the OS?
>
> Do you want to write the code that tracks down all pointers to a particular
> object? Or perhaps we should indirect everything through a separate table?
> (Compacting GC is harder than simple GC, and Tcl doesn't really implement
> either, though the refcounting scheme for objects is an approximation to the
> latter.)
You are right. The fact that they are copyable is only half the battle. They
(and their pointers) also have to be findable, which is particularly hard in
C extensions. I didn't think of that.

Greetings!
Volker


Jeff Hobbs

Jul 11, 2002, 5:55:40 PM
"George A. Howlett" wrote:

> Jeff Hobbs <Je...@activestate.com> wrote:
> > So that's 429 secs vs. 387 secs, or about a 10% difference in favor of
> > pool allocation. This is over the somewhat elaborate set of cases in
> > tclbench. Things like regexps aren't much affected, as they don't use
> > Tcl_Obj's except on the in/out. List operations certainly get hit, and
> > you can basically say that the more intensive the use of Tcl_Obj's was
> > in an operation, the faster it went when using pool-allocation.

> Thanks Jeff. That's pretty interesting. I would have thought the
> margin would be bigger, since the Tcl test bench should exercise
> allocation/deallocation of Tcl_Objs a lot more than, let's say, a Tk
> application.

Well, it's an average over the whole 500 tests, so regexp benchmarks and
large single-object allocations (like 1MB objs) aren't affected much,
whereas the Tcl_Obj-sensitive tests were slowed by well over 10% to
produce that average. Examples: base64 was 16-21% slower; eval of commands
with mixed lists 50% slower; list-shuffling ops up to 39% slower; large
repeated list appends up to 50% slower; string reverse code ~30% slower ...

> I'm interested how tclsh8.4-single was compiled?

This was on SuSE 7.3, a P3-550 Xeon. Very controlled environment. I
added -DPURIFY to the make options for single-obj allocation. I added
this option for purify testing - it only changes the pool allocation to
single-obj alloc/free as that was necessary to find leaks properly with
purify. It also has one other change - doing a memset before realpath -
but that is a non-issue. Otherwise the builds were standard -O builds.
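The pool-vs-single-object difference being benchmarked can be illustrated with a minimal free-list pool in C -- a hypothetical sketch, not Tcl's actual allocator. Freed blocks go onto a list for reuse instead of being handed back to free(), which makes allocation a pointer pop but keeps the process at its high-water mark:

```c
#include <stdlib.h>

/* Hypothetical free-list pool: freed blocks are kept for reuse
 * instead of being returned via free().  Allocation from the pool
 * is a pointer pop; the footprint stays at its high-water mark,
 * which is the trade-off discussed in this thread. */
typedef union Block {
    union Block *next;   /* valid while the block sits on the free list */
    double payload;      /* forces worst-case alignment for reuse */
} Block;

Block *freeList = NULL;

void *PoolAlloc(void) {
    if (freeList != NULL) {
        Block *b = freeList;     /* fast path: pop a recycled block */
        freeList = b->next;
        return b;
    }
    return malloc(sizeof(Block)); /* slow path: grow the pool */
}

void PoolFree(void *p) {
    Block *b = p;
    b->next = freeList;           /* push back for reuse, never free() */
    freeList = b;
}
```

With -DPURIFY-style single-object allocation, PoolAlloc/PoolFree would instead call straight through to malloc/free, which is what makes leaks visible to tools like Purify.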

Petasis George

unread,
Jul 12, 2002, 3:34:01 AM7/12/02
to

In tcl? By spawning an external process (i.e. a new tclsh) to do the job.
Doing the job inside the daemon *may* be dangerous (depends on the
task). By "dangerous" I mean reaching a memory consumption that will
remain occupied until the daemon is restarted...

George
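On POSIX systems, the spawn-a-worker approach George describes can be sketched in C (the helper names are hypothetical; the point is that the OS reclaims all of the child's memory the moment it exits, so the long-running parent never carries the job's high-water mark):

```c
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a memory-hungry job in a child process.  When the child exits,
 * the OS reclaims every page it touched, so the parent daemon's
 * footprint is unaffected by the job's peak usage.  Returns the
 * child's exit status, or -1 on error. */
int RunInChild(int (*job)(void)) {
    pid_t pid = fork();
    if (pid < 0) {
        return -1;
    }
    if (pid == 0) {
        _exit(job());                 /* child: do the work, then vanish */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0) {
        return -1;
    }
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

/* Example job: allocate a large scratch buffer, touch it, report OK. */
int BigJob(void) {
    size_t n = 10 * 1024 * 1024;
    char *scratch = malloc(n);
    if (scratch == NULL) {
        return 1;
    }
    scratch[0] = scratch[n - 1] = 0;
    free(scratch);
    return 0;
}
```

Spawning a fresh tclsh via exec, as suggested above, achieves the same effect one level up: the interpreter's whole heap dies with the process.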

Volker Hetzer

unread,
Jul 12, 2002, 7:28:08 AM7/12/02
to

"Petasis George" <pet...@iit.demokritos.gr> wrote in message news:3D2E8669...@iit.demokritos.gr...

> Volker Hetzer wrote:
> > So, how do I design a daemon right?
> >
>
> In tcl? By spawning an external process (i.e. a new tclsh) to do the job.
> Doing the job inside the daemon *may* be dangerous (depends on the
> task). By "dangerous" I mean reaching a memory consumption that will
> remain occupied until the daemon is restarted...

That's okay for daemons that do work. But consider daemons that are
mainly used for communication, locking and shared data.
In that case the one-process-per-client doesn't really work unless
the processes communicate again, recreating exactly the problem the
exec'ing was supposed to avoid.

Here's an example:
A daemon is used to store global data. Sometimes a hundred sessions
are open, sometimes two. The actions of the daemon are simple,
namely semaphore management. Getting, freeing and taking care of
the numbering of semaphores. Any client can create semaphores and
destroy them.
Is there any way to avoid the memory usage staying at the peak
of max sessions and max semaphores?

Greetings!
Volker


Joe English

unread,
Jul 12, 2002, 11:03:07 AM7/12/02
to
Volker Hetzer wrote:
>"Petasis George" wrote

>> Volker Hetzer wrote:
>> > So, how do I design a daemon right?
>> In tcl? By spawning an external process (i.e. a new tclsh) to do the job.
>> Doing the job inside the daemon *may* be dangerous (depends on the
>> task). By "dangerous" I mean reaching a memory consumption that will
>> remain occupied until the daemon is restarted...
>
>That's okay for daemons that do work. But consider daemons that are
>mainly used for communication, locking and shared data.
>In that case the one-process-per-client doesn't really work unless
>the processes communicate again, recreating exactly the problem the
>exec'ing was supposed to avoid. [...]

>Is there any way to avoid the memory usage staying at the peak
>of max sessions and max semaphores?

I think that this thread overstates the severity of
the "problem".

Tcl 8.3.4 does not have any known memory leaks.
(There may be some unknown ones, naturally, but
AFAIK nobody has encountered them yet.)

Many people have deployed long-running server applications
written in Tcl, and yet -- if the SourceForge bug tracker
is any indication -- there are no problems with Tcl's memory
usage. Failure to return memory to the malloc/free pool, or
to release unused pages to the OS, simply does not appear
to be a problem in practice. And there are clear performance
benefits to Tcl's custom pool allocator.

--Joe English

jeng...@flightlab.com

Volker Hetzer

unread,
Jul 12, 2002, 11:44:08 AM7/12/02
to

"Joe English" <jeng...@flightlab.com> wrote in message news:agmr3...@enews1.newsguy.com...

> Volker Hetzer wrote:
> >That's okay for daemons that do work. But consider daemons that are
> >mainly used for communication, locking and shared data.
> >In that case the one-process-per-client doesn't really work unless
> >the processes communicate again, recreating exactly the problem the
> >exec'ing was supposed to avoid. [...]
> >Is there any way to avoid the memory usage staying at the peak
> >of max sessions and max semaphores?
> Tcl 8.3.4 does not have any known memory leaks.
No one disputes that.

> Many people have deployed long-running server applications
> written in Tcl, and yet -- if the SourceForge bug tracker
> is any indication -- there are no problems with Tcl's memory
> usage.

I don't know enough of this bug tracker to comment on this.
The problem I *do* have is, how does tcl recover from
a high load situation?

> Failure to return memory to the malloc/free pool, or
> to release unused pages to the OS, simply does not appear
> to be a problem in practice. And there are clear performance
> benefits to Tcl's custom pool allocator.

I agree that most scripts are not daemons and that, furthermore,
many daemons are designed to have one process per session.

Yet, because of the safety of tcl (crashwise) it becomes increasingly
popular to write long running applications with tcl. That also increases
the risk of people arguing that it's not a good idea to have a process
consuming 50 or 100MB running for a long time. Possibly even on a
machine with real users sitting in front of it. Which is bad for tcl.

That is why I think that it's reasonable to try to find ways to make
tcl more daemon-friendly, if possible in a way that is easy to implement.


Maybe it's possible to make the allocation scheme namespace specific?
Or interpreter specific or array or scope specific?

OTOH, making it a build option is probably the easiest way.

Greetings!
Volker


Darren New

unread,
Jul 12, 2002, 11:56:31 AM7/12/02
to
Joe English wrote:
> Failure to return memory to the malloc/free pool, or
> to release unused pages to the OS, simply does not appear
> to be a problem in practice.

Or at least, to the extent that it's a problem, one "works around" it. For
example, I had a multimedia database system. Folks would email in a 300-meg
video file, and I would store it in the right place. The system was written
such that as the message arrived, it was processed by one Tcl script and
stored in a file in a "to do" directory. This script exited at the end of
the message. A cron job fired once every minute or so that swept thru the
directory of files, decoding each one, generating responses, moving the data
to the appropriate directories, etc. It was originally designed this way for
reliability, the ability to "turn off" processing without the users seeing
anything more than a delay before they got their answers, etc. It had the
side effect of ditching the 300+ meg core image. Eventually, I had to limit
the size of incoming emails, as people started posting gigabyte files.

The system also used a semaphore-like manager, to keep track of who paid,
what was running next, and things like that. It ran continuously, also
restarting from cron if it crashed or the machine crashed or whatever. It
sometimes got big, but since it was running continuously, if it got big, it
stayed big, but could be expected to be big again. I.e., with virtual
memory, the idea that a 200-client server running with only 2 clients at the
moment should be smaller is bogus. In another ten minutes, you might need
those 200 connections again; it's part of the normal fluctuation of
processing.

The only real problem is when an image gets so large it starts interfering
with other routines by causing thrashing or running you out of VM. If this
happens, you code around it, restarting the process when it needs to get
significantly smaller. For example, on my semaphore thingie, if nobody was
connected, the server would save state and restart itself, again just more
for 7x24 reliability than anything. If the client tried to send a command
and the socket threw an error, the client library would repeatedly try to
reconnect, pausing a few seconds between each try, until it either
reconnected or timed out. If the server died and cron restarted it, the
clients would for the most part never notice. It was a little harder, but
that's what happens with high-reliability server software. :-)

I think the real question you need to ask is, if this is a long-running
server process, why do you want it to get smaller? If you shrink the space
from what it was at the high-water mark, it just means you have a chance of
not being able to grow it next time you need that much space, so you'll just
fail intermittently under load.

Volker Hetzer

unread,
Jul 12, 2002, 12:22:33 PM7/12/02
to

"Darren New" <dn...@san.rr.com> wrote in message news:3D2EFC50...@san.rr.com...

> The system also used a semaphore-like manager, to keep track of who paid,
> what was running next, and things like that. It ran continuously, also
> restarting from cron if it crashed or the machine crashed or whatever. It
> sometimes got big, but since it was running continuously, if it got big, it
> stayed big, but could be expected to be big again. I.e., with virtual
> memory, the idea that a 200-client server running with only 2 clients at the
> moment should be smaller is bogus. In another ten minutes, you might need
> those 200 connections again; it's part of the normal fluctuation of
> processing.
If it's a dedicated machine it doesn't matter that much, otherwise
you have to explain to the user why the system is sluggish even if currently
there's no load.

> The only real problem is when an image gets so large it starts interfering
> with other routines by causing thrashing or running you out of VM. If this
> happens, you code around it, restarting the process when it needs to get
> significantly smaller.

May sometimes work, but it's ugly and fails in case of the base load being
provided by long term sessions.

> I think the real question you need to ask is, if this is a long-running
> server process, why do you want it to get smaller? If you shrink the space
> from what it was at the high-water mark, it just means you have a chance of
> not being able to grow it next time you need that much space, so you'll just
> fail intermittently under load.

Why does one not have the chance to reallocate memory again from the OS
if it's needed?

Greetings!
Volker


Phil Ehrens

unread,
Jul 12, 2002, 12:35:21 PM7/12/02
to
I will (and this is a rare occurrence) put on my hat
(by attaching my prestigious and awe-inspiring sig)
and tell the world that we run MANY Tcl daemons for
MONTHS and that some of them allocate GIGABYTES of
memory and that Tcl 8.3.4 has no memory leaks either
on Linux or Solaris that I am aware of.

We have memory leaks in our C++ based extensions, and
the Solaris and Linux memory management methods can
vary wildly... but Tcl is as stable and reliable as
the U.S. economy... er, how about a '63 Chevy with a
283 V8? Yeah, as stable and reliable as a '63 Chevy
with a 283 V8.

And, by god, we use threads... but that's a whole
other can of frijoles.

;^)

Joe English wrote:
>
> I think that this thread overstates the severity of
> the "problem".
>
> Tcl 8.3.4 does not have any known memory leaks.
> (There may be some unknown ones, naturally, but
> AFAIK nobody has encountered them yet.)
>
> Many people have deployed long-running server applications
> written in Tcl, and yet -- if the SourceForge bug tracker
> is any indication -- there are no problems with Tcl's memory
> usage. Failure to return memory to the malloc/free pool, or
> to release unused pages to the OS, simply does not appear
> to be a problem in practice. And there are clear performance
> benefits to Tcl's custom pool allocator.

Phil
--
Phil Ehrens <peh...@ligo.caltech.edu>| Fun stuff:
The LIGO Laboratory, MS 18-34 | http://www.ralphmag.org
California Institute of Technology | http://www.yellow5.com
1200 East California Blvd. | http://www.total.net/~fishnet/
Pasadena, CA 91125 USA | http://slashdot.org
Phone:(626)395-8518 Fax:(626)793-9744 | http://kame56.homepage.com

Cameron Laird

unread,
Jul 12, 2002, 1:15:06 PM7/12/02
to
In article <agn0g9$9...@gap.cco.caltech.edu>, Phil Ehrens <-@-> wrote:
.
.
.

>283 V8? Yeah, as stable and reliable as a '63 Chevy
>with a 283 V8.
.
.
.
Which carburetor?

And be thankful you're not using original technology for the battery.
--

Cameron Laird <Cam...@Lairds.com>
Business: http://www.Phaseit.net
Personal: http://starbase.neosoft.com/~claird/home.html

Darren New

unread,
Jul 12, 2002, 1:10:12 PM7/12/02
to
Volker Hetzer wrote:
> If it's a dedicated machine it doesn't matter that much, otherwise
> you have to explain to the user why the system is sluggish even if currently
> there's no load.

If there's no load, why would it be sluggish? Don't you have VM on your
machine?

> May sometimes work, but it's ugly and fails in case of the base load being
> provided by long term sessions.

This is true. That's why I called it a "work-around". :-) I've never tried
to work on systems running both large reliable servers and random user jobs
at the same time. Well, certainly not Windows or UNIX machines. Real
computers don't have a problem with that, tho.



> Why does one not have the chance to reallocate memory again from the OS
> if it's needed?

If the memory is available, why does it matter whether you've freed it? If
the memory isn't available, then trying to allocate it will fail.

Volker Hetzer

unread,
Jul 15, 2002, 6:40:55 AM7/15/02
to

"Darren New" <dn...@san.rr.com> wrote in message news:3D2F0D94...@san.rr.com...

> Volker Hetzer wrote:
> > If it's a dedicated machine it doesn't matter that much, otherwise
> > you have to explain to the user why the system is sluggish even if currently
> > there's no load.
>
> If there's no load, why would it be sluggish? Don't you have VM on your
> machine?
I meant, no load from the daemon. And I have indeed seen an HP brought to
its knees by a tclshell which allocated about 500MB of memory, to such an extent
that vi users were complaining. I can tell them that it's due to heavy load
and that it's going to get better in a few minutes, but I can't tell them that
it'll happen at random times *and will persist until they complain*.

> > May sometimes work, but it's ugly and fails in case of the base load being
> > provided by long term sessions.
>
> This is true. That's why I called it a "work-around". :-) I've never tried
> to work on systems running both large reliable servers and random user jobs
> at the same time. Well, certainly not Windows or UNIX machines. Real
> computers don't have a problem with that, tho.

Well, tcl was not intended for large systems, and the way this happens is
when a system is designed small and then works so well that people
use it quite a lot and to a much larger extent than anticipated. At which point
you are torn between a redesign with a high risk (because of the high acceptance)
and leaving well alone, risking the load complaints. A tcl feature which could
clean out unused memory, even partially, would be a big advantage in this
situation.

> > Why does one not have the chance to reallocate memory again from the OS
> > if it's needed?
> If the memory is available, why does it matter whether you've freed it? If
> the memory isn't available, then trying to allocate it will fail.

Maybe HPs swap differently, but it seems to make a real difference whether
500MB are in the OS's hands or in the app's hands.

Greetings!
Volker

Donal K. Fellows

unread,
Jul 15, 2002, 4:21:04 AM7/15/02
to
Volker Hetzer wrote:
> Why does one not have the chance to reallocate memory again from the OS
> if it's needed?

If you've released memory back to the OS, it might be being used for something
else at the time you want to get it back again, at which point you've no option
but to abort().

If you're running out of memory, then you're running out of memory. All you can
do is shuffle around which process gets the failure. The only case where
releasing memory back to the OS makes real sense is when you have a process that
takes a lot of memory to initialise, but thereafter requires very little.

-- Thanks, but I only sleep with sentient lifeforms. Anything else is merely
a less sanitary form of masturbation.
-- Alistair J. R. Young <avatar...@arkane.demon.co.uk>

Donal K. Fellows

unread,
Jul 15, 2002, 4:14:38 AM7/15/02
to
Volker Hetzer wrote:
> That is why I think that it's reasonable to try to find ways to make
> tcl more daemon-friendly, if possible in a way that is easy to implement.

The key problem is that operating systems allocate memory to processes in units
of hardware memory pages (because that's how memory managers like to operate)
which are 4kB long (on all 32-bit architectures I've heard of; 64-bit
architectures tend to use larger pages.) This is massively too large for most
things you need to allocate, so you need to use a malloc library, like what
comes with your libc. These vary *very* widely, but many of them never return
memory to the OS (they're tuned for short-lived programs that do their duty and
then exit, when the OS reclaims everything.)
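For comparison, glibc's allocator does expose an explicit release call, malloc_trim(), which asks for unused pages at the top of the heap to be handed back to the OS. A minimal glibc-specific sketch (hypothetical function name; whether any pages are actually returned depends on the allocator's internal layout, so the call can legitimately report that nothing was released):

```c
#include <malloc.h>    /* glibc-specific: malloc_trim() */
#include <stdlib.h>
#include <string.h>

/* glibc-specific sketch: after a burst of small allocations is
 * freed, malloc_trim() asks the allocator to return unused heap
 * pages to the OS.  Many mallocs of this era offered no such call,
 * which is why a process tended to stay at its high-water mark.
 * Returns malloc_trim's result (1 if pages were released, else 0),
 * or -1 if allocation failed. */
int AllocateFreeAndTrim(void) {
    enum { N = 4096, SZ = 1024 };
    char *blocks[N];
    for (int i = 0; i < N; i++) {        /* burst: ~4MB in small blocks */
        blocks[i] = malloc(SZ);
        if (blocks[i] == NULL) {
            return -1;
        }
        memset(blocks[i], 0, SZ);        /* touch the pages */
    }
    for (int i = 0; i < N; i++) {        /* free it all again */
        free(blocks[i]);
    }
    return malloc_trim(0);               /* hint: give pages back now */
}
```

Without such a hint, the freed blocks stay on the allocator's free lists, available for reuse by the process but invisible to the rest of the system.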

> Maybe it's possible to make the allocation scheme namespace specific?
> Or interpreter specific or array or scope specific?

You wouldn't *believe* how much API that would change! :^/ And I don't think
you'd gain from it either.

> OTOH, making it a build option is probably the easiest way.

Absolutely!

Donal K. Fellows

unread,
Jul 15, 2002, 4:59:03 AM7/15/02
to
Joe English wrote:
> Tcl 8.3.4 does not have any known memory leaks.

Tk, OTOH, is not quite as perfect...

Volker Hetzer

unread,
Jul 16, 2002, 8:25:54 AM7/16/02
to

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D3285F0...@cs.man.ac.uk...

> Volker Hetzer wrote:
> > Why does one not have the chance to reallocate memory again from the OS
> > if it's needed?
>
> If you've released memory back to the OS, it might be being used for something
> else at the time you want to get it back again, at which point you've no option
> but to abort().
When it's out, you're right.
However, I was talking about the case where memory isn't out, but is allocated
(and used once) to such an extent that the system becomes slow. So the alternatives
are to have a slow system sometimes or to have a slow system always after the
first peak.

Greetings!
Volker

lvi...@yahoo.com

unread,
Jul 16, 2002, 9:19:33 AM7/16/02
to

According to Volker Hetzer <volker...@ieee.org>:
:Well, tcl was not intended for large systems and the way this happens is

:when a system is designed small and then works so well that people
:use it quite a lot and to a much larger extent than anticipated.

I'm uncertain what you mean when you say "tcl was not intended for large
systems..."

Not intended by whom? And by large systems do you mean "computers with
lots of memory, disk, whatever" or do you mean "software systems larger
than the million lines plus of code regularly mentioned in this newsgroup
by Gerald Lester and others"?


--
Tcl'2002 Sept 16, 2002, Vancouver, BC http://www.tcl.tk/community/tcl2002/
Even if explicitly stated to the contrary, nothing in this posting
should be construed as representing my employer's opinions.
<URL: mailto:lvi...@yahoo.com > <URL: http://www.purl.org/NET/lvirden/ >

Volker Hetzer

unread,
Jul 16, 2002, 10:20:35 AM7/16/02
to

<lvi...@yahoo.com> wrote in message news:ah16h5$ith$1...@srv38.cas.org...

>
> According to Volker Hetzer <volker...@ieee.org>:
> :Well, tcl was not intended for large systems and the way this happens is
> :when a system is designed small and then works so well that people
> :use it quite a lot and to a much larger extend than anticipated.
>
> I'm uncertain what you mean when you say "tcl was not intended for large
> systems..."
I mean, applications where it has to handle lots of data. See the overhead for
the hash tables. I don't remember the exact figure, but it was something like
forty bytes per key. Few in-place operations (lappend aside).
And the memory issue.

> Not intended by whom?
John O. I derive that from the original designation as a glue language.

OTOH, tcl clearly has gone a *looooong* way towards a large-scale
environment, especially on the language side. Namespaces and sub-interpreters
are clearly advantageous when it comes to collaborative
development and isolating modules. So is the thread stuff. In fact, I'd say
that the language per se is 99% ok for large systems, the rest being the
OO stuff (which personally I don't need).

But things like the memory system or the hash table stuff may be on the underside
of the language, yet IMHO these tradeoffs should be available too for
developers who push tcl to its limits or, rather, push the hardware to its limits
via tcl.


Greetings!
Volker


Don Porter

unread,
Jul 16, 2002, 10:41:37 AM7/16/02
to
Volker Hetzer wrote:
> But things like the memory system or the hash table stuff may be on the
> underside of the language, yet IMHO these tradeoffs should be
> available too for developers who push tcl to its limits or, rather,
> push the hardware to its limits via tcl.

Isn't that where Tcl's open-source, BSD-ish license comes into play?
Modify your copy of Tcl as required.

--
| Don Porter Mathematical and Computational Sciences Division |
| donald...@nist.gov Information Technology Laboratory |
| http://math.nist.gov/~DPorter/ NIST |
|______________________________________________________________________|

Donal K. Fellows

unread,
Jul 16, 2002, 10:32:47 AM7/16/02
to
Volker Hetzer wrote:
> Darren New wrote:

>> Volker Hetzer wrote:
>>> May sometimes work, but it's ugly and fails in case of the base load being
>>> provided by long term sessions.
>> This is true. That's why I called it a "work-around". :-) I've never tried
>> to work on systems running both large reliable servers and random user jobs
>> at the same time. Well, certainly not Windows or UNIX machines. Real
>> computers don't have a problem with that, tho.
> Well, tcl was not intended for large systems and the way this happens is
> when a system is designed small and then works so well that people
> use it quite a lot and to a much larger extent than anticipated. At which
> point you are torn between a redesign with a high risk (because of the high
> acceptance) and leaving well alone, risking the load complaints. A tcl feature
> which could clean out unused memory, even partially, would be a big advantage
> in this situation.

Have a look at:
http://sf.net/tracker/index.php?func=detail&aid=582256&group_id=10894&atid=310894

Only known to work on Solaris8 at the moment, but there it runs the test suite
without crashing (unlike my previous attempts! :^D) It's kind-of nice when you
do this:
set l {}
for {set i 0} {$i<1000000} {incr i} {lappend l $i}
# Measure memory consumption of process here to be a good few MB
unset l
# Measure memory consumption of process here to be a lot less :^)

Note that the patch does not enable the feature by default. You have to turn it
on at configure time (and rebuild the whole core.)

-- Actually, come to think of it, I don't think your opponent, your audience,
or the metropolitan Tokyo area would be in much better shape.
-- Jeff Huo <je...@starfall.com.nospam>

Volker Hetzer

unread,
Jul 16, 2002, 11:41:15 AM7/16/02
to

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D342E8F...@cs.man.ac.uk...

> Volker Hetzer wrote:
> > Darren New wrote:
> >> Volker Hetzer wrote:
> >>> May sometimes work, but it's ugly and fails in case of the base load being
> >>> provided by long term sessions.
> >> This is true. That's why I called it a "work-around". :-) I've never tried
> >> to work on systems running both large reliable servers and random user jobs
> >> at the same time. Well, certainly not Windows or UNIX machines. Real
> >> computers don't have a problem with that, tho.
> > Well, tcl was not intended for large systems and the way this happens is
> > when a system is designed small and then works so well that people
> > use it quite a lot and to a much larger extent than anticipated. At which
> > point you are torn between a redesign with a high risk (because of the high
> > acceptance) and leaving well alone, risking the load complaints. A tcl feature
> > which could clean out unused memory, even partially, would be a big advantage
> > in this situation.
>
> Have a look at:
> http://sf.net/tracker/index.php?func=detail&aid=582256&group_id=10894&atid=310894
Thanks a lot!
I'll try to get it working on HP-UX. What tcl version is it intended for?
Unfortunately I don't have cvs access to the outside world.

Greetings and thanks!
Volker


Jeffrey Hobbs

unread,
Jul 16, 2002, 1:10:42 PM7/16/02
to
"Donal K. Fellows" wrote:
>
> Joe English wrote:
> > Tcl 8.3.4 does not have any known memory leaks.
>
> Tk, OTOH, is not quite as perfect...

I have actually done some work on Tk. It's a bit harder to identify
things, since the groundwork libs (like Xlib) often seem to indicate
purify problems of their own, but Tk isn't all that leaky. I did
patch a few things last time around, so Tk should also be good for long
runs. It just doesn't manage memory as well, because of the prevalent use
of Tk_Uids instead of the newer Tcl_Objs.

Donal K. Fellows

unread,
Jul 17, 2002, 11:28:58 AM7/17/02
to
Volker Hetzer wrote:
> "Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D342E8F...@cs.man.ac.uk...
>> Have a look at:
>> http://sf.net/tracker/index.php?func=detail&aid=582256&group_id=10894&atid=310894
> Thanks a lot!
> I'll try to get it working on HP-UX. What tcl version is it intended for?
> Unfortunately I don't have cvs access to the outside world.

Well, it's developed against the CVS HEAD, but it could well work with 8.4b1.

Who knows, it might even work with other versions. Stranger things have
happened...

-- The guy who sells me my audio hardware explained that a computer will never
produce the same level of sound quality that a stereo will b/c stereo have
transistors and sound cards don't. --Matthew Garson <mga...@world.std.com>

Volker Hetzer

unread,
Jul 17, 2002, 11:59:48 AM7/17/02
to

"Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D358D3A...@cs.man.ac.uk...

> Volker Hetzer wrote:
> > "Donal K. Fellows" <fell...@cs.man.ac.uk> wrote in message news:3D342E8F...@cs.man.ac.uk...
> >> Have a look at:
> >> http://sf.net/tracker/index.php?func=detail&aid=582256&group_id=10894&atid=310894
> > Thanks a lot!
> > I'll try to get it working on HP-UX. What tcl version is it intended for?
> > Unfortunately I don't have cvs access to the outside world.
>
> Well, it's developed against the CVS HEAD, but it could well work with 8.4b1
OK.

Greetings!
Volker


Donal K. Fellows

unread,
Jul 18, 2002, 4:42:26 AM7/18/02
to
Jeffrey Hobbs wrote:
> I have actually done some work on Tk. It's a bit harder to identify
> things, since more often the groundwork libs (like Xlib) seem to indicate
> more purify problems themselves, but Tk isn't all that leaky. I did
> partch a few things last time around, but Tk should also be good for long
> runs. It just doesn't manage memory as well anyway because of the use of
> Tk_Uids prevalently instead of the newer Tcl_Objs.

Well, all I can say is that tkchat seems to leak memory over runs of a month or
so (;^) and yet I can't find any variable or command that seems to contain all
these extra resources. Whatever the leak is, it is a subtle one...

-- We shall, nevertheless, continue to dispense [insults] on the premise of
giving credit where credit is due, you ill-bred nanowit sack of bovine
fecal matter. -- Xelloss <jfos...@home.com>

lvi...@yahoo.com

unread,
Jul 18, 2002, 8:43:06 AM7/18/02
to

According to Donal K. Fellows <fell...@cs.man.ac.uk>:
:Well, all I can say is that tkchat seems to leak memory over runs of a month or

:so (;^) and yet I can't find any variable or command that seems to contain all
:these extra resources. Whatever the leak is, it is a subtle one...
:

Is there something we could add to tkchat to help track down this problem?

Donal K. Fellows

unread,
Jul 18, 2002, 11:38:34 AM7/18/02
to
lvi...@yahoo.com wrote:
> According to Donal K. Fellows <fell...@cs.man.ac.uk>:
>: Well, all I can say is that tkchat seems to leak memory over runs of a
>: month or so (;^) and yet I can't find any variable or command that seems
>: to contain all these extra resources. Whatever the leak is, it is a
>: subtle one...
>
> Is there something we could add to tkchat to help track down this problem?

I'm not sure. The leak is a slow one for sure...

FWIW, the main thing I'd like to know is whether I can assume that the chat
script itself is not leaking (it's certainly not leaking channels; that'd go
wrong much sooner.) As it happens, if I restart the chat every month or so, it
doesn't take too much memory...
