On Mon, Sep 21, 2009 at 3:36 AM, Andrew Straw <ast...@caltech.edu> wrote:
>
> Hi, I'm attempting to auto-generate and call lots of small functions,
> and I'm hitting what appear to be memory leaks. It's entirely possible
> that this is due to me not releasing something, so I wrote a
> "memory-torture-ee.py" test and submitted it as a patch to:
>
> http://code.google.com/p/llvm-py/issues/detail?id=24
The LLVM ModuleProvider object will delete the module that it owns,
when it itself is destroyed. Modules that are not attached to any
ModuleProviders have to be cleaned up with LLVMDisposeModule(), but
Modules that are attached to ModuleProviders should *not* be so
treated. Cleanup of the ModuleProvider (LLVMDisposeModuleProvider)
internally (i.e., within LLVM) destroys attached ("owned") modules.
A similar relationship exists between ModuleProviders and
ExecutionEngines -- ExecutionEngines can "own" ModuleProviders.
The Python objects "know" that once they are "owned", they should not
attempt to cleanup the LLVM object they wrap, during destruction.
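That ownership rule can be illustrated with a plain-Python sketch. Note these are mock classes, not the real llvm-py wrappers; they only model who is responsible for calling the underlying LLVM dispose function:

```python
# Plain-Python sketch of the ownership rule described above.
# Mock classes, NOT the real llvm-py bindings: they only model
# which wrapper is responsible for the low-level dispose call.

disposed = []  # records which dispose calls were made

class Module(object):
    def __init__(self, name):
        self.name = name
        self.owned = False          # set once a ModuleProvider adopts us
    def __del__(self):
        if not self.owned:          # dispose only if nobody else owns us
            disposed.append('LLVMDisposeModule(%s)' % self.name)

class ModuleProvider(object):
    def __init__(self, module):
        self.module = module
        module.owned = True         # the MP now owns the module
        self.owned = False          # until an ExecutionEngine adopts us
    def __del__(self):
        if not self.owned:
            # LLVMDisposeModuleProvider destroys the owned module too,
            # so the module wrapper must not dispose it again.
            disposed.append('LLVMDisposeModuleProvider')

m = Module('m')
mp = ModuleProvider(m)
del m, mp
# Only the provider's dispose runs; the module's is skipped.
print(disposed)
```

The same "owned" handoff then repeats one level up when an ExecutionEngine takes over a ModuleProvider.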
Regarding the memory leak, any leaked objects can be observed by adding:
import gc
gc.set_debug(gc.DEBUG_LEAK)
at start, and dumping the objects by a "print gc.garbage" at the end.
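A self-contained version of that recipe, leaking a deliberate pure-Python cycle instead of llvm-py objects, looks like this (DEBUG_LEAK implies DEBUG_SAVEALL, so the collector saves what it finds into gc.garbage instead of freeing it):

```python
import gc

gc.set_debug(gc.DEBUG_LEAK)   # DEBUG_LEAK implies DEBUG_SAVEALL:
                              # collected objects are kept in gc.garbage

class Node(object):
    pass

# Build a reference cycle, then drop the only external names.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

gc.collect()                  # force a collection pass
gc.set_debug(0)               # stop saving further debug output

# The two Node instances (plus their __dict__s) were saved rather
# than freed, and can now be inspected.
print(len([o for o in gc.garbage if isinstance(o, Node)]))
```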
I did this, and also changed the infinite loop to "for x in
range(0,10000)" and commented out "remove_module_provider". Only the
following objects, created by ctypes (because of "import ctypes"),
were leaked:
~/llvm-py/test$ ./memory-torture-ee.py
will now run forever...
gc: collectable <tuple 0xb7be8a8c>
gc: collectable <StgDict 0xb7be95dc>
gc: collectable <_ctypes.ArrayType 0x8f045d4>
gc: collectable <getset_descriptor 0xb7be8b0c>
gc: collectable <getset_descriptor 0xb7be8b2c>
gc: collectable <tuple 0xb7beb48c>
[(<type '_ctypes.Array'>,), {'__module__': 'ctypes._endian',
'__dict__': <attribute '__dict__' of 'c_long_Array_3' objects>,
'__weakref__': <attribute '__weakref__' of 'c_long_Array_3' objects>,
'_length_': 3, '_type_': <class 'ctypes.c_long'>, '__doc__': None},
<class 'ctypes._endian.c_long_Array_3'>, <attribute '__dict__' of
'c_long_Array_3' objects>, <attribute '__weakref__' of
'c_long_Array_3' objects>, (<class 'ctypes._endian.c_long_Array_3'>,
<type '_ctypes.Array'>, <type '_ctypes._CData'>, <type 'object'>)]
~/llvm-py/test$
They go away if the "import ctypes" is removed.
I tested with Python 2.6.2, gcc 4.4.1, LLVM 2.5, on an up-to-date ArchLinux.
Regards,
-Mahadevan.
Thanks for your email. I think the "leak" I was observing is from cyclic
references between the modules and module providers even though my code
had removed those from its own namespace. (Their __del__ functions were
not getting called.) So, while perhaps not technically a leak, it
results in stuff not being garbage collected and my memory usage growing
until the process is killed by the OS.
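The effect described here is easy to reproduce in plain Python: once two wrappers reference each other, deleting your own names is not enough, and only the cycle collector can free them. (The sketch below uses a hypothetical Wrapper stand-in and omits __del__ to stay version-neutral; on Python 2, which this thread uses, a cycle of objects *with* __del__ was never collected at all and ended up in gc.garbage.)

```python
import gc
import weakref

class Wrapper(object):
    """Hypothetical stand-in for one half of a Module/ModuleProvider pair."""
    pass

gc.disable()                       # make the demonstration deterministic

mod, mp = Wrapper(), Wrapper()
mod.provider, mp.module = mp, mod  # mutual references form a cycle
probe = weakref.ref(mod)           # lets us observe whether mod is alive

del mod, mp
# Refcounting alone cannot free the pair: each object still holds the
# other, so memory keeps growing if this happens in a loop.
assert probe() is not None

gc.collect()                       # only the cycle collector breaks the loop
assert probe() is None
gc.enable()
```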
Anyhow, with some more LLVM knowledge gained by looking at the unladen
swallow source code (man, now that's a cool idea), I have updated the
test to be much simpler and a more minimal implementation of the core
functionality I need[1]. It still seems to have unbounded memory
consumption, but the rate is much slower. If I get a chance I'll try
valgrind on the whole thing, but I think the memory consumption might be
low enough that I can push further ahead with my own project for now.
-Andrew
[1] see the patch for comment 2 at
http://code.google.com/p/llvm-py/issues/detail?id=24#c2
--
Andrew D. Straw, Ph.D.
California Institute of Technology
http://www.its.caltech.edu/~astraw/
Getting back to this, I've written a pure C++ implementation of my
original memory torture test. This creates and deletes functions,
modules, module providers in an infinite loop. Memory consumption never
grows. (See below for an error I do get.) So I think my original test
function (0003-add-a-memory-leak-torture-test.patch) in the ticket
_should_ work without memory usage growing in an unbounded fashion.
The test I wrote in C++ is attached to this email. Compile it with:
c++ -g memory-torture-mod.cpp `llvm-config --cxxflags --ldflags --libs
all` -o memory-torture-mod
The error I do get, after about 2m42s, is:
JITMemoryManager.cpp:127:
<unnamed>::FreeRangeHeader*<unnamed>::FreeRangeHeader::AllocateBlock():
Assertion `!ThisAllocated && !getBlockAfter().PrevAllocated && "Cannot
allocate an allocated block!"' failed.
Aborted
Anyhow, that looks like an LLVM race condition or something. (I'm using
LLVM 2.5).