
Python for embedded systems with memory constraints


vishnu

Jun 4, 2007, 2:19:08 PM
to pytho...@python.org
Hi there,

I am embedding Python 2.5 in an embedded system running an RTOS, where I
have strict memory constraints.
Python is a malloc-intensive application, and I observed heavy memory
fragmentation in my system, which leads to out-of-memory failures after
running a few scripts.
So I decided to re-initialise Python without restarting the whole system.
I tried calling Py_Finalize() after each script completes and then
calling Py_Initialize(), as is done in the link below.
http://mail.python.org/pipermail/python-list/2001-November/114253.html
Every iteration of that loop leaks about 10K, so after some iterations
the leak has grown to 200K, and so on; after a few more runs it crashes.
I read somewhere that this leak was fixed in 2.5, but I am having
problems with 2.5 as well...
I also found that Py_Finalize() does not completely clean up memory,
so how do I re-initialise my memory pool?

Has anybody faced this problem before and found a solution or hack to
run Python on an embedded system within its own managed memory pool of,
say, 10MB?

Any help/ideas are greatly appreciated! Thanks in advance.

Jürgen Urner

Jun 5, 2007, 2:48:41 PM
> Who else is using Python (programmers, scientists, finance)?

Me! Graduated in fine arts. Python is what I do when I am fed up with
all those colors. Much easier to manufacture sense with.


Cameron Laird

Jun 7, 2007, 1:27:53 PM

Your report is interesting and important--and surprising! I thought
Python's memory allocation was "cleaner" than what you seem to be
observing.

I hope one of the core Python maintainers can address this. I haven't
worked at this level recently enough to speculate on why it's happening,
nor will I soon be in a position to volunteer to research it on my own
(although I'd eagerly contract to do so on a modestly paid basis).

Depending on your schedule and technology, there are lots of technical
fixes that might apply:
A. quick-starting Python variations that encourage you
to manage memory on a whole-process level;
B. use of one of the many Python variants (even PyPy?)
that might give you a more favorable memory profile;
C. switch to Lua or Tcl as more easily embeddable
alternative languages;
D. custom memory allocator;
...

vishnu

Jun 9, 2007, 8:33:51 AM
to pytho...@python.org
Hi,
Thanks Cameron for your suggestions.
In fact I am using a custom memory sub-allocator: I preallocate a pool
of memory during initialization of my application and ensure that
Python makes no system mallocs later. With this arrangement, Python
seems to run out of the preallocated memory (10MB) after running a few
simple scripts, due to heavy external fragmentation. My memory
sub-allocator has a solid design: it uses a best-fit algorithm and
coalesces adjacent blocks on each free call.
If anybody out there has used their own memory manager and run Python
without fragmentation, could you provide some input on this?

Thanks in advance.
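[Editorial note: to see the failure mode concretely, here is a toy Python
simulation of a pool allocator like the one described: best-fit placement
plus coalescing of adjacent free blocks on free. The class and all names
are invented for illustration, and it tracks (offset, size) pairs rather
than real memory; it is not vishnu's actual allocator.]

```python
class PoolAllocator:
    """Toy best-fit allocator over a fixed pool, tracking (offset, size)."""

    def __init__(self, size):
        self.free = [(0, size)]   # sorted list of (offset, size) free blocks
        self.used = {}            # offset -> size of live allocations

    def alloc(self, n):
        # Best fit: the smallest free block at least n bytes long.
        fits = [(s, off) for off, s in self.free if s >= n]
        if not fits:
            return None           # fragmented (or genuinely out of memory)
        s, off = min(fits)
        self.free.remove((off, s))
        if s > n:                 # the remainder survives as a smaller block
            self.free.append((off + n, s - n))
            self.free.sort()
        self.used[off] = n
        return off

    def freemem(self, off):
        n = self.used.pop(off)
        self.free.append((off, n))
        self.free.sort()
        merged = [self.free[0]]   # coalesce adjacent free blocks in one pass
        for o, s in self.free[1:]:
            po, ps = merged[-1]
            if po + ps == o:
                merged[-1] = (po, ps + s)
            else:
                merged.append((o, s))
        self.free = merged

pool = PoolAllocator(100)
a, b, c = pool.alloc(30), pool.alloc(30), pool.alloc(30)
pool.freemem(a)
pool.freemem(c)           # 70 bytes free in total, but the largest hole is 40
print(pool.alloc(60))     # None: external fragmentation, not exhaustion
```

The last line is the point: the pool still has plenty of free memory in
total, just no contiguous run big enough, which matches a 10MB pool going
"out of memory" after only a few scripts.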


MRAB

Jun 9, 2007, 7:02:28 PM
On Jun 9, 1:33 pm, vishnu <gkkvis...@gmail.com> wrote:
> [snip]
> If anybody out there has used their own memory manager and run Python
> without fragmentation, could you provide some input on this?

From what I remember, the best-fit algorithm isn't a good idea: unless
the free block is exactly the right size, you tend to be left with lots
of small fragments. (Suppose the best fit is a free block only 4 bytes
bigger than what you want; what can you do with a free block of 4
bytes?)

A worst-fit algorithm would leave larger free blocks, which are more
useful subsequently, but I think the recommendation was next-fit
(i.e. use the first free block that's big enough, starting from where
you found the last one).
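[Editorial note: the next-fit policy described above can be sketched in
the same toy (offset, size) model; all names are invented. The only
difference from first-fit is the rover index, which makes each search
resume just after the block used last rather than re-scanning from the
start.]

```python
class NextFitPool:
    """Toy next-fit allocator: each search resumes at the rover index."""

    def __init__(self, size):
        self.free = [(0, size)]   # sorted (offset, size) free blocks
        self.used = {}
        self.rover = 0            # index where the previous search stopped

    def alloc(self, n):
        k = len(self.free)
        for i in range(k):        # wrap around the free list at most once
            idx = (self.rover + i) % k
            off, s = self.free[idx]
            if s >= n:
                del self.free[idx]
                if s > n:         # keep the remainder in place
                    self.free.insert(idx, (off + n, s - n))
                self.rover = idx
                self.used[off] = n
                return off
        return None

pool = NextFitPool(100)
pool.free = [(0, 20), (40, 20), (80, 20)]  # start from a fragmented state
print(pool.alloc(10))   # 0  -- carves the first hole
print(pool.alloc(15))   # 40 -- skips the 10-byte remainder at offset 10
print(pool.alloc(5))    # 55 -- resumes at the rover; first-fit would reuse 10
```

The rover is what spreads allocations across the pool: small leftover
fragments stop being rescanned (and re-split) on every single call.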

Gabriel Genellina

Jun 11, 2007, 8:55:35 PM
to pytho...@python.org
On Mon, 11 Jun 2007 15:59:19 -0300, vishnu <gkkv...@gmail.com> wrote:

> So now I only see the solution of clearing my memory pool and restarting
> Python without restarting the system (i.e. no power cycle to the
> hardware). I tried to do this when my memory pool was 60% used, in
> these steps:
> 1) Py_Finalize()
> 2) Reset my memory pool (i.e. the free list links)
> 3) Restart Python by calling Py_Initialize().
>
> But this resulted in a Python crash during Py_Initialize(), where I
> found that static variables within the embedded Python source code
> still hold references into my memory pool. So now my question is: how
> do I restart Python (i.e. reinitialize Python) without restarting the
> whole system? Is there a way to reset/re-initialize those static
> variables so that it is possible to re-initialize Python?

Ouch... I think this should not happen, but then I don't know whether it
was a design principle. One would have to inspect all of the Python
source to locate every static variable reference... Maybe you could
instrument your allocator to see which references are still held?
C extensions may be problematic too: there is no way to "uninitialize"
them, and nothing prevents an extension from holding a reference to any
object.
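[Editorial note: the instrumentation idea above can be sketched like
this, in Python as a stand-in for a C-side wrapper; the class, method
names, and tags are all invented. The pool tags every block it hands
out, and whatever is still live after Py_Finalize() returns is exactly
what is pinned by static variables or extension modules.]

```python
class TracingPool:
    """Toy allocator wrapper that remembers who owns each live block."""

    def __init__(self):
        self.live = {}        # fake address -> (size, tag for the caller)
        self.next_addr = 0

    def alloc(self, size, tag):
        addr = self.next_addr
        self.next_addr += size
        self.live[addr] = (size, tag)
        return addr

    def free(self, addr):
        self.live.pop(addr)

    def leak_report(self):
        # Anything still here after Py_Finalize() is what prevents the
        # pool from being reset cleanly.
        return sorted(self.live.items())

pool = TracingPool()
a = pool.alloc(64, "interpreter state")    # hypothetical tags a C wrapper
b = pool.alloc(32, "per-script objects")   # might record per call site
pool.free(b)                               # the script's memory came back
print(pool.leak_report())                  # [(0, (64, 'interpreter state'))]
```

Running a script, finalizing, and then printing the report would show
which call sites still hold pool memory, narrowing down the statics.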

--
Gabriel Genellina
