The malloc on VxWorks is so slow that proprietary
mechanisms (e.g. memory pools for reusing fixed-size
buffers) could imho significantly reduce fragmentation
and enhance performance.
Thanks in advance for any answer
--Jo
--------------------------------------
Joachim Fabini
Ericsson Austria AG
Pottendorfer Strasse 25-27
A-1121 Wien, Austria
voice: +43 1 81100-5127
fax: +43 1 81100-4686
Please remove no.spam from email address when
replying directly. Thanks.
--------------------------------------
>The malloc on VxWorks is so slow that proprietary
>mechanisms (e.g. memory pools for reusing fixed-size
>buffers) could imho significantly reduce fragmentation
>and enhance performance.
Seaweed Systems has a written-from-scratch memLib replacement. It
has all the functionality which Wind River's memLib has...and more!
It has memory leak detection, sanity checks at malloc/realloc/free
time, user-callable memory arena sanity checker, constant-time allocation,
better fragmentation behavior, callbacks for various low-memory
conditions, and more! A real Ginsu knife of an app.
Please call or email for information:
in...@seaweed.com
425-895-1721
Bob Schulman
>Is there anyone around who implemented (or tried to
>implement) a proprietary memory-management on VxWorks
>to avoid fragmentation and/or to enhance performance?
>
>The malloc on VxWorks is so slow that proprietary
>mechanisms (e.g. memory pools for reusing fixed-size
>buffers) could imho significantly reduce fragmentation
>and enhance performance.
>
>Thanks in advance for any answer
>--Jo
We embellished on an idea that I got out of the VxWorks training
class I took a number of years ago. We allocate a large block of
memory at application startup, then divide that block into many
smaller blocks of varying sizes. The pointers to these smaller
blocks are stored as messages in VxWorks message queues, one queue
per buffer size, so each queue holds pointers to memory buffers of
a specific size. Tasks which need to allocate memory call an
allocation function which finds the closest-size buffer available
(if a queue is empty, it tries the next larger size's queue) and
then calls msgQReceive to get a pointer to the buffer. The
function simply returns the buffer pointer, just like malloc. The
free function determines which queue the buffer to be freed belongs
on and then calls msgQSend to place it back on that queue. When we
divided the original large memory block into smaller ones, we did
it in a manner that lets us use the range of the memory addresses
to determine which message queue a buffer belongs in.
The benefits to this method are:
1.) No fragmentation, since memory is actually only allocated once.
2.) It's pretty fast, since allocating or freeing a buffer basically
only takes a msgQReceive or msgQSend call.
3.) It's fairly easy to detect memory leaks, since we have written
some utilities that let us check memory buffer utilization at run
time using msgQInfo and other msgQLib calls.
Tom Fuda
Sr. Software Engineer
Northrop Grumman Norden Systems