I've seen cases where my application mallocs 100K, but instead of taking it from the freed memory blocks (of which there are several MB), it allocates new memory from the OS. The only explanation I can think of is that my memory is somehow fragmented and doesn't have enough contiguous space to satisfy the request from the already-freed blocks. However, I find this hard to believe. I thought that because of virtual memory, memory fragmentation was something that could only happen under MS-DOS, but maybe I'm wrong? My application does allocate and free thousands of linked lists (inefficient, I know, but it's not my design, and I hope to change it very soon).
thanks for any help,
K. Jackson
You may wish to take steps to deal with fragmentation. One possible
step is to allocate memory that you expect to free at the same time from
the same pool. That way the whole pool can be freed.
If you use a large number of constant-sized objects, it may help to
slab them.
It's hard to say without knowing more about your application. You may
be able to tune your memory allocator.
DS
A couple of things:
Some `malloc' implementations use mmap() to allocate large
blocks (sometimes the threshold is a page or two, sometimes
more), so this might be part of what you're seeing.
Some programs have allocation patterns that interact badly
with the way certain allocators work. Often, for example,
when some number of objects of a certain size have been
allocated, a future allocation cuts up a page into chunks of
that size, gives you one, and throws the rest onto a free
list. If the allocation pattern of a program is to allocate
many chunks of a certain size, free them, and then allocate
many chunks of a somewhat larger size, the allocator can't
satisfy the latter requests (as bunches of
somewhat-too-small chunks are on the free lists) without
grabbing more address space from the OS (via sbrk()).
This is not necessarily a bad thing, even though it makes it
look like the overall size of the program is expanding;
though the address space may have grown, the pages
containing the `somewhat-too-small' chunks that have been
freed are eventually swapped out; unless they're touched
again their only downside is consumption of swap space.
It's really only a problem for programs with _very_ large
footprints; even in cases like that, at some point most
malloc implementations will `unslice' space from previously
sliced pages.
HTH,
--ag
You might want to look at:
ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps
(it's a survey paper that describes how various allocators
actually do their thing)
--
Artie Gold, Austin, TX (finger the cs.utexas.edu account
for more info)
mailto:ag...@bga.com or mailto:ag...@cs.utexas.edu
--
Clone Bernie!
The application is a stateless server that makes database queries
on behalf of clients. There really is no limit to the size of the
queries, so I could understand unpredictable growth in the address
space; however, the test I was running returned a fixed number
of rows. When I ran the test over and over again, I assumed that
the total address space would grow just large enough to accommodate
the largest query. Instead, I found that it continued to creep
up, a few KB here, maybe 100KB there. Again, it was the freed
block space that grew, not the used block space. What's even
weirder is that I tried allocating 250 MB at the beginning
of the program and then immediately freed it, and for the past
day I haven't seen the address space budge by a single byte.
Thanks for your reply,
Kevin Jackson
Thanks for the pointer, I will print out the document and
give it a read. Do you (or anyone else) know of any documents
that discuss memory fragmentation in UNIX (especially in
Solaris), or its memory allocation methods? I bought the
Solaris Kernel Internals book, but it doesn't cover the
memory allocation method in detail.
I remember fragmentation being a concern from my DOS days, but
I thought I wouldn't have to worry about it any more...
I'm shocked to find out that this is still something to
keep in mind!
Thanks again,
Kevin Jackson
Your best bet at this point (particularly since you seem to
be allocating/deallocating a rather large amount of memory),
would likely be to do two things:
1) explicitly mmap()/munmap() large blocks as needed
2) allocate in bulk, i.e. allocate a page's worth of objects
at a time and deallocate them all at once, to avoid the
`only-slightly-too-small-slots-available' problem.
I (or someone else on this newsgroup) might be able to help
you with this task if you run into problems (and you provide
some specifics).
HTH,
--ag
Fragmentation is a generic problem in all programming. As Arthur
Gold noted, in this case it is not "memory fragmentation" (in
physical RAM) that is causing the drag, but rather "address space
fragmentation". What you need is either a different malloc() --
most Unix systems allow you to slide in a new malloc() and have
everything still work right, despite this not being Officially
Sanctioned in the programming language -- or a different strategy
for calling malloc().
There are entire books and PhD theses on memory allocation
strategies, so there is a lot of literature available, but no
silver bullet.
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc
El Cerrito, CA, USA Domain: to...@bsdi.com +1 510 234 3167
http://claw.bsdi.com/torek/ (not always up) I report spam to abuse@.