
HeapAlloc heap fragmentation.


Philip Borghesani

Nov 7, 2005, 6:02:04 PM
Our application has bumped into what seems to be a limitation of the
HeapAlloc system on Win32.

It appears that if an application allocates a large portion of the available
address space in objects of size less than 0x7FFF8 bytes and then releases
this memory, there is no way to reclaim the address space for allocations of
size greater than 0x7FFF8 bytes.

Using the low fragmentation heap does not seem to help.

Are there any known workarounds for this issue other than replacing
HeapAlloc (actually the standard C++ allocators in Visual C++ 7)?

Example code:

// TestHeapAlloc.cpp
// This program will cause a machine with less than 2 GB of RAM to thrash
// badly; do not run it with less than 1 GB of RAM.
// If this program is run on a machine with the /3GB boot switch (or on
// Win64) the sizes may need to be modified to produce a failure.

#include "stdafx.h"
#include <stdio.h>
#include <string.h>
#include <windows.h>

HANDLE heap = NULL;

void TestLargeBlock(size_t size)
{
    void *ptr = HeapAlloc(heap, 0, size);
    if (ptr != NULL) {
        printf("Allocated %Iu bytes\n", HeapSize(heap, 0, ptr));
        HeapFree(heap, 0, ptr);
    } else {
        printf("Failed to allocate %Iu bytes\n", size);
    }
}

size_t const MaxPointers = 10000;

void TestSmallBlocks(size_t NumPointers, size_t size)
{
    void *pointers[MaxPointers];
    size_t count = 0;
    memset(pointers, 0, sizeof(pointers));
    for (size_t i = 0; i < NumPointers; i++) {
        pointers[i] = HeapAlloc(heap, 0, size);
        if (pointers[i] == NULL) {
            printf("Allocation %Iu failed\n", i);
            break;
        }
        count++;
    }
    printf("Allocated %Iu items of size %Iu for a total of %Iu bytes\n",
           count, size, count * size);
    // now free the memory
    for (size_t i = 0; i < count; i++) {
        HeapFree(heap, 0, pointers[i]);
    }
}

int _tmain(int argc, _TCHAR* argv[])
{
    heap = GetProcessHeap(); // initialize the program-global heap

    size_t const AllocCutoff = 0x7FFF8;
    size_t const SmallAlloc  = AllocCutoff - 8000;
    size_t const BiggerAlloc = AllocCutoff;

    // prove that we can allocate and free large memory blocks
    TestSmallBlocks(MaxPointers, BiggerAlloc);
    TestLargeBlock(0x20000000); // 512 MB

    // grow the heap
    TestSmallBlocks(MaxPointers, SmallAlloc);

    // check heap info
    size_t maxblock = HeapCompact(heap, 0);
    printf("HeapCompact erroneously reports a large max block size of %Iu\n",
           maxblock);

    // try to allocate a large block
    TestLargeBlock(maxblock / 2); // can't even allocate half the reported size

    // try to allocate slightly bigger small blocks; this is the most painful
    // example: possibly not even one allocation will succeed
    TestSmallBlocks(MaxPointers, BiggerAlloc);

    return 0;
}


Skywing

Nov 7, 2005, 8:40:43 PM
If you know for certain that your allocations are going to be big (several
pages) then it might be better to just use VirtualAlloc/VirtualFree directly
instead of going through the heap manager.
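
For instance (a minimal sketch; error handling omitted):

// Reserve and commit a large block directly, bypassing the heap manager.
void *p = VirtualAlloc(NULL, 0x20000000, MEM_COMMIT | MEM_RESERVE,
                       PAGE_READWRITE);
if (p != NULL) {
    // ... use the block ...
    VirtualFree(p, 0, MEM_RELEASE); // returns the whole reservation
}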

"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eoMCG9%234FH...@TK2MSFTNGP12.phx.gbl...


Ivan Brugiolo [MSFT]

Nov 8, 2005, 2:18:36 AM
This is contrary to evidence and implementation (see the code and debugger
evidence below).
What are you using to conclude that blocks are never released?

Given the following fragment:

#define MAX_ALLOC 256
#define HEAP_GRANULARITY (2*sizeof(ULONG_PTR))
#define HEAP_ENTRY_MAX 0xFFFE

int __cdecl
wmain(int argc, WCHAR * argv[])
/*++

--*/
{

PVOID Array[MAX_ALLOC];

for (ULONG Iter = 0; Iter < MAX_ALLOC; Iter++) {
    Array[Iter] = HeapAlloc(GetProcessHeap(), 0,
                            HEAP_GRANULARITY * HEAP_ENTRY_MAX);
}

DbgBreakPoint();

for (ULONG Iter = 0; Iter < MAX_ALLOC; Iter++){
HeapFree(GetProcessHeap(),0,Array[Iter]) ;
}

DbgBreakPoint();

return 0;
}


At the first breakpoint:

0:000> !heap -p -all
_HEAP @ c0000
_HEAP_LOOKASIDE @ c0cd0
_HEAP_SEGMENT @ c0c50
CommittedRange @ c0cc0
HEAP_ENTRY: Size : Prev Flags - UserPtr UserSize - state
// snip
VirtualAllocdBlocks @ c0090
2b0030: fffe : N/A [N/A] - 2b0040 (fffe0) - (busy VirtualAlloc)
// other 255 blocks similar to the previous one

At the second breakpoint

0:000> !heap -p -all
_HEAP @ c0000
_HEAP_LOOKASIDE @ c0cd0
_HEAP_SEGMENT @ c0c50
CommittedRange @ c0cc0
HEAP_ENTRY: Size : Prev Flags - UserPtr UserSize - state
// snip
VirtualAllocdBlocks @ c0090
// no blocks, meaning they have been freed

--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm


"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eoMCG9%234FH...@TK2MSFTNGP12.phx.gbl...

Philip Borghesani

Nov 8, 2005, 10:22:52 AM
I think you missed the point. You allocated blocks of size 0x7FFF0, which
were allocated individually by HeapAlloc using VirtualAlloc and are
completely released by HeapFree. The problem is with blocks that are
smaller than 0x7F000 bytes, which are allocated directly from the heap and
cause it to grow. After the heap has grown, it never releases virtual
address space, preventing large allocations.
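
As a rough way to see this, one can total the reserved-but-uncommitted
regions by walking the address space with VirtualQuery (a minimal sketch):

#include <windows.h>
#include <stdio.h>

// Sum the reserved (but not committed) address-space regions.
void DumpReserved(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T reserved = 0;
    char *p = NULL;
    while (VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_RESERVE)
            reserved += mbi.RegionSize;
        p = (char *)mbi.BaseAddress + mbi.RegionSize;
    }
    printf("Reserved (uncommitted) address space: %Iu bytes\n", reserved);
}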

Output from a run of the code I posted:

S:\test\TestHeapAlloc> release\TestHeapAlloc.exe

// first run successfully allocates a large number of large blocks
Allocation 3628 failed
Allocated 3628 items of size 524280 for a total of 1902087840 bytes
Allocated 536870912 bytes  // after freeing we can still allocate a single large block

// now allocate a slightly smaller size; any size less than some cutoff
// would do, but more allocations would be needed
Allocation 4108 failed
Allocated 4108 items of size 516280 for a total of 2120878240 bytes

// note the return value from HeapCompact
HeapCompact erroneously reports a large max block size of 536866816

// now we cannot allocate anything larger than 0x7FFF0 bytes
Failed to allocate 268433408 bytes
Allocation 0 failed
Allocated 0 items of size 524280 for a total of 0 bytes

"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message
news:eIO1jSD5...@TK2MSFTNGP10.phx.gbl...

Philip Borghesani

Nov 8, 2005, 10:26:55 AM
Unfortunately, the allocation sizes are not known in advance, and frequently
the symptom is seen after a larger number of much smaller allocations.


"Skywing" <skywing_...@valhallalegends.com> wrote in message
news:uAo2vVA...@TK2MSFTNGP15.phx.gbl...

Ivan Brugiolo [MSFT]

Nov 8, 2005, 12:27:17 PM
I got the comparison term wrong.
Sorry about that.
In that case, the behavior is expected, with the following caveats:
blocks below that threshold are allocated out of a heap segment,
that is, a contiguous address-space region whose size and allocation
policy from Mm varies, but which is normally on the order of several
megabytes. Smaller blocks are returned to the free pool within the
segment, but the address space is reserved as a whole block
and committed on demand.

There are a couple of techniques to prevent fragmentation (see the
sketch after this list):
- Use the LowFragHeap.
- Disable the LookAside front end, which, as a side effect,
  improves the ability to coalesce.
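
For example, to enable the Low Fragmentation Heap on an existing heap
(a sketch; HeapSetInformation requires Windows XP or later, and it fails
on heaps created with HEAP_NO_SERIALIZE or when running under a debugger):

ULONG lfh = 2; // HeapCompatibilityInformation value 2 selects the LFH front end
HeapSetInformation(GetProcessHeap(), HeapCompatibilityInformation,
                   &lfh, sizeof(lfh));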

Allocation patterns that tend to exacerbate fragmentation are:
- the std::vector::push_back() growing strategy (and other hash tables
  that grow in the same pattern)
- a mismatched order of allocations and frees with regard to their
  location in the address space.

side-notes:
HeapCompact does not do what most think it will do.
HeapValidate, as a side effect,
will do what most think HeapCompact is supposed to do.

--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm

"Philip Borghesani" <Phil...@community.nospam> wrote in message

news:OImKKhH5...@TK2MSFTNGP11.phx.gbl...

Philip Borghesani

Nov 8, 2005, 12:55:47 PM
The problem is not simple heap fragmentation. The LowFragHeap does not
help. The main issue is that we end up with nearly all virtual memory
(nearly 2 GB) reserved in the heap, making it impossible to allocate any
blocks of size > 0x7F000, even when the heap itself has plenty of free
space, including free blocks larger than 0x7F000 bytes that it will not
allocate from because of the block-size limit.

We have implemented our own small-block heap, quite similar to the Low
Fragmentation Heap, that helps to reduce fragmentation, but fragmentation
is not the issue here.


"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message

news:umNprmI...@TK2MSFTNGP14.phx.gbl...

Ivan Brugiolo [MSFT]

Nov 8, 2005, 1:39:12 PM
At this point, without the (very verbose) output of `!heap -p -all`
and `!address`, it is very hard to say anything meaningful beyond
correcting my initial understanding of the problem and giving very
generic recommendations. It is not even clear whether you suffer from:
- address space fragmentation
- internal heap fragmentation
- external heap fragmentation

--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm


"Philip Borghesani" <Phil...@community.nospam> wrote in message

news:eDsGn2I5...@tk2msftngp13.phx.gbl...

Philip Borghesani

Nov 8, 2005, 3:34:48 PM
Let me try again. I want you to understand, not because I expect a solution,
but because I want people at Microsoft to be aware of the problem: until our
customers upgrade to 64-bit computing they will suffer frequent
out-of-memory errors that they do not understand. Just google "matlab out
of memory" or search comp.softsys.matlab for the scale of the problem.

The problem is not one of fragmentation but one of not having any virtual
address space available, even when there is a huge amount of space available
in the standard heap.

Because address space only goes into the heap, any application that
allocates 1-2 GB of small objects, frees ALL of them, and then tries to
allocate large (>0x7F000) objects will fail with out of memory. Worse yet,
Task Manager shows little memory use, because it shows private bytes and
HeapFree has decommitted the memory.

What is needed is a function to tell the heap to release unused virtual
address space, making it available to VirtualAlloc. The only way I know of
to do this is to call HeapDestroy, and it is difficult to guarantee that all
objects in a heap have been freed so that doing this is possible.

The code I originally posted shows this. Here is the main portion of the
code with more comments:

int _tmain(int argc, _TCHAR* argv[])
{
    size_t const SmallAlloc  = 0x7eff0;
    size_t const BiggerAlloc = 0x7fff0;

    // prove that we can allocate and free large memory blocks;
    // this runs repeatedly
    TestSmallBlocks(MaxPointers, BiggerAlloc);
    TestLargeBlock(0x20000000); // 512 MB

    // grow the heap: HERE IS THE PROBLEM
    TestSmallBlocks(MaxPointers, SmallAlloc);

    // The heap now owns all virtual address space and will not give it up.

    // check heap info
    size_t maxblock = HeapCompact(heap, 0);
    printf("HeapCompact erroneously reports a large max block size of %Iu\n",
           maxblock);

    // try to allocate a large block
    TestLargeBlock(maxblock / 2); // can't even allocate half the reported size

    // try to allocate slightly bigger small blocks; this is the most painful
    // example: possibly not even one allocation will happen
    TestSmallBlocks(MaxPointers, BiggerAlloc);
}

"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message

news:%23g293OJ...@tk2msftngp13.phx.gbl...

Ivan Brugiolo [MSFT]

Nov 8, 2005, 4:23:36 PM
This looks like a problem that should be solved by the new
HEAP_SEGMENT management algorithm in Vista.
Before that, HeapDestroy is the only call that will allow
you to destroy a HEAP_SEGMENT.

--
This posting is provided "AS IS" with no warranties, and confers no rights.
Use of any included script samples are subject to the terms specified at
http://www.microsoft.com/info/cpyright.htm

"Philip Borghesani" <Phil...@community.nospam> wrote in message

news:eOxCePK5...@TK2MSFTNGP12.phx.gbl...

Pavel Lebedinsky [MSFT]

Nov 9, 2005, 12:54:17 AM
"Skywing" wrote:

> If you know for certain that your allocations are going to be big (several
> pages) then it might be better to just use VirtualAlloc/VirtualFree
> directly instead of going through the heap manager.

Because of the 64 KB allocation granularity you should only use
VirtualAlloc for allocations that are a multiple of 64 KB. Doing a large
number of 1 or 2 page allocations with VirtualAlloc is extremely wasteful,
but even for larger sizes it's still better to do it in chunks of 64K.
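
For example, a helper that rounds a request up to the granularity (a
sketch; the value should be queried rather than hard-coded):

SIZE_T RoundToGranularity(SIZE_T size)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    SIZE_T gran = si.dwAllocationGranularity; // typically 0x10000 (64 KB)
    return (size + gran - 1) & ~(gran - 1);   // gran is a power of two
}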

Philip Borghesani

Nov 9, 2005, 2:05:14 PM
How do I find out about the "HEAP_SEGMENT" management algorithm in Vista?
A quick web and MSDN search did not turn up anything.


"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message

news:OigTvqK5...@tk2msftngp13.phx.gbl...

Jeffrey Tan[MSFT]

Nov 10, 2005, 3:05:20 AM
Hi Philip,

I have consulted on this issue internally, and it seems this is by design;
below is the feedback:
"The issue here is that blocks over 512 KB are direct calls to
VirtualAlloc, and everything smaller than this is allocated out of
the heap segments. The bad news is that the segments are never released
(entirely or partially), so once you take up the entire address space with
small blocks you cannot use it for other heaps or for blocks over 512 KB. If
your program exposes such a usage pattern, the easiest workaround would be
to use a separate heap for the short-term small allocations, and destroy
the heap when you're done with it (and it would be faster too, instead of
freeing blocks one by one)."
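
A sketch of that pattern (the names are illustrative):

// Use a private heap for a burst of short-lived small allocations, then
// destroy it to return the reserved address space to the system.
HANDLE scratch = HeapCreate(0, 0, 0); // growable private heap
if (scratch != NULL) {
    void *p = HeapAlloc(scratch, 0, 0x1000);
    // ... many more short-term allocations from 'scratch' ...
    // no per-block HeapFree needed:
    HeapDestroy(scratch); // frees every block and releases the segments
}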

Hope this helps

Best regards,
Jeffrey Tan
Microsoft Online Partner Support
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
