It appears that if an application allocates a large portion of the available
address space in objects of size less than 0x7FFF8 bytes and then releases
this memory, there is no way to reclaim the address space for allocations of
size greater than 0x7FFF8 bytes.
Using the low-fragmentation heap does not seem to help.
Are there any known workarounds for this issue other than replacing
HeapAlloc (actually the standard C++ allocators in Visual C++ 7)?
Example code:
// TestHeapAlloc.cpp
// This program will cause a machine with less than 2 GB of RAM to thrash badly;
// do not run it with less than 1 GB of RAM.
// If this program is run on a machine booted with the /3GB switch (or on Win64),
// the sizes may need to be modified to produce a failure.
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <stdio.h>
#include <string.h>

HANDLE heap=NULL;

void TestLargeBlock(size_t size)
{
    void *ptr=HeapAlloc(heap,0,size);
    if (ptr!=NULL) {
        printf("Allocated %Id bytes\n",HeapSize(heap,0,ptr));
        HeapFree(heap,0,ptr);
    } else {
        printf("Failed to allocate %Id bytes\n",size);
    }
}
size_t const MaxPointers=10000;

void TestSmallBlocks(int NumPointers,size_t size)
{
    void *pointers[MaxPointers];
    int count=0;
    memset(pointers,0,sizeof(pointers));
    for (int i=0; i<NumPointers; i++)
    {
        pointers[i]=HeapAlloc(heap,0,size);
        if (pointers[i]==NULL) {
            printf("Allocation %d failed\n",i);
            break;
        } else {
            count++;
        }
    }
    printf("Allocated %d items of size %Id for a total of %Iu bytes\n",
           count,size,count*size);
    // now free the memory
    for (int i=0; i<count; i++)
    {
        HeapFree(heap,0,pointers[i]);
    }
}
int _tmain(int argc, _TCHAR* argv[])
{
    heap=GetProcessHeap(); // initialize the program's global heap handle
    size_t const AllocCutoff=0x7FFF8;
    size_t const SmallAlloc=AllocCutoff-8000;
    size_t const BiggerAlloc=AllocCutoff;
    // prove we can allocate and free large memory blocks
    TestSmallBlocks(MaxPointers,BiggerAlloc);
    TestLargeBlock(0x20000000); // 512 MB
    // grow the heap
    TestSmallBlocks(MaxPointers,SmallAlloc);
    // check heap info
    size_t maxblock=HeapCompact(heap,0);
    printf("HeapCompact erroneously reports a large max block size of %Id\n",maxblock);
    // try to allocate a large block
    TestLargeBlock(maxblock/2); // can't even allocate half the reported size
    // try to allocate slightly bigger small blocks; this is the most painful example
    // -- possibly not even one allocation will succeed
    TestSmallBlocks(MaxPointers,BiggerAlloc);
    return 0;
}
"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eoMCG9%234FH...@TK2MSFTNGP12.phx.gbl...
Given the following fragment:
#define MAX_ALLOC 256
#define HEAP_GRANULARITY (2*sizeof(ULONG_PTR))
#define HEAP_ENTRY_MAX 0xFFFE

int __cdecl
wmain(int argc, WCHAR * argv[])
/*++
--*/
{
    PVOID Array[MAX_ALLOC];
    for (ULONG Iter = 0; Iter < MAX_ALLOC; Iter++){
        Array[Iter] = HeapAlloc(GetProcessHeap(), 0, HEAP_GRANULARITY * HEAP_ENTRY_MAX);
    }
    DbgBreakPoint();
    for (ULONG Iter = 0; Iter < MAX_ALLOC; Iter++){
        HeapFree(GetProcessHeap(), 0, Array[Iter]);
    }
    DbgBreakPoint();
    return 0;
}
At the first breakpoint:
0:000> !heap -p -all
_HEAP @ c0000
_HEAP_LOOKASIDE @ c0cd0
_HEAP_SEGMENT @ c0c50
CommittedRange @ c0cc0
HEAP_ENTRY: Size : Prev Flags - UserPtr UserSize - state
// snip
VirtualAllocdBlocks @ c0090
2b0030: fffe : N/A [N/A] - 2b0040 (fffe0) - (busy VirtualAlloc)
// other 255 blocks similar to the previous one
At the second breakpoint
0:000> !heap -p -all
_HEAP @ c0000
_HEAP_LOOKASIDE @ c0cd0
_HEAP_SEGMENT @ c0c50
CommittedRange @ c0cc0
HEAP_ENTRY: Size : Prev Flags - UserPtr UserSize - state
// snip
VirtualAllocdBlocks @ c0090
// no blocks, meaning they have been freed
"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eoMCG9%234FH...@TK2MSFTNGP12.phx.gbl...
Output from a run of the code I posted:
S:\test\TestHeapAlloc> release\TestHeapAlloc.exe
// first run successfully allocates a large number of large blocks
Allocation 3628 failed
Allocated 3628 items of size 524280 for a total of 1902087840 bytes
Allocated 536870912 bytes // after freeing, we can still allocate a single large block
// now allocate a slightly smaller size; any size less than some cutoff would do,
// but more allocations would be needed
Allocation 4108 failed
Allocated 4108 items of size 516280 for a total of 2120878240 bytes
// note the return value from HeapCompact
HeapCompact erroneously reports a large max block size of 536866816
// now we cannot allocate anything larger than 0x7FFF0 bytes
Failed to allocate 268433408 bytes
Allocation 0 failed
Allocated 0 items of size 524280 for a total of 0 bytes
"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message
news:eIO1jSD5...@TK2MSFTNGP10.phx.gbl...
"Skywing" <skywing_...@valhallalegends.com> wrote in message
news:uAo2vVA...@TK2MSFTNGP15.phx.gbl...
There are a couple of techniques to prevent fragmentation:
- Use the LowFragHeap (low-fragmentation heap); a minimal sketch of enabling it follows below.
- Disable the lookaside front end, which, as a side effect,
  improves the ability to coalesce.
Allocation patterns that tend to exacerbate fragmentation are:
- the std::vector::push_back() growing strategy (and other hash tables
  that grow in the same pattern)
- a mismatched order of allocations and frees with regard to location
  in the address space.
Side notes:
HeapCompact does not do what most think it will do.
HeapValidate, as a side effect,
will do what most think HeapCompact is supposed to do.
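For reference, a minimal sketch of opting a heap into the low-fragmentation heap with HeapSetInformation (Windows XP / Server 2003 or later; the value 2 selects the LFH front end). This is only an illustration, and as noted at the top of the thread the LFH did not resolve the address-space problem described here:

#include <windows.h>
#include <stdio.h>

int main()
{
    // HeapCompatibilityInformation with a value of 2 asks the heap to use
    // the low-fragmentation front end. Activation can fail, for example
    // when heap-debugging flags are in effect or a debugger is attached.
    ULONG HeapInformation = 2;
    BOOL ok = HeapSetInformation(GetProcessHeap(),
                                 HeapCompatibilityInformation,
                                 &HeapInformation,
                                 sizeof(HeapInformation));
    if (ok) {
        printf("Low-fragmentation heap enabled\n");
    } else {
        printf("HeapSetInformation failed, error %lu\n", GetLastError());
    }
    return 0;
}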
"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:OImKKhH5...@TK2MSFTNGP11.phx.gbl...
We have implemented our own small-block heap, quite similar to the Low
Fragmentation Heap, that helps to reduce fragmentation, but that is not the
issue here.
"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message
news:umNprmI...@TK2MSFTNGP14.phx.gbl...
"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eDsGn2I5...@tk2msftngp13.phx.gbl...
The problem is not one of fragmentation but one of not having any virtual
address space available, even when there is a huge amount of space available in
the standard heap.
Because address space only goes into the heap and never comes back out, any
application that allocates 1-2 GB of small objects, frees ALL of them, and then
tries to allocate large (>0x7f000) objects will fail due to out of memory.
Worse yet, Task Manager shows little memory use because it shows private bytes,
and HeapFree has decommitted the memory.
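One way to see this from inside the process is to walk the address space with VirtualQuery and total up the free, reserved, and committed regions; after the frees, the heap segments still show up as reserved even though their pages are decommitted. This is only an illustrative sketch (the DumpAddressSpace helper is not part of the original test program):

#include <windows.h>
#include <stdio.h>

// Walk the user address space and report how much is free, reserved,
// and committed. Call it after the small-block free pass to see that the
// heap segments remain MEM_RESERVE even though their pages are decommitted.
void DumpAddressSpace(void)
{
    SIZE_T freeBytes = 0, reservedBytes = 0, committedBytes = 0;
    MEMORY_BASIC_INFORMATION mbi;
    char *p = NULL;
    while (VirtualQuery(p, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        switch (mbi.State) {
        case MEM_FREE:    freeBytes      += mbi.RegionSize; break;
        case MEM_RESERVE: reservedBytes  += mbi.RegionSize; break;
        case MEM_COMMIT:  committedBytes += mbi.RegionSize; break;
        }
        p = (char*)mbi.BaseAddress + mbi.RegionSize;
    }
    printf("free: %Iu  reserved: %Iu  committed: %Iu bytes\n",
           freeBytes, reservedBytes, committedBytes);
}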
What is needed is a function to tell the heap to release unused virtual
address space so that it becomes available to VirtualAlloc again. The only way
I know of to do this is to call HeapDestroy, and it is difficult to guarantee
that all objects in a heap have been freed so that doing this is possible.
The code I originally posted shows this. Here is the main portion of the
code with more comments:
int _tmain(int argc, _TCHAR* argv[]){
    size_t const SmallAlloc=0x7eff0;
    size_t const BiggerAlloc=0x7fff0;
    // prove we can allocate and free large memory blocks (this works repeatedly at this point)
    TestSmallBlocks(MaxPointers,BiggerAlloc);
    TestLargeBlock(0x20000000); // 512 MB
    // grow the heap -- HERE IS THE PROBLEM
    TestSmallBlocks(MaxPointers,SmallAlloc);
    // The heap now owns all virtual address space and will not give it up.
    // check heap info
    size_t maxblock=HeapCompact(heap,0);
    printf("HeapCompact erroneously reports a large max block size of %Id\n",maxblock);
    // Try to allocate a large block
    TestLargeBlock(maxblock/2); // can't even allocate half the reported size
    // try to allocate slightly bigger small blocks; this is the most painful example
    // -- possibly not even one allocation will succeed
    TestSmallBlocks(MaxPointers,BiggerAlloc);
}
"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message
news:%23g293OJ...@tk2msftngp13.phx.gbl...
"Philip Borghesani" <Phil...@community.nospam> wrote in message
news:eOxCePK5...@TK2MSFTNGP12.phx.gbl...
> If you know for certain that your allocations are going to be big (several
> pages) then it might be better to just use VirtualAlloc/VirtualFree
> directly instead of going through the heap manager.
Because of the 64 KB allocation granularity, you should only use
VirtualAlloc for allocations that are a multiple of 64 KB. Doing a large
number of one- or two-page allocations with VirtualAlloc is extremely wasteful,
but even for larger sizes it is still better to do it in chunks of 64 KB.
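For illustration, a minimal sketch of allocating directly with VirtualAlloc and rounding the request up to the 64 KB granularity (the RoundUp64K helper and the sizes are just for this example):

#include <windows.h>
#include <stdio.h>

// Round a request up to a multiple of the allocation granularity
// (64 KB on typical Windows systems; GetSystemInfo reports the exact value).
static SIZE_T RoundUp64K(SIZE_T size)
{
    SIZE_T const granularity = 64 * 1024;
    return (size + granularity - 1) & ~(granularity - 1);
}

int main()
{
    SIZE_T request = 0x20000000; // 512 MB
    SIZE_T size = RoundUp64K(request);
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL) {
        printf("VirtualAlloc of %Iu bytes failed, error %lu\n", size, GetLastError());
        return 1;
    }
    printf("VirtualAlloc returned %p for %Iu bytes\n", p, size);
    // MEM_RELEASE returns both the pages and the reserved address space
    // to the OS, unlike HeapFree on a block inside a heap segment.
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}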
"Ivan Brugiolo [MSFT]" <Ivan.B...@online.microsoft.com> wrote in message
news:OigTvqK5...@tk2msftngp13.phx.gbl...
I have consulted this issue internally, and it seems this is by design;
below is the feedback:
"The issue here is that blocks over 512 KB are direct calls to
VirtualAlloc, and everything smaller than this is allocated out of
the heap segments. The bad news is that the segments are never released
(entirely or partially), so once you take the entire address space with
small blocks you cannot use it for other heaps or for blocks over 512 KB. If
your program exposes such a usage pattern, the easiest workaround would be
to use a separate heap for the short-term small allocations, and destroy
the heap when you're done with it (and it would be faster too, instead of
freeing the blocks one by one)."
Hope this helps
Best regards,
Jeffrey Tan
Microsoft Online Partner Support
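For illustration, a minimal sketch of the dedicated-heap workaround described above: perform the burst of short-lived small allocations from a private heap created with HeapCreate, then call HeapDestroy, which frees every block at once and returns the reserved segments to the OS. The sizes and loop count are just for the example:

#include <windows.h>
#include <stdio.h>

int main()
{
    // Private heap for the short-term small allocations.
    // Initial size 0 and maximum size 0 make the heap growable.
    HANDLE smallHeap = HeapCreate(0, 0, 0);
    if (smallHeap == NULL) {
        printf("HeapCreate failed, error %lu\n", GetLastError());
        return 1;
    }

    // Do the burst of small allocations from the private heap instead of
    // the process heap, so the process heap's segments do not grow.
    for (int i = 0; i < 10000; i++) {
        void *p = HeapAlloc(smallHeap, 0, 0x7eff0);
        if (p == NULL) {
            break;
        }
        // ... use the block ...
    }

    // Destroying the heap frees every block in it and, unlike HeapFree,
    // returns the reserved segment address space to the OS, so a later
    // large allocation can still find contiguous address space.
    HeapDestroy(smallHeap);

    void *big = HeapAlloc(GetProcessHeap(), 0, 0x20000000); // 512 MB
    printf("Large allocation after HeapDestroy: %s\n",
           big ? "succeeded" : "failed");
    if (big != NULL) {
        HeapFree(GetProcessHeap(), 0, big);
    }
    return 0;
}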