DJ_AM_Juicebox
asked on
heap fragmentation
Hi,
I'm running into a problem allocating memory on Windows XP. Memory is being allocated in several threads, each of which looks like:

int ThreadFunction()
{
    PauseThread();
    Allocate();
    return 0;
}
In my test case, I have 4 threads open which all execute that same function. At this point 902 MB are in use in the system; before this point, memory usage rose as high as 1.5 GB. Whichever of the 4 waiting threads I let continue first always fails in Allocate() with a general memory allocation error (std::bad_alloc). Since that thread dies and releases its other memory, the remaining three threads all get through it OK.
I'm guessing this is a fragmentation problem? Is there any way to compact the heap? Is this an OS-specific thing? Is there any way to specifically confirm that fragmentation is the problem?
Thanks
I believe heap fragmentation is independent of the operating system you use; to really get around it you would need to write your own memory allocator.
Creating your own memory pool is another solution, as Ichijo points out. That might be a good solution for you, since all allocation is performed by Allocate() and (I assume) the allocation sizes are all the same -- or at least over a known range.
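To illustrate the pool idea, here is a minimal sketch assuming fixed-size blocks (the class name and sizes are made up for this example): the pool grabs one large slab up front and recycles blocks through a free list, so repeated allocate/free cycles never touch, and so never fragment, the process heap.

```cpp
#include <cstddef>
#include <vector>

// Fixed-size block pool: one up-front slab, blocks recycled via a free list.
class FixedPool {
public:
    FixedPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize), slab_(blockSize * blockCount)
    {
        freeList_.reserve(blockCount);
        for (std::size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(&slab_[i * blockSize]);
    }

    // Returns a block, or 0 if the pool is exhausted -- no heap call either way.
    void* allocate()
    {
        if (freeList_.empty()) return 0;
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }

    // Returning a block just pushes it back onto the free list.
    void deallocate(void* p) { freeList_.push_back(static_cast<char*>(p)); }

    std::size_t freeBlocks() const { return freeList_.size(); }

private:
    std::size_t blockSize_;
    std::vector<char> slab_;      // the single large allocation
    std::vector<char*> freeList_; // blocks currently available
};
```

Since all blocks come from one contiguous slab, the layout stays stable no matter what order the threads allocate and free in.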
However, I wonder if we aren't going in the wrong direction. Someone has to be freeing memory along the way, otherwise you will of course run out of memory.
Do you know for sure that allocations are being freed?
Are you using raw pointers (char *myPtr) instead of smart pointers?
Following up on my smart pointer comment, Wikipedia has a nice article on auto_ptr:
http://en.wikipedia.org/wiki/Auto_ptr
Just make sure you aren't storing auto_ptrs in STL containers (auto_ptr's "copy" transfers ownership, which breaks the copy semantics containers rely on).
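To show why that matters, here is a minimal hand-rolled owner that imitates auto_ptr's copy behaviour (std::auto_ptr itself was later deprecated and removed from the standard, so this sketch stands in for it): "copying" silently steals the pointee and nulls the source, which is exactly what STL containers do not expect when they copy elements around internally.

```cpp
// Minimal auto_ptr-style owner: copying TRANSFERS ownership.
template <typename T>
class TransferPtr {
public:
    explicit TransferPtr(T* p = 0) : p_(p) {}
    // "Copy" steals the pointee from the source -- auto_ptr's infamous behaviour.
    TransferPtr(TransferPtr& other) : p_(other.p_) { other.p_ = 0; }
    ~TransferPtr() { delete p_; }
    T* get() const { return p_; }
private:
    T* p_;
};
```

A container that copies such an element during a resize or sort would leave a null pointer behind where it expected a valid one.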
http://en.wikipedia.org/wiki/Auto_ptr
Just make sure you aren't storing auto_ptrs into STL collections (auto_ptr carries copy semantics)
ASKER
I'm sure everything is getting freed OK. I'm allocating huge volume buffers (just really big 1D arrays), probably around 80 MB each, plus low-resolution versions of them; anyway, it gets to be about 1.5 GB at some point.
I want to avoid using any of the Win32-specific memory allocation routines.
Well, I guess I just wanted to know if there was some standard C++ HeapCompact() function, but I don't think there is.
I guess I just need to allocate everything up front to make sure these huge buffers end up contiguous instead of scattered all over the address space.
I'm not aware of any library that contains a HeapCompact().
I think you're on the right track. If you preallocate your buffers, then you have control. As you imply, that will tend to give you unfragmented memory.
I suggest allocating larger buffers first, but you've probably already thought of that.
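That suggestion can be sketched as follows (the function name and sizes are hypothetical): collect the buffer sizes, sort them largest-first, and reserve them all at startup before anything else has a chance to fragment the address space.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// Preallocate all large buffers up front, largest first, so they land in
// contiguous address space before other allocations can fragment it.
std::vector<std::vector<char> > preallocate(std::vector<std::size_t> sizes)
{
    std::sort(sizes.begin(), sizes.end(), std::greater<std::size_t>());
    std::vector<std::vector<char> > buffers;
    buffers.reserve(sizes.size());
    for (std::size_t i = 0; i < sizes.size(); ++i)
        buffers.push_back(std::vector<char>(sizes[i])); // throws bad_alloc on failure
    return buffers;
}
```

If this throws bad_alloc at startup you find out immediately, rather than after the program has been running for a while.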
Good luck!
Very cool, jkr! You are the man.
BTW, if there are private heaps allocated via 'HeapCreate()', you can still compact all of them:

DWORD dwHeaps = GetProcessHeaps(0, NULL);
HANDLE* pHeaps = new HANDLE[dwHeaps];
GetProcessHeaps(dwHeaps, pHeaps);
for (DWORD dw = 0; dw < dwHeaps; ++dw) {
    HeapCompact(pHeaps[dw], 0);
}
Oops, I forgot to add a hearty
delete [] pHeaps;
in the above...
http://msdn2.microsoft.com/en-us/library/aa366750.aspx