meadowsps

asked on

Why does HeapAlloc() allocate more memory than I ask it to under Vista, but not under XP?

Hi,

We have some DLL code, developed in-house, that has been running for years under XP with no problem. The basic problem seems to be that HeapSize() returns a different value under Vista than it did under XP, or that HeapAlloc() allocates more than the number of bytes requested; we are not sure which.

You see, in our code we often allocate zero-length blocks that are destined to be filled with data later on. We rely on HeapSize() to return the size of the allocated block, and if the block was allocated with zero length, we expect zero back, so we know the buffer has yet to be filled with any data.

On Windows XP this has worked flawlessly. On Vista, however, when we allocate a zero-length block, HeapSize() often returns a value greater than zero, typically '1'.

Here is a small snippet of sample code:

            unsigned char * dataPtr = 0;
            int32 pointerSize = 0;
           
            dataPtr = (unsigned char *) HeapAlloc(GetProcessHeap(), 0, 0);
            pointerSize = HeapSize( GetProcessHeap(), 0, dataPtr );
           
In the above series of calls, pointerSize comes back as '1', even though the allocation was requested with a size of zero.

We have tried explicitly setting the length of the pointer to zero immediately after the call to HeapAlloc() and then checking the size. In that case, the really odd thing is that the first call to HeapSize() correctly returns 0, but all subsequent calls to HeapSize() return '8'.
           
We have no explanation for the behavior, other than perhaps some strange Vista issue related to zero-length allocations. The documentation for HeapAlloc() says that it will allocate "at least" the number of bytes requested; we are not sure what that means. Does it mean you cannot rely on the allocated block being exactly the requested size? That might explain what is going on.

Thanks for your help... JK
evilrix

This may be a daft question (I am from a generic C++ background, not Windows specifically, so forgive my ignorance -- although this is posted in the generic C++ topic area, so I'll ask anyway), but I'm just wondering: wouldn't it be easier to set unallocated pointers to NULL (or 0, as you do when you initialize in your example) and just test for that, rather than paying the expense of calling HeapSize()?
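For illustration, something along these lines -- just a rough sketch, the names and the size are mine rather than anything from your code -- would let NULL alone mark "no data yet", with no heap call at all until data actually arrives:

#include <windows.h>

int main()
{
    unsigned char * dataPtr = NULL;   // NULL means "no data yet" -- no heap call needed
    SIZE_T needed = 128;              // hypothetical size of the incoming data

    // ... later, when data actually arrives ...
    if (NULL == dataPtr)
    {
        dataPtr = (unsigned char *) HeapAlloc(GetProcessHeap(), 0, needed);
    }

    if (NULL != dataPtr)
    {
        // fill and use the buffer here ...
        HeapFree(GetProcessHeap(), 0, dataPtr);
        dataPtr = NULL;               // back to the "empty" state
    }

    return 0;
}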
ASKER CERTIFIED SOLUTION
evilrix
meadowsps

ASKER

Agreed, we should have just checked for a NULL pointer and allocated when we needed it, but alas that is not how the code is structured, and we have been relying on this for some time, years in fact, with no issue whatsoever. It has only cropped up with new Vista users.
>> we have been relying on this for some time, years in fact
At the risk of sounding glib, I'd say that the fact it has worked for this long is by chance rather than by design. The MSDN is clear that it will allocate at least this much memory if it succeeds. It states, "If HeapAlloc succeeds, it allocates at least the amount of memory requested. To determine the actual size of the allocated block, use the HeapSize function". This, to me, is conclusive and suggests that you cannot rely on it being 0. I can't see anything in the docs to suggest special behavior for zero-byte allocations. In fact, you are also potentially creating an unnecessary performance overhead and running the risk of fragmenting memory. If I were you I'd consider refactoring the code not to do this anymore.
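For example, a rough sketch of what I mean (the buffer_t struct and the sizes below are mine, purely illustrative, not from your code): keep the size you requested next to the pointer, so nothing ever needs to ask the heap for it back.

#include <windows.h>

// Carry the requested size alongside the pointer so the code never has to
// recover it from HeapSize(), which is only guaranteed to report "at least"
// the requested amount.
struct buffer_t
{
    unsigned char * data;
    SIZE_T          size;   // what we asked for, not what the heap rounded it up to
};

int main()
{
    buffer_t buf = { NULL, 0 };   // "empty" is explicit: no allocation at all

    // Allocate only when there is actually something to store.
    buf.size = 64;                // hypothetical payload size
    buf.data = (unsigned char *) HeapAlloc(GetProcessHeap(), 0, buf.size);

    if (buf.data != NULL)
    {
        // ... fill buf.data with buf.size bytes ...
        HeapFree(GetProcessHeap(), 0, buf.data);
        buf.data = NULL;
        buf.size = 0;
    }

    return 0;
}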

Like I said though, I am not a Windows programming expert (I dabble)... so this is only my conclusion based upon reading the docs and by all means wait for more responses that may prove me wrong (and that's fine if they do).

-Rx.
As an interesting exercise to see that allocating 0 bytes from the heap does result in more than 0 bytes being allocated, try running the code below with the indicated line commented out. You'll see that memory usage (Private Bytes) shoots up. This is because every time you call HeapAlloc it is actually allocating memory, even though you've requested 0 bytes: although you have requested 0 bytes for your own use, the heap still needs additional memory to store the meta-data required to correctly free the block later.

When you heap-allocate memory, this meta-data is normally stored in one of two ways (although other strategies may be employed). Either it is stored intrusively, in the first few bytes just before what your pointer actually points to (e.g. starting at p[-x] bytes, where p is your pointer), or non-intrusively in an allocation list that is maintained separately. Either way, each time you call HeapAlloc it uses up heap. This results in unnecessary performance degradation (for various reasons heap allocation is always relatively slow) and, more concerning, heap fragmentation.

You do, I trust, HeapFree even the zero-size blocks before replacing them with a bigger HeapAlloc? You can, of course, use HeapReAlloc to avoid this (a sketch of that follows after the code below).

You might find this a useful read:
"Heap: Pleasures and Pains"
http://msdn.microsoft.com/en-us/library/ms810466.aspx


#include <windows.h>
#include <cassert> 
 
int main()
{
	void * p = 0;
 
	for(;;)
	{
		p = HeapAlloc(GetProcessHeap(), 0, 1);
		assert(HeapSize(GetProcessHeap(), 0, p) >= 1); // HeapSize reports at least the requested size -- possibly more
		HeapFree(GetProcessHeap(), 0, p); // Try commenting out this line and watching Private Bytes of this process
	}
}

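On the HeapReAlloc point, here is a rough sketch of growing a block in place of a free-then-alloc cycle (the sizes are arbitrary, just for illustration):

#include <windows.h>

int main()
{
    HANDLE heap = GetProcessHeap();

    // Initial allocation.
    void * p = HeapAlloc(heap, 0, 16);
    if (NULL == p)
        return 1;

    // Grow the existing block instead of a HeapFree/HeapAlloc cycle; the heap
    // may extend it in place, and the old contents are preserved either way.
    void * q = HeapReAlloc(heap, 0, p, 64);
    if (NULL != q)
        p = q;                 // on failure p is still valid and unchanged

    HeapFree(heap, 0, p);
    return 0;
}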

Thanks for all the comments. I think we are in the same situation, though, which is that we have to assume we can no longer rely on the requested buffer size being exactly what we asked for. Maybe this is one reason there are so many reported problems with Vista.

Say what you want, but I do not believe that code which has functioned since ~1996, through several different Windows operating systems, was getting by on "luck". A request to allocate a pointer of a specific size should return that size and no larger. Or, regardless of how much memory is actually allocated by the OS, at the very least the call to HeapSize() should return only the specific amount that was originally requested. IMO this is incredibly sloppy on the part of Microsoft. We have been doing essentially the same thing on the Macintosh OS, since 1993 BTW, and never a hiccup!
>> through several different Windows operating systems, was getting by on "luck"
But it is luck if you rely on behavior that isn't documented :)

>> A request to allocate a pointer of a specific size should return that size and no larger
Well, yes, if that's what the docs said, but heap allocations are generally done in rounded-up blocks, so it's unlikely you'll get exactly that.


>> at the very least the call to HeapSize() should return only the specific amount that was originally requested
Why? Surely you want to know how big it actually is; you already know what you asked for.

>> the call to HeapSize() should return only the specific amount that was originally requested.
I'd agree if this wasn't documented... but it is -- clearly, in the MSDN. The pre- and post-conditions are clearly documented; it complies with them, and that's all it must do.

Anyway, the point is that you cannot rely on this behavior, so, unfortunately, it looks like your code will need changing.
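If you want to see the rounding for yourself, a throwaway probe along these lines (exact numbers will vary between XP, Vista, and heap configurations) prints the requested size next to what HeapSize() reports:

#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE heap = GetProcessHeap();

    for (SIZE_T requested = 0; requested <= 16; ++requested)
    {
        void * p = HeapAlloc(heap, 0, requested);
        if (NULL == p)
            continue;

        // HeapSize() is only guaranteed to report at least what was requested.
        printf("requested %2u -> HeapSize %2u\n",
               (unsigned) requested, (unsigned) HeapSize(heap, 0, p));

        HeapFree(heap, 0, p);
    }

    return 0;
}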
Well, here's one more piece of the puzzle. We have submitted the sample code to one of our partners (Adobe) for review. The sample code fails consistently when run by itself, but never fails when run under the debugger. So we'll see what they turn up, and I'll keep the membership here posted with the results.

In the end, we can agree to disagree on this one. To me it is definitely not obvious what is going on here. We have worked with numerous third-party APIs and have developed plenty of our own, for storing persistent data in pointers and handles, etc. I can't tell you how many times I have encountered code from major developers that uses GetHandleSize() [Macintosh] or the Windows equivalent to determine the amount of data in a buffer. If it doesn't return the size you expect, something ain't right. End of story.
>> To me it is definitely not obvious what is going on here
Why? The symptoms you see and the wording of the MSDN are completely consistent. I'd say that is pretty conclusive.

Anyway, I hope you find a swift resolution.

Good luck.

-Rx.
I guess 15 years of rock-solid code, ambiguous documentation, and what the heck, pure common sense. I'll see what Adobe comes up with. Now I'll wait for your sarcastic reply.
>> I'll wait for your sarcastic reply.
I wasn't being sarcastic... I have no reason nor motivation to be disrespectful to you! Look at some of my recent answers, you'll see how helpful I try to be to all askers.

I am just saying that this seems pretty conclusive to me. I'm sorry if you took it differently.

Anyway, I'd be interested to know what Adobe say.
The answer is in the MSDN: http:#21714522