Avatar of xyzzer
xyzzer

asked on

Local/Global/Heap-Alloc - which to use?

I wonder, what are the differences between these function families:
LocalAlloc (+Lock/Unlock/Free...)
Global-
Heap- (+Create/Destroy...)

The PSDK says the first two are slower and offer less functionality.
Does anyone have experience that bears on that claim?
Is it useful to use the Heap- functions for anything other than allocating huge memory blocks or large numbers of blocks? Any good ways of managing the heap?

--Filip
Avatar of xyzzer
xyzzer

ASKER

Oh, and I forgot the Virtual(Alloc...) functions...
Under Win32, there is no difference between GlobalAlloc() and LocalAlloc(). They are only provided for compatibility with 16-bit Windows, which had separate global and local heaps.
Avatar of xyzzer

ASKER

I know, but they are easier to use. Moreover, standard Windows functions use them (FormatMessage and LocalAlloc). Is there any reason we should work around that and allocate the memory ourselves using the newer functions?
What about VirtualAlloc?
What I've found in my search is that it allows allocating more than 4MB (under Win9x?).
Heap functions have more functionality though...

The requirements information is imprecise (the "Heap Functions" topic of the MS PSDK states that the Virtual functions are useful on "Windows 95/98/Me", but the "VirtualAlloc" topic states it requires the Windows NT series...).
So far I have found no good reason (apart from speed) to stop using the Global-(/Local-?)Alloc series...
Can anyone prove me wrong? I hope to hear it.

--Filip
I have only found it useful to use the system heap when allocating memory that must be passed to, or through, the OS, and there you use whatever heap API the passing mechanism requires.

The main difference between LOCAL, HEAP, and GLOBAL memory is that LOCAL is used through handles and is therefore moveable; HEAP and GLOBAL are not. The reality is that LOCAL and GLOBAL are there to support older APIs. So, as you write new code use the HEAP APIs, but there is no good reason to change any existing code. One advantage of using the new HEAP API is that it lets the newer memory management system work around some of the threading issues more easily.
Avatar of xyzzer

ASKER

That's not exactly what I know. GlobalAlloc doesn't actually differ from LocalAlloc under Win32, and both return handles and are moveable unless called with the GMEM_FIXED flag.
I still don't know anything about the Virtual- functions either...
Avatar of xyzzer

ASKER

I'll continue to increase the points for this topic as it becomes more interesting, professional and accurate (and as I have more points at my disposal).
As for now - a change from 20 to 50.

To define my problem better...

I want to be more sure of my knowledge and possibly broaden it.
There are quite a few functions for dynamic memory allocation that I'm aware of:

-system functions:
LocalAlloc, GlobalAlloc, VirtualAlloc, HeapAlloc...
(series)

-standard C++:
new

-standard C (?):
malloc (with different prefixes)

The problem is: Which one to use and when?

From what I know for now there are a few things that can be considered:

1. Speed of allocation
I think the Heap functions (HeapCreate/Alloc) or your own heap implementation should be best here.

2. Granularity, or "what is the smallest size of memory that can be allocated", or "how much memory will it take if I use the function to allocate 1 byte".
I think the best solution is as in 1, but what about other options?

3. Simplicity
It's quite easy to write:
buf = new char[SIZE];
Should be the best one here.

4. Safety (that isn't the same as security, is it?)
Here is the area I'm the most ignorant in...
a) malloc:
-returns a pointer on success, NULL if unable to allocate
-probably won't collect garbage, unless it uses something like GlobalAlloc under the hood and the system function takes care of the trash.
b) new:
-the only one that lets you create objects
-throws exception to be caught with: catch (std::bad_alloc).
Any other ones? Any additional info available?
-garbage collection handled (somehow)
c) OS functions:
-can be used either with GMEM_FIXED to return a pointer, or as a moveable block with a handle returned. That looks quite nice for keeping things in memory for a longer time, available when needed.
-returns NULL when an error occurs - GetLastError available for more information - quite a nice thing to do.
I wonder... if new and malloc use OS functions - maybe GetLastError is available too?
It looks to be the best choice in commercial applications.
-does it handle any garbage collection?
What if I GlobalAlloc and then lose the pointer or handle?
Will I get a "nice" (information provided) error if I, say, do:
p = GlobalAlloc(...[GMEM_FIXED/0]...);
p++;
GlobalFree(p); //?


... Finally:
Which of them to use when we have specific needs?

Some things are obvious:
-use new for objects
-use the Heap functions when many allocations are needed and allocation is generally complicated

But:
-which of the (Global/...)Alloc functions to use when a larger block is needed for a longer time (printer enumeration, a DIB, etc.)?

-which method to use for simple buffers (like 250 characters long) needed in typical functions?

-when to use local/static/dynamic buffers in functions?
I guess we can rule out(?) the use of local allocations if the function can be called recursively, as they are placed on the stack (aren't they? always?).
Dynamic allocation is more flexible and lets us (e.g.) input a string of an unknown length.
Static and local buffers should be used only when we are sure of the expected maximum size (so should dynamic ones, but there we can set the size at run time).

Any input will be highly appreciated.
It's not homework. It's not a task to be paid for (like a lecture). It's just pure curiosity and a desire to gain some deeper knowledge of the subject.

--Filip
>> I know, but they are easier to use. Moreover, standard Windows
>> function use them (FormatMessage and LocalAlloc)
Those and a few other functions and techniques like them are the only reasons they exist anymore.  For the most part, your program should allocate memory using the standard technique of the language, like "new" in C++, for example.

>>  Any reasons we should work around that and
>> allocate the memory ourselves using newer functions?
There are no (reasonable) work-arounds for the few cases where you need these functions.

>> What I've found in my search is that it allows
>> allocating more than 4MB (under Win9x?).
For _huge_ allocations they might make sense.  But those are very rare and usually very questionable!

>> Heap functions have more functionality though..
What heap functions are you referring to?  These ARE heap functions.  There are also the HeapCreate() etc. functions, but again there is _usually_ no reason to use them.
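For reference, here is a minimal sketch (my own illustration, not from any post above) of what the HeapCreate()/HeapAlloc() family looks like in use; the sizes are arbitrary:

     #include <windows.h>

     int main()
     {
          // Create a private, growable heap: 64 KB initial size, no maximum.
          HANDLE hHeap = HeapCreate(0, 64 * 1024, 0);
          if (hHeap == NULL)
               return 1;

          // Allocate a zero-initialized block from that private heap.
          char* buf = (char*)HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 256);
          if (buf != NULL)
          {
               lstrcpyA(buf, "allocated from a private heap");
               HeapFree(hHeap, 0, buf);
          }

          // Destroying the heap releases every block allocated from it at once.
          HeapDestroy(hHeap);
          return 0;
     }

One handy property of a private heap is exactly that last call: if a subsystem does all of its allocations from its own heap, a single HeapDestroy() cleans everything up.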

>> There are no good reasons (apart from speed), I've found, that would make
>> me stop using Global(/Local?)Alloc series...Anyone can prove me wrong?
>> I hope to hear it.
Yes there are good reasons.  

First of all, most languages have built-in allocation functions that are much easier to use and provide some degree of type safety that is not provided by the Windows API functions.  Thus using your language's features will tend to produce code that is safer and easier to write and maintain.  Some languages might not have these sorts of features, like, say, assembly language.  In that case, direct use of these functions is probably a good choice.

Second, these functions are designed for allocating heaps with a small number of large blocks.  Most programs that perform dynamic memory allocation need to allocate a large number of small blocks.  Using these functions for large numbers of allocations will probably be very slow; the heap will not be well constructed for this.  Plus, using it for small allocations is likely to waste a lot of space.  I.e. these functions weren't designed to handle the typical direct requirements of a program.  However, your programming language probably has memory allocation features that are designed for a typical program's needs.  Note that this is not a flaw of the Windows API functions -- they are just designed for a different purpose.  For example, many programming languages will use them to allocate one or more large blocks of memory and then create their own small, fast heaps inside those allocated blocks.

Third, many languages have debugging features built into their dynamic allocation procedures.  For example, they may check that deleted blocks never get reused, that you never access memory before or after an allocated block, and that all allocated blocks eventually get deleted.  You don't have any of these checks with the API.

>> The main difference between LOCAL and HEAP and GLOBAL memory
>> is that LOCAL is used through handles and therefore is moveable,
>> HEAP and GLOBAL are not
Actually none of them move anymore (Win32).  And the handles and the pointers are (I think) identical, i.e. you can use the handle as a pointer.  (I think.)

>> Still don't know anything about Virtual-s too...
The virtual allocation functions are extremely useful in extremely rare circumstances.  These are not for "common" allocation purposes.  They might be used to implement huge sparse arrays or similar kinds of structures, or for extremely large structures where you need more direct control over memory paging.
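To make the "control over paging" point concrete, here is a rough sketch (my own, with arbitrary sizes) of the reserve-then-commit pattern the Virtual functions allow, which is how a huge sparse structure can be backed by physical storage only where it is actually touched:

     #include <windows.h>

     int main()
     {
          // Reserve 64 MB of address space; no physical storage is committed yet.
          const SIZE_T reserveSize = 64 * 1024 * 1024;
          BYTE* base = (BYTE*)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
          if (base == NULL)
               return 1;

          // Commit one 4 KB page in the middle of the reservation, only when needed.
          BYTE* page = (BYTE*)VirtualAlloc(base + 10 * 1024 * 1024, 4096,
                                           MEM_COMMIT, PAGE_READWRITE);
          if (page != NULL)
               page[0] = 42;   // only this page consumes physical storage / pagefile space

          // Release the whole reservation, committed pages included.
          VirtualFree(base, 0, MEM_RELEASE);
          return 0;
     }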
Avatar of xyzzer

ASKER

PS.
Is there a tool that would show a memory allocation map the way e.g. Norton Speed Disk shows a drive fragmentation map? Is it possible to make one at all? Would hooking into GlobalAlloc and the others make it possible? Or would just checking the handles do?
Sorry for any possible language mistakes.
>> 1. Speed of allocation
>> I think the heap functions (HeapCreate/Alloc) or own heap implementation should be the best here.
Yes.  GlobalAlloc/LocalAlloc were designed for heaps with a small number of large blocks.  A good general-purpose heap will be better for a typical program with lots of allocations of varying sizes.


>>  2. Granularity or "what is the smallest size of memory that can be
>> allocated" or "how much memory will it take if I use the function to
>> allocate 1 byte.  I think the best solution is as in 1, but what
>> about other options?
A good heap will try to consider CPU access boundaries and memory page boundaries in its allocations to provide better speed, which often requires a larger granularity.  Also, memory blocks can be created and deleted in any order, which tends to produce a fragmented heap.  If you have a 1-byte granularity, the heap may become fragmented with a few 1-byte blocks that can never be reused and that then cause further fragmentation.  By allocating larger blocks, you get better reuse of deleted blocks and less fragmentation.


>>          3. Simplicity
>>          It's quite easy to write:
>>          buf = new char[SIZE];
Yes!


>>          4. Safety (that isn't the same as security isn't it?)
>>          Here is the area I'm the most ignorant in...
>>          a) malloc:
>>          -returns a pointer on success, NULL if unable to allocate
>>          -probably won't collect garbage, unless it uses something like GlobalAlloc somehow and the system function will take care of the trash.
>>          b) new:
>>          -the only one that lets you create objects
>>          -throws exception to be caught with: catch (std::bad_alloc).
Yes.  The biggest one is type safety: the fact that C++ (and other languages) will ensure that the allocated data is the proper size, properly aligned, and properly initialized.
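A small illustration of that point (my own example, not from the thread):

     #include <cstdlib>

     struct Point
     {
          Point() : x(0), y(0) {}   // runs only when new is used
          int x, y;
     };

     int main()
     {
          Point* a = new Point;                           // correct size, aligned, constructed
          Point* b = (Point*)std::malloc(sizeof(Point));  // raw bytes, no constructor call,
                                                          // and the cast defeats type checking
          delete a;
          std::free(b);
          return 0;
     }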

>>          Any other ones? Any additional info available?
>>          -garbage collection handled (somehow)
Usually neither C nor C++ performs garbage collection.

>>          c) OS functions:
>>          -can be used with either GMEM_FIXED to return a pointer or as moveable block - handle
Except they aren't actually movable anymore.  That's hardly a problem with virtual memory.  It mattered when a heap was 64 KB, not when it is a gigabyte.

>>          I wonder... if new and malloc us OS functions - maybe GetLastError is available too?
Typically new and malloc don't use LocalAlloc() or GlobalAlloc() -- that would be a bad choice on the implementer's part.  They could use HeapAlloc(), but neither VC nor BCB does.


>> -which of (Global/...)Alloc when in need of a larger block for a
>> longer time (Printers' enumeration, DIB, etc...)
They are interchangeable now, but originally all that stuff had to come from the global heap, I think.  Whatever the docs say, do that.  For example, clipboard data needs to come from the global heap.  So it's safest to continue to use the global heap, even if the local one seems to work fine.
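Since the clipboard is the classic case, here is a sketch of it (my own illustration, with CopyTextToClipboard being just a name I made up; error handling kept minimal). The block must be allocated with GMEM_MOVEABLE, and once SetClipboardData() succeeds the system owns it:

     #include <windows.h>

     // Put an ANSI string on the clipboard.  Clipboard data must be a moveable
     // global-memory block, and ownership passes to the system on success.
     BOOL CopyTextToClipboard(HWND hwndOwner, const char* text)
     {
          SIZE_T size = lstrlenA(text) + 1;

          HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, size);
          if (hMem == NULL)
               return FALSE;

          char* p = (char*)GlobalLock(hMem);
          if (p == NULL)
          {
               GlobalFree(hMem);
               return FALSE;
          }
          lstrcpyA(p, text);
          GlobalUnlock(hMem);

          if (!OpenClipboard(hwndOwner))
          {
               GlobalFree(hMem);
               return FALSE;
          }
          EmptyClipboard();
          SetClipboardData(CF_TEXT, hMem);   // the system owns hMem now; do not free it
          CloseClipboard();
          return TRUE;
     }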


>> -which method to use for simple buffers (like 250 characters long)
>> needed in typical functions?
Your language's own, if any; HeapAlloc() if not.

>> -when to use local/static/dynamic buffers in functions?
depends.

>> I guess we can negate(?) the usage of local allocations if the function
>> can be called recursively, as it is placed on the stack (isn't it? always?).
Can't say for sure for all languages/implementations, but certainly almost always.

>>          Static and local should be used only when we are sure if the expected
>> maximum size. (so is dynamic, but we can set it in run-time).
Size isn't the big issue.  It's object lifetime.

>> Is there such a tool that would show a memory allocation map as e.g.
I haven't seen anything like that in 100 years -- or so it seems.

Remember that in Win32 heaps don't have to be packed efficiently like they used to be.  Virtual memory allows us to be very "sloppy" in the heap packing and it costs us nothing.  This isn't really sloppy programming anymore.  A tightly packed heap is slower, but it used to be necessary to save memory.  Now saving memory is not much of a concern.


>>  Is it possible to make at all?
You can use the VC heap functions to walk the VC heap.  There used to be SDK functions for walking the global and local heaps too, but I think those are long gone -- that was only under Win16, I believe.
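For what it's worth, the CRT heap walk mentioned above looks roughly like this (a sketch assuming the MSVC runtime; DumpCrtHeap is just my name for it). The raw output is about as close to an "allocation map" as you get without writing the display yourself:

     #include <malloc.h>
     #include <stdio.h>

     // Walk the VC runtime heap and print every block -- essentially the raw
     // data a fragmentation-map display would be drawn from.
     void DumpCrtHeap(void)
     {
          _HEAPINFO info;
          info._pentry = NULL;            // start at the beginning of the heap

          int status;
          while ((status = _heapwalk(&info)) == _HEAPOK)
          {
               printf("%p  %6u bytes  %s\n",
                      (void*)info._pentry,
                      (unsigned)info._size,
                      info._useflag == _USEDENTRY ? "in use" : "free");
          }

          if (status != _HEAPEND)
               printf("heap walk stopped early, status %d\n", status);
     }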

>> Is some hooking into GlobalAlloc and others possible to enable it?
>> Or maybe just checking the handles would do?
What is it that you really want to know?  Most things you could find out are really not of any importance.
Avatar of xyzzer

ASKER

PPS:

1. What about delete? Do I have to call it before returning from the function (in C++)?
The problem is: if I have a function that, say, prints a file, it does a few allocations for opening files, enumerating printers, keeping an EMF and so on. On failure of any of these operations I should free the previously allocated memory. What's the best way to do that?
Something like this springs to my mind. Anything better?:
ret = FAIL;
if ((h1 = Openthis()) == SUCCESS) {
  if ((h2 = Openthat()) == SUCCESS) {
    if ((h3 = Allocsmth()) == SUCCESS) {
      // do something
      ret = SUCCESS;
      Freesmth(h3);
    }
    closethat(h2);
  }
  closethis(h1);
}
return ret;

...The thing is, I often realize how to do some things only after I ask about them - sometimes before I get the answer.


2. Another thing: how much memory can I expect each of the allocation methods to consume for managing an allocated block? E.g. if I allocate 1000 one-byte blocks, how much memory would that take? There are certainly some things to be remembered, like handles, bookkeeping info for freeing and so on. It probably depends on the system, but I'd like to know more or less...


Avatar of xyzzer

ASKER

PPPS
Should I put every "new" operator in try {} catch blocks?
Can I assume that new char[20] will always work?
>> The problem is - If I have a function that says - print a file.
>> It does a few allocations for opening files, enumerating
>> printers, keeping an EMF and so on. On failure of any
>> of the above operations I should free the previously
>> allocated memory. What's the best way to do that?
In C++ and other OO languages, smart pointers!  They are idiot-proof -- which is good since we are all idiots -- and exception-proof, which is good because we all use code written by idiots.

>> Something like this springs to my mind. Anything better?:
That can get very complex, and it's not even exception-proof yet!

>> Should I put every "new" operator in try {} catch blocks?
>> Can I assume that new char[20] will always work?
Well, first of all, in VC 6 and earlier new never throws an exception.  It is supposed to, but it doesn't... so there may not be much sense in doing so.  However, if you use other C++ compilers, or hope to one day use your code on a version of VC that does throw exceptions, you should plan for it.

But that is not the right way to use exceptions.  This probably is not the best medium to learn exception usage.  For that I would suggest books on the topic, like Exceptional C++, Effective C++, and The C++ Programming Language.

But in short, the idea behind exceptions is to catch them at a limited number of "strategic" points.  Don't catch them after every possible case.  There would be 10,000s of such points in a typical program and thus your code would be full of exception catches, and the exception mechanism's entire purpose is to REDUCE the size and complexity of handling exceptions, not increase it.
> What about delete? Do I have to call it before returning the function (in C++)?

Use smart pointers (std::auto_ptr or http://www.boost.org) and forget about deletes.

In general, all objects should manage their owned resources.  Wrap everything in classes that free allocated resources in the destructor, then sit back and admire the beauty of your code (not to mention its robustness).
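As a deliberately minimal sketch of that advice (my own; GlobalBlock and PrintFile are made-up names echoing the print-a-file scenario from the question), here is a wrapper around a fixed global-memory block. std::auto_ptr does the same job for single objects allocated with new, but not for API handles like this:

     #include <windows.h>

     // Minimal RAII wrapper: the destructor frees the block on every exit path
     // (early return, exception, normal fall-through).
     class GlobalBlock
     {
     public:
          explicit GlobalBlock(SIZE_T size) : m_p(GlobalAlloc(GPTR, size)) {}
          ~GlobalBlock() { if (m_p) GlobalFree(m_p); }

          void* get() const { return m_p; }
          bool  ok()  const { return m_p != NULL; }

     private:
          HGLOBAL m_p;

          // Copying is disabled so the block can never be freed twice.
          GlobalBlock(const GlobalBlock&);
          GlobalBlock& operator=(const GlobalBlock&);
     };

     void PrintFile()
     {
          GlobalBlock emf(64 * 1024);   // freed automatically however we leave
          if (!emf.ok())
               return;
          // ... use emf.get() ...
     }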

> Should I put every "new" operator in try {} catch blocks?

What the <beep> for?

Question #1: Does your compiler conform to the standard with regard to new failures?
For example: VC6 will not throw an exception if new fails.

Question #2: What do you want your program to do if it runs out of memory?  In most cases, your only choice is exiting the program.  Thus you write something like:

     int main()
     {
          try
          {
               Init();
               DoStuff();
               Finalize();
          }

          catch (std::exception& err)
          {
               // The following func will not allocate memory
               DisplayErrorMessage();
               return 1;
          }

          return 0;
     }
Avatar of DanRollins
>>And the handles and the pointers are (I think) identical, i.e. you can use the handle as a pointer.  (I think.)

Only right when allocated with the GPTR or LPTR flags.

      HGLOBAL hGlb=  ::GlobalAlloc(GHND, 1000 ); // 0x860014
      HLOCAL  hLoc=  ::LocalAlloc(LHND, 1000 );  // 0x86001c

      HGLOBAL pGlb=  ::GlobalAlloc(GPTR, 1000 ); // 0x148018
      HLOCAL  pLoc=  ::LocalAlloc(LPTR, 1000 );  // 0x148418

      char* pg=  (char*)hGlb;  // coerce handle to ptr
      char* pl=  (char*)hLoc;  // NOTE: don't do this!

      // strcpy( pg, "copied to global handle"); // NO!

      char* plp= (char*)LocalLock( hLoc );  // 0x147818
      char* pgp= (char*)GlobalLock( hGlb ); // 0x147418
=-=-=-=-=-=-=-
Thus, GlobalAlloc and LocalAlloc seem to be identical in that they allocate right next to each other.  Each handle is a pointer to 8 bytes of data.  The first four bytes contain the address that is later returned if you do a Global[Local]Lock() on the handle.  I don't know what the other four bytes are.

The strcpy wrote data over top of the handle management area and would certainly screw things up!

Note that the physical storage is at a different range of addresses, 0x147xxx, compared to 0x860014 for the handle-management area.  And GlobalLock() returns addresses in the same range as GlobalAlloc( GPTR,... ).

=-=-=-=-=-=-=-=-=-
Some other tests:
      HGLOBAL hGlb=  ::GlobalAlloc(GHND, 1000 );
      char* pgp= (char*)GlobalLock( hGlb );
      HLOCAL hRet= ::LocalFree( hGlb ); // no error!

LocalFree( <global handle>) did not return an error, lending more proof to the surmise that these are actually exactly the same.
=-=-=-=-=-=-=-=-=-
      char* p = new char[1000];              
      char* p2= (char*)malloc( 1234 );

It is high entertainment to trace these to completion.  Both new and malloc end up in MALLOC.C which calls HeapAlloc().  Since I never called delete[] (or free) after these, I got this message in the debug window:

    Detected memory leaks!
    {101} normal block at 0x002F5C70, 1234 bytes long.
     Data: <                > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
    D:\MyProj\zEETests2\D12\D12Dlg.cpp(135) :
    {100} normal block at 0x002F5848, 1000 bytes long.
     Data: <                > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
    Object dump complete.

which emphasizes nietod's point:  Use language facilities (new and delete) rather than API calls because that provides safety and feedback on programming errors.

=-=-=-=-=-=-=-=-=-
Other miscellaneous comments on parts of this rambling question:

* Just to emphasize:
new/delete calls the constructor/destructor for objects and also correctly handles arrays of objects.

* The speed factor is probably irrelevant.
new/malloc will be slightly faster than GlobalAlloc() and will probably provide tighter granularity.  But that really takes a back seat to the other advantages.

* Use try/catch?  
No.  If you allocate 20 bytes or 2000 or 2 MB, or 20 MB, or 200 MB you will get them -- failure is almost unheard of (I just allocated 200 MB on my 196 MB PC -- it took about 10 seconds and there was a lot of disk activity).  The system will simply allocate more virtual space and swap other programs out to disk.  As my doctor said about yearly colonoscopy before age 55:  It is contra-indicated.

* VirtualAlloc[Ex] was designed for use by operating systems, not applications.

* All memory allocated by your program is freed at the end of the run (this used to be a big problem in older versions of Windows).  You should always free it yourself anyway, out of habit -- because you might one day write a System Service or another program designed to keep running for weeks or months at a time.  In that case, even a small memory leak will eventually cause serious problems.

* slack/lost space:
The debug version of new/malloc allocates guard bytes before and after the chunk and then at the end of the run tests that they did not get overwritten.  I don't know the internal granularity of the Win32 API calls, but notice that the second 1000 byte allocation was only 1024 bytes from the previous one.  The simple answer is:  Use new/malloc -- trust in the sensibilities of the very clever programmers who wrote that code.  They will align data for fastest access on the target CPU and will minimize lost RAM space at the same time.  What's more, they will probably take into account things you simply can't, such as L1/L2 cache access.

-- Dan
Avatar of xyzzer

ASKER

These posts look very interesting. I'll try to comment on them later, when I have a bit more free time, as there is a lot to test and think about. I'll also keep the question open for some more time, as more people might want to drop some of their knowledge here. Maybe I'll even try to summarize all this in one post/HTML file - I mean, if you'd agree to that. It would be good as a reference for future use - for me and for everybody.

--Filip
Avatar of xyzzer

ASKER

I'm still working on the notes...
One more thing... I've just found out that if I create a buffer (or an array) with new char[len], then I should delete it with "delete[]" instead of "delete". Is the second option illegal in such cases? Or is it compiler-dependent? I've never seen a compiler indicate an error in such cases; only recently, when I first ran CodeGuard under BCBuilder, did it show a memory leak. What is going on then? Does delete work with arrays, or does it simply do nothing if called with an array?

--Filip
>> then I should delete it with "delete[]" instead of "delete".
>> Is the second option illegal in such cases?
Anything allocated with new must be deleted with delete, and anything allocated with new [] must be deleted with delete [].  That is the rule according to the standard, and if you mix these up the effects are unpredictable.  On some compilers, allocating with new [] and deleting with delete instead of delete [] will work fine in every way.  On some compilers only the first item in the array will be destroyed (have its destructor called), although the memory for all the items will be released even though the remaining items are never destroyed.  On some compilers the delete operation will fail, possibly with a crash, or by corrupting the heap and causing a crash later.

>> I've never seen the compiler to indicate errors in such cases
Some do.
If the array is a sequence of chars or other POD, then delete[] is identical to delete.  The main reason to use delete[] is because it will call the destructors on each of the objects in the array.  C++ purists will tell you to always use it for any type of array, and it's not a bad habit to get into.  You won't get compiler errors in any case, but you will have memory leaks if you use delete to delete an array of objects which have allocated memory internally.

-- Dan
The difference between delete and delete [] is that delete "knows" that it is deleting only a single item and therefore has to call only one destructor, and the destructor is called for the object that the pointer points to directly.  But delete [] deletes an array of an "unknown" number of objects, and the destructor must be called for each object in the array.  This means that delete [] needs to determine the number of objects in the array, and this information is not passed to it the way it is passed to new [].

Typically this is done as follows.  When new [] allocates an array, it allocates a memory block that is a few bytes (say 4) bigger than required for the array.  In the first 4 (in this example) bytes, it stores the count of the number of items in the array.  Then in the remaining part of the buffer it constructs the objects of the array.  Then when new [] returns a pointer, it doesn't return a pointer to the start of the allocated memory, which would point to the number of items in the array.  Instead it returns a pointer past that location, one that points to the first item in the array.

delete [] then is passed a pointer to the 1st item in the array, but it can look at the 4 bytes before that array and figure out how many objects to destroy.  Also when it needs to free the memory block, it will free it using the address of the actual start of the memory block, not the address of the 1st item in the array.

So suppose new [] allocates an array in this way, and then you delete it with operator delete.  Operator delete will (usually) assume A) that there is only one item to destroy and B) that the pointer passed to it points to the start of a dynamically allocated memory block, even though the block actually starts a few bytes earlier.  The effect is that the items after the first will not be destroyed, and the memory used might not be freed, or the heap may become corrupted.

On some compilers, new works like new [] allocating an array of one object, i.e. it creates a memory block that has a count indicating an array of one object.  Then delete and delete [] both look for array counts when they delete.  On these compilers, interchanging new and delete [], or new [] and delete, will be "fine".  Personally I think this is potentially a disservice to the programmer!
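Purely as an illustration of that count-prefix scheme, here is a hand-rolled version of it (my own demonstration code, not what any particular compiler actually emits, and it glosses over alignment):

     #include <cstdlib>
     #include <new>

     template <class T>
     T* CreateArray(std::size_t n)
     {
          // Room for the count plus the objects (alignment details ignored here).
          void* raw = std::malloc(sizeof(std::size_t) + n * sizeof(T));
          if (raw == NULL)
               throw std::bad_alloc();
          *(std::size_t*)raw = n;                    // store the element count

          T* first = (T*)((std::size_t*)raw + 1);    // pointer just past the count
          for (std::size_t i = 0; i < n; ++i)
               new (first + i) T();                  // construct each element
          return first;                              // the caller only ever sees this
     }

     template <class T>
     void DestroyArray(T* first)
     {
          std::size_t* raw = (std::size_t*)first - 1;  // step back to the count
          std::size_t n = *raw;
          for (std::size_t i = n; i > 0; --i)
               first[i - 1].~T();                      // destroy in reverse order
          std::free(raw);                              // free the real block start
     }

Passing the CreateArray() result back to DestroyArray() works; passing it to anything that expects the true start of the block is exactly the mismatch described above.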
>> If the array is a sequence of chars or other POD, then delete[] is identical to delete.
Not necessarily.  If operator new [] allocates a count and operator new does not, then you cannot interchange operator delete and operator delete [], because operator delete expects a pointer to the start of the memory block to be freed and operator delete [] expects a pointer that is a few bytes past the start.

>>  C++ purists will tell you to always use it
>> for any type of array, and it's not a bad habit to get into.
It's not a matter of being "pure".  The effects of switching delete and delete [], even on POD, are undefined.  That's like saying C++ purists will tell you not to dereference a randomly initialized pointer.
I only checked what VC 6 does...
The following is *not* POD.  It has a destructor:
     struct MyStruct {
          ~MyStruct() {m_n=0x1234;}
          int m_n;
          char m_sz[20];
     };
     ...
     MyStruct* p5= new MyStruct[100];
     delete[] p5;
     delete p5;

The two delete calls are, respectively:
     call        @ILT+430(MyStruct::`vector deleting destructor') (004011b3)
and
     call        @ILT+550(MyStruct::`scalar deleting destructor') (0040122b)
 
If I use delete p5, I get a user breakpoint and this in the Debug Output window:
     HEAP[console01.exe]: Invalid Address specified to RtlValidateHeap( 2f0000, 2f2a44 )

Now let's try this:
     struct MyStruct {
          int m_n;
          char m_sz[20];
     };
     ...
     MyStruct* p5= new MyStruct[100];
     delete[] p5;
     delete p5;

In this case, MyStruct *IS* POD.  Both delete and delete[] work without error and the generated code is exactly the same for each:

168:      delete[] p5;
mov         edx,dword ptr [ebp-24h]
mov         dword ptr [ebp-11Ch],edx
mov         eax,dword ptr [ebp-11Ch]
push        eax
call        operator delete (0040b870)
add         esp,4
169:      delete p5;
mov         ecx,dword ptr [ebp-24h]
mov         dword ptr [ebp-120h],ecx
mov         edx,dword ptr [ebp-120h]
push        edx
call        operator delete (0040b870)
add         esp,4

Both get the address of p5 from [ebp-24h] and push it before calling operator delete (note: It specifically does *not* call 'vector deleting destructor')

Both of these store the address of p5 into a local variable ([ebp-11Ch] and [ebp-120h]) and I don't know why they do that, but that step is skipped in the Release build, so it must be related to the debug heap checking.

-- Dan
>> I only checked what VC 6 does...
In your previous post you didn't say that; it could lead to confusion, as some people might think the statements were universally true.

It's interesting that VC has two systems at work here, depending on whether or not there is a destructor.  I.e. when you allocate an array of items with destructors, it stores the array size, but if you allocate an array of POD, it doesn't.  Personally, I would have chosen just one mechanism.
Avatar of xyzzer

ASKER

I'm still too busy to organize the notes... so...
How can I split the points?

What is a POD?

--Filip
POD is "plain old data".  Basically the built-in types (int, double, bool) and structures and unions that do not have constructors, an overloaded operator =, virtual functions, virtual base classes, or any members or base classes that are not also POD.
Oops, I meant to say "classes, structures, and unions" but missed "classes".  I also missed the fact that they can't have destructors, and the fact that an array of POD objects is also a POD type.

I think that covers all the possibilities.
Here is what the C++ standard has to say about POD types:

********************************************************
3.9.10 Arithmetic types (3.9.1), enumeration types, pointer types, and pointer to
member types (3.9.2), and cv-qualified versions of these types (3.9.3) are
collectively called scalar types. Scalar types, POD-struct types, POD-union types
(clause 9), arrays of such types and cv-qualified versions of these types (3.9.3)
are collectively called POD types.

9.4  A structure is a class defined with the class-key struct; its members and base
classes (clause 10) are public by default (clause 11). A union is a class defined
with the class-key union; its members are public by default and it holds only one
data member at a time (9.5). [Note: aggregates of class type are described in
8.5.1. ] A POD-struct is an aggregate class that has no non-static data members of
type pointer to member, non-POD-struct, non-POD-union (or array of such types) or
reference, and has no user-defined copy assignment operator and no user-defined
destructor. Similarly, a POD-union is an aggregate union that has no non-static data
members of type pointer to member, non-POD-struct, non-POD-union (or array of such
types) or reference, and has no user-defined copy assignment operator and no
user-defined destructor. A POD class is a class that is either a POD-struct
or a POD-union.
********************************************************

Interestingly, I was wrong about the restriction on constructors.  POD types can have user-defined constructors.  I guess this is because the POD/non-POD determination is based more on data layout than on behavior.  I'm a little surprised, though.
Avatar of xyzzer

ASKER

What about sharing points? I'd like to split the reward.
ASKER CERTIFIED SOLUTION
Avatar of nietod
nietod

Avatar of xyzzer

ASKER

I'm quite poor in available question points and I wanted to reward at least Dan (apart from you), who gave me quite important feedback, but I guess he can wait a little longer and I'm sure I'll meet him again, so let's close this topic for now...

--Filip
nitod and xyzzer,
For future reference, there is an accepted procedure for splitting points.  It is described here:
      http://www.apollois.com/EE/Help/Closing_Questions.htm#Split
-- Dan
(sorry for the typo nietod)
>This should only be done in extreme cases as the customer service people are quite busy.
 
The CS Moderators are more than happy to perform this service... it is one of the simpler aspects of their job.  They can 'force accept' to skip some steps.  Aside: I've seen the early beta of the new EE site code and it provides a means to do point-splitting without Moderator intervention.
-- Dan
That text I posted is a "form letter" from long ago.  Good to know that these things have finally improved.