tonitop

asked on

How do I force the program to free memory?

I'm using HP-UX 11 (but we have to port the software to Sun and other platforms,
so the solution must be portable too).

I have understood that when a process allocates memory it of course gets it,
but never really frees it (even when you call delete/free in your program).
That memory is never released to other processes but stays reserved for
that single process. So when the process calls delete, the memory is not freed
for other processes, and when the same process calls new/malloc() again, the
memory is taken from the space already reserved for that process.

Is there a way to force the program to release the memory or somehow
configure OS so that it releases the memory?
jlevie

You are correct in the way memory allocation works. When a process executes a malloc() to gain address space the system will use brk(), if necessary, to allocate a chunk of memory to the calling process. That allocated memory will remain a part of the address space for the life of the task and I know of no way to free memory allocated by brk() other than to exit the task.
ASKER CERTIFIED SOLUTION
chris_calabrese

[accepted solution available only to Experts Exchange members]
Everything about free() has been said by the first two comments.

And it's even worse than that, because most OSes don't have useful garbage collection, so malloc slices up the memory available to a process on most calls. Example:
     the process allocates 100 kbytes and 10 kbytes
     the process frees those 100 kbytes and 10 kbytes (the address space still belongs to the process)
     the process then tries to allocate 101 kbytes; it cannot find a contiguous block of that size, so a new 101-kbyte block is allocated to the process, and so on ...
Imagine a scenario where small blocks are requested often (as in most C++-like programming languages) and the process runs for a long time: eventually the process has allocated a lot of memory that it isn't using but that can't be reused, because it has been sliced up.

AFAIK, the only way to avoid this is to use your own memory allocation, like chris_calabrese described.
What malloc are you using, ahoffmann? I know some older implementations worked like that, but most modern ones will collapse the 100-kbyte block and the 10-kbyte block into a single 110-kbyte block.
SunOS, Solaris <= 2.3, HP-UX <= 9.x, AIX < 4.0, IRIX < 6.0 (probably also OSF and SINIX).
I didn't check newer versions, so my information may be outdated now.
Hmm, I think Phong Vo made that change to the Bell Labs malloc around 1992, and that it made it into sVr4.2 and also into the later BSDs.

Given that all these systems are based on earlier code from USL and/or BSD, that's actually not too surprising. This should be fixed in UnixWare, Linux, and the BSDs.

I'd be surprised if it isn't also in newer releases of the OS's you mention, though.
Aha, I see I was not talking miracles ;-) and also that chris_calabrese is much deeper into the UNIX sources than I am.
I don't have UNIX's family tree in my brain, but I vaguely remember that SunOS (and so Solaris) is a very early branch. And I think I never saw these symptoms on BSD systems (FreeBSD, NetBSD, Linux).
Nice to get new hints on this (ancient) topic. Thanks.
SunOS 4.x is a pretty ancient branch, but most of Solaris comes from sVr4.1 MP.  Don't know if they took the compilers stuff, though.