p_sreekanth

asked on

Memory allocation problems in C

Why do segmentation faults occur inside allocator routines like chunk_alloc and chunk_free?
p_sreekanth (ASKER)

The operating system is Linux, with 64 MB of RAM and a 2 GB hard disk.
When memory is allocated for your program, different objects (code, stack, heap data, globals) are placed in different parts of the virtual address space.  Each of these segments of virtual address space is mapped, page by page, onto the machine's physical memory.

Each chunk of memory has a limited size.  The machine's hardware detects an attempt to read or write outside the program's allocated memory and generates a segmentation fault.

From the programmer's perspective, you can view this as one of two possibilities ... either you tried to access past the end of an allocated piece of memory, or you accessed memory through an uninitialized (or improperly initialized) pointer.

Note that in a virtual memory operating system, it really doesn't matter how much physical memory you have, only which chunks of address space your program is entitled to.  Physical memory affects only performance.
ASKER CERTIFIED SOLUTION
amorousdonjuan
In a virtual memory operating system, like Windows 95/98/NT, all programs APPEAR to have a lot of memory.  In the case of Windows, there is 4 GB of address space, of which 2 GB is reserved for the operating system, PER PROGRAM.  Other programs DON'T MATTER.

What does matter is that your program only asks for permission to use certain parts of that 4 GB address space.  If it attempts to use parts that are not allocated, a segmentation fault occurs.

And the actual, PHYSICAL memory you have doesn't matter either, at least from a segmentation fault point of view.  It's purely a performance issue.

In fact, in many cases, it's not bad programming practice to allocate a LOT of memory.  The paging algorithms tend to be more efficient than programmatic workarounds (though not always), so the memory stinginess that is so important in DOS can often work against you in a virtual memory environment.

And there are real, production systems for which 4 GB is a problem.  That's why we have 64-bit machines.
Oh, another common misconception: a page fault is not actually an error.  In fact, if there were no page faults, no programs would run!
When I say a page fault is an error, I mean that the processor detects that a particular page is missing and takes corrective action to rectify the problem, i.e., it branches to the corresponding fault handler (interrupt service routine).