
Virtual Memory Fundamentals

My purpose is to describe the basic concepts of virtual memory as implemented in a modern Windows-based operating system. I will also describe the problems inherent in older systems and how virtual memory solves them.

The dark ages - before virtual memory

All computers of the past, present, and at least the near future have a physical address space. This contains the RAM, ROM (read only memory), memory mapped hardware devices, etc., that a computer needs to operate. In older computer systems, the system software and applications accessed this address space directly. This was simple to implement and manage, an important consideration on early systems with very limited resources, but it had some serious problems. These problems became acute on systems that supported multitasking - running two or more applications at the same time. By the late 1980s this was no longer a luxury but a necessity.

Problems with direct access

1. Limited address space: However much memory the system contained, it had to be shared by the system and all resident applications. An application asking for more (contiguous) memory than was available usually caused a crash, with loss of all unsaved data. Adding memory helped, but only partially; with several large applications open, even 4GB might not be enough.

2. Memory fragmentation: As memory is allocated and released, free space gets broken into smaller and smaller pieces, until no single piece is large enough to satisfy a request. This can occur even when the total amount available is more than adequate, and it compounds problem #1. For both of these problems the only cure was a system reboot - a serious matter for a busy server.

3. Poor stability: It was very easy for an application to accidentally overwrite memory belonging to another application, or to the system itself, because the OS performed no checks against allowed memory areas. A faulty application could easily bring down the entire system. An application might intermittently fail or crash for no apparent reason, the real culprit being another application that showed no obvious problems. Such issues were very difficult to troubleshoot and sometimes remained unresolved. Even worse, an application could appear to work properly while, through no fault of its own, processing and writing corrupted data to disk. The opportunities for malicious software were virtually unlimited.

4. Lack of security: An application could read, or even modify, sensitive data being processed by another application, even one belonging to another user. For many business users this was unacceptable.

5. Inefficient use of physical memory: System RAM had to be adequate for the largest workload, even if that workload rarely occurred; at other times this memory sat unused and wasted. All of the code for a large application had to be loaded into memory all of the time, even if only a fraction of its features were ever used. Large portions of high-performance and (at that time) expensive RAM were often tied up storing static code and data for long periods, even if they were accessed rarely or not at all.

And that is by no means a complete list. Many workarounds had been developed, but they were far from satisfactory, often brought new problems of their own, and were complex and costly to implement. The "simple" memory model was becoming increasingly complex.

By the late 1980s these problems were becoming serious. Advanced users wanted to run multiple large applications with good performance, security, and long-term stability. System designers were well aware that the existing simple memory model had run its course: such systems could barely cope with current requirements and had no hope of meeting the more complex challenges of the future. A new and more advanced method of accessing memory and other system resources was necessary. Fortunately, system designers didn't have to look very far for a solution. Large computer systems had long offered all of these capabilities and more, but until recently the cost of implementing them would have been prohibitive for smaller systems. By the early 1990s this was no longer the case.

In 1993 Microsoft introduced Windows NT 3.1. There were no prior versions of NT; the version number was chosen to match the then-current Windows 3.1. It solved, or greatly reduced, all of the problems outlined above and ran on hardware that was available at the time. Its design was based on, and implemented many of the features of, VMS, one of the most successful and highly regarded operating systems of the 1980s. One of these features, and possibly the most important one, was virtual memory.

Virtual Memory

This system did away with the simple concept of direct access to memory. Each process, even those belonging to the system, was given an artificial, or virtual, environment, independent of the computer's hardware, in which to operate.

On a 32-bit system, each process is given a virtual address space of 4GB, approximately 4 billion bytes. The lower 2GB is private to each process, while the upper 2GB is shared among all processes and is accessible only to system-level components. The shared upper region is necessary for technical reasons that do not concern us here. It must be understood that the lower 2GB is private and not shared with other processes. Also note that this address space is virtual; it is in no way related to, or limited by, the amount of RAM in the system.
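A process can see the boundaries of its own virtual address space simply by asking the OS. Here is a minimal sketch in C using the documented Win32 call GetSystemInfo; on a default 32-bit system the range it reports corresponds to the lower 2GB described above.

/* Print the range of virtual addresses available to this process. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);    /* fills in, among other things, the address limits */

    printf("Lowest usable address : %p\n", si.lpMinimumApplicationAddress);
    printf("Highest usable address: %p\n", si.lpMaximumApplicationAddress);
    printf("Page size             : %lu bytes\n", (unsigned long)si.dwPageSize);
    return 0;
}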

An application has no direct access to physical memory or to any hardware devices. It doesn't know how much RAM is in the system, where it is located, or how much is available. The only way an application can learn about these things is to ask the OS for the information, and even then the answer will be incomplete and not always accurate. All of these details are managed by the system, and few applications have any need for this information.
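To make "ask the OS for the information" concrete, here is a minimal sketch in C using the documented Win32 call GlobalMemoryStatusEx. Note that the figures it returns are only a snapshot and may already be out of date by the time they are printed.

/* Ask the OS how much physical and virtual memory exists and is available. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);                  /* must be set before the call */

    if (!GlobalMemoryStatusEx(&ms)) return 1;

    printf("Physical RAM installed : %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("Physical RAM available : %llu MB\n", ms.ullAvailPhys / (1024 * 1024));
    printf("Virtual address space  : %llu MB\n", ms.ullTotalVirtual / (1024 * 1024));
    printf("Memory in use          : %lu%%\n", (unsigned long)ms.dwMemoryLoad);
    return 0;
}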

Virtual memory is a complex system that combines the CPU, RAM, and hard disk into a whole that exceeds the sum of its parts. It is an integral part of the operating system; it is always in use and can never be disabled.

Advantages of Virtual Memory

1. Consistent application interface: Applications have a consistent interface to system memory that is independent of the computer's hardware. An application can run on an older system with little RAM, or on a modern system with a full complement, and need not even be aware of the difference. The application does not need to adapt to the system, about which it in fact knows very little. This simplifies application development.

2. Large address space: Each application has a large 2GB address space for its own private use, even if the system has much less RAM. A developer can create an application without being bothered by the constraints of physical memory. Depending on how it is used, an application may be able to use all of this space, perform well, and not unduly affect other applications (the code sketch after this list illustrates the point).

3. Lower address space fragmentation: Since the address space is large and free from the influence of other applications, fragmentation is greatly reduced. In the unlikely event that it becomes a problem, a simple application restart will correct it; there is no need for a system restart, with all of its potential problems - a great advantage for a busy server.

4. Greatly improved stability and security: Since an application cannot even see the address space of another, there is no possibility of either reading or modifying its data. Similarly, system address space cannot be seen or modified by application code. If an application failure occurs, it alone will be affected; the system and other applications will not be harmed. The sharing of memory is supported, but it is closely controlled by the system. Accessing another process's (private) memory requires high-level OS privileges and is usually only needed for debugging.

5. Physical memory is used more efficiently: Virtual memory relieves RAM of the burden of storing rarely accessed code and data and allows it to be used for more important purposes. RAM is managed by the system and dynamically assigned to system components and applications according to their current needs. RAM that is not needed for other purposes is assigned to the system cache. At all times the system attempts to use RAM to the fullest possible extent.

6. More versatile: The amount of RAM does not impose a hard limit on the size or number of applications that can run at the same time, only on how well they will perform. A small system can run a large and complex application without errors, but performance will be poor.
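To make points 2 and 5 concrete, here is a minimal sketch in C using the documented Win32 calls VirtualAlloc and VirtualFree. It reserves a large block of the process's private address space, which costs essentially no RAM, and then commits only a single page, which is all the OS ever has to back with physical memory. The 256MB figure is purely illustrative.

/* Reserve a large region of virtual address space, commit only one page. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T reserveSize = 256 * 1024 * 1024;    /* 256 MB of address space */

    /* Reserve address space only - no RAM or pagefile space is consumed yet. */
    char *region = (char *)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (region == NULL) return 1;

    /* Commit a single page; only this page can ever need physical memory. */
    if (VirtualAlloc(region, 4096, MEM_COMMIT, PAGE_READWRITE) == NULL) return 1;
    region[0] = 42;                            /* first touch brings the page into RAM */

    printf("Reserved 256 MB of virtual address space at %p, committed one page\n",
           (void *)region);

    VirtualFree(region, 0, MEM_RELEASE);       /* release the entire reservation */
    return 0;
}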

Conclusion

Almost all modern general-purpose operating systems are based on virtual memory. All versions of Windows from NT 3.1 onward use the principles outlined above, although they were compromised somewhat in Windows 95, 98, and ME. In addition to Windows, all but the smallest distributions of Linux and all versions of Mac OS X use these principles. This is a mature technology that has been under development for a long time and is now in a highly refined state. No other scheme has been devised that performs as well in a general-purpose OS. Virtual memory is without doubt one of the most important advances ever made in computer science.

I have deliberately omitted any mention of how this apparent magic might be implemented. These details often obscure the fundamentals to the point that the basic concept is lost. Many advanced computer users may have read, or even written, articles about "virtual memory" but have no understanding of the basic concepts. A later article may reveal how the "magic" is done.

The source for much of this article was "Inside Windows 2000", third edition (Solomon and Russinovich, Microsoft Press). It is available at Amazon and elsewhere for a very reasonable price. There is no more accurate or authoritative source of information about Windows internals.
Author: LMiller7