History of Linux virtualization

Jan Janßen
In this article I try to give you a short, comprehensive overview of what virtualization means, where it comes from, why you should care about it for your business, and finally why I prefer KVM. This article does not include any step-by-step guide on how to implement virtualization; instead it focuses on the theoretical background of this kind of technology and the amazing opportunities it offers.

The first hardware virtualization technology was implemented in the IBM M44/44X, an experimental computer from the 1960s. It was based on the IBM 7044 and was able to simulate multiple virtual IBM 7044 machines using a combination of hardware and software virtualization. With the release of VM, IBM's virtualization operating system, in 1972, IBM was the first to provide a virtualization solution able to consolidate hundreds of individual server systems on a single mainframe, reducing the required resources to a minimum.

It took around ten more years until the GNU movement started in 1983, and roughly another decade until the Linux kernel was first released in 1991. But even with a working operating system, it still took several years and various approaches for Linux to become the hypervisor platform it is today. There have been heated discussions about the level at which virtualization should be implemented; to give you a better understanding of these discussions, I first want to talk about the advantages of virtualization.

The main reason people switch to virtualization is cost, yet most companies end up paying the same amount: instead of reducing the number of servers, they simply run far more virtual servers. In most cases the reason for this is the enormous increase in reliability that virtualization offers. With the right virtualization solution in place, there can be one server for each task. This in turn allows you to customize each system to fit its specific task perfectly. Many administrators have been searching for such a solution for years, as this is one of the main issues with Linux. To explain this in more detail, I want to give you a short example.

Suppose your company wants to work with software A and software B. Software A requires software C and D, while software B requires software D and E. When the IT consulting firm sets up the system, everything works fine. Then an upgrade for software C is released, but you cannot upgrade. The required upgrades for software D and software A follow, but you still cannot upgrade because software B is not compatible with the latest version of software D. You have to wait until all your software supports a common version; obviously, the more specialized your software is, the slower its overall development and the longer you have to wait.

The issue most administrators struggled with was that it is not possible to install the same software twice on one computer, or at least not in two different versions. With virtualization you finally can, and with current virtualization technologies in particular, which start from a general kernel running on specific hardware, very little customization is required. This gives us the ability to upgrade the system easily, even without rebooting, and is, next to cluster computing, one of the reasons why websites like Facebook or Google scale so well under such high user load.

Nevertheless, many people are still confused about which Linux virtualization solution to use. To keep things simple, we first reduce the huge number of virtualization solutions to the two full virtualization providers, Xen and KVM. Only these two currently take full advantage of Intel VT and AMD-V, meaning only these two implement virtualization in hardware rather than simulating some kind of specific hardware, which provides a huge performance increase.
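As a quick practical aside, you can check whether your processor offers the Intel VT or AMD-V extensions mentioned above by looking at the CPU flags the kernel exposes. A minimal sketch:

```shell
# Check whether the CPU advertises hardware virtualization support.
# 'vmx' is the Intel VT-x flag, 'svm' is the AMD-V flag; the command
# prints the number of CPU cores exposing either flag. A count of 0
# means no hardware support (or it has been disabled in the firmware).
grep -Ewc 'vmx|svm' /proc/cpuinfo
```

A nonzero count means hardware-assisted virtualization is available to hypervisors like KVM or Xen.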

Of these two, Xen is not a Linux solution at all. Sure, it may be shipped in Linux repositories, but it is not part of the kernel; only certain Xen-specific drivers are. This is because Xen is based on Nemesis[1] rather than the Linux kernel. This is a fact many people do not know or simply do not talk about, but knowing it helps to understand the limits of Xen. While KVM can be enabled, disabled, and upgraded while the system is still running, Xen always needs a full reboot. Also, when using Xen, the hardware support is limited to what Nemesis supports.
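To illustrate the point about KVM being part of the running kernel: it ships as loadable modules, so it can be enabled, verified, and removed without a reboot. A sketch, assuming an Intel machine (on AMD hardware the module is kvm_amd instead of kvm_intel; package and module names can vary slightly by distribution):

```shell
# Load the KVM kernel modules on a running system; no reboot needed.
# Use kvm_amd instead of kvm_intel on AMD hardware.
sudo modprobe kvm_intel

# Confirm the modules are loaded and the /dev/kvm device exists.
lsmod | grep kvm
ls -l /dev/kvm

# Unload again, for example before installing an upgraded module.
sudo modprobe -r kvm_intel kvm
```

This load/unload cycle is exactly what a monolithic hypervisor like Xen cannot offer, since replacing the hypervisor there means rebooting into a new one.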

To summarize, Xen definitely took a huge step in moving Linux virtualization from the software level toward kernel integration, but for a Linux developer it should be no more than an example. Nevertheless, it is great software and I personally use it as well. In the end, if you are searching for a highly reliable virtualization solution that provides you with all the advantages Linux has to offer, KVM is definitely the way to go.

This is my opinion on the discussion, and I hope I have at least added some value for those who did not know about the theoretical background and the differences between Xen and KVM before.
Jan Janßen
