The Old and New Virtualization Models are on their way to a Server near You

Virtualization has been used for decades on midrange and mainframe systems, but now, thanks to faster and more powerful processors, it is moving from IT-managed server rooms to the x86-powered server in the corner.

Virtualization enables a single computer to run multiple operating systems simultaneously. This, in turn, enables companies to use a single server more efficiently and cost-effectively while reducing their total number of servers. Today, not only can lower-end servers do this, but software designers are also experimenting with lightweight ‘containers,’ which virtualize application spaces instead of an entire operating system.

Al Gillen, VP of system software at IDC, said that virtualization is “a hot topic and is on the minds of customers who want to know how this affects them and their products.”

Specifically, according to Enterprise Management Associates senior analyst Andi Mann, “Almost 75 percent of surveyed enterprises have already deployed virtualization in one form or another, and the virtualization market is increasing by approximately 26 percent on average. Less than 4 percent of surveyed enterprises have no virtualization, and do not plan to implement it.”

Several factors are driving virtualization’s growth. The strongest of these, everyone agrees, is server consolidation. By putting the work of several computers onto a single server in virtualized instances, companies can save significantly.

Another important part of the puzzle is disaster recovery. “I would say about 70% of people who are deploying virtualization for x86 servers are also doing disaster recovery for the first time for many of their servers. That is a very, very big deal,” said Tom Bittman, a Gartner VP.

“With pure server consolidation users get a much more efficient use of computing resources, and we see a three to six-month ROI,” said Dan Chu, senior director of developer and ISV products at leading x86-architecture vendor VMware.

As a result, enterprises are willing to spend on virtualization. Gillen expects virtualization spending to reach $15 billion by 2009.

There are other advantages to virtualization as well. Mann noted that virtualization makes a single computing resource, such as an operating system, an application, or a storage device, appear as many logical resources. Thus, while server virtualization will remain the most popular kind of virtualization, Mann expects storage and file-system virtualization to experience the highest growth rates over the next five years.

However, virtualization faces several challenges to widespread adoption. A major problem is that IT vendors are still unsure as to how they should bill for virtualization.

The $64,000 question, said Gillen, is: “How do you license for that?” While IBM has long had a metering model from its mainframe business, there’s no agreement on how a business should bill for x86-based virtualization services. For many vendors, he added, “meter-based pricing is something that is complex and is frightening.”

Working examples of virtual machines go back to at least the mid-1960s and the joint work of IBM and MIT on the M44/44X project. Soon thereafter, IBM created and began selling a series of virtual machine-enabled operating systems, the most well-known of which is probably VM/370. These were used to maximize usage of IBM’s mainframe systems.

Even in these early days, the most common way of realizing virtualization was already in place: the virtual machine monitor (VMM), the first hypervisor.

A hypervisor presents the guest operating system with a virtualized abstraction of the underlying computer hardware. The guest then runs on this virtual machine, with the hypervisor taking care of transferring its requests to the actual CPU, network, and storage.

From a user’s viewpoint, the virtualized operating system should look and feel just as if it were running natively on the hardware. VMware’s products and Microsoft Virtual Server are well-known examples of virtualization software that uses a hypervisor.

Almost all computer and operating system vendors are adopting virtualization, and even the CPU manufacturers are getting into the act. Intel’s VT (Virtualization Technology) and AMD’s SVM (Secure Virtual Machine) CPU extensions provide virtualization software with APIs (application programming interfaces) that enable memory management for virtual machines to be handled by the processor itself.

With this traditional technique, each virtual machine has its own operating system instance and its applications. Of course, you might not need all of that, and that’s where the newer approaches to virtualization come in.

In the container concept, the developer virtualizes not the entire operating system but an application space in which a particular program runs. This saves processor and memory because containers share the host operating system’s kernel and system calls rather than duplicating them.
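The core trick can be sketched with nothing more than standard POSIX calls. The toy below, written in Python, is not Virtuozzo or Solaris containers; the directory path, resource limits, and binary name are invented for illustration, and it would need root on a Unix-like host. It only shows the idea: one application gets its own file-system view and resource caps while every system call still lands in the shared host kernel.

```python
import os
import resource

# Hypothetical, pre-populated directory tree that serves as the
# application's private root file system.
GUEST_ROOT = "/srv/guest1"

pid = os.fork()
if pid == 0:
    # Child: confine the process to its own file-system view...
    os.chroot(GUEST_ROOT)
    os.chdir("/")
    # ...and cap what it may take from the shared machine.
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024,) * 2)  # 256 MB address space
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))                 # 60 CPU-seconds
    # The confined program's system calls are still serviced by the one
    # host kernel; no second operating system is loaded.
    os.execv("/bin/myapp", ["/bin/myapp"])  # assumed binary inside GUEST_ROOT
else:
    os.waitpid(pid, 0)
```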

For example, as implemented in Sun Microsystems’ Solaris 10, containers share a common kernel but run as separate entities. The Fair Share Scheduler in the host Solaris kernel then apportions system resources from both the local server and other servers connected to it within a networked grid of Solaris boxes.

SWsoft’s Virtuozzo for Linux 3.0 uses different terminology (containers are VPSes, or virtual private servers, in Virtuozzo jargon), but it’s the same idea.

One drawback of this approach is that the applications running in the container must also run on the host operating system, according to Jason Brooks, senior eWEEK technical analyst. Thus, you couldn’t run a Solaris application on a Linux system or vice versa.

Another popular, recent approach with a similar aim of reducing virtualization’s demands on the hardware is paravirtualization. This is the path that XenSource’s Xen has taken.

According to Simon Crosby, XenSource’s CTO, “Paravirtualization involves making the virtual server operating system aware of the fact that it is being virtualized, and enabling the two to collaborate to achieve optimal performance. On Linux, BSD, or Solaris x86 the paravirtualized operating system sees Xen as an idealized hardware layer–a new form of hardware. For Windows and other guests that are unaware of Xen, the hardware virtualization of Intel VT (virtualization technology), combined with paravirtualizing device drivers in Windows are used.”

This is done, according to Crosby, by a paravirtualizing hypervisor. Because paravirtualization provides a virtual machine for the guest operating system, users can run non-native applications on a given operating system. For example, Crosby said, “Xen-enabled Linux guests will run with full benefits of paravirtualization on the upcoming Windows Hypervisor, code named Viridian.”
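The performance argument is easier to see with a toy model. The sketch below is purely conceptual: the class and method names are invented and are not Xen’s real interface. It only contrasts the two paths Crosby describes: an unaware guest whose privileged instructions must be trapped and emulated, versus a paravirtualized guest that calls the hypervisor explicitly.

```python
# Conceptual model only; names are invented for illustration and do not
# correspond to Xen's actual hypercall interface.
class ToyHypervisor:
    def trap_and_emulate(self, op: str) -> None:
        # Full virtualization: the unmodified guest executed a privileged
        # instruction, the CPU trapped into the hypervisor, and the
        # hypervisor must decode and emulate what the guest intended.
        print(f"trap -> decode -> emulate: {op}")

    def hypercall(self, op: str) -> None:
        # Paravirtualization: the guest knows it is virtualized and makes
        # an explicit, cheaper request to the hypervisor.
        print(f"hypercall: {op}")

hv = ToyHypervisor()

# Guest unaware of the hypervisor (e.g. Windows relying on Intel VT):
hv.trap_and_emulate("update page table")

# Paravirtualized, Xen-aware guest (e.g. a modified Linux, BSD, or Solaris x86):
hv.hypercall("update page table")
```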

How users implement these lightweight-virtualization approaches varies with the system. Solaris containers are set up with command-line-based scripts; once in place, they can be managed by the GUI-based Solaris Container Manager. SWsoft, however, gives users a choice of a fat-client graphical interface, a Web-based administration portal, or the command line.

With paravirtualization, the host operating system itself already has the hypervisor baked in. So SLES (SUSE Linux Enterprise Server) 10 comes with Xen ready to go. To use it, a system administrator simply uses the Xen configuration module to install new, virtualized operating systems. Once in place, the virtual system is used just as if it were the main operating system.
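As a rough illustration of what “ready to go” means in practice, a Xen 3-era guest definition was a small text file written in Python syntax and read by the Xen tools. The sketch below is hedged: the kernel paths, volume names, and sizes are placeholders, not values from SUSE’s documentation.

```python
# Hedged sketch of a Xen 3.x paravirtualized guest definition; every
# path and device name here is a placeholder.
name    = "vm01"                          # how the guest shows up in the Xen tools
memory  = 256                             # MB of RAM granted to the guest
kernel  = "/boot/vmlinuz-xen"             # Xen-aware (paravirtualized) guest kernel
ramdisk = "/boot/initrd-xen"
disk    = ["phy:/dev/vg0/vm01,xvda,w"]    # host volume exposed as the guest's disk
vif     = [""]                            # one default virtual network interface
root    = "/dev/xvda1"                    # root device passed to the guest kernel
```

Once defined, such a guest is started and stopped with Xen’s xm command-line tool.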

The business model for all these ways to approach virtualization remains the same as ever: provide services and software to businesses to maximize the value they get from their server hardware.

One of the ways vendors are doing this is by providing virtualization deployment and management tools. At the same time, many see a push toward tighter integration between systems management and virtualization management.

Ron Rose, CIO at Priceline.com, believes that we’re on our way to a data center operating system. “Inevitably you will see unifying work from a variety of companies. This is already present in provisioning tools from BladeLogic and Opsware. Virtual Iron, XenSource and VMware are trying to address the distribution of a unit of work.”

IBM, for example, is already supporting Xen on SLES with its Virtualization Engine. This, in turn, will enable users of IBM’s Enterprise Workload Manager to administer virtualized systems.

The problem of virtualization management, though, is far from solved. “The biggest unanswered question,” said Gillen, “is how these multiple operating systems will be managed in a virtualized environment, as well as whether these tools will be integrated into the operating system or offered by third-party vendors.”

SWsoft is also making container management tools for other virtualization solutions. Its Virtuozzo “management tools will include support for other virtualization solutions, including VMware virtual servers. This gives data center managers unprecedented control of virtualized resources and enables them to use various virtualization technologies without being tied to a single vendor’s management tools,” said Serguei Beloussov, SWsoft’s CEO.

The main reason to use containers or paravirtualization is that either approach requires fewer computing resources than full-fledged virtual machines.

For example, operating system components such as the kernel and an application’s libraries need to be loaded into RAM only once. Because not every virtual application must carry its own complete copy of its operating environment, this saves both memory and CPU.
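A back-of-the-envelope comparison makes the point. The footprint figures in this sketch are assumptions chosen only to show the shape of the savings, not measurements of any product.

```python
GUESTS = 10
OS_FOOTPRINT_MB = 512    # assumed kernel plus base libraries per full virtual machine
APP_FOOTPRINT_MB = 128   # assumed working set of each application

# Full virtual machines: every guest carries its own operating environment.
full_vms = GUESTS * (OS_FOOTPRINT_MB + APP_FOOTPRINT_MB)

# Containers: one shared operating environment, per-application cost on top.
containers = OS_FOOTPRINT_MB + GUESTS * APP_FOOTPRINT_MB

print(f"{GUESTS} full VMs:   {full_vms} MB")    # 6400 MB under these assumptions
print(f"{GUESTS} containers: {containers} MB")  # 1792 MB under these assumptions
```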

According to Jeff Jaffe, Novell’s CTO, “By allowing portions of the guest operating system and application to be modified to exploit the new hardware, paravirtualization allows much higher performance than previous approaches.”

As for containers, SWsoft claims that its virtualization technology can be used to provide zero downtime. This is done by a migration feature that captures the state of an existing virtual server and its contents and moves it to a new physical server. “With these new capabilities we are bringing the advantages of efficient virtualization technology to an even wider range of organizations by making it even easier to configure and administer highly available virtual servers,” said Beloussov.

Disadvantages

If virtualization is so wonderful, why aren’t we all using it already? Part of the problem is that there are so many different approaches.

Bob Shimp, the vice president of Oracle’s technology business unit, explained that Oracle believes “in one simple universal way to integrate a variety of virtualization solutions, and that is the way that Andrew Morton [the maintainer of the stable Linux kernel] wants to go.”

Indeed, Shimp added, “Oracle is losing its patience over this issue and we are going to be pushing harder and harder on everybody to come to the table with a realistic solution.”

Oracle isn’t the only one that sees it this way. Greg Kroah-Hartman, a Linux kernel maintainer, said, “Xen and VMware both supply huge patch-sets and are both trying to do the same thing, but their technologies don’t work with one another, and we are telling them that we do not want to take one over the other; we want them to talk and work it out.”

Many companies are now working to solve this problem. Perhaps the most significant effort in this direction is the work of VMware, XenSource, IBM and Red Hat collaborating on a Linux kernel paravirtualization API. This API could then be used by any Linux-based hypervisor. However, at this time, there is little to show for these efforts.

Another problem with the newer methods, said VMware president Diane Greene, is that by tying an operating system to a particular virtualization technology, as is the case with Xen and Linux and the container approaches, “it could be at a very high cost in terms of providing freedom of choice.”

Greene continued, “When a customer goes to deploy a service or an application, they’re focused on the application. They’re not focused on the operating system. So when they go to deploy that application, they just want to get the optimal software stack to run that application.”

Another problem has nothing to do with the technology per se, but everything to do with its pricing. The usual software business model assumes that a single operating system will run on a computer with one or more processors. Now CPUs not only have multiple cores, but thanks to virtualization, they also run multiple operating systems, and with containers, multiple instances of applications.

For instance, Brent Callinicos, Microsoft’s corporate VP for worldwide licensing, announced in 2005 that Microsoft licenses its virtualized operating systems and applications “by running instance, which is to say the number of images, installations and/or copies of the original software stored on a local or storage network.”

Only a few months later, though, Microsoft had somewhat backed off on this plan. The company will let users of its forthcoming Enterprise Edition of Windows Server 2003 R2 run up to four instances of the operating system at no additional charge.

IBM, with easily the most experience of any company in virtualization, uses IBM Tivoli Usage and Accounting Manager to charge customers by their software usage. Rich Lechner, IBM’s VP of virtualization, explained that this addresses the pricing problem: “A key inhibitor to widespread adoption of virtualization is the ability to track usage and accurately allocate costs of a shared infrastructure to internal departments, or in the case of IT outsourcing providers, to charge their clients for the amount of IT they are consuming.”

VMware’s Greene, however, sees virtualization licensing “working toward a per-virtual machine model that is slightly different from a per-server model.”

The truth is that there is still nothing like common ground on virtualization billing.

Besides its use in servers, virtualization software, both traditional and paravirtualized, is also finding its way into desktop PCs. For example, Jason Perlow, a systems architect working on Unisys’ open-source solutions, uses VMware Player “to prototype systems in a jiffy, try new software, and run Windows programs on my Linux system.”

The question today isn’t which companies are selling virtualization solutions, but rather which ones aren’t.

Red Hat and Novell, the two top Linux sellers, for example, have begun building Xen paravirtualization software into their operating systems. Specifically, Xen is already in SLES 10, and it will also be in the forthcoming RHEL (Red Hat Enterprise Linux) 5, due in December 2006.

SWsoft, besides making Virtuozzo container technology for Linux, also has a Windows product. In addition, SWsoft recently released its core container code to the open-source OpenVZ project. OpenVZ, in turn, has been adopted by the Debian Linux community.

Microprocessor companies like AMD and Intel, hardware companies like IBM and Dell, operating system companies like Sun and Microsoft: everyone is deploying virtualization solutions as fast as possible.

Because of this, Gillen thinks that the real conflict will be “about the managing and provisioning and tracking of all this layered software through its full life cycle, and that’s where the biggest competitive and financial battle is likely to come from going forward.”

Gartner’s Bittman also sees another use of virtualization gaining in popularity: “Virtual relocation tools, which enable you to move a virtual machine clear off a physical server to another location, will be big. VMware’s VMotion is the first of these, but others are quickly following.”

Another new use for paravirtualization à la Xen, according to Jaffe, is the creation of NetWare as a virtualized operating system under SLES. Novell expects that this approach, which can keep older applications and operating systems running on modern systems, will find an audience.

Finally, Crosby points out that “Xen is in its earliest days of product shipment, and the standard model for getting software into production in the enterprise will apply. Users will kick the tires, there will be tests, more tests and more tests. When users are ready to trust Xen with mission critical workloads, it will go into the data center.”

So, is virtualization on its way to almost all servers and many desktops? The answer is clearly yes. But it’s not going to happen tomorrow. First, companies need to accept the new virtualization models, management tools must mature, and a commonly agreed-upon pricing system must come into effect. Then, and only then, will virtualization become as much a part of the IT infrastructure as operating systems are today.