Virtual Machine Monitors: Current Technology and Future Trends
Mendel Rosenblum, VMware Inc.
Tal Garfinkel, Stanford University

Developed more than 30 years ago to address mainframe computing problems, virtual machine monitors have resurfaced on commodity platforms, offering novel solutions to challenges in security, reliability, and administration.

0018-9162/05/$20.00 © 2005 IEEE. Published by the IEEE Computer Society. Computer, May 2005.

At the end of the 1960s, the virtual machine monitor (VMM) came into being as a software-abstraction layer that partitions a hardware platform into one or more virtual machines.1 Each of these virtual machines was sufficiently similar to the underlying physical machine to run existing software unmodified. At the time, general-purpose computing was the domain of large, expensive mainframe hardware, and users found that VMMs provided a compelling way to multiplex such a scarce resource among multiple applications. Thus, for a brief period, this technology flourished both in industry and in academic research.

The 1980s and 1990s, however, brought modern multitasking operating systems and a simultaneous drop in hardware cost, which eroded the value of VMMs. As mainframes gave way to minicomputers and then PCs, VMMs disappeared to the extent that computer architectures no longer provided the necessary hardware to implement them efficiently. By the late 1980s, neither academics nor industry practitioners viewed VMMs as much more than a historical curiosity.

Fast forwarding to 2005, VMMs are again a hot topic in academia and industry: Venture capital firms are competing to fund startup companies touting their virtual-machine-based technologies. Intel, AMD, Sun Microsystems, and IBM are developing virtualization strategies that target markets with revenues in the billions and growing. In research labs and universities, researchers are developing approaches based on virtual machines to solve mobility, security, and manageability problems. What happened between the VMM's essential retirement and its current resurgence?

In the 1990s, Stanford University researchers began to look at the potential of virtual machines to overcome difficulties that hardware and operating system limitations imposed: This time the problems stemmed from massively parallel processing (MPP) machines that were difficult to program and could not run existing operating systems. With virtual machines, researchers found they could make these unwieldy architectures look sufficiently similar to existing platforms to leverage the current operating systems. From this project came the people and ideas that underpinned VMware Inc. (www.vmware.com), the original supplier of VMMs for commodity computing hardware. The implications of having a VMM for commodity platforms intrigued both researchers and entrepreneurs.

WHY THE REVIVAL?

Ironically, the capabilities of modern operating systems and the drop in hardware cost—the very
combination that had obviated the use of VMMs during the 1980s—began to cause problems that researchers thought VMMs might solve. Less expensive hardware had led to a proliferation of machines, but these machines were often underused and incurred significant space and management overhead. And the increased functionality that had made operating systems more capable had also made them fragile and vulnerable.

To reduce the effects of system crashes and break-ins, system administrators again resorted to a computing model with one application running per machine. This in turn increased hardware requirements, imposing significant cost and management overhead. Moving applications that once ran on many physical machines into virtual machines and consolidating those virtual machines onto just a few physical platforms increased use efficiency and reduced space and management costs. Thus, the VMM's ability to serve as a means of multiplexing hardware—this time in the name of server consolidation and utility computing—again led it to prominence.

Moving forward, a VMM will be less a vehicle for multitasking, as it was originally, and more a solution for security and reliability. In many ways VMMs give operating systems developers another opportunity to develop functionality no longer practical in today's complex and ossified operating systems, where innovation moves at a geologic pace. Functions like migration and security that have proved difficult to achieve in modern operating systems seem much better suited to implementation at the VMM layer. In this context, VMMs provide a backward-capability path for deploying innovative operating system solutions, while providing the ability to safely pull along the existing software base.

DECOUPLING HARDWARE AND SOFTWARE

[Figure 1. Classic VMM. The VMM is a thin software layer that exports a virtual machine abstraction. The abstraction looks enough like the hardware that any software written for that hardware will run in the virtual machine.]

As Figure 1 shows, the VMM decouples the software from the hardware by forming a level of indirection between the software running in the virtual machine (layer above the VMM) and the hardware. This level of indirection lets the VMM exert tremendous control over how guest operating systems (GuestOSs)—operating systems running inside a virtual machine—use hardware resources.

A VMM provides a uniform view of underlying hardware, making machines from different vendors with different I/O subsystems look the same, which means that virtual machines can run on any available computer. Thus, instead of worrying about individual machines with tightly coupled hardware and software dependencies, administrators can view hardware simply as a pool of resources that can run arbitrary services on demand.

Because the VMM also offers complete encapsulation of a virtual machine's software state, the VMM layer can map and remap virtual machines to available hardware resources at will and even migrate virtual machines across machines. Load balancing among a collection of machines thus becomes trivial, and there is a robust model for dealing with hardware failures or for scaling systems. When a computer fails and must go offline or when a new machine comes online, the VMM layer can simply remap virtual machines accordingly. Virtual machines are also easy to replicate, which lets administrators bring new services online as needed.

Encapsulation also means that administrators can suspend virtual machines and resume them at arbitrary times or checkpoint them and roll them back to a previous execution state. With this general-purpose undo capability, systems can easily recover from crashes or configuration errors. Encapsulation also supports a very general mobility model, since users can copy a suspended virtual machine over a network or store and transport it on removable media.

The VMM can also provide total mediation of all interactions between the virtual machine and underlying hardware, thus allowing strong isolation between virtual machines and supporting the multiplexing of many virtual machines on a single hardware platform. The VMM can then consolidate a collection of virtual machines with low resources onto a single computer, thereby lowering hardware costs and space requirements.

Strong isolation is also valuable for reliability and security. Applications that previously ran together on one machine can now separate into different virtual machines. If one application crashes the operating system because of a bug, the other applications are isolated from this fault and can continue running undisturbed. Further, if attackers compromise a single application, the attack is contained to just the compromised virtual machine. Thus, VMMs are a tool for restructuring systems to enhance robustness and security—without imposing the space or management overhead that would be required if applications executed on separate physical machines.

VMM IMPLEMENTATION ISSUES

The VMM must be able to export a hardware interface to the software in a virtual machine that is roughly equivalent to raw hardware and simultaneously maintain control of the machine and retain the ability to interpose on hardware access. Various techniques can help achieve this, each offering different design tradeoffs.

When evaluating these tradeoffs, the central design goals for VMMs are compatibility, performance, and simplicity. Compatibility is clearly important, since the VMM's chief benefit is its ability to run legacy software. The goal of performance, a measure of virtualization overhead, is to run the virtual machine at the same speed as the software would run on the real machine. Simplicity is particularly important because a VMM failure is likely to cause all the virtual machines running on the computer to fail. In particular, providing secure isolation requires that the VMM be free of bugs that attackers could use to subvert the system.

CPU virtualization

A CPU architecture is virtualizable if it supports the basic VMM technique of direct execution—executing the virtual machine on the real machine, while letting the VMM retain ultimate control of the CPU.

Implementing basic direct execution requires running the virtual machine's privileged (operating-system kernel) and unprivileged code in the CPU's unprivileged mode, while the VMM runs in privileged mode. Thus, when the virtual machine attempts to perform a privileged operation, the CPU traps into the VMM, which emulates the privileged operation on the virtual machine state that the VMM manages.

The VMM handling of an instruction that disables interrupts provides a good example. Letting a guest operating system disable interrupts would not be safe since the VMM could not regain control of the CPU. Instead, the VMM would trap the operation to disable interrupts and then record that interrupts were disabled for that virtual machine. The VMM would then postpone delivering subsequent interrupts to the virtual machine until it reenables interrupts.

Consequently, the key to providing virtualizable architecture is to provide trap semantics that let a VMM safely, transparently, and directly use the CPU to execute the virtual machine. With these semantics, the VMM can use direct execution to create the illusion of a normal physical machine for the software running inside the virtual machine.

Challenges. Unfortunately, most modern CPU architectures were not designed to be virtualizable, including the popular x86 architecture. For example, x86 operating systems use the x86 POPF instruction (pop CPU flags from stack) to set and clear the interrupt-disable flag. When it runs in unprivileged mode, POPF does not trap. Instead, it simply ignores the changes to the interrupt flag, so direct execution techniques will not work for privileged-mode code that uses this instruction.

Another challenge of the x86 architecture is that unprivileged instructions let the CPU access privileged state. Software running in the virtual machine can read the code segment register to determine the processor's current privilege level. A virtualizable processor would trap this instruction, and the VMM could then patch what the software running in the virtual machine sees to reflect the virtual machine's privilege level. The x86, however, doesn't trap the instruction, so with direct execution, the software would see the wrong privilege level in the code segment register.

Techniques. Several techniques address how to implement VMMs on CPUs that can't be virtualized, the most prevalent being paravirtualization2 and direct execution combined with fast binary translation. With paravirtualization, the VMM builder defines the virtual machine interface by replacing nonvirtualizable portions of the original instruction set with easily virtualized and more efficient equivalents. Although operating systems must be ported to run in a virtual machine, most normal applications run unmodified.

Disco,3 a VMM for the nonvirtualizable MIPS architecture, used paravirtualization. Disco designers changed the MIPS interrupt flag to be simply a special memory location in the virtual machine rather than a privileged register in the processor. They replaced the MIPS equivalent of the x86 POPF instruction and the read access to the code segment register with accesses to this special memory location. This replacement also eliminated virtualization overhead such as traps on privileged instructions, which resulted in increased performance.
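The trap-and-emulate handling of the interrupt-disable example above can be sketched as a toy model. This is a simplified Python simulation, not VMware's or Disco's implementation; the class and instruction names are illustrative. The point is the control flow: the guest's privileged operation traps to the monitor, which records the virtual interrupt flag and postpones interrupt delivery until the guest reenables interrupts.

```python
# Minimal model of trap-and-emulate interrupt virtualization.
# All names are illustrative; a real VMM manipulates hardware state.

class VirtualMachine:
    def __init__(self):
        self.interrupts_enabled = True   # the *virtual* interrupt-disable flag
        self.pending = []                # interrupts withheld while disabled
        self.delivered = []              # interrupts the guest has observed

class VMM:
    """Privileged monitor: the CPU traps here on guest privileged operations."""

    def trap(self, vm, instruction):
        # The guest ran a privileged instruction in unprivileged mode; the CPU
        # trapped into the VMM, which emulates it on the VM's state instead of
        # changing the real hardware flag.
        if instruction == "disable_interrupts":
            vm.interrupts_enabled = False          # record, don't really disable
        elif instruction == "enable_interrupts":
            vm.interrupts_enabled = True
            while vm.pending:                      # deliver postponed interrupts
                vm.delivered.append(vm.pending.pop(0))

    def deliver_interrupt(self, vm, irq):
        # A physical interrupt arrives; the VMM decides whether the guest sees it.
        if vm.interrupts_enabled:
            vm.delivered.append(irq)
        else:
            vm.pending.append(irq)                 # postpone until guest reenables
```

Because the real interrupt flag never changes, the VMM always retains control of the CPU; the guest merely believes interrupts are disabled.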
The designers then modified a version of the Irix operating system to take advantage of this paravirtualized version of the MIPS architecture.

The biggest drawback to paravirtualization is incompatibility. Any operating system run in a paravirtualized VMM must be ported to that architecture. Operating system vendors must cooperate, legacy operating systems cannot run, and existing machines cannot easily migrate into virtual machines. With years of excellent backward-compatible x86 hardware, huge amounts of legacy software are still in use, which means that giving up backward compatibility is not trivial.

In spite of these drawbacks, academic research projects have favored paravirtualization because building a VMM that offers full compatibility with high performance is a significant engineering challenge.

To provide fast, compatible virtualization of the x86 architecture, VMware developed a new virtualization technique that combines traditional direct execution with fast, on-the-fly binary translation. In most modern operating systems, the processor modes that run normal application programs are virtualizable and hence can run using direct execution. A binary translator can run privileged modes that are nonvirtualizable, patching the nonvirtualizable x86 instructions. The result is a high-performance virtual machine that matches the hardware and thus maintains total software compatibility.

Others have developed binary translators4 that translate code between CPUs with different instruction sets. VMware's binary translation is much simpler because the source and target instruction sets are nearly identical. The VMM's basic technique is to run privileged mode code (kernel code) under control of the binary translator. The translator translates the privileged code into a similar block, replacing the problematic instructions, which lets the translated block run directly on the CPU. The binary translation system caches the translated block in a trace cache so that translation does not occur on subsequent executions.

The translated code looks much like the results from the paravirtualized approach: Normal instructions execute unchanged, while the translator replaces instructions that need special treatment, like POPF and reads from the code segment registers, with an instruction sequence similar to what a paravirtualized virtual machine would need to run. There is one important difference, however: Rather than applying the changes to the source code of the operating system or applications, the binary translator applies the changes when the code first executes.

While binary translation does incur some overhead, it is negligible on most workloads. The translator runs only a fraction of the code, and execution speeds are nearly indistinguishable from direct execution once the trace cache has warmed up.

Binary translation is also a way to optimize direct execution. For example, privileged code that frequently traps can incur significant additional overhead when using direct execution since each trap transfers control from the virtual machine to the monitor and back. Binary translation can eliminate many of these traps, which results in a lower overall virtualization overhead. This is particularly true on CPUs with deep instruction pipelines, such as the modern x86 CPUs, where traps incur high overhead.

Future support. In the near term, both Intel with its Vanderpool technology and AMD with its Pacifica technology have announced hardware support for x86 CPU VMMs. Rather than making existing execution modes virtualizable, both the Intel and AMD technologies add a new execution mode to the processor that lets a VMM safely and transparently use direct execution for running virtual machines. To improve performance, the mode attempts to reduce both the traps needed to implement virtual machines and the time it takes to perform the traps.

When these technologies become available, direct-execution-only VMMs could be possible on x86 processors, at least for operating system environments that do not use these new execution modes.

If this hardware support works as well as the IBM mainframe virtualization support of the early days, it should be possible to decrease performance overhead even more, as well as simplifying the implementation of virtualization techniques. Lessons from the past indicate that adequate hardware support can decrease overhead, even without paravirtualization, to the point that the value of having a fully compatible virtual machine abstraction overrides any performance benefits from breaking compatibility.

Memory virtualization

The traditional implementation technique for virtualizing memory is to have the VMM maintain a shadow of the virtual machine's memory-management data structure. This data structure, the shadow page table, lets the VMM precisely control which pages of the machine's memory are available to a virtual machine.
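The shadow arrangement just introduced can be sketched as a toy model. This is a hedged Python simulation with invented names, not any real VMM's data structures: the VMM observes the guest's page-table updates, allocates real machine pages for the guest-physical pages the guest thinks it owns, and installs the corresponding entries in a shadow table that stands in for what the hardware MMU would actually walk.

```python
# Toy model of shadow page tables. The guest maps virtual page numbers to
# "physical" page numbers it believes it owns; the VMM silently backs those
# guest-physical pages with real machine pages and keeps a shadow table that
# plays the role of the table the hardware walks during translation.

class ShadowPagingVMM:
    def __init__(self):
        self.p2m = {}               # guest-physical page -> machine page (VMM-owned)
        self.shadow = {}            # guest-virtual page  -> machine page (hardware view)
        self.next_machine_page = 0  # trivial stand-in for a machine-page allocator

    def _machine_page_for(self, guest_phys):
        # Allocate a real machine page for a guest-physical page on first use,
        # so the VMM always controls which machine memory the VM can touch.
        if guest_phys not in self.p2m:
            self.p2m[guest_phys] = self.next_machine_page
            self.next_machine_page += 1
        return self.p2m[guest_phys]

    def guest_maps(self, virt, guest_phys):
        # The guest OS writes a page-table entry; the VMM detects the change
        # and installs the corresponding shadow entry pointing at the actual
        # machine page, not at the guest's notion of a physical page.
        self.shadow[virt] = self._machine_page_for(guest_phys)

    def translate(self, virt):
        # What the MMU would do with the shadow table while the VM runs.
        return self.shadow[virt]
```

Two guest mappings that name the same guest-physical page end up sharing one machine page, while the guest never sees the extra level of indirection.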
When the operating system running in a virtual machine establishes a mapping in its page table, the VMM detects the changes and establishes a mapping in the corresponding shadow page table entry that points to the actual page location in the hardware memory. When the virtual machine is executing, the hardware uses the shadow page table for memory translation so that the VMM can always control what memory each virtual machine is using.

Like a traditional operating system's virtual memory subsystems, the VMM can page the virtual machine to a disk so that the memory allocated to virtual machines can exceed the hardware's physical memory size. Because this effectively lets the VMM overcommit the machine memory, the virtual machine workload requires less hardware. The VMM can dynamically control how much memory each virtual machine gets according to what it needs.

Challenges. The VMM's virtual memory subsystem constantly controls how much memory goes to a virtual machine, and it must periodically reclaim some of that memory by paging a portion of the virtual machine out to disk. The operating system running in the virtual machine (the GuestOS), however, is likely to have much better information than a VMM's virtual memory system about which pages are good candidates for paging out. For example, a GuestOS might note that the process that created a page has exited, which means nothing will access the page again. The VMM operating at the hardware level does not see this and might wastefully page out that page.

To address this problem, VMware's ESX Server5 adopted a paravirtualization-like approach, in which a balloon process running inside the GuestOS can communicate with the VMM. When the VMM wants to take memory away from a virtual machine, it asks the balloon process to allocate more memory, essentially "inflating" the process. The GuestOS then uses its superior knowledge about page replacement to select the pages to give to the balloon process, which the process then passes to the VMM for reallocation. The increased memory pressure caused by inflating the balloon process causes the GuestOS to intelligently page memory to the virtual disk.

A second challenge for memory virtualization is the size of modern operating systems and applications. Running multiple virtual machines can waste considerable memory by storing redundant copies of code and data that are identical across virtual machines.

To address this challenge, VMware designers developed content-based page sharing for their server products. In this scheme, the VMM tracks the contents of physical pages, noting if they are identical. If so, the VMM modifies the virtual machine's shadow page tables to point to only a single copy. The VMM can then deallocate the redundant copy, thereby freeing the memory for other uses.

As with a normal copy-on-write page-sharing scheme, the VMM gives each virtual machine its own copy of the page if the contents later diverge. To give an idea of potential savings, an x86 computer might have 30 virtual machines running Microsoft Windows 2000 but only one copy of the Windows kernel in the computer's memory—a significant reduction in physical memory use.

Future support. Operating systems make frequent changes to their page tables, so keeping shadow copies up to date in software can incur undesirable overhead. Hardware-managed shadow page tables have long been present in mainframe virtualization architectures and would prove a fruitful direction for accelerating x86 CPU virtualization.

Resource management holds great promise as an area for future research. Much work remains in investigating ways for VMMs and guest operating systems to make cooperative resource management decisions. In addition, research must look at resource management at the entire data center level, and we expect significant strides will be made in this area in the coming decade.

I/O virtualization

Thirty years ago, the I/O subsystems of IBM mainframes used a channel-based architecture, in which access to the I/O devices was through communication with a separate channel processor. By using a channel processor, the VMM could safely export I/O device access directly to the virtual machine. The result was a very low virtualization overhead for I/O. Rather than communicating with the device using traps into the VMM, the software in the virtual machine could directly read and write the device. This approach worked well for the I/O devices of that time, such as text terminals, disks, card readers, and card punches.

Challenges. Current computing environments, with their richer and more diverse collection of I/O devices, make virtualizing I/O much more difficult. The x86-based computing environments support a huge collection of I/O devices from different vendors with different programming interfaces. Consequently, the job of writing a VMM layer that talks to these various devices becomes a huge effort. In addition, some devices such as a modern PC's graphics subsystem or a modern server's network interface have extremely high performance requirements. This makes low-overhead virtualization an even more critical prerequisite for widespread acceptance.

Exporting a standard device interface means that the virtualization layer must be able to communicate with the computer's I/O devices. To provide this capability, VMware Workstation, a product targeting desktop computers, developed the hosted architecture6 shown in Figure 2. In this architecture, the virtualization layer uses the device drivers of a host operating system (HostOS) such as Windows or Linux to access devices. Because most I/O devices have drivers for these operating systems, the virtualization layer can support any I/O device.

[Figure 2. VMware's hosted architecture. Rather than running as a layer below all other software, the hosted architecture shares the hardware with an existing operating system (HostOS).]

When the GuestOS gives the command to read or write blocks from the virtual disk, the virtual layer translates the command into a system call that reads or writes a file in the HostOS's file system. Similarly, the I/O VMM renders the virtual machine's virtual display card in a window on the HostOS, which lets the HostOS control, drive, and manage the virtual machine's I/O display devices regardless of what devices the GuestOS thinks are present.

The hosted architecture has three important advantages. First, the VMM is simple to install because users can install it like an application on the HostOS rather than on the raw hardware, as with traditional VMMs. Second, the hosted architecture fully accommodates the rich diversity of I/O devices in the x86 PC marketplace. Third, the VMM can use the scheduling, resource management, and other services the HostOS environment offers.

The disadvantages of the hosted architecture became material when VMware started to develop products for the x86 server marketplace. The hosted architecture greatly increases the performance overhead for I/O device virtualization. Each I/O request must transfer control to the HostOS environment and then transition through the HostOS's software layers to talk to the I/O devices. For server environments with high-performance network and disk subsystems, the resulting overhead was unacceptably high.

Another problem is that modern operating systems such as Windows and Linux do not have the resource-management support to provide performance isolation and service guarantees to the virtual machines—a feature that many server environments require.

ESX Server5 adopts a more traditional VMM approach, running directly on the hardware without a host operating system. In addition to sophisticated scheduling and resource management, ESX Server has a highly optimized I/O subsystem for network and storage devices.

The ESX Server kernel can use device drivers from the Linux kernel to talk directly to the device, resulting in significantly lower virtualization overhead for I/O devices. VMware could use this approach because relatively few network and storage I/O devices have passed certification to run in major x86 vendor server machines. Limiting support to these I/O devices makes directly managing the I/O devices feasible for servers.

Yet another performance optimization in VMware's products is the ability to export special highly optimized virtual I/O devices that don't correspond to any existing I/O devices. Like the paravirtualization approach for CPUs, this use of paravirtualization requires that GuestOS environments use a special device driver to access the I/O devices. The result is a more virtualization-friendly I/O device interface with lower overhead for communicating the I/O commands from the GuestOS and thus higher performance.

Future support. Like CPU trends, industry trends in I/O subsystems point toward hardware support for high-performance I/O device virtualization. Discrete I/O devices, such as the standard x86 PC keyboard controller and IDE disk controllers that date back to the original IBM PC, are giving way to channel-like I/O devices, such as USB and SCSI. Like the IBM mainframe I/O channels, these I/O interfaces greatly ease implementation complexity and reduce virtualization overhead.

With adequate hardware support, safely passing these channel I/O devices directly to the software in the virtual machine should be possible, effectively eliminating all I/O virtualization overhead. For this to work, I/O devices will need to know about virtual machines and be able to support multiple virtual interfaces so that the VMM can safely map the
interface into the virtual machine. In this way, the virtual machine's device drivers will be able to communicate directly with the I/O device without the overhead of trapping into the VMM.

I/O devices that perform direct memory access will require address remapping. The remapping ensures that the memory addresses that the device driver running in the virtual machine specifies will get mapped to the locations in the computer's memory that the shadow page tables specify. For the isolation property to hold, the device should be able to access only memory belonging to the virtual machine regardless of how the driver in the virtual machine programs the device.

In a system with multiple virtual machines using the same I/O device, the VMM will need an efficient mechanism for routing device completion interrupts to the correct virtual machine. Finally, virtualizable I/O devices will need to interface to the VMM to maintain isolation between hardware and software and ensure that the VMM can continue to migrate and take a checkpoint of the virtual machines. I/O devices that provide this kind of support could minimize virtualization overhead, allowing the use of virtual machines for even the most I/O-intensive workloads. Besides performance, a significant benefit is the improved security and reliability gained from removing complex device driver code from the VMM.

WHAT'S AHEAD?

An examination of current products and recent research provides some interesting insights into the future of VMMs and the demands they will place on virtualization technology.

Server side

In the data center, administrators will be able to quickly provision, monitor, and manage thousands of virtual machines running on hundreds of physical boxes—all from a single console. Rather than configuring individual computers, system administrators will create new servers by instantiating a new virtual machine from an existing template and mapping these virtual machines onto physical resources according to specific administration policies. Rather than thinking of any computer as providing a particular fixed service, administrators will view computers simply as part of a pool of generic hardware resources. An example of this technology is VMware's Virtual Center.

This mapping of a virtual machine to hardware resources will be highly dynamic. Hot migration capabilities, such as those in VMware's VMotion technology, will let virtual machines move rapidly between physical machines according to the data center's needs. The VMM can handle traditional hardware-management problems, such as hardware failure, simply by placing the virtual machines running on the failed computer onto other correctly functioning hardware. The ability to move running virtual machines also eases some hardware challenges, such as scheduling preventive maintenance, dealing with equipment lease ends, and deploying hardware upgrades. Administrators can use hot migration to perform these tasks without service interruptions.

Today, manual migration is the norm, but the future should see a virtual machine infrastructure that automatically performs load balancing, detects impending hardware failures and migrates virtual machines accordingly, and creates and destroys virtual machines according to demand for particular services.

Beyond the machine room

As the pervasive use of virtual machines moves from the server room to the desktop, their effects on computing will become even more profound. Virtual machines provide a powerful unifying paradigm for restructuring desktop management.7 The provisioning benefits that VMMs bring to the machine room apply equally to the desktop and help solve the management challenges that large collections of desktop and laptop machines impose. Solving problems in the VMM layer benefits all software running in the virtual machine, regardless of the software's age (legacy or latest release) or its vendor. This operating system independence also reduces the need to buy and maintain redundant infrastructure. Instead of n versions of help desk or backup software, for example, only one version—the one that operates at the VMM level—would require support.

Virtual machines could also significantly change how users think about computers. If ordinary users can easily create, copy, and share virtual machines, the use models could be vastly different from those in computing environments with hardware availability constraints. Software developers, for example, can use products like VMware Workstation to easily set up a network of machines for testing, or they can keep their own set of test machines for every target platform.

The increased mobility of virtual machines will also significantly change machine use. Projects such as The Collective7 and Internet Suspend/Resume8 demonstrate the feasibility of migrating a user's entire computing environment over the local and wide area.

May 2005 45
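The automated server-side infrastructure described above—balancing load and migrating virtual machines off failing hardware—amounts to a simple control loop over a pool of generic hosts. The sketch below is illustrative only: the `Host` class, the utilization threshold, and the `migrate` operation are hypothetical stand-ins for a real VMM's management interfaces, not VMotion's or Virtual Center's actual APIs.

```python
from dataclasses import dataclass, field

LOAD_THRESHOLD = 0.8  # hypothetical per-host utilization ceiling

@dataclass
class Host:
    name: str
    failing: bool = False  # e.g., flagged by a hardware health monitor
    vms: list = field(default_factory=list)  # (vm_name, load) pairs

    @property
    def load(self):
        return sum(load for _, load in self.vms)

def migrate(vm, src, dst):
    """Stand-in for hot migration of a running virtual machine."""
    src.vms.remove(vm)
    dst.vms.append(vm)

def rebalance(hosts):
    """Evacuate failing hosts, then shed load from overloaded ones."""
    healthy = [h for h in hosts if not h.failing]
    # Hardware failure handling: move every VM off failing hosts.
    for h in hosts:
        if h.failing:
            for vm in list(h.vms):
                migrate(vm, h, min(healthy, key=lambda x: x.load))
    # Load balancing: move small VMs from hot hosts to the coolest host.
    for h in healthy:
        while h.load > LOAD_THRESHOLD and len(h.vms) > 1:
            vm = min(h.vms, key=lambda x: x[1])
            dst = min((x for x in healthy if x is not h), key=lambda x: x.load)
            if dst.load + vm[1] > LOAD_THRESHOLD:
                break  # no host has room; stop rather than thrash
            migrate(vm, h, dst)

a = Host("a", failing=True, vms=[("web", 0.3)])
b = Host("b", vms=[("db", 0.5), ("mail", 0.4)])
c = Host("c")
rebalance([a, b, c])
assert a.vms == []  # the failing host has been evacuated
```

A production system would of course weigh memory footprint, migration cost, and placement policy rather than raw load alone; the point is only that this policy layer can sit entirely outside the guests.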
The availability of large-capacity, inexpensive removable media in the form of USB hard drives might mean that users can bring their computing environments with them wherever they go.

The increasingly dynamic character of virtual machine-based environments will also require more dynamic network topologies. Virtual switches, virtual firewalls, and overlay networks will be an integral part of a future in which the logical computing environment is decoupled from the physical location.

Security improvements

VMMs offer the potential to restructure existing software systems to provide greater security, while also facilitating new approaches to building secure systems. Current operating systems provide poor isolation, leaving host-based security mechanisms subject to attack. Moving these capabilities outside a virtual machine—so that they run alongside an operating system but are isolated from it—offers the same functionality but with much stronger resistance to attack. Two research examples of such systems are Livewire,9 a system that uses a VMM for advanced intrusion detection on the software in the virtual machines, and ReVirt,10 which uses the VMM layer to analyze the damage hackers might have caused during the break-in. These systems not only gain greater attack resistance from operating outside the virtual machine, but also benefit from the ability to interpose and monitor the system inside the virtual machine at a hardware level.

Placing security outside a virtual machine provides an attractive way to quarantine the network—limiting a virtual machine's access to a network to ensure that it is neither malicious nor vulnerable to attack. By controlling network access at the virtual machine layer and inspecting virtual machines before permitting (or limiting) access, virtual machines become a powerful tool for limiting the spread of malicious code in networks.

Virtual machines are also particularly well suited as a building block for constructing high-assurance systems. The US National Security Agency's NetTop architecture, for example, uses VMware's VMM to isolate multiple environments, each of which has access to separate networks with varying security classifications. Applications like this illustrate the need to continue researching and developing support for building ever smaller VMMs with increasingly higher assurance.

VMMs are particularly interesting in that they support the ability to run multiple software stacks with different security levels. Because they can specify the software stack from the hardware up, virtual machines provide maximum flexibility in trading off performance, backward compatibility, and assurance. Further, specifying an application's complete software stack simplifies reasoning about its security. In contrast, it is almost impossible to reason about the security of a single application in today's operating systems because processes are poorly isolated from one another. Thus, an application's security depends on the security of every other application on the machine.

These capabilities make VMMs particularly well suited for building trusted computing, as the Terra system11 demonstrates. In Terra, the VMM can authenticate software running inside a virtual machine to remote parties, in a process called attestation.

Suppose, for example, that a user's desktop machine is running multiple virtual machines simultaneously. The user might have a relatively low-security Windows virtual machine for Web browsing, a higher-security hardened Linux virtual machine for day-to-day work, and a still higher-security virtual machine comprising a special-purpose high-security operating system and a dedicated mail client for sensitive internal mail. A remote server could require attestation from each virtual machine to confirm its contents; for example, the company file server might allow only the hardened Linux virtual machine to interact with it, while the secure-mail virtual machine might be able to connect only to a dedicated mail server. In both scenarios the servers are also likely to be running in virtual machines, permitting mutual authentication to take place.

Finally, the flexible resource management that VMMs provide can make systems more resistant to attack. The ability to rapidly replicate virtual machines and dynamically adapt to large workloads can provide a powerful tool for dealing with the scaling demands that flash crowds and distributed denial-of-service attacks can impose.

Software distribution

For the software industry, the ubiquitous deployment of VMMs has significant implications. The VMM layer provides exciting possibilities for software companies to distribute entire virtual machines containing complex software environments. Oracle, for example, has distributed more than 10,000 fully functional copies of its latest database environment in virtual machines.

46 Computer
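The attestation scenario sketched earlier—a server admitting only virtual machines whose measured software stack it approves—can be made concrete with a toy example. This is not Terra's actual protocol: the preshared key, the hash-based measurement, and the allowlist below are invented for illustration, and a real design would rest on a hardware root of trust and certificate chains rather than a shared secret.

```python
import hashlib
import hmac

VMM_KEY = b"per-VMM secret provisioned at install time"  # hypothetical

def measure(software_stack):
    """Hash the VM's software stack, from the (virtual) hardware up."""
    h = hashlib.sha256()
    for layer in software_stack:  # e.g., firmware, kernel, applications
        h.update(layer)
    return h.digest()

def attest(software_stack, nonce):
    """VMM side: bind the stack's measurement to a server-supplied nonce."""
    digest = measure(software_stack)
    tag = hmac.new(VMM_KEY, digest + nonce, hashlib.sha256).digest()
    return digest, tag

def verify(digest, tag, nonce, allowed):
    """Server side: check authenticity, freshness, and the policy allowlist."""
    expected = hmac.new(VMM_KEY, digest + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected) and digest in allowed

hardened_linux = [b"firmware-1.2", b"hardened-linux-kernel", b"corp-tools"]
browsing_vm = [b"firmware-1.2", b"windows", b"web-browser"]

# The file server's policy: only the hardened Linux stack may connect.
file_server_allowlist = {measure(hardened_linux)}

nonce = b"fresh-random-challenge"
digest, tag = attest(hardened_linux, nonce)
assert verify(digest, tag, nonce, file_server_allowlist)  # admitted

digest, tag = attest(browsing_vm, nonce)
assert not verify(digest, tag, nonce, file_server_allowlist)  # refused
```

The nonce prevents replaying an old attestation, and the allowlist plays the role of the server's policy about which software stacks it will talk to.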
Rather than having to install the entire complex environment to test the software, users simply boot the virtual machine.

Although the use of virtual machines as a distribution mechanism is widespread for software demonstration, the model could also work well for production environments, creating a fundamentally different way of distributing software. Administrators using VMware's ACE product can publish virtual machines and control how these virtual machines can be used. The Collective project explored in depth the idea of bundling applications into virtual appliances. The idea is to provide file servers, desktop applications, and so on in a form that lets users treat the virtual machines as a stand-alone application. An appliance maintainer handles issues like patch management, thus relieving normal users of the maintenance burden.

The virtual machine-based distribution model will require software vendors to update their license agreements. Software that is licensed to run on a particular CPU or physical machine will not translate as well into this new environment, relative to licenses based on use or to sitewide licenses. Users and system administrators will tend to favor operating system environments that they can easily and inexpensively distribute in virtual machines, rather than more restrictive and expensive options.

The VMM resurgence seems to be fundamentally altering the way software and hardware designers view, manage, and structure complex software environments. VMMs also provide a backward-compatibility path for deploying innovative operating system solutions that both meet current needs and safely pull along the existing software base. This capability will be key to meeting future computing challenges. Companies are increasingly abandoning the strategy of procuring individual machines and tightly bundling complex software environments. VMMs are giving these fragile, difficult-to-manage systems new freedom. In coming years, virtual machines will move beyond their simple provisioning capabilities and beyond the machine room to provide a fundamental building block for mobility, security, and usability on the desktop. Indeed, VMM capabilities should continue to be an important part of the shift in the computing landscape.

References

1. R.P. Goldberg, "Survey of Virtual Machine Research," Computer, June 1974, pp. 34-45.
2. A. Whitaker, M. Shaw, and S. Gribble, "Scale and Performance in the Denali Isolation Kernel," ACM SIGOPS Operating Systems Rev., vol. 36, no. SI, Winter 2002, pp. 195-209.
3. E. Bugnion et al., "Disco: Running Commodity Operating Systems on Scalable Multiprocessors," ACM Trans. Computer Systems, vol. 15, no. 4, 1997, pp. 412-447.
4. R. Sites et al., "Binary Translation," Comm. ACM, Feb. 1993, pp. 69-81.
5. C. Waldspurger, "Memory Resource Management in VMware ESX Server," ACM SIGOPS Operating Systems Rev., vol. 36, no. SI, Winter 2002, pp. 181-194.
6. J. Sugerman, G. Venkitachalam, and B. Lim, "Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor," Proc. Usenix Ann. Technical Conf., Usenix, 2002, pp. 1-14.
7. R. Chandra et al., "The Collective: A Cache-Based Systems Management Architecture," Proc. Symp. Network Systems Design and Implementation, Usenix, 2005, to appear.
8. M. Kozuch and M. Satyanarayanan, "Internet Suspend/Resume," Proc. IEEE Workshop Mobile Computing Systems and Applications, IEEE Press, 2002, pp. 40-46.
9. T. Garfinkel and M. Rosenblum, "A Virtual Machine Introspection-Based Architecture for Intrusion Detection," Proc. Network and Distributed Systems Security Symp., The Internet Society, 2003, pp. 191-206.
10. G. Dunlap et al., "ReVirt: Enabling Intrusion Analysis through Virtual-Machine Logging and Replay," ACM SIGOPS Operating Systems Rev., vol. 36, no. SI, Winter 2002, pp. 211-224.
11. T. Garfinkel et al., "Terra: A Virtual-Machine-Based Platform for Trusted Computing," Proc. ACM Symp. Operating Systems Principles, ACM Press, 2003, pp. 192-206.

Mendel Rosenblum is an associate professor of computer science at Stanford University and a cofounder and chief scientist at VMware Inc. His research interests include system software, distributed systems, computer architecture, and security. Rosenblum received a PhD in computer science from the University of California, Berkeley. Contact him at mendel@cs.stanford.edu.

Tal Garfinkel is a PhD candidate in computer science at Stanford University. His research interests include operating systems, distributed systems, computer architecture, and security. He received a BA in computer science from the University of California, Berkeley. Contact him at talg@cs.stanford.edu.
