Virtual Machines


  1. CS136, Advanced Architecture: Virtual Machines
  2. Outline
     - Virtual Machines
     - Xen VM: Design and Performance
     - Conclusion
  3. Introduction to Virtual Machines
     - VMs developed in late 1960s
       - Remained important in mainframe computing over the years
       - Largely ignored in single-user computers of 1980s and 1990s
     - Recently regained popularity due to:
       - Increasing importance of isolation and security in modern systems
       - Failures in security and reliability of standard operating systems
       - Sharing of a single computer among many unrelated users
       - Dramatic increases in raw speed of processors, making VM overhead more acceptable
  4. What Is a Virtual Machine (VM)?
     - Broadest definition:
       - Any abstraction that provides a Turing-complete and standardized programming interface
       - Examples: x86 ISA; Java bytecode; even Python and Perl
       - As the level gets higher, the utility of the definition gets lower
     - Better definition:
       - An abstract machine that provides a standardized interface similar to a hardware ISA, but at least partly under control of software that provides added features
       - Best to distinguish a "true" VM from emulators (although the Java VM is entirely emulated)
     - Often, a VM is partly supported in hardware, with minimal software control
       - E.g., provide multiple virtual x86s on one real one, similar to the way virtual memory gives the illusion of more memory than really exists
  5. System Virtual Machines
     - "(Operating) System Virtual Machines" provide a complete system-level environment at the binary ISA
       - Assumes the ISA always matches the native hardware
       - E.g., IBM VM/370, VMware ESX Server, and Xen
     - Presents the illusion that VM users have an entire private computer, including a copy of the OS
     - A single machine runs multiple VMs and can support multiple (and different) OSes
       - On a conventional platform, a single OS "owns" all HW resources
       - With VMs, multiple OSes all share the HW resources
     - The underlying HW platform is the host; its resources are shared among the guest VMs
  6. Virtual Machine Monitors (VMMs)
     - Virtual machine monitor (VMM) or hypervisor is the software that supports VMs
     - VMM determines how to map virtual resources to physical ones
     - A physical resource may be time-shared, partitioned, or emulated in software (see the sketch below)
     - VMM much smaller than a traditional OS
       - Isolation portion of a VMM is about 10,000 lines of code
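
     To make the three mapping strategies concrete, here is a minimal, self-contained C sketch. Every name in it (map_kind, vresource, describe) is an illustrative invention, not taken from any real VMM.

```c
/* Sketch of the three ways a VMM can back a virtual resource. Illustrative only. */
#include <stdio.h>

enum map_kind { TIME_SHARED, PARTITIONED, EMULATED };

struct vresource {
    const char   *name;
    enum map_kind kind;     /* how the VMM realizes this virtual resource */
    int           phys_id;  /* physical unit backing it (if any)          */
};

static void describe(const struct vresource *r)
{
    switch (r->kind) {
    case TIME_SHARED:   /* e.g. CPU: guests take turns on the real unit   */
        printf("%s: time-shared on physical unit %d\n", r->name, r->phys_id);
        break;
    case PARTITIONED:   /* e.g. disk/memory: each guest owns a fixed slice */
        printf("%s: partitioned slice of physical unit %d\n", r->name, r->phys_id);
        break;
    case EMULATED:      /* e.g. legacy device: no physical unit at all     */
        printf("%s: fully emulated in VMM software\n", r->name);
        break;
    }
}

int main(void)
{
    struct vresource cpu  = { "vCPU0",  TIME_SHARED, 0 };
    struct vresource disk = { "vDisk0", PARTITIONED, 1 };
    struct vresource uart = { "vUART",  EMULATED,   -1 };
    describe(&cpu);
    describe(&disk);
    describe(&uart);
    return 0;
}
```
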
  7. VMM Overhead
     - Depends on workload
     - Goal for system VMs:
       - Run almost all instructions directly on native hardware
     - User-level CPU-bound programs (e.g., SPEC) have near-zero virtualization overhead
       - Run at native speeds since the OS is rarely invoked
     - I/O-intensive workloads are OS-intensive
       - Execute many system calls and privileged instructions
       - Can result in high virtualization overhead
     - But if an I/O-intensive workload is also I/O-bound
       - Processor utilization is low (since it is waiting for I/O)
       - Processor virtualization can be hidden in I/O costs
       - So virtualization overhead is low
  8. Important Uses of VMs
     - Multiple OSes
       - No more dual boot!
       - Can even transfer data (e.g., cut-and-paste) between VMs
     - Protection
       - Crash or intrusion in one OS doesn't affect the others
       - Easy to replace a failed OS with a fresh, clean one
     - Software management
       - VMs can run a complete SW stack, even old OSes like DOS
       - Run a legacy OS, the stable current one, and a test release on the same HW
     - Hardware management
       - Independent SW stacks can share HW
         - Run each application on its own OS (helps dependability)
       - Migrate a running VM to a different computer
         - To balance load or to evacuate from failing HW
  9. Virtual Machine Monitor Requirements
     - VM monitor
       - Presents a SW interface to guest software
       - Isolates guests' states from each other
       - Protects itself from guest software (including guest OSes)
     - Guest software should behave exactly as if running on native HW
       - Except for performance-related behavior or limitations of fixed resources shared by multiple VMs
       - Hard to achieve perfection in a real system
     - Guest software shouldn't be able to change the allocation of real system resources directly
     - Hence, the VMM must control everything, even though the guest VM and OS currently running are only temporarily using those resources
       - Access to privileged state, address translation, I/O, exceptions and interrupts, ...
  10. Virtual Machine Monitor Requirements (continued)
     - VMM must be at a higher privilege level than the guest VM, which generally runs in user mode
       - Execution of privileged instructions handled by VMM
     - E.g., timer or I/O interrupt (sequence sketched in the code below):
       - VMM suspends the currently running guest
       - Saves its state
       - Handles the interrupt
         - Possibly handles it internally, possibly delivers it to a guest
       - Decides which guest to run next
       - Loads that guest's state
       - Guest VMs that want a timer are given a virtual one
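
     The sequence above is the classic trap-and-emulate dispatch loop. The following self-contained C sketch acts it out for a toy two-guest system; every name in it (guest, exit_reason, run_guest, emulate_privileged, deliver_virtual_timer) is hypothetical and stands in for real VMM machinery.

```c
/* Minimal simulation of a VMM trap-and-emulate loop. Names are illustrative. */
#include <stdio.h>

enum exit_reason { EXIT_PRIV_INSN, EXIT_TIMER_IRQ };

struct guest { int id; int virtual_timer_pending; };

/* Stand-in for entering the guest: here we just pretend it trapped back. */
static enum exit_reason run_guest(struct guest *g)
{
    return (g->id % 2) ? EXIT_TIMER_IRQ : EXIT_PRIV_INSN;
}

static void emulate_privileged(struct guest *g)
{
    printf("guest %d: privileged instruction emulated by VMM\n", g->id);
}

static void deliver_virtual_timer(struct guest *g)
{
    g->virtual_timer_pending = 1;   /* guest is given a *virtual* timer */
    printf("guest %d: virtual timer interrupt queued\n", g->id);
}

int main(void)
{
    struct guest guests[2] = { { .id = 0 }, { .id = 1 } };

    for (int round = 0; round < 4; round++) {
        struct guest *cur = &guests[round % 2];   /* VMM picks the next guest */
        enum exit_reason why = run_guest(cur);    /* guest runs, then traps   */
        /* At this point the VMM has suspended the guest and saved its state. */
        if (why == EXIT_PRIV_INSN)
            emulate_privileged(cur);
        else
            deliver_virtual_timer(cur);           /* real IRQ handled in VMM  */
        /* ... load the next guest's state and resume ...                     */
    }
    return 0;
}
```
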
  11. Hardware Requirements
     - Hardware needs are roughly the same as for paged virtual memory:
     - At least 2 processor modes, system and user
     - A privileged subset of instructions
       - Available only in system mode
       - Trap if executed in user mode
       - All system resources controllable only via these instructions
  12. ISA Support for Virtual Machines
     - If ISA designers plan for VMs, it is easy to limit:
       - What instructions the VMM must handle
       - How long it takes to emulate them
     - Because chip makers ignored VM technology, ISA designers didn't "Plan Ahead"
       - Including the 80x86 and most RISC architectures
     - Guest system must see only virtual resources
       - Guest OS runs in user mode on top of the VMM
       - If the guest tries to touch a HW-related resource, it must trap to the VMM
         - Requires HW support to initiate the trap
         - VMM must then insert the emulated information
       - If the HW is built wrong, the guest will see or change privileged state
         - VMM must then modify the guest's binary code
  13. ISA Impact on Virtual Machines
     - Consider the x86 PUSHF/POPF instructions
       - Push the flags register onto the stack or pop it back
       - Flags contain the condition codes (good to be able to save/restore) but also the interrupt enable flag (IF)
     - Pushing flags isn't privileged
       - Thus, the guest OS can read IF and discover it's not the way it was set
         - VMM isn't invisible any more
     - Popping flags in user mode ignores IF
       - VMM now doesn't know what the guest wants IF to be
       - Should trap to the VMM
     - Possible solution: modify the code, replacing pushf/popf with special interrupting instructions (see the sketch below)
       - But now the guest can read its own code and detect the VMM
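
     A rough illustration of that code-modification workaround: scan guest code for the one-byte PUSHF (0x9C) and POPF (0x9D) opcodes and overwrite them with the trapping INT3 (0xCC) instruction. This is only a sketch; a real binary translator must decode instruction boundaries rather than scan raw bytes, and, as the slide notes, must also hide the patched bytes from a guest that reads its own code.

```c
/* Sketch: patch PUSHF/POPF in a fake guest code buffer so they trap to the VMM. */
#include <stdio.h>
#include <stddef.h>

#define OP_PUSHF 0x9C
#define OP_POPF  0x9D
#define OP_INT3  0xCC   /* one-byte trapping instruction */

static size_t patch_flag_insns(unsigned char *code, size_t len)
{
    size_t patched = 0;
    for (size_t i = 0; i < len; i++) {
        if (code[i] == OP_PUSHF || code[i] == OP_POPF) {
            code[i] = OP_INT3;   /* now traps; the VMM emulates the original */
            patched++;
        }
    }
    return patched;
}

int main(void)
{
    /* Fake guest code: pushf ; nop ; popf */
    unsigned char guest_code[] = { 0x9C, 0x90, 0x9D };
    size_t n = patch_flag_insns(guest_code, sizeof guest_code);
    printf("patched %zu flag instructions\n", n);
    return 0;
}
```
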
  14. Hardware Support for Virtualization
     - Old "correct" implementation: trap on every pushf/popf so the VMM can fix up the results
       - Very expensive, since pushf/popf are used frequently
     - Alternative: IF shouldn't be in the same place as the condition codes
       - Pushf/popf can then be unprivileged
       - IF manipulation is now very rare
     - Pentium now has an even better solution
       - In user mode, VIF ("Virtual Interrupt Flag") holds what the guest wants IF to be
       - Pushf/popf manipulate VIF instead of IF
       - Host can now control the real IF; the guest sees the virtual one
       - Basic idea can be extended to many similar "OS-only" flags and registers
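
     The idea behind VIF can be sketched as the fix-up a trapping VMM would otherwise perform in software: the guest's requested IF value lives in per-guest state while the real IF stays under host control. The struct and function names below are illustrative; the only hardware fact assumed is that IF is bit 9 of EFLAGS.

```c
/* Sketch of redirecting the guest's IF into per-guest virtual state. */
#include <stdint.h>
#include <stdio.h>

#define EFLAGS_IF (1u << 9)   /* hardware interrupt-enable flag */

struct vcpu {
    uint32_t eflags;       /* flags actually in effect for this guest */
    int      virtual_if;   /* what the guest *wants* IF to be         */
};

/* Emulate a trapped POPF: apply everything except IF, remember the guest's IF. */
static void emulate_popf(struct vcpu *v, uint32_t popped)
{
    v->virtual_if = (popped & EFLAGS_IF) != 0;   /* guest's intent        */
    v->eflags     = (popped & ~EFLAGS_IF)        /* other bits as popped  */
                  | (v->eflags & EFLAGS_IF);     /* real IF stays as-is   */
}

/* Emulate a trapped PUSHF: show the guest its virtual IF, not the real one. */
static uint32_t emulate_pushf(const struct vcpu *v)
{
    uint32_t shown = v->eflags & ~EFLAGS_IF;
    if (v->virtual_if)
        shown |= EFLAGS_IF;
    return shown;
}

int main(void)
{
    struct vcpu v = { .eflags = EFLAGS_IF, .virtual_if = 1 };
    emulate_popf(&v, 0x0);   /* guest tries to disable interrupts */
    printf("guest sees IF=%d, real IF=%d\n",
           (emulate_pushf(&v) & EFLAGS_IF) != 0,
           (v.eflags & EFLAGS_IF) != 0);
    return 0;
}
```

     With VIF in hardware, the same redirection happens without any trap at all, which is why it is cheaper than the trap-on-every-pushf/popf scheme.
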
  15. Impact of VMs on Virtual Memory
     - Each guest manages its own page tables
       - How to make this work?
     - VMM separates real and physical memory
       - Real memory is an intermediate level between virtual and physical
       - Some call the three levels virtual, physical, and machine memory
       - Guest maps virtual to real memory via its page tables
         - VMM page tables map real to physical
     - VMM maintains a shadow page table that maps directly from the guest virtual space to the HW physical address space (see the sketch below)
       - Rather than pay an extra level of indirection on every memory access
       - VMM must trap any attempt by the guest OS to change its page table or to access the page table pointer
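
     The sketch below shows the two mappings and the shadow table as three toy arrays: the guest's page table maps virtual to "real" pages, the VMM's map takes "real" to physical, and the shadow entry is their composition, refreshed when the guest's (trapped) page-table write is intercepted. Sizes and names are illustrative only.

```c
/* Toy shadow page table: compose guest (virtual->real) with VMM (real->physical). */
#include <stdio.h>

#define NPAGES 16
#define NO_MAP -1

static int guest_pt[NPAGES];   /* guest virtual page -> "real" page (guest-managed) */
static int vmm_map[NPAGES];    /* "real" page        -> physical page (VMM-managed) */
static int shadow_pt[NPAGES];  /* guest virtual page -> physical page (what HW uses) */

/* Rebuild one shadow entry by composing the two mappings. */
static void shadow_update(int vpage)
{
    int real = guest_pt[vpage];
    shadow_pt[vpage] = (real == NO_MAP) ? NO_MAP : vmm_map[real];
}

/* The VMM traps the guest's page-table write and refreshes the shadow. */
static void guest_sets_mapping(int vpage, int real_page)
{
    guest_pt[vpage] = real_page;   /* what the guest thinks it installed */
    shadow_update(vpage);          /* what the MMU actually walks        */
}

int main(void)
{
    for (int i = 0; i < NPAGES; i++) {
        guest_pt[i]  = NO_MAP;
        shadow_pt[i] = NO_MAP;
        vmm_map[i]   = (i + 5) % NPAGES;   /* arbitrary real->physical layout */
    }

    guest_sets_mapping(3, 7);              /* guest: virtual 3 -> real 7 */
    printf("virtual 3 -> real %d -> physical %d (shadow entry %d)\n",
           guest_pt[3], vmm_map[7], shadow_pt[3]);
    return 0;
}
```

     The point of the shadow table is exactly what the slide says: hardware walks shadow_pt directly, so normal accesses pay no extra indirection; only page-table updates are intercepted.
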
  16. ISA Support for VMs & Virtual Memory
     - IBM 370 architecture added an additional level of indirection that was managed by the VMM
       - Guest OS kept its page tables as before, so shadow pages were unnecessary
     - To virtualize a software TLB, the VMM manages the real one and keeps a copy of its contents for each guest VM
       - Any instruction that accesses the TLB must trap
     - Hardware TLB still managed by hardware
       - Must be flushed on a VM switch unless PID tags are available
     - HW or SW TLBs with PID tags can mix entries from different VMs (see the sketch below)
       - Avoids flushing the TLB on a VM switch
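
     A small sketch of a VM-tagged software TLB, assuming a hypothetical fixed-size table: each entry carries the owning VM's id (playing the role of a PID/ASID tag), so lookups from different VMs never hit each other's translations and no flush is needed on a VM switch.

```c
/* Sketch of a software TLB whose entries are tagged with a VM id. Illustrative only. */
#include <stdio.h>

#define TLB_ENTRIES 8

struct tlb_entry {
    int valid;
    int vm_id;   /* which VM installed this entry */
    int vpage;   /* guest virtual page            */
    int ppage;   /* host physical page            */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Lookup hits only on entries owned by the currently running VM. */
static int tlb_lookup(int current_vm, int vpage)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vm_id == current_vm && tlb[i].vpage == vpage)
            return tlb[i].ppage;
    return -1;   /* miss: fall back to walking the (shadow) page tables */
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ 1, /*vm*/ 0, /*vpage*/ 3, /*ppage*/ 12 };
    tlb[1] = (struct tlb_entry){ 1, /*vm*/ 1, /*vpage*/ 3, /*ppage*/  9 };

    /* Same virtual page, different VMs, different translations, no flush needed. */
    printf("VM0 page 3 -> %d, VM1 page 3 -> %d\n",
           tlb_lookup(0, 3), tlb_lookup(1, 3));
    return 0;
}
```
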
  17. Impact of I/O on Virtual Machines
     - Most difficult part of virtualization
       - Increasing number of I/O devices attached to the computer
       - Increasing diversity of I/O device types
       - Sharing a real device among multiple VMs
       - Supporting a myriad of device drivers, especially with differing guest OSes
     - Give each VM generic versions of each type of I/O device, and let the VMM handle the real I/O
       - Drawback: slower than giving the VM direct access
     - Method for mapping a virtual I/O device to a physical one depends on the type (disk case sketched below):
       - Disks are partitioned by the VMM to create virtual disks for the guests
       - Network interfaces are shared between VMs in short time slices
         - VMM tracks messages for the virtual network addresses
         - Routes them to the proper guest
       - USB might be directly attached to a VM
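
     For the disk case in the list above, the mapping is just an offset into the guest's slice of the physical disk. A minimal sketch, with made-up sizes and names:

```c
/* Sketch of a VMM carving one physical disk into per-guest virtual disks. */
#include <stdio.h>

struct vdisk {
    long base_block;   /* first physical block of this guest's slice */
    long num_blocks;   /* size of the slice                          */
};

/* Translate a guest's virtual block number to a physical block number. */
static long vdisk_translate(const struct vdisk *d, long vblock)
{
    if (vblock < 0 || vblock >= d->num_blocks)
        return -1;                     /* outside the guest's partition */
    return d->base_block + vblock;
}

int main(void)
{
    struct vdisk guest0 = { .base_block = 0,      .num_blocks = 100000 };
    struct vdisk guest1 = { .base_block = 100000, .num_blocks = 100000 };

    printf("guest0 block 42 -> physical %ld\n", vdisk_translate(&guest0, 42));
    printf("guest1 block 42 -> physical %ld\n", vdisk_translate(&guest1, 42));
    return 0;
}
```
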
  18. Example: Xen VM
     - Xen: open-source system VMM for the 80x86 ISA
       - Project started at the University of Cambridge; GNU license
     - Original vision of a VM is running an unmodified OS
       - Significant wasted effort just to keep the guest OS happy
     - "Paravirtualization": small modifications to the guest OS to simplify virtualization
     - Three examples of paravirtualization in Xen:
       - To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of the address space of each VM
       - Guest OS is allowed to allocate pages; Xen just checks that it doesn't violate protection restrictions
       - To protect the guest OS from user programs in the VM, Xen takes advantage of the 80x86's four protection levels
         - Most x86 OSes keep everything at privilege level 0 or 3
         - Xen VMM runs at the highest level (0)
         - Guest OS runs at the next level (1)
         - Applications run at the lowest level (3)
  19. Xen Changes for Paravirtualization
     - Port of Linux to Xen changed about 3,000 lines, or roughly 1% of the 80x86-specific code
       - Doesn't affect the application binary interfaces (ABI/API) of the guest OS
     - OSes supported in Xen 2.0 (http://wiki.xensource.com/xenwiki/OSCompatibility):

       OS           Runs as host OS   Runs as guest OS
       Linux 2.4    Yes               Yes
       Linux 2.6    Yes               Yes
       NetBSD 2.0   No                Yes
       NetBSD 3.0   Yes               Yes
       Plan 9       No                Yes
       FreeBSD 5    No                Yes
  20. Xen and I/O
     - To simplify I/O, privileged VMs are assigned to each hardware I/O device: "driver domains"
       - Xen jargon: "domains" = virtual machines
     - Driver domains run the physical device drivers
       - Interrupts are still handled by the VMM before being sent to the appropriate driver domain
     - Regular VMs ("guest domains") run simple virtual device drivers
       - Communicate with the physical device drivers in the driver domains to access the physical I/O hardware
     - Data is sent between guest and driver domains by page remapping (conceptually sketched below)
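
     A conceptual sketch of that page-remapping handoff. This is not the actual Xen grant-table interface; it only illustrates the idea of transferring page ownership between domains instead of copying the data.

```c
/* Sketch: "send" data by changing which domain maps a page, not by copying it. */
#include <stdio.h>

#define NPAGES 8

static int page_owner[NPAGES];   /* which domain currently maps each page */

/* Transfer page ownership from one domain to another: the bytes never move. */
static int remap_page(int page, int from_dom, int to_dom)
{
    if (page < 0 || page >= NPAGES || page_owner[page] != from_dom)
        return -1;
    page_owner[page] = to_dom;
    return 0;
}

int main(void)
{
    enum { GUEST_DOM = 1, DRIVER_DOM = 2 };

    page_owner[5] = GUEST_DOM;              /* guest fills page 5 with a packet */
    remap_page(5, GUEST_DOM, DRIVER_DOM);   /* hand the page to the driver VM   */
    printf("page 5 now mapped by domain %d\n", page_owner[5]);
    return 0;
}
```
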
  21. Xen Performance
     - Performance relative to native Linux for Xen, for 6 benchmarks (from the Xen developers)
     - But are these user-level CPU-bound programs? I/O-intensive workloads? Or I/O-intensive but I/O-bound?
  22. Xen Performance, Part II
     - A subsequent study noticed that the Xen experiments were based on 1 Ethernet network interface card (NIC), and that the single NIC was the performance bottleneck
  23. Xen Performance, Part III
     - > 2X instructions for guest VM + driver VM
     - > 4X L2 cache misses
     - 12X-24X data TLB misses
  24. Xen Performance, Part IV
     - > 2X instructions: caused by page remapping and transfer between the driver and guest VMs, and by communication over the channel between the 2 VMs
     - 4X L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory
       - Since Xen doesn't support "gather" DMA in its virtual network interface, it can't do true zero-copy in the guest VM
     - 12X-24X data TLB misses: due to 2 Linux optimizations missing in Xen
       - Superpages for part of the Linux kernel space: 4 MB pages lower TLB misses compared to using 1024 4 KB pages. Not in Xen
       - PTEs marked global aren't flushed on a context switch, and Linux uses them for kernel space. Not in Xen
     - Future versions of Xen may address items 2 (L2 misses) and 3 (TLB misses), but item 1 (extra instructions) may be inherent to the design?
  25. Conclusion
     - VM monitor presents a SW interface to guest software, isolates guest states, and protects itself from guest software (including guest OSes)
     - Virtual machine revival
       - Overcome security flaws of large OSes
       - Manage software, manage hardware
       - Processor performance no longer the highest priority
     - Virtualization challenges for the processor, virtual memory, and I/O
       - Paravirtualization helps cope with those difficulties
     - Xen as an example VMM using paravirtualization
       - 2005 performance on non-I/O-bound, I/O-intensive apps: 80% of native Linux without a driver VM, 34% with a driver VM
