Virtual Machines
Presentation Transcript

  • CS136, Advanced Architecture: Virtual Machines
  • Outline
    • Virtual Machines
    • Xen VM: Design and Performance
    • Conclusion
  • Introduction to Virtual Machines
    • VMs developed in late 1960s
      • Remained important in mainframe computing over the years
      • Largely ignored in single-user computers of 1980s and 1990s
    • Recently regained popularity due to:
      • Increasing importance of isolation and security in modern systems
      • Failures in security and reliability of standard operating systems
      • Sharing of single computer among many unrelated users
      • Dramatic increases in raw speed of processors, making VM overhead more acceptable
  • What Is a Virtual Machine (VM)?
    • Broadest definition:
      • Any abstraction that provides a Turing-complete and standardized programming interface
      • Examples: x86 ISA; Java bytecode; even Python and Perl
      • As level gets higher, utility of definition gets lower
    • Better definition:
      • An abstract machine that provides a standardized interface similar to a hardware ISA, but at least partly under control of software that provides added features
      • Best to distinguish “true” VM from emulators (although Java VM is entirely emulated)
    • Often, VM is partly supported in hardware, with minimal software control
      • E.g., give multiple virtual x86s on one real one, similar to way virtual memory gives illusion of more memory than reality
  • System Virtual Machines
    • “(Operating) System Virtual Machines” provide complete system-level environment at binary ISA
      • Assumes ISA always matches native hardware
      • E.g., IBM VM/370, VMware ESX Server, and Xen
    • Presents illusion that VM users have an entire private computer, including copy of OS
    • Single machine runs multiple VMs, and can support multiple (and different) OSes
      • On conventional platform, single OS “owns” all HW resources
      • With VM, multiple OSes all share HW resources
    • Underlying HW platform is the host; its resources are shared among guest VMs
  • Virtual Machine Monitors (VMMs)
    • Virtual machine monitor (VMM) or hypervisor is software that supports VMs
    • VMM determines how to map virtual resources to physical ones
    • Physical resource may be time-shared, partitioned, or emulated in software
    • VMM is much smaller than a traditional OS
      • Isolation portion of a VMM is ≈ 10,000 lines of code
  • VMM Overhead
    • Depends on workload
    • Goal for system VMs:
      • Run almost all instructions directly on native hardware
    • User-level CPU-bound programs (e.g., SPEC) have near-zero virtualization overhead
      • Run at native speeds since OS rarely invoked
    • I/O-intensive workloads are OS-intensive
      • Execute many system calls and privileged instructions
      • Can result in high virtualization overhead
    • But if I/O-intensive workload is also I/O-bound
      • Processor utilization is low (since waiting for I/O)
      • Processor virtualization can be hidden in I/O costs
      • So virtualization overhead is low
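
    A rough worked example of the last point (illustrative numbers, not from the slides): suppose a job keeps the CPU busy 10% of the time and waits on I/O the remaining 90%, with CPU work overlapped behind the I/O. If virtualization doubles the CPU time, wall-clock time is still max(2 × 0.1, 0.9) = 0.9 of the original I/O-dominated run, so the overhead is invisible; the same 2× CPU penalty would double the run time of a CPU-bound job.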
  • Important Uses of VMs
    • Multiple OSes
      • No more dual boot!
      • Can even transfer data (e.g., cut-and-paste) between VMs
    • Protection
      • Crash or intrusion in one OS doesn’t affect others
      • Easy to replace failed OS with fresh, clean one
    • Software Management
      • VMs can run complete SW stack, even old OSes like DOS
      • Run a legacy OS, the stable current OS, and a test release on the same HW
    • Hardware Management
      • Independent SW stacks can share HW
        • Run application on own OS (helps dependability)
      • Migrate running VM to different computer
        • To balance load or to evacuate from failing HW
  • Virtual Machine Monitor Requirements
    • VM Monitor
      • Presents SW interface to guest software
      • Isolates guests’ states from each other
      • Protects itself from guest software (including guest OSes)
    • Guest software should behave exactly as if running on native HW
      • Except for performance-related behavior or limitations of fixed resources shared by multiple VMs
      • Hard to achieve perfection in real system
    • Guest software shouldn’t be able to change allocation of real system resources directly
    • Hence, VMM must control everything, even though the currently running guest VM and OS are temporarily using the resources
      • Access to privileged state, address translation, I/O, exceptions and interrupts, …
  • Virtual Machine Monitor Requirements (continued)
    • VMM must be at higher privilege level than guest VM, which generally runs in user mode
      • Execution of privileged instructions handled by VMM
    • E.g., timer or I/O interrupt:
      • VMM suspends currently running guest
      • Saves state
      • Handles interrupt
        • Possibly handled internally, possibly delivered to a guest
      • Decides which guest to run next
      • Loads its state
      • Guest VMs that want the timer are given a virtual one (see the sketch below)
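
    A minimal sketch of this interrupt path in C (all names and structures are invented for illustration, not taken from any real VMM):

        #include <stddef.h>

        #define NGUESTS      4
        #define TIMER_VECTOR 32

        /* Hypothetical per-guest saved state (simplified). */
        struct guest {
            unsigned long regs[16], pc;
            int wants_virtual_timer;   /* guest asked for a timer?           */
            int pending_irq;           /* virtual interrupt queued for guest */
        };

        static struct guest guests[NGUESTS];
        static struct guest *current = &guests[0];

        /* Stubs standing in for real context-switch and routing machinery. */
        static void save_state(struct guest *g)    { (void)g; }
        static void load_state(struct guest *g)    { (void)g; }
        static void deliver_to_owner(int v)        { (void)v; }
        static struct guest *schedule_next(void)   { return &guests[0]; }

        /* VMM's real interrupt handler: suspend the guest, handle the
         * interrupt (internally or by delivery), pick a guest, resume it. */
        void vmm_handle_interrupt(int vector)
        {
            save_state(current);              /* save running guest's state */

            if (vector == TIMER_VECTOR) {
                /* Handle the real timer inside the VMM, then post a
                 * virtual tick to every guest that wants a timer. */
                for (size_t i = 0; i < NGUESTS; i++)
                    if (guests[i].wants_virtual_timer)
                        guests[i].pending_irq = TIMER_VECTOR;
            } else {
                deliver_to_owner(vector);     /* route device interrupt to
                                                 the guest that owns it    */
            }

            current = schedule_next();        /* decide which guest runs next */
            load_state(current);              /* load its state and resume    */
        }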
  • Hardware Requirements
    • Hardware needs are roughly the same as for paged virtual memory:
    • At least 2 processor modes, system and user
    • Privileged subset of instructions
      • Available only in system mode
      • Trap if executed in user mode
      • All system resources controllable only via these instructions
  • ISA Support for Virtual Machines
    • If ISA designers plan for VMs, easy to limit:
      • What instructions VMM must handle
      • How long it takes to emulate them
    • Because chip makers ignored VM technology, ISA designers didn’t “plan ahead”
      • Including 80x86 and most RISC architectures
    • Guest system must see only virtual resources
      • Guest OS runs in user mode on top of VMM
      • If guest tries to touch HW-related resource, must trap to VMM
        • Requires HW support to initiate trap
        • VMM must then insert emulated information
      • If HW built wrong, guest will see or change privileged stuff
        • VMM must then modify guest’s binary code
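
    What the trap-and-emulate step above looks like from the VMM side, as a hedged C sketch (the opcodes and structures are invented; a real VMM decodes the actual ISA encoding):

        #include <stdint.h>

        /* Hypothetical privileged operations the hardware trapped on. */
        enum op { OP_READ_TIMER, OP_WRITE_PT_BASE, OP_DISABLE_IRQ };

        struct trap_frame { enum op op; uint64_t operand, result; };

        /* The guest's virtual copies of privileged state. */
        struct vguest {
            uint64_t virtual_timer;   /* emulated timer value            */
            uint64_t virtual_ptbr;    /* guest's idea of page-table base */
            int      virtual_if;      /* guest's idea of interrupt flag  */
        };

        /* The guest ran a privileged instruction in user mode, hardware
         * trapped to the VMM, and we complete the instruction against
         * virtual state instead of the real registers. */
        void emulate_privileged(struct vguest *g, struct trap_frame *tf)
        {
            switch (tf->op) {
            case OP_READ_TIMER:
                tf->result = g->virtual_timer;  /* insert emulated information */
                break;
            case OP_WRITE_PT_BASE:
                g->virtual_ptbr = tf->operand;  /* record it; don't touch HW   */
                break;
            case OP_DISABLE_IRQ:
                g->virtual_if = 0;              /* real IF stays with the VMM  */
                break;
            }
            /* ...then advance the guest's PC past the instruction and resume. */
        }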
  • ISA Impact on Virtual Machines
    • Consider x86 PUSHF/POPF instructions
      • Push flags register on stack or pop it back
      • Flags contains condition codes (good to be able to save/restore) but also interrupt enable flag (IF)
    • Pushing flags isn’t privileged
      • Thus, guest OS can read IF and discover it’s not the way it was set
        • VMM isn’t invisible any more
    • Popping flags in user mode ignores IF
      • VMM now doesn’t know what guest wants IF to be
      • Should trap to VMM
    • Possible solution: modify code, replacing pushf/popf with special interrupting instructions
      • But now guest can read own code and detect VMM
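
    The detection problem is concrete enough to show in a few lines. This user-mode x86-64 snippet (a sketch; what it prints depends on the processor and VMM) reads the flags with pushf, which is unprivileged and therefore never traps; a classically virtualized guest OS could compare bit 9 (IF) against the value it last set and notice the VMM:

        #include <stdint.h>
        #include <stdio.h>

        /* Read RFLAGS from user mode; pushf never traps on x86. */
        static uint64_t read_flags(void)
        {
            uint64_t f;
            __asm__ volatile("pushf\n\tpop %0" : "=r"(f));
            return f;
        }

        int main(void)
        {
            uint64_t flags = read_flags();
            /* Bit 9 is IF.  A guest OS that believes it executed cli
             * (IF = 0) but reads IF = 1 here has just discovered that
             * it is running under a VMM. */
            printf("IF = %d\n", (int)((flags >> 9) & 1));
            return 0;
        }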
  • Hardware Support for Virtualization
    • Old “correct” implementation: trap on every pushf/popf so VMM can fix up results
      • Very expensive, since pushf/popf used frequently
    • Alternative: IF shouldn’t be in same place as condition codes
      • Pushf/popf can be unprivileged
      • IF manipulation is now very rare
    • Pentium now has even better solution
      • In user mode, VIF (“Virtual Interrupt Flag”) holds what guest wants IF to be
      • Pushf/popf manipulate VIF instead of IF
      • Host can now control real IF, guest sees virtual one
      • Basic idea can be extended for many similar “OS-only” flags and registers
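
    The VMM-side bookkeeping behind such a virtual flag might look like this C sketch (names invented; on real Pentium-class hardware the VIF handling itself is done by the processor):

        /* Per-virtual-CPU state for the virtual interrupt flag. */
        struct vcpu {
            int virtual_if;    /* what the guest believes IF is          */
            int pending_irq;   /* virtual interrupt queued for the guest */
        };

        static void deliver_virtual_irq(struct vcpu *v) { v->pending_irq = 0; }

        /* Called when the guest changes its (virtual) IF, e.g. via a
         * trapped sti/cli or a popf that flipped the flag. */
        void set_guest_if(struct vcpu *v, int enable)
        {
            v->virtual_if = enable;
            /* Re-enabling virtual interrupts may unblock a queued one;
             * the real IF stays under VMM control the whole time. */
            if (enable && v->pending_irq)
                deliver_virtual_irq(v);
        }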
  • Impact of VMs on Virtual Memory
    • Each guest manages own page tables
      • How to make this work?
    • VMM separates real and physical memory
      • Real memory is intermediate level between virtual and physical
      • Some call the levels virtual, physical, and machine memory
      • Guest maps virtual to real memory via its page tables
        • VMM page tables map real to physical
    • VMM maintains shadow page table that maps directly from guest virtual space to HW physical address space
      • Rather than pay extra level of indirection on every memory access
      • VMM must trap any attempt by guest OS to change its page table or to access page table pointer
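
    A sketch of the shadow-table idea in C, with toy one-level page tables (invented for illustration; real tables are multi-level): the VMM composes the guest's virtual-to-real mapping with its own real-to-physical mapping into one table the hardware can walk directly.

        #include <stdint.h>
        #include <stddef.h>

        #define NPAGES  1024
        #define INVALID UINT32_MAX

        static uint32_t guest_pt[NPAGES];   /* guest virtual -> real     (guest-managed) */
        static uint32_t vmm_pt[NPAGES];     /* real          -> physical (VMM-managed)   */
        static uint32_t shadow_pt[NPAGES];  /* guest virtual -> physical (HW walks this) */

        /* Rebuild one shadow entry after the guest changed guest_pt[vpn].
         * The VMM learns about the change because writes to the guest's
         * page table (or its page-table pointer) trap to the VMM. */
        void update_shadow(size_t vpn)
        {
            uint32_t real = guest_pt[vpn];
            shadow_pt[vpn] = (real == INVALID) ? INVALID : vmm_pt[real];
        }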
  • ISA Support for VMs & Virtual Memory
    • IBM 370 architecture added additional level of indirection that was managed by VMM
      • Guest OS kept page tables as before, so shadow pages were unnecessary
    • To virtualize software TLB, VMM manages real one and has copy of contents for each guest VM
      • Any instruction that accesses TLB must trap
    • Hardware TLB still managed by hardware
      • Must flush on VM switch unless PID tags available
    • HW or SW TLBs with PID tags can mix entries from different VMs
      • Avoids flushing TLB on VM switch
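
    A C sketch of a PID/VM-tagged software TLB (structure invented for illustration): because lookups match only the current guest's tag, entries from different VMs can coexist and no flush is needed on a VM switch.

        #include <stdint.h>
        #include <stdbool.h>

        #define TLB_SIZE 64

        struct tlb_entry {
            uint32_t vpn;     /* virtual page number        */
            uint32_t pfn;     /* physical frame number      */
            uint8_t  vm_tag;  /* which guest owns the entry */
            bool     valid;
        };

        static struct tlb_entry tlb[TLB_SIZE];

        bool tlb_lookup(uint8_t current_vm, uint32_t vpn, uint32_t *pfn)
        {
            for (int i = 0; i < TLB_SIZE; i++)
                if (tlb[i].valid && tlb[i].vm_tag == current_vm
                                 && tlb[i].vpn == vpn) {
                    *pfn = tlb[i].pfn;   /* hit: other VMs' entries were ignored  */
                    return true;
                }
            return false;                /* miss: refill from this guest's tables */
        }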
  • Impact of I/O on Virtual Machines
    • Most difficult part of virtualization
      • Increasing number of I/O devices attached to computer
      • Increasing diversity of I/O device types
      • Sharing real device among multiple VMs
      • Supporting myriad of device drivers, especially with differing guest OSes
    • Give each VM generic versions of each type of I/O device, and let VMM handle real I/O
      • Drawback: slower than giving VM direct access
    • Method for mapping virtual I/O device to physical depends on type:
      • Disks partitioned by VMM to create virtual disks for guests
      • Network interfaces shared between VMs in short time slices
        • VMM tracks messages for virtual network addresses
        • Routes to proper guest
      • USB might be directly attached to VM
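
    For the disk case, the mapping can be as simple as an offset, as in this C sketch (the types and the real_disk_read helper are invented stand-ins): each guest's virtual disk is a fixed slice of the real disk, and the VMM keeps the guest inside its slice.

        #include <stdint.h>

        struct vdisk {
            uint64_t base_block;   /* where this guest's slice starts */
            uint64_t nblocks;      /* slice length in blocks          */
        };

        /* Stub standing in for the real device driver. */
        static int real_disk_read(uint64_t block, void *buf)
        { (void)block; (void)buf; return 0; }

        int vdisk_read(struct vdisk *d, uint64_t vblock, void *buf)
        {
            if (vblock >= d->nblocks)
                return -1;   /* guest can't reach another VM's data */
            return real_disk_read(d->base_block + vblock, buf);
        }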
  • Example: Xen VM
    • Xen: Open-source System VMM for 80x86 ISA
      • Project started at University of Cambridge, GNU GPL license
    • Original vision of a VM is running an unmodified OS
      • Significant wasted effort just to keep guest OS happy
    • “Paravirtualization”: small modifications to the guest OS to simplify virtualization
    • Three examples of paravirtualization in Xen (see the sketch after this list):
      • To avoid flushing the TLB when invoking the VMM, Xen is mapped into the upper 64 MB of each VM’s address space
      • The guest OS is allowed to allocate its own pages; Xen just checks that it doesn’t violate protection restrictions
      • To protect the guest OS from user programs in the VM, Xen takes advantage of the 80x86’s four protection levels
        • Most x86 OSes keep everything at privilege level 0 or 3
        • Xen VMM runs at the highest privilege level (0)
        • Guest OS runs at the next level (1)
        • Applications run at the lowest level (3)
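
    The sketch promised above: what a paravirtualized page-table update might look like in C. This is a simplified illustration of the call path implied by the ring layout, not Xen's actual hypercall ABI; every name here is invented.

        /* Hypothetical hypercall numbers. */
        enum hypercall { HC_UPDATE_PTE, HC_SET_TIMER };

        /* In a real system this would be a trap/gate from ring 1 into the
         * ring-0 VMM (e.g. a software interrupt); this stub fakes it. */
        static long hypercall(enum hypercall op, unsigned long a1, unsigned long a2)
        { (void)op; (void)a1; (void)a2; return 0; }

        /* Modified guest kernel code: instead of writing the PTE itself
         * (and faulting), it asks the VMM, which validates the update
         * against its protection restrictions before applying it. */
        void guest_set_pte(unsigned long *pte_addr, unsigned long new_pte)
        {
            hypercall(HC_UPDATE_PTE, (unsigned long)pte_addr, new_pte);
        }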
  • Xen Changes for Paravirtualization
    • Port of Linux to Xen changed ≈ 3000 lines, or ≈ 1% of the 80x86-specific code
      • Doesn’t affect application binary interfaces (ABI/API) of guest OS
    • OSes supported in Xen 2.0 (http://wiki.xensource.com/xenwiki/OSCompatibility):

      OS           Runs as host OS   Runs as guest OS
      Linux 2.4    Yes               Yes
      Linux 2.6    Yes               Yes
      NetBSD 2.0   No                Yes
      NetBSD 3.0   Yes               Yes
      Plan 9       No                Yes
      FreeBSD 5    No                Yes
  • Xen and I/O
    • To simplify I/O, privileged VMs are assigned to each hardware I/O device: “driver domains”
      • Xen jargon: “domains” = virtual machines
    • Driver domains run physical device drivers
      • Interrupts still handled by VMM before being sent to appropriate driver domain
    • Regular VMs (“guest domains”) run simple virtual device drivers
      • Communicate with physical device drivers in driver domains to access physical I/O hardware
    • Data sent between guest and driver domains by page remapping
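
    A C sketch of transfer by page remapping (inspired by, but not identical to, Xen's mechanism; all names invented): instead of copying a buffer between domains, the VMM re-points the receiver's page-table entry at the sender's frame.

        #include <stdint.h>
        #include <stddef.h>

        struct domain { uint32_t pt[1024]; };   /* toy one-level page table */

        void remap_page(struct domain *from, size_t from_vpn,
                        struct domain *to,   size_t to_vpn)
        {
            /* Transfer ownership of the physical frame: map it into the
             * receiver and unmap it from the sender; no data is copied. */
            to->pt[to_vpn]     = from->pt[from_vpn];
            from->pt[from_vpn] = 0;   /* 0 = invalid in this sketch */
            /* A real VMM would also flush or re-tag affected TLB entries. */
        }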
  • Xen Performance
    • Performance relative to native Linux for Xen, for 6 benchmarks (from Xen developers)
    • But are these user-level CPU-bound programs? I/O-intensive workloads? I/O-intensive but also I/O-bound?
  • Xen Performance, Part II
    • A subsequent study noticed that the Xen experiments were based on 1 Ethernet network interface card (NIC), and the single NIC was the performance bottleneck
  • Xen Performance, Part III
    • 1. > 2X instructions for guest VM + driver VM
    • 2. > 4X L2 cache misses
    • 3. 12X – 24X data TLB misses
  • Xen Performance, Part IV
    • 1. > 2X instructions: caused by page remapping and transfer between driver and guest VMs, and by communication over the channel between the 2 VMs
    • 2. > 4X L2 cache misses: Linux uses a zero-copy network interface that depends on the ability of the NIC to do DMA from different locations in memory
      • Since Xen doesn’t support “gather” DMA in its virtual network interface, it can’t do true zero-copy in the guest VM
    • 3. 12X – 24X data TLB misses: 2 Linux optimizations missing from Xen
      • Superpages for part of Linux kernel space: one 4 MB page lowers TLB misses versus using 1024 4 KB pages. Not in Xen
      • PTEs marked global aren’t flushed on a context switch, and Linux uses them for kernel space. Not in Xen
    • Future Xen may address 2 and 3, but is 1 inherent?
  • Conclusion
    • VM Monitor presents SW interface to guest software, isolates guest states, and protects itself from guest software (including guest OSes)
    • Virtual Machine revival
      • Overcome security flaws of large OSes
      • Manage software, manage hardware
      • Processor performance no longer highest priority
    • Virtualization challenges for processor, virtual memory, and I/O
      • Paravirtualization to cope with those difficulties
    • Xen as example VMM using paravirtualization
      • 2005 performance on I/O-intensive but not I/O-bound apps: 80% of native Linux without a driver VM, 34% with a driver VM