Xen and the Art of Virtualization

    Presentation Transcript

    • Xen and the Art of Virtualization. Paul Barham et al., University of Cambridge Computer Laboratory. Presenter: Devdatta Kulkarni
    • Outline
      • Overview of Xen’s approach of virtualization
      • Detailed Design of Xen
      • Experimentation and Evaluation
      • Discussion
    • Problem Statement
      • Current approaches to virtualization have performance problems
      • Goal: Provide performance as close as possible to non-virtualized system setup
      • Solution: Use paravirtualization by selectively exposing real hardware
    • Xen’s Virtualization Approach
      • Full Virtualization is a performance evil
        • Advantages of exposing real hardware
          • Better support for time-sensitive tasks via real and virtual time
          • Performance improvement by using superpages or page coloring
      • Provide an abstraction that is “similar” to underlying hardware
    • Implications of Providing a “Similar” rather than “Exact” Hardware Abstraction
      • Changes required to guest OSes
      • No changes are required to guest applications
    • Terminology Used
      • Guest OS: OSes that Xen can host
      • Domain: Running virtual machine within which a guest OS executes
      • Xen: Hypervisor
    • Virtual Machine Interface
      • Memory Management
      • CPU
      • Device I/O
    • Memory Management Interface
      • Guest OSes responsible for managing h/w page tables
      • Xen is restricted to a 64 MB section of every address space, avoiding a TLB flush when entering the hypervisor
    • CPU Interface
      • Xen executes in ring 0 : Most privileged mode
      • Guest OSes execute in ring 1
      • Page fault handling
        • Xen writes the faulting address into an extended stack frame on the guest OS stack (the faulting-address register is readable only in ring 0)
      • System calls: fast exception handlers that bypass Xen
        • Code that double-faults is terminated by Xen
    • Device I/O
      • Data is transferred between each domain and Xen using shared-memory, asynchronous buffer-descriptor rings
      • A lightweight event-delivery mechanism invokes “callbacks” in the guest OSes
    • Xen System Architecture
    • Control Transfer: Hypercalls and Events
      • Hypercalls: similar to system calls
      • Events: similar to Unix signals
        • Event-callback handler invoked by Xen
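The event mechanism above can be sketched as a pending bitmap that Xen sets asynchronously, with delivery deferred until the domain is scheduled. This is a minimal illustration; the names (`send_event`, `deliver_events`, the single 32-bit bitmap) are assumptions, not Xen's actual interface.

```c
/* Sketch of Xen-style event delivery: Xen sets a pending bit and, when
   the domain next runs, its registered callback is invoked, much like an
   asynchronous Unix signal. All names here are illustrative. */
#include <assert.h>
#include <stdint.h>

static uint32_t pending;                  /* bitmap of pending event ports  */
static void (*event_callback)(uint32_t);  /* handler registered by guest OS */
static uint32_t last_delivered;

static void guest_handler(uint32_t evts) { last_delivered = evts; }

/* "Xen" side: mark an event pending for the domain. */
static void send_event(int port) { pending |= 1u << port; }

/* Invoked when the domain is next scheduled: deliver and clear. */
static void deliver_events(void)
{
    if (pending && event_callback) {
        uint32_t e = pending;
        pending = 0;                      /* clear before calling back */
        event_callback(e);
    }
}
```

Batching falls out naturally: several events raised while the domain is descheduled are delivered in one callback.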
    • Data Transfer: I/O Rings
      • Guiding Principles:
        • Resource management/accountability
        • Event notification
      • I/O Descriptor Rings:
        • Both Xen and the guest OS act as producers and consumers of the data
    • I/O rings Structure
    • Subsystem Virtualization
      • CPU Scheduling
        • Borrowed Virtual Time (BVT) scheduling
          • Low latency wakeup
      • Time and Timers
        • Real time, virtual time and wall-clock time
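BVT's dispatch rule, which gives recently woken domains low latency, can be sketched as: run the domain with the smallest *effective* virtual time, where a warping domain's effective virtual time is its actual virtual time minus a warp credit. The struct and function names below are illustrative, not Xen's scheduler code.

```c
/* Sketch of the Borrowed Virtual Time dispatch rule. */
#include <assert.h>
#include <stddef.h>

typedef struct {
    long avt;      /* actual virtual time accumulated by the domain */
    long warp;     /* warp credit "borrowed" against future CPU time */
    int  warping;  /* set briefly after wakeup for low-latency dispatch */
} bvt_dom_t;

static long evt(const bvt_dom_t *d)       /* effective virtual time */
{
    return d->avt - (d->warping ? d->warp : 0);
}

/* Pick the runnable domain with minimal effective virtual time. */
static size_t bvt_pick(const bvt_dom_t *doms, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (evt(&doms[i]) < evt(&doms[best]))
            best = i;
    return best;
}
```

A just-woken domain with `warping` set appears to be "in the past" and preempts its peers, without changing its long-run CPU share.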
    • Virtual Address Translation
      • Register Guest OS page tables directly with MMU
      • Restrict guest OS to read only access
      • Type and reference count associated with each machine page frame
      • A page frame can be reused only when unpinned and its reference count is zero
      • Use batch updates in a single hypercall
        • Commit updates before TLB flush
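The batching point above can be sketched as a small update queue that is applied in a single (simulated) hypercall, flushed either when full or explicitly before a TLB flush. This is loosely modeled on the paper's batched MMU updates; the queue size, names, and the lack of per-entry validation are assumptions for illustration.

```c
/* Sketch of batching page-table updates into a single hypercall. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint64_t *ptr; uint64_t val; } mmu_update_t;

#define BATCH_MAX 16
static mmu_update_t queue[BATCH_MAX];
static size_t queued = 0;
static unsigned hypercalls = 0;      /* count of (simulated) traps into Xen */

/* Pretend hypercall: apply all queued updates at once.
   Real Xen would validate each entry against frame types first. */
static void hypercall_mmu_update(const mmu_update_t *u, size_t n)
{
    hypercalls++;
    for (size_t i = 0; i < n; i++)
        *u[i].ptr = u[i].val;
}

static void queue_update(uint64_t *ptr, uint64_t val)
{
    queue[queued++] = (mmu_update_t){ ptr, val };
    if (queued == BATCH_MAX) { hypercall_mmu_update(queue, queued); queued = 0; }
}

/* Commit all pending updates, e.g. immediately before a TLB flush. */
static void flush_updates(void)
{
    if (queued) { hypercall_mmu_update(queue, queued); queued = 0; }
}
```

Amortizing many page-table writes over one ring-0 crossing is the whole point: the per-hypercall cost is paid once per batch, not once per update.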
    • Physical Memory
      • Statically partitioned between domains
        • XenoLinux uses balloon driver to adjust the memory
      • Physical Memory
        • Virtual, contiguous
      • Hardware Memory
        • Actual hardware memory, sparse
      • Physical to Hardware : Done by guest OS
      • Hardware to Physical : Done by Xen
        • Using a shared translation array
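The two translation directions can be sketched as a pair of arrays: a guest-maintained physical-to-machine map (contiguous index, sparse values) and the Xen-maintained machine-to-physical array shared with all domains. The sizes and example frame numbers are invented for illustration.

```c
/* Sketch of the two memory translation directions in Xen's model. */
#include <assert.h>
#include <stdint.h>

#define GUEST_PAGES   4
#define MACHINE_PAGES 16

/* Guest-maintained: contiguous "physical" frame -> sparse machine frame. */
static uint32_t phys_to_machine[GUEST_PAGES] = { 7, 2, 11, 5 };

/* Xen-maintained, readable by all domains:
   machine frame -> owning domain's physical frame. */
static uint32_t machine_to_phys[MACHINE_PAGES];

static void build_m2p(void)
{
    for (uint32_t p = 0; p < GUEST_PAGES; p++)
        machine_to_phys[phys_to_machine[p]] = p;
}
```

The guest sees a contiguous physical space even though the machine frames backing it are scattered; the shared array lets it translate back cheaply when it must install real machine addresses in page tables.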
    • Network
      • Xen provides abstractions for
        • Virtual firewall-router (VFR)
        • Virtual network interfaces
          • 2 I/O rings of buffer descriptors
          • Each direction has associated rule
            • (<pattern>, <action>)
        • Transmission: Guest OS enqueues a buffer descriptor onto the transmit ring
        • Reception: Guest OS exchanges an unused page frame for each packet it receives
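The per-direction (&lt;pattern&gt;, &lt;action&gt;) rules can be sketched as a first-match rule table. The rule fields here (exact-match source IP with a wildcard) are a deliberate simplification; the VFR's real patterns are richer.

```c
/* Sketch of VFR-style (<pattern>, <action>) rule matching for one direction. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef enum { ACT_DROP, ACT_ALLOW } action_t;

typedef struct {
    uint32_t src_ip;    /* pattern: exact source IP to match, 0 = wildcard */
    action_t action;
} vfr_rule_t;

/* Return the action of the first matching rule; default-deny otherwise. */
static action_t vfr_apply(const vfr_rule_t *rules, size_t n, uint32_t src_ip)
{
    for (size_t i = 0; i < n; i++)
        if (rules[i].src_ip == 0 || rules[i].src_ip == src_ip)
            return rules[i].action;
    return ACT_DROP;
}
```

Rule checking on transmit is also what lets Xen prevent a domain from spoofing source addresses it does not own.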
    • Disk
      • Domains access storage through virtual block devices (VBDs)
      • VBD
        • Ownership and access control information
        • Accessed via I/O ring mechanism
        • Requests can be reordered by guest OS and also by Xen
      • Translation table maintained in Xen
      • Batching done by Xen to improve performance
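The VBD translation table in Xen can be sketched as a per-VBD list of physical extents: a virtual sector offset is walked through the extents to yield a physical sector. The extent layout and function names are assumptions for illustration; real VBD requests also carry access-control checks.

```c
/* Sketch of Xen's VBD translation: virtual sector -> physical sector
   via a per-VBD extent table. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint64_t start_sector;   /* first physical sector of this extent */
    uint64_t nr_sectors;     /* length of this extent in sectors     */
} extent_t;

/* Translate a virtual sector within the VBD to a physical sector.
   Returns 0 for an out-of-range access (which Xen would reject). */
static int vbd_translate(const extent_t *ext, size_t n,
                         uint64_t vsec, uint64_t *psec)
{
    for (size_t i = 0; i < n; i++) {
        if (vsec < ext[i].nr_sectors) {
            *psec = ext[i].start_sector + vsec;
            return 1;
        }
        vsec -= ext[i].nr_sectors;       /* skip past this extent */
    }
    return 0;
}
```

Because translation happens inside Xen, it can reorder and batch the resulting physical requests across domains without trusting the guests' own schedulers.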
    • Evaluation
      • Entities used in experiments
        • XenoLinux port (based on Linux 2.4.21)
        • VMware Workstation 3.2 running on top of a Linux host OS
        • User-mode Linux (UML) on Linux host
      • Experimental Setup
          • Dell 2650 dual processor 2.4GHz Xeon server with 2GB RAM
          • Broadcom Tigon 3 Gigabit Ethernet NIC
          • Single Hitachi DK32EJ 146 GB 10k RPM SCSI disk
    • Relative Performance
    • Operating System Benchmarks: Process-Related Performance
    • Operating System Benchmarks: Context Switch Performance
    • Operating System Benchmarks: File and VM System Latencies
    • Network Performance
    • Concurrent Virtual Machines
    • Concurrent Virtual Machines
    • Scalability
    • Summary
      • Xen shows performance results close to native Linux system
      • Improved performance achieved at the cost of
        • Modifying OS code
    • Discussion Questions
      • Xen or VMware ?
        • Performance
        • Virtualization support in hardware
        • CPUs not having multiple privilege execution rings