

  • 1. Extending Xen* with Intel® Virtualization Technology. 2006.11.13, Mobile Embedded System Lab. @ SNUCSE, Choi, Jin-yong. Intel® Technology Journal, Vol. 10, No. 3, 2006.
  • 2. Table of Contents
    • Introduction
    • Intel ® Virtualization Technology
    • Extending Xen * with Intel ® VT
      • Processor Virtualization
      • Memory Virtualization
      • Device Virtualization
    • Performance Tuning VT-x Guests
    • Benchmark Performance
    • Current Status and Prospect
  • 3. Introduction
    • Virtualization Holes (x86 virtualization on x86)
      • Ring compression
      • Non-trapping instructions
      • Interrupt virtualization issues
      • Address space compression
    • Xen provides near-native performance with its “paravirtualization” technique.
      • But guest OSes must be modified to run on the Xen hypervisor.
    • SW-based virtualization requires frequent VMM intervention.
    Paravirtualization: modify the guest OS source code. Binary translation: modify the guest OS binary “on the fly”. (Diagram: privilege rings 0-3; the VMM occupies ring 0, displacing the guest OS from the ring it expects.)
  • 4. Intel ® Virtualization Technology
    • What is Intel VT? (formerly codenamed Vanderpool)
      • Silicon-level virtualization support that eliminates the virtualization holes
      • Unmodified guest OSes can be executed.
      • VT-x: for the IA-32 architecture
      • VT-i: for the Itanium architecture
      • VT-d: for Directed I/O
      • cf. AMD-V (formerly codenamed Pacifica)
    • Benefits of VT-x
      • Reduces the size and complexity of VMM software
      • Reduces the need for VMM intervention
      • Reduces memory overhead (no side tables…)
      • Avoids the need to modify guest OSes, allowing them to run directly on the HW
  • 5. Intel ® Virtualization Technology (cont’d)
    • VT-x : extension to the IA-32 Intel architecture
    • Virtual Machine Extension (VMX) operation
      • More-privileged mode (VMX root)
      • Less-privileged mode (VMX non-root)
      • 10 new VMX instructions
      • Virtual Machine Control Structure (VMCS)
        • manages VM entry/exit
        • holds guest and host state
        • A VMCS is created for each virtual CPU.
    • All 4 privilege levels (rings 0-3) remain available in both VMX modes.
    (Diagram: VMs, each with its OS in ring 0 and apps in ring 3, run in VMX non-root mode; the VMM runs in VMX root mode on the shared physical hardware, with VM entry/exit transitions between the two.)
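The root/non-root control flow above can be sketched as an event loop: the VMM enters the guest, the guest runs until a virtualization event forces a VM exit, the VMM handles it using state saved in the VMCS, and re-enters. The following userspace C sketch is illustrative only; the names (`vmcs_t`, `vmx_run`, `handle_guest`) are hypothetical stand-ins, not Xen's or the hardware's actual interface.

```c
#include <stdio.h>

/* Illustrative VM-exit reasons (real VT-x defines many more). */
typedef enum { EXIT_CPUID, EXIT_CR_ACCESS, EXIT_IO, EXIT_HLT } exit_reason_t;

/* Toy stand-in for the VMCS: holds guest state across entry/exit. */
typedef struct {
    unsigned long guest_rip;   /* guest instruction pointer saved on VM exit */
    exit_reason_t exit_reason; /* why non-root execution stopped */
} vmcs_t;

/* Stand-in for VMLAUNCH/VMRESUME: "runs" the guest until its next exit. */
static exit_reason_t vmx_run(vmcs_t *vmcs) {
    vmcs->guest_rip += 1;      /* pretend the guest made progress */
    return vmcs->exit_reason;
}

/* VMM event loop: VM entry -> guest runs -> VM exit -> handle -> re-enter.
   Returns the number of exits processed. */
int handle_guest(vmcs_t *vmcs, int max_exits) {
    for (int i = 0; i < max_exits; i++) {
        switch (vmx_run(vmcs)) {
        case EXIT_CPUID:     /* emulate CPUID, advance guest RIP */      break;
        case EXIT_CR_ACCESS: /* e.g. MOV to CR3: update shadow tables */ break;
        case EXIT_IO:        /* forward to the device model in Dom0 */   break;
        case EXIT_HLT:       return i + 1; /* guest halted: stop loop */
        }
    }
    return max_exits;
}
```

The point of the sketch is the shape of the loop: everything between two exits runs at native speed, and the VMM only spends time in the `switch`.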
  • 6. Extending Xen * with Intel ® VT
    • Processor Virtualization
    • Memory Virtualization
    • Device Virtualization
    • HVM (Hardware-based Virtual Machine)
      • fully virtualized domain (unmodified guest OSes)
    • Control Panel
      • creates, controls, and destroys HVM domains
      • loads the guest FW into the HVM domain
      • creates the device model thread in Dom0
        • services I/O requests
      • The HVM guest is then started, and control is passed to the first instruction in the guest FW.
      • The HVM guest executes at native speed until it encounters an event that requires special handling by Xen.
    • Xen itself remains a small hypervisor.
  • 7. Processor Virtualization
    • The Virtual CPU module
      • provides the abstraction of processor(s) to the HVM guest.
      • manages the virtual processor and associated virtualization events.
    • for the IA-32 architecture
      • A VMCS is created for each virtual CPU in an HVM domain.
      • Instructions such as CPUID and MOV from/to CR3 are intercepted as VM exits.
      • Exceptions/faults, such as page faults, are intercepted as VM exits, and virtualized exceptions/faults are injected on VM entry to the guest.
      • External interrupts unrelated to the guest are intercepted as VM exits, and virtualized interrupts are injected on VM entry to the guest.
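The interrupt path described above is two-sided: interrupts arriving while the guest runs cause a VM exit and are queued, and pending virtual interrupts are injected on the next VM entry. A minimal sketch of such a pending-vector queue, with hypothetical names (`virq_post`, `virq_inject`) not taken from Xen:

```c
#include <stdint.h>

/* One pending bit per virtual interrupt vector 0..63 (toy-sized). */
typedef struct {
    uint64_t pending;
} virq_state_t;

/* Called from the VM-exit path when an interrupt targets this guest. */
void virq_post(virq_state_t *s, int vector) {
    s->pending |= 1ULL << vector;
}

/* Called on VM entry: deliver the highest pending vector, or -1 if none.
   (x86 APIC priority grows with vector number, hence highest-bit-first.) */
int virq_inject(virq_state_t *s) {
    if (!s->pending)
        return -1;
    int vector = 63 - __builtin_clzll(s->pending); /* GCC/Clang builtin */
    s->pending &= ~(1ULL << vector);
    return vector;
}
```

In real VT-x the injection itself is done by writing the event-injection fields of the VMCS before VM entry; the queue here only models the bookkeeping.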
  • 8. Memory Virtualization
    • Xen presents the abstraction of a hardware MMU to the HVM domain.
    • IA-32 Memory Virtualization
      • supports various kinds of page tables (2/3/4-level PTs with 4KB pages)
      • maintains a shadow page table for the guest
      • extends Xen’s shadow page table to support both paravirtualized and fully virtualized guests
    • Optimized shadow page table management
      • Shadow page table code is the most performance-critical section.
      • To detect any attempt to modify the guest page table, write-protect the corresponding guest page table page.
      • Upon a page fault against a guest page table, save a “snapshot” of the page and grant write permission to the page.
      • This page is then added to an “out-of-sync” list.
      • When a TLB flush operation is executed, all entries on the “out-of-sync” list are reflected into the shadow page table.
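The out-of-sync scheme above trades one write fault per guest page-table page for a batch resync at TLB-flush time. A userspace sketch of the idea follows; the names (`shadow_unsync`, `shadow_resync_all`) and the flat one-level "page table" are illustrative simplifications, not Xen's actual shadow code, which must also translate guest frame numbers to machine frames.

```c
#include <string.h>

#define PT_ENTRIES 4   /* tiny page-table page, for illustration only */

typedef struct oos_page {
    unsigned long *guest_pt;            /* guest PT page, now writable */
    unsigned long snapshot[PT_ENTRIES]; /* contents at unprotect time */
    struct oos_page *next;
} oos_page_t;

static oos_page_t *oos_list; /* the "out-of-sync" list */

/* Write fault on a write-protected guest PT page:
   snapshot it, make it writable, and put it on the out-of-sync list. */
void shadow_unsync(oos_page_t *p, unsigned long *guest_pt) {
    p->guest_pt = guest_pt;
    memcpy(p->snapshot, guest_pt, sizeof p->snapshot);
    p->next = oos_list;
    oos_list = p;
}

/* Guest TLB flush (e.g. CR3 write): diff each listed page against its
   snapshot and propagate only the changed entries into the shadow table.
   Returns the number of entries updated. */
int shadow_resync_all(unsigned long *shadow_pt) {
    int changed = 0;
    for (oos_page_t *p = oos_list; p; p = p->next)
        for (int i = 0; i < PT_ENTRIES; i++)
            if (p->guest_pt[i] != p->snapshot[i]) {
                shadow_pt[i] = p->guest_pt[i]; /* real code translates here */
                changed++;
            }
    oos_list = 0; /* pages would be re-write-protected at this point */
    return changed;
}
```

The diff against the snapshot is what makes the batch cheap: a guest that rewrites one PTE and flushes costs one copy, one compare loop, and one shadow update instead of a VM exit per write.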
  • 9. MMU Virtualization
    • Xen/VT-x HVM implements a shadow page table
      • A shadow TLB is inefficient on x86
        • Host page faults (VM exits) are very expensive
        • Guest OSes purge the entire TLB at process-switch time (CR3 write)
        • A shadow TLB implementation would therefore raise excessive page faults
      • Shadow page table
        • Much more effective than a shadow TLB, but
        • duplicating the page table consumes both CPU cycles and memory
    • Xen/VT-i HVM implements a shadow TLB
      • A shadow TLB is highly efficient on Itanium
        • IA-64 uses region IDs (RIDs) to differentiate TLB entries of different processes, so guest OSes rarely flush the entire TLB
  • 10. Device Virtualization
    • reuses the open-source QEMU project’s device emulation module
    • runs an instance of the device models in Dom0 per HVM domain
    • for optimization:
      • performance-critical models are moved into the hypervisor
      • communication between the I/O device model and the Xen hypervisor uses shared memory
    • I/O Port Access
      • ports Xen’s VBD and VNIF to HVM domains
    • Memory-Mapped I/O Handling
    • Interrupt Handling
      • HVM guests see only virtualized external interrupts.
    • Virtual Device Drivers
      • define a way for the hypervisor to access guest virtual addresses
      • define a way to signal Xen events to the virtual driver
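The shared-memory path between the hypervisor and the Dom0 device model can be pictured as a request record in a shared page: a trapped I/O instruction is written there by the hypervisor, the device-model thread emulates it, and the guest resumes on the next VM entry. The struct and function names below (`ioreq_t`, `ioreq_submit`, `ioreq_serve`) are hypothetical simplifications, not Xen's actual ioreq ABI.

```c
#include <stdint.h>

/* One I/O request slot in a page shared between Xen and the Dom0
   device model (toy layout, illustrative only). */
typedef struct {
    uint16_t port;          /* I/O port the guest accessed */
    uint8_t  dir;           /* 0 = write, 1 = read */
    uint32_t data;          /* write data, or read result filled by Dom0 */
    volatile uint8_t state; /* 0 = free, 1 = pending, 2 = done */
} ioreq_t;

/* Hypervisor side: record the trapped access in the shared page.
   Real code would then notify Dom0 via an event channel. */
void ioreq_submit(ioreq_t *shared, uint16_t port, uint8_t dir, uint32_t data) {
    shared->port = port;
    shared->dir  = dir;
    shared->data = data;
    shared->state = 1;      /* pending */
}

/* Dom0 device-model side: emulate the device and complete the request. */
void ioreq_serve(ioreq_t *shared) {
    if (shared->state != 1)
        return;
    if (shared->dir == 1)
        shared->data = 0xFF; /* dummy device read result */
    shared->state = 2;       /* done: guest resumes on next VM entry */
}
```

The cost the slides are pointing at is visible here: every request crosses from the hypervisor to a Dom0 thread and back, which is why hot devices were later moved into the hypervisor itself.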
  • 11. Performance Tuning VT-x Guests
    • extending Xentrace to support HVM domains
      • counting the occurrence of events and their handling time in the hypervisor
      • tracing VT-x-specific information
    • extending Xenoprof to support HVM domains
      • tracking clock cycle counts, instruction retirements, TLB misses, and cache misses
    • running a workload and gathering information with the tools above
    • Many VM exits are caused by I/O instructions or shadow page table operations.
      • I/O instructions take the longest handling time and require a context switch to Dom0.
      • About 40% of the hypervisor time was spent in the shadow code.
  • 12. Performance Tuning VT-x Guests (cont’d)
    • Modify the reused device model (from the QEMU project)
      • Move hot devices into the hypervisor
      • Buffer I/O writes in the hypervisor to reduce context switches
        • e.g., the standard VGA frame buffer
      • Enhance the network device model to be event-driven
        • Reduces network packet response time and thus improves throughput
      • Enable DMA to reduce excessive I/O data transfers
        • e.g., the block device
    • Optimized shadow page table management
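Buffering I/O writes works because writes to a device like the VGA frame buffer need no immediate reply: the hypervisor can append them to a shared buffer and only switch to the Dom0 device model when the buffer fills. A minimal sketch, with invented names (`io_buffer_t`, `buffered_write`) and a toy buffer size:

```c
#include <stddef.h>

#define BUF_SLOTS 8  /* toy batch size; real buffers are page-sized */

typedef struct { unsigned long addr; unsigned int val; } buffered_write_t;

typedef struct {
    buffered_write_t slot[BUF_SLOTS];
    size_t count;    /* writes buffered since the last flush */
    size_t flushes;  /* context switches to the device model so far */
} io_buffer_t;

/* Hand the whole batch to Dom0, which replays the buffered writes. */
static void flush(io_buffer_t *b) {
    b->count = 0;
    b->flushes++;
}

/* Hypervisor-side write path: only a full buffer costs a context switch.
   Returns the number of flushes performed so far. */
size_t buffered_write(io_buffer_t *b, unsigned long addr, unsigned int val) {
    b->slot[b->count].addr = addr;
    b->slot[b->count].val  = val;
    if (++b->count == BUF_SLOTS)
        flush(b);
    return b->flushes;
}
```

With this scheme, N consecutive frame-buffer writes cost roughly N/BUF_SLOTS switches to Dom0 instead of N, which is the reduction the slide is claiming.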
  • 13. Benchmark Performance
    • Intel ® S3E2340
      • 2.3 GHz / 800 MHz FSB dual-core Intel® Xeon® processor
      • 4 GB DDR2 533 MHz memory
      • 160 GB Seagate SATA HDD
      • Intel® E100 Ethernet
    • RHEL4U1 is used as the OS in Dom0, DomU, and HVM
    • Dom0: dual virtual CPU and 512MB memory
    • DomU & HVM: single virtual CPU, 512MB memory, and 20GB virtual disk
  • 14. Current Status and Prospect
    • Novell and Red Hat are incorporating Xen into their upcoming releases.
    • VirtualIron and XenSource are developing products that will leverage Xen and Intel VT.
    • Intel VT and AMD-V products will be released very soon!
      • Mainboard vendors must support these new architectures.
    • XenSource and Microsoft: a strategic relationship
    • Let’s watch how the situation develops.
  • 15. References
    • Yaozu Dong et al., Extending Xen* with Intel® Virtualization Technology, 2006
    • Intel, Intel® Vanderpool Technology for IA-32 Processors (VT-x) Preliminary Specification, 2005
    • Hugues Morin, Increasing IT Flexibility Responsiveness through Virtualization, 2006
    • Yaozu Dong et al., Xen and Intel® Virtualization Technology for IA-64, 2006