1. Virtualizing the Data Center with Xen
   Steve Hand, University of Cambridge and XenSource
2. What is Xen Anyway?
   - Open source hypervisor
     - run multiple OSes on one machine
     - dynamic sizing of virtual machines
     - much improved manageability
   - Pioneered paravirtualization
     - modify OS kernel to run on Xen
       (application mods not required)
     - extremely low overhead (~1%)
   - Massive development effort
     - first (open source) release 2003
     - today have hundreds of talented community developers
3. Problem: Success of Scale-out
   - "OS+app per server" provisioning leads to server sprawl
   - Server utilization rates <10%
   - Expensive to maintain, house, power, and cool
   - Slow to provision, inflexible to change or scale
   - Poor resilience to failures
4. Solution: Virtualize With Xen
   - Consolidation: fewer servers slashes CapEx and OpEx
   - Higher utilization: make the most of existing investments
   - "Instant on" provisioning: any app on any server, any time
   - Robustness to failures and "auto-restart" of VMs on failure
5. Further Virtualization Benefits
   - Separating the OS from the hardware
     - Users no longer forced to upgrade OS to run on latest hardware
   - Device support is part of the platform
     - Write one device driver rather than N
     - Better for system reliability/availability
     - Faster to get new hardware deployed
   - Enables "Virtual Appliances"
     - Applications encapsulated with their OS
     - Easy configuration and management
6. Virtualization Possibilities
   - Value-added functionality from outside the OS:
     - Fire-walling / network IDS / "inverse firewall"
     - VPN tunnelling; LAN authentication
     - Virus, mal-ware and exploit scanning
     - OS patch-level status monitoring
     - Performance monitoring and instrumentation
     - Storage backup and snapshots
     - Local disk as just a cache for network storage
     - Carry your world on a USB stick
     - Multi-level secure systems
7. This talk
   - Introduction
   - Xen 3
     - para-virtualization, I/O architecture
     - hardware virtualization (today + tomorrow)
   - XenEnterprise: overview and performance
   - Case Study: Live Migration
   - Outlook
8. Xen 3.0 (5th Dec 2005)
   - Secure isolation between VMs
   - Resource control and QoS
   - x86 32/PAE36/64 plus HVM; IA64, Power
   - PV guest kernel needs to be ported
     - User-level apps and libraries run unmodified
   - Execution performance close to native
   - Broad (Linux) hardware support
   - Live relocation of VMs between Xen nodes
   - Latest stable release is 3.1 (May 2007)
9. Xen 3.x Architecture
   [Architecture diagram: the Xen Virtual Machine Monitor (control IF, safe
   HW IF, event channels, virtual CPU, virtual MMU) runs on the hardware
   (SMP, MMU, physical memory, Ethernet, SCSI/IDE). VM0 hosts the device
   manager and control s/w on XenLinux with native device drivers and
   back-ends; VM1 and VM2 run XenLinux guests with front-end device drivers
   and unmodified user software; VM3 runs an unmodified guest OS (WinXP)
   via HVM. Ports: x86_32, x86_64, IA64; AGP/ACPI/PCI/SMP support.]
10. Para-Virtualization in Xen
   - Xen extensions to x86 arch
     - Like x86, but Xen invoked for privileged ops
     - Avoids binary rewriting
     - Minimize number of privilege transitions into Xen
     - Modifications relatively simple and self-contained
   - Modify kernel to understand virtualised env.
     - Wall-clock time vs. virtual processor time
       - Desire both types of alarm timer
     - Expose real resource availability
       - Enables OS to optimise its own behaviour
11. x86 CPU virtualization
   - Xen runs in ring 0 (most privileged)
   - Ring 1/2 for guest OS, 3 for user-space
     - GPF if guest attempts to use privileged instr
   - Xen lives in top 64MB of linear addr space
     - Segmentation used to protect Xen, as switching page tables is too slow on standard x86
   - Hypercalls jump to Xen in ring 0
     - Indirection via hypercall page allows flexibility
   - Guest OS may install 'fast trap' handler
     - Direct user-space to guest OS system calls
12. Para-Virtualizing the MMU
   - Guest OSes allocate and manage own PTs
     - Hypercall to change PT base
   - Xen must validate PT updates before use
     - Allows incremental updates, avoids revalidation
   - Validation rules applied to each PTE (modelled in the sketch below):
     1. Guest may only map pages it owns*
     2. Pagetable pages may only be mapped RO
   - Xen traps PTE updates and emulates, or 'unhooks' PTE page for bulk updates
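The two validation rules are easy to model. Below is a minimal Python sketch, not Xen code: the frame numbers, ownership table and error handling are invented for illustration.

    OWNER = {0x100: "dom1", 0x101: "dom1", 0x200: "dom2"}  # frame -> owning domain
    PT_FRAMES = {0x101}                   # frames currently in use as page tables

    def validate_pte(domain, target_frame, writable):
        # Rule 1: a guest may only map frames it owns.
        if OWNER.get(target_frame) != domain:
            raise PermissionError(f"{domain} does not own frame {target_frame:#x}")
        # Rule 2: page-table frames may only be mapped read-only, so every
        # update to them still goes through the validating hypercall.
        if writable and target_frame in PT_FRAMES:
            raise PermissionError("page-table frames must be mapped RO")
        return True

    validate_pte("dom1", 0x100, writable=True)   # OK: a data page dom1 owns
    validate_pte("dom1", 0x101, writable=False)  # OK: its own PT frame, mapped RO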
13. SMP Guest Kernels
   - Xen 3.x supports multiple VCPUs
     - Virtual IPIs sent via Xen event channels
     - Currently up to 32 VCPUs supported
   - Simple hotplug/unplug of VCPUs
     - From within VM or via control tools
     - Optimize the one-active-VCPU case by binary patching spinlocks
   - NB: Many applications exhibit poor SMP scalability; often better off running multiple instances, each in its own OS
14. Performance: SPECJBB
   [Bar chart: average 0.75% overhead vs. native. 3 GHz Xeon, 1GB memory
   per guest, 2.6.9.EL vs. 2.6.9.EL-xen.]
15. Performance: dbench
   [Bar chart: <5% overhead up to 8-way SMP.]
16. Kernel build
   [Bar chart: 32-bit PAE; parallel make, 4 processes per CPU. Overhead vs.
   native grows with the number of virtual CPUs: 5%, 8%, 13%, 18%, 23%.
   Source: XenSource, Inc., 10/06.]
17. I/O Architecture
   - Xen IO-Spaces delegate guest OSes protected access to specified h/w devices
     - Virtual PCI configuration space
     - Virtual interrupts
     - (Need IOMMU for full DMA protection)
   - Devices are virtualised and exported to other VMs via Device Channels
     - Safe asynchronous shared memory transport (sketched below)
     - 'Backend' drivers export to 'frontend' drivers
     - Net: use normal bridging, routing, iptables
     - Block: export any blk dev e.g. sda4, loop0, vg3
   - (Infiniband / smart NICs for direct guest IO)
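The "safe asynchronous shared memory transport" is a ring of requests and responses with free-running producer/consumer indices. The following is a toy Python model of the request half of such a device channel; the class name, ring size and request format are assumptions, and real rings live in a shared page with event-channel notification.

    class DeviceChannel:
        """Toy request ring: frontend produces, backend consumes."""
        def __init__(self, size=8):
            self.size = size
            self.ring = [None] * size          # stands in for the shared page
            self.req_prod = 0                  # free-running producer index
            self.req_cons = 0                  # free-running consumer index

        def frontend_put(self, req):           # guest's frontend driver
            assert self.req_prod - self.req_cons < self.size, "ring full"
            self.ring[self.req_prod % self.size] = req
            self.req_prod += 1                 # real code then signals an event channel

        def backend_get(self):                 # driver domain's backend
            if self.req_cons == self.req_prod:
                return None                    # nothing pending
            req = self.ring[self.req_cons % self.size]
            self.req_cons += 1
            return req

    chan = DeviceChannel()
    chan.frontend_put(("read", "sda4", 0, 4096))   # (op, dev, sector, len)
    print(chan.backend_get())

A response ring works the same way in the opposite direction, and the free-running indices let either side batch several entries before sending a single notification.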
18. Isolated Driver Domains
   [Architecture diagram: as slide 9, but a native device driver now runs
   in a dedicated driver domain (VM1) that exports a back-end to the
   front-end drivers in guest VMs (XenLinux, XenBSD); dom0 retains the
   device manager and control s/w.]
19. Isolated Driver VMs
   - Run device drivers in separate domains
   - Detect failure, e.g.
     - Illegal access
     - Timeout
   - Kill domain, restart
   - E.g. 275ms outage from failed Ethernet driver
   [Graph: network throughput over a 40s run; the failed Ethernet driver
   domain is restarted with a 275ms outage.]
20. Hardware Virtualization (1)
   - Paravirtualization...
     - has fundamental benefits... (cf. MS Viridian)
     - but is limited to OSes with PV kernels
   - Recently seen new CPUs from Intel, AMD
     - enable safe trapping of 'difficult' instructions
     - provide additional privilege layers ("rings")
     - currently shipping in most modern server, desktop and notebook systems
   - Solves part of the problem, but...
21. Hardware Virtualization (2)
   - CPU is only part of the system
     - also need to consider memory and I/O
   - Memory:
     - OS wants contiguous physical memory, but Xen needs to share between many OSes
     - Need to dynamically translate between guest-physical and 'real' physical addresses
     - Use shadow page tables to mirror guest OS page tables (and implicit 'no paging' mode); see the sketch below
   - Xen 3.0 includes software shadow page tables; future x86 processors will include hardware support
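A small sketch of why shadow page tables are needed: the guest installs PTEs in terms of its own guest-physical frames, while the hardware must walk machine frames. The mapping and frame numbers below are illustrative only, not a real layout.

    P2M = {0: 0x9a, 1: 0x3f, 2: 0x70}  # guest-physical frame -> machine frame

    guest_pt = {}    # what the guest OS believes it installed (gfn-based)
    shadow_pt = {}   # what Xen actually hands to the MMU (mfn-based)

    def guest_pte_write(vaddr, gfn):
        guest_pt[vaddr] = gfn
        # Xen notices the update and mirrors it into the shadow,
        # translating the guest frame to the real machine frame:
        shadow_pt[vaddr] = P2M[gfn]

    guest_pte_write(0x08048000, 1)
    print(hex(shadow_pt[0x08048000]))   # 0x3f: the frame the hardware walks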
22. Hardware Virtualization (3)
   - Finally we need to solve the I/O issue
     - non-PV OSes don't know about Xen
     - hence run 'standard' PC ISA/PCI drivers
   - Just emulate devices in software?
     - complex, fragile and non-performant...
     - ...but OK as a backstop mechanism
   - Better:
     - add PV (or "enlightened") device drivers to the OS
     - well-defined driver model makes this relatively easy
     - get PV performance benefits for the I/O path
23. Xen 3: Hardware Virtual Machines
   - Enable guest OSes to be run without modification
     - e.g. legacy Linux, Solaris x86, Windows XP/2003
   - CPU provides VM exits for certain privileged instrs
   - Shadow page tables used to virtualize MMU
   - Xen provides simple platform emulation
     - BIOS, APIC, IOAPIC, RTC, net (pcnet32), IDE emulation
   - Install paravirtualized drivers after booting for high-performance I/O
   - Possibility for CPU and memory paravirtualization
     - Non-invasive hypervisor hints from OS
24. HVM Architecture
   [Architecture diagram: Xen (control interface, hypercalls, event
   channels, scheduler) on processor/memory with emulated platform I/O
   (PIT, APIC, PIC, IOAPIC). Domain 0 (Linux xen64) runs the control panel
   (xm/xend), native device drivers, back-end virtual drivers, and the
   device models plus guest BIOS for HVM guests; domain N is a PV Linux
   guest with front-end virtual drivers; unmodified 32-bit and 64-bit
   guests run as HVM VMs, entering Xen via VMExit and callbacks/hypercalls
   through a virtual platform.]
25. Progressive Paravirtualization
   - Hypercall API available to HVM guests
   - Selectively add PV extensions to optimize:
     - Network and block I/O
     - XenAPIC (event channels)
     - MMU operations
       - multicast TLB flush
       - PTE updates (faster than page fault)
       - page sharing
     - Time (wall-clock and virtual time)
     - CPU and memory hotplug
26. Xen 3.0.3 Enhanced HVM I/O
   [Architecture diagram: as slide 24, but the HVM guests now also load
   front-end (FE) virtual drivers that talk to dom0's back-ends over event
   channels, bypassing device-model emulation on the fast path;
   PIC/APIC/IOAPIC emulation remains for the platform.]
27. HVM I/O Performance
   [Bar chart: network throughput measured with ttcp (1500-byte MTU) for
   emulated I/O vs. PV-on-HVM vs. pure PV. Source: XenSource, Sep 06.]
28. Future plan: I/O stub domains
   [Architecture diagram: as slide 26, but the device models and I/O
   emulation move out of dom0 into per-guest I/O stub domains.]
29. HW Virtualization Summary
   - CPU virtualization available today
     - lets Xen support legacy/proprietary OSes
   - Additional platform protection imminent
     - protect Xen from I/O devices
     - full IOMMU extensions coming soon
   - MMU virtualization also coming soon:
     - avoid the need for s/w shadow page tables
     - should improve performance and reduce complexity
   - Device virtualization arriving from various folks:
     - networking already here (Ethernet, Infiniband)
     - [remote] storage in the works (NPIV, VSAN)
     - graphics and other devices sure to follow...
30. What is XenEnterprise?
   - Multi-Operating System
     - Windows, Linux and Solaris
   - Bundled Management Console
   - Easy to use
     - Xen and guest installers and P2V tools
     - For standard servers and blades
   - High Performance Paravirtualization
     - Exploits hardware virtualization
     - Per-guest resource guarantees
   - Extensible Architecture
     - Secure, tiny, low maintenance
     - Extensible by ecosystem partners
   "Ten minutes to Xen"
31. Get Started with XenEnterprise
   - Packaged and supported virtualization platform for x86 servers
   - Includes the Xen(TM) Hypervisor
   - High-performance bare-metal virtualization for both Windows and Linux guests
   - Easy to use and install
   - Management console included for standard management of Xen systems and guests
   - Microsoft supports Windows customers on XenEnterprise
   - Download free from www.xensource.com and try it out!
   "Ten minutes to Xen"
32. A few XE 3.2 Screenshots...
   [Screenshots: resource usage statistics; Windows 2003 Server and Windows
   XP guests; easy VM creation; clone and export of VMs; Red Hat Linux 4.]
33. SPECjbb2005 (Sun JVM installed)
   [Bar chart.]
34. w2k3 SPECcpu2000 integer (multi)
   [Bar chart.]
35. w2k3 Passmark-CPU results vs. native
   [Bar chart.]
36. w2k3 Passmark memory vs. native
   [Bar chart.]
37. Performance: Conclusions
   - Xen PV guests typically pay ~1% (1 VCPU) or ~2% (2 VCPU) compared to native
     - ESX 3.0.1 pays 12-21% for the same benchmark
   - XenEnterprise includes:
     - hardened and optimized OSS Xen
     - proprietary PV drivers for MS OSes
     - matches or beats ESX performance in nearly all cases
   - Some room for further improvement...
38. Live Migration of VMs: Motivation
   - VM migration enables:
     - High availability
       - Machine maintenance
     - Load balancing
       - Statistical multiplexing gain
   [Diagram: a VM moving between two Xen hosts.]
39. Assumptions
   - Networked storage
     - NAS: NFS, CIFS
     - SAN: Fibre Channel
     - iSCSI, network block dev
     - DRBD network RAID
   - Good connectivity
     - common L2 network
     - L3 re-routing
   [Diagram: two Xen hosts sharing networked storage.]
40. Challenges
   - VMs have lots of state in memory
   - Some VMs have soft real-time requirements
     - e.g. web servers, databases, game servers
     - may be members of a cluster quorum
     - minimize down-time
   - Performing relocation requires resources
     - bound and control the resources used
41. Relocation Strategy (a code sketch follows)
   - Stage 0: pre-migration. VM active on host A; destination host selected
     (block devices mirrored).
   - Stage 1: reservation. Initialize container on target host.
   - Stage 2: iterative pre-copy. Copy dirty pages in successive rounds.
   - Stage 3: stop-and-copy. Suspend VM on host A; redirect network
     traffic; synch remaining state.
   - Stage 4: commitment. Activate on host B; VM state on host A released.
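A compressed, runnable Python sketch of this strategy: the host objects, random dirtying model and thresholds are toy stand-ins (assumptions), but the control flow follows the stages above.

    import random

    class Host:
        def __init__(self, name):
            self.name = name
        def send_pages(self, pages):
            print(f"{self.name}: sent {len(pages)} pages")

    def migrate(vm_pages, src, dst, dirty_rate=0.1, max_rounds=5, threshold=8):
        # Stages 0/1: destination chosen, container reserved on dst (elided).
        dirty = set(vm_pages)                   # first round copies everything
        for _ in range(max_rounds):             # Stage 2: iterative pre-copy
            src.send_pages(dirty)
            # pages written while we were copying must be re-sent next round
            dirty = {p for p in vm_pages if random.random() < dirty_rate}
            if len(dirty) <= threshold:         # writable working set is small
                break
        print(f"suspend VM on {src.name}")      # Stage 3: stop-and-copy
        src.send_pages(dirty)                   # residual pages + device state
        print(f"activate VM on {dst.name}")     # Stage 4: commitment

    migrate(range(1000), Host("A"), Host("B"))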
42-52. Pre-Copy Migration: Round 1, Round 2, ..., Final
   [Animation across eleven slides: in each pre-copy round the VM's pages
   are transferred while it keeps running; pages dirtied during a round are
   re-sent in the next, until a final short stop-and-copy.]
53. Writeable Working Set
   - Pages that are dirtied must be re-sent
     - Super-hot pages
       - e.g. process stacks; top of page free list
     - Buffer cache
     - Network receive / disk buffers
   - Dirtying rate determines VM down-time (a rough model appears below)
     - Shorter iterations -> less dirtying -> ...
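A back-of-envelope model of the point this slide makes: stop-and-copy down-time is roughly the residual writable working set divided by transfer bandwidth. The numbers below are assumed for illustration, not measurements from the talk.

    wws_mb = 50        # assumed residual writable working set after pre-copy (MB)
    link_gbps = 1.0    # assumed transfer bandwidth (Gb/s)

    downtime_s = (wws_mb * 8) / (link_gbps * 1000)   # MB -> Mb, Gb/s -> Mb/s
    print(f"~{downtime_s * 1000:.0f} ms stop-and-copy down-time")   # ~400 ms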
54. Rate Limited Relocation
   - Dynamically adjust resources committed to performing page transfer
     - Dirty logging costs VM ~2-3%
     - CPU and network usage closely linked
   - E.g. first copy iteration at 100Mb/s, then increase based on observed
     dirtying rate (see the sketch below)
     - Minimize impact of relocation on server while minimizing down-time
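A sketch of such a policy: each round's transfer rate is set a little above the dirtying rate observed in the previous round, so the remaining dirty set shrinks round by round. The floor, ceiling and headroom constants are illustrative, not the actual tuning.

    def next_rate(observed_dirty_mbps, floor=100, ceiling=500, headroom=50):
        # Send a bit faster than pages are being dirtied, within bounds,
        # so each pre-copy round shrinks the remaining dirty set.
        return max(floor, min(ceiling, observed_dirty_mbps + headroom))

    rate = 100                                 # Mb/s: first iteration, low impact
    for dirty_mbps in (180, 220, 240):         # observed per-round dirtying rates
        rate = next_rate(dirty_mbps)
        print(f"next round at {rate} Mb/s")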
55. Web Server Migration
   [Graph: web server performance during live migration.]
56. Iterative Progress: SPECWeb
   [Graph: iterative pre-copy progress for a SPECWeb workload; 52s.]
57. Iterative Progress: Quake3
   [Graph: iterative pre-copy progress for a Quake 3 server.]
58. Quake 3 Server relocation
   [Graph: Quake 3 server relocation timeline.]
59. Xen 3.1 Highlights
   - Released May 18th 2007
   - Lots of new features:
     - 32-on-64 PV guest support
       - run PAE PV guests on a 64-bit Xen!
     - Save/restore/migrate for HVM guests
     - Dynamic memory control for HVM guests
     - Blktap copy-on-write support (incl. checkpoint)
     - XenAPI 1.0 support
       - XML configuration files for virtual machines
       - VM life-cycle management operations; and
       - secure on- or off-box XML-RPC with many bindings
60. Xen 3.x Roadmap
   - Continued improvement of full virtualization
     - HVM (VT/AMD-V) optimizations
     - DMA protection of Xen, dom0
     - Optimizations for (software) shadow modes
   - Client deployments: support for 3D graphics, etc.
   - Live/continuous checkpoint / rollback
     - disaster recovery for the masses
   - Better NUMA (memory + scheduling)
   - Smart I/O enhancements
   - "XenSE": Open Trusted Computing
61. Backed By All Major IT Vendors
   [Vendor logos.] * Logos are registered trademarks of their owners
62. Summary
   - Xen is re-shaping the IT industry
   - Commoditize the hypervisor
   - Key to volume adoption of virtualization
   - Paravirtualization in the next release of all OSes
   - XenSource delivers volume virtualization
   - XenEnterprise shipping now
   - Closely aligned with our ecosystem to deliver full-featured, open and extensible solutions
   - Partnered with all key OSVs to deliver an interoperable virtualized infrastructure
63. Thanks!
   - EuroSys 2008
   - Venue: Glasgow, Scotland, Apr 2-4 2008
   - Important dates:
     - 14 Sep 2007: submission of abstracts
     - 21 Sep 2007: submission of papers
     - 20 Dec 2007: notification to authors
   - More details at http://www.dcs.gla.ac.uk/Conferences/EuroSys2008