3. Padmashree Apparao – Intel
Mike Day – IBM
Lamia Youseff – UCSB
Muli Ben-Yehuda – IBM
Dan Magenheimer – Oracle
Jun Nakajima – Intel
Jose Renato Santos – HP
Thanks for all your great efforts in putting together this event!
Xen Summit Boston 2008
4. Breakout Room
Separate room available next door for discussions
Wireless Setup
SSID is “usenix”
Lunch
Food court just inside mall down the hall
Cheers Party at Faneuil Hall
Meet at 6:10 pm in Sheraton lobby for group walk to Faneuil Hall
Collect 3 drink tickets per attendee
5. Event T-Shirt & USB Key Drives
◦ Please send email to stephen.spector@xen.org if you were not able to get either at this event
Apologies: registration surged in the last couple of weeks, so we did not order enough
Additional items will be ordered and delivered to attendees who request them via email
6. TIME  TOPIC
9:00 – 9:30 am  Welcome & Project Status
9:30 – 10:00 am  Roadmap & Releases
10:30 am – Noon  Novel Applications of Xen; OVF; Cloud Computing Made Agile
1:15 – 3:25 pm  Virtualization in Network Appliances; Inter-VM Network Communication; Debugging Xen; Capacity Planning; Quantitative Xen vs. KVM
3:45 – 5:15 pm  Power Friendly Xen; Guests Spinning; Paravirt Ops in Linux IA64
7. TIME  TOPIC
9:00 – 10:00 am  Fedora & Paravirt Ops; Secure Xen on ARM; Client Virtualization
10:30 – 11:00 am  Self IO Emulation; Memory Overcommit; Stub Domains
1:15 – 3:20 pm  Virtual Networking; Network Topology; SR-IOV Devices and VT-d PCI; Cache Attribute Virtualization
3:45 – 5:15 pm  Identify VMs based on vTPM; VM Synchronization; Higher Security Xen; OpenSolaris Fault Management
9. Xen 3.2 Released 16 Jan 2008
Xen.org and the Xen Advisory Board
◦ Stephen Spector as full-time xen.org PM
Linux paravirt_ops transition
The Xen Client Initiative
◦ Creating an industry standard hypervisor for
laptops and desktops
Xen 3.3 due in early August
◦ Enhanced Security, Performance & Scalability plus
New Features
10. Build the industry standard open source hypervisor
◦ Core "engine" that is incorporated into multiple vendors’ products
Maintain Xen’s industry-leading performance
◦ Be first to exploit new hardware acceleration features
◦ Help OS vendors paravirtualize their OSes
Maintain Xen’s reputation for stability and quality
◦ Security must now be paramount
Support multiple CPU types; big and small systems
◦ From server to client to mobile phone
Foster innovation
Drive interoperability
11. Original Xen 32b-only paravirt_ops went
upstream to kernel.org in 2.6.22, July 2007
64b support likely to go into 2.6.27, thanks
to work by Eduardo Habkost and Jeremy
Fitzhardinge
◦ Now shipping in Fedora 9
Full privileged domain support being added
by Juan Quintela and Stephen Tweedie
IA64 paravirt_ops added by Isaku Yamahata
Ongoing tuning and optimization work
◦ All vendors should be using paravirt_ops...
13. Pooling effort to accelerate Xen on clients
◦ Create ‘kit of parts’ which vendors can select from
and build into their products
◦ Encourage co-ordinated open development on the
usual xen.org email lists, wiki, bugzilla etc.
Participating vendors
◦ AMD, AMI, Citrix, Dell, DeviceVM, HP, IBM, Intel,
Lenovo, Neocleus, Novell, Phoenix, Red Hat, Sun
14. 1. Instantly available ‘lite’ VMs
◦ Web browser, Blu-ray player, Email, productivity apps
2. Service VMs for security and manageability
◦ Execution environment for OS support functions
◦ Firewalls, Virus Scanners, VPN
3. VMs for App Encapsulation and Mobility
◦ Enhanced isolation and security for critical Apps
◦ Information flow between VMs tightly controlled
◦ VM mobility enables execution to be moved from client to server and back
15. Power Management
◦ Enhanced host P&C states
◦ S3 Suspend to RAM
◦ Rapid Boot
Graphics
◦ GPU pass-through with IOMMU support
◦ 3D virtualization via Gallium
USB
◦ Device pass-through
WiFi / WiMax
Native UEFI support
Security
◦ TPM/TXT integration
◦ Emulated TPM support
◦ Example Service Domains
Hypervisor installation
◦ Boot from Flash and disk
Service Domain Framework
◦ Serialization, Packaging, Installation, Configuration, Interposition APIs, Update
◦ Extend OVF specification
16. Move Device Emulation out of dom0 and into
a small domain tightly coupled to the guest
◦ Implemented using MiniOS and newlib
Prime motivation was Security
◦ Safely contain device emulator even if compromised
Resource accounting and QoS improved
Extra benefit of improved performance
◦ Round-trip-time to ioemu now excellent due to
close coupling of guest and Emu Dom
◦ Enhanced Scalability
◦ No OS scheduler to get in the way...
18. New Out-of-Sync additions to shadow2
◦ Hybrid design combines the best of shadow 1 and shadow 2
◦ Automatically optimize for single vs. bulk updates
◦ Allow pages to go out-of-sync with their shadows
during bulk updates
◦ Use snapshots to optimize resync
Credit to Gianluca Guida and Tim Deegan
The world’s best shadow pagetable algorithm
just got better...
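The out-of-sync idea can be illustrated with a toy Python simulation (purely illustrative, not Xen code; all names here are mine): while a guest page table is in sync, every write is trapped and propagated to its shadow immediately; during a bulk update the page goes out of sync, writes proceed untrapped, and a snapshot taken at unsync time lets resync propagate only the entries that actually changed.

```python
# Illustrative sketch (not Xen code) of out-of-sync shadow pagetables.
class ShadowedTable:
    def __init__(self, entries):
        self.guest = list(entries)      # guest-visible page table
        self.shadow = list(entries)     # hypervisor's shadow copy
        self.snapshot = None            # copy taken when going out of sync
        self.propagations = 0           # count of shadow entries rewritten

    def write(self, idx, value):
        self.guest[idx] = value
        if self.snapshot is None:       # in sync: trap and propagate now
            self.shadow[idx] = value
            self.propagations += 1
        # out of sync: the write is untrapped, no per-write work

    def unsync(self):
        """Allow the page to go out of sync; remember its contents."""
        self.snapshot = list(self.guest)

    def resync(self):
        """Diff against the snapshot; resync only changed entries."""
        for i, (old, new) in enumerate(zip(self.snapshot, self.guest)):
            if old != new:
                self.shadow[i] = new
                self.propagations += 1
        self.snapshot = None

t = ShadowedTable([0] * 512)
t.unsync()                  # bulk update begins
for i in range(100):
    t.write(i, 0xBEEF)      # 100 untrapped writes
t.write(5, 0xBEEF)          # rewriting the same entry costs nothing extra
t.resync()
print(t.propagations)       # 100 resyncs instead of 101 trapped writes
```

The snapshot is what makes resync cheap: without it, every entry of the page would have to be revalidated, not just the ones touched during the bulk update.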
20. Enhanced Intel TXT/TPM integration
◦ Secure Xen launch
PVGrub
◦ Replaces PyGrub with in-guest domain builder
based on MiniOS, newlib and Grub
◦ Narrows the interface, reducing the risk from bugs
IOMMU support for PV and HVM Guests
◦ Enables devices to be safely passed through even to
buggy or malicious guests
◦ Further reduces trust required of dom0
21. Intel EPT and enhanced AMD NPT support
◦ 2MB page support to reduce the number of memory accesses
MSI / MSI-X
◦ Avoid need to call into Xen to unmask interrupt
Virtual Framebuffer Scanning Optimization
◦ Use PTE dirty bits to optimize scan
◦ Reduce overhead from 7% to 0.2%
OpenGL rendering of framebuffer
◦ Offload scaling to GPU
Domain Lock removal for PV PTE updates
◦ Improves performance of guests with many VCPUs
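The dirty-bit framebuffer optimization can be sketched with a toy Python model (illustrative only; the class and method names are mine, not Xen's): instead of comparing every page of the framebuffer each frame, track a per-page dirty flag (standing in for the PTE dirty bit) and rescan only pages a guest write has marked dirty.

```python
# Toy model (not Xen code) of dirty-bit framebuffer scanning.
# A naive scanner compares every page each frame; a dirty-bit scanner
# only examines pages whose "PTE dirty bit" was set by a write.
PAGE = 4096

class Framebuffer:
    def __init__(self, pages):
        self.mem = [bytes(PAGE) for _ in range(pages)]
        self.dirty = [False] * pages    # stands in for PTE dirty bits

    def write_page(self, idx, data):
        self.mem[idx] = data
        self.dirty[idx] = True          # "hardware" sets the dirty bit

    def scan_dirty(self):
        """Return indices of changed pages, clearing their dirty bits."""
        changed = [i for i, d in enumerate(self.dirty) if d]
        self.dirty = [False] * len(self.dirty)
        return changed

fb = Framebuffer(pages=256)
fb.write_page(3, b"\x01" * PAGE)
fb.write_page(200, b"\x02" * PAGE)
print(fb.scan_dirty())   # [3, 200] -- 2 pages examined instead of 256
print(fb.scan_dirty())   # [] -- an idle frame costs almost nothing
```

An idle or mostly static display is the common case, which is why skipping clean pages cuts the scan overhead so dramatically.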
22. Parallel kernel build on an 8-VCPU PV Linux guest (32b and 64b)
[Bar charts: build time in seconds for native, xen/old, and xen/new; 32-bit Intel server (axis 320–460 s) and 64-bit AMD server (axis 260–320 s)]
32-bit Intel: overhead reduced from 20% to 15%
64-bit AMD: overhead reduced from 14% to 10%
23. Full 16b Emulation on Intel systems
◦ Fixes incompatibilities with some boot loaders; now runs DOS, Windows 3.1, OS/2, etc.
Jun Kamada’s SCSI front/back driver
◦ Allows selective SCSI operation on raw LUNs
◦ Can optionally expose underlying FC topology
CPUID virtualization
◦ Enables selective exposure of CPU features to VMs
◦ Enhanced live relocation portability between hosts
◦ Expose VCPUs as threads, cores, or sockets
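The selective-exposure idea behind CPUID virtualization can be sketched as a bitwise policy (Python, illustrative only; the feature-bit layout below is hypothetical, not the real CPUID leaf encoding): a guest sees only features present on the host AND permitted by a mask, so a relocation pool can advertise the intersection of all hosts' features.

```python
# Illustrative sketch of CPUID feature masking (hypothetical bit layout,
# not the real CPUID encoding).
SSE3  = 1 << 0
SSSE3 = 1 << 1
SSE41 = 1 << 2
AESNI = 1 << 3

def guest_visible_features(host_features, policy_mask):
    """A guest sees only features both present on the host and
    permitted by the migration policy mask."""
    return host_features & policy_mask

# Two hosts in a live-relocation pool:
host_a = SSE3 | SSSE3 | SSE41 | AESNI
host_b = SSE3 | SSSE3

# Policy: intersection of all hosts, so guests can migrate freely.
policy = host_a & host_b

print(guest_visible_features(host_a, policy) == (SSE3 | SSSE3))  # True
```

Pinning the guest-visible feature set this way is what makes live relocation safe: a guest that never saw SSE4.1 in CPUID will not use it, so landing on host_b cannot fault.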
Xen Summit Boston 2008 5/14/2008
24. Xen continues to offer best performance
while taking a hard-line approach to security
◦ Xen’s true type-1 thin hypervisor architecture sets it
apart as being serious about security
Xen Client has a great opportunity to become
an industry standard
The Xen Community continues to go from strength to strength
◦ More vendors, more developers, more xen-based
products
Thanks for coming, enjoy the summit!
26. Current stable releases: 3.1.4 and 3.2.1
◦ Both released end of April
Next releases: 3.2.2 and 3.3.0
◦ Both anticipated late July to mid-August
Strategy:
◦ Maintain two stable branches until the later one has
matured enough for switchover
◦ Quarterly releases from stable branches
◦ Six to nine months between major releases
27. Features for 3.3 are now pretty well established
◦ Almost all now in xen-unstable
◦ 3.3 is going to be a big release
Need to plan features for 3.4 and beyond
◦ Maintain aggressive development momentum
◦ Avoid duplicated (or pointless) effort
28. Server
◦ Performance and scalability optimizations
◦ Smart NICs
Security
◦ Domain0 disaggregation
◦ Service domains
◦ Interface penetration testing
Client
◦ Power management
Suspend and hibernate; Clock management
◦ 3D video
direct h/w access; high-performance guest virtualization
29. Network virtualisation is particularly hard
◦ High packet rates; latency sensitive
Existing netfront/back drivers have limitations
◦ High cost for packet receive
◦ Not designed for next-generation NICs
Ongoing work on netchannel2 to address this
Lazy copy in the guest (reduces dom0 load)
◦ Provide guest a copy-only, sub-page, revocable grant
Support multi-queue NICs
◦ DMA directly to guest buffers
Reusable extensible ring architecture
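The ring pattern that netchannel2 generalizes can be illustrated with a toy single-producer/single-consumer ring in Python (simplified and hypothetical; the real Xen rings live in grant-mapped shared memory with event-channel notification, and this sketch omits both):

```python
# Toy SPSC ring buffer, illustrating the shared-ring request/response
# pattern used between frontend and backend drivers (illustrative only).
class Ring:
    def __init__(self, size):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.slots = [None] * size
        self.mask = size - 1
        self.prod = 0   # producer index, advanced by the frontend
        self.cons = 0   # consumer index, advanced by the backend

    def push(self, req):
        if self.prod - self.cons == len(self.slots):
            return False                    # ring full: producer must wait
        self.slots[self.prod & self.mask] = req
        self.prod += 1                      # publish after the slot write
        return True

    def pop(self):
        if self.cons == self.prod:
            return None                     # ring empty
        req = self.slots[self.cons & self.mask]
        self.cons += 1
        return req

ring = Ring(4)
for pkt in ("p0", "p1", "p2", "p3"):
    assert ring.push(pkt)
assert not ring.push("p4")      # full until the consumer drains a slot
assert ring.pop() == "p0"
assert ring.push("p4")          # slot freed by the consumer
```

Free-running indices masked to a power-of-two size make full/empty checks a single subtraction, which is the kind of property a reusable, extensible ring layout wants to preserve.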
30. Potential for reducing memory pressure by
sharing identical pages across VMs
◦ Significant savings in ‘ideal’ cases
◦ Rather smaller gains in typical heterogeneous
scenarios (10-20%)
How to find identical pages?
◦ Memory scanning ; identical disc blocks
Demand paging is a prerequisite
Dan Magenheimer is presenting a simpler
scheme for ‘virtual’ overcommit
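A minimal sketch of the memory-scanning approach (Python; the function and variable names are mine, not Xen's): hash each page's contents and back pages with identical hashes, confirmed by a byte-for-byte compare, with a single shared copy.

```python
import hashlib

# Illustrative memory-scanning dedup (not Xen's implementation).
def share_identical_pages(pages):
    """pages: list of bytes objects (page contents).
    Returns (backing, mapping) where mapping[i] is the index into
    `backing` holding the single shared copy for guest page i."""
    index = {}                  # content hash -> index in backing store
    backing, mapping = [], []
    for page in pages:
        h = hashlib.sha256(page).digest()
        # Byte compare confirms the match rather than trusting the hash.
        if h in index and backing[index[h]] == page:
            mapping.append(index[h])
        else:
            index[h] = len(backing)
            backing.append(page)
            mapping.append(index[h])
    return backing, mapping

zeros = bytes(4096)             # e.g. zeroed free pages
code = b"\x90" * 4096           # e.g. a shared library text page
pages = [zeros, code, zeros, zeros, code]
backing, mapping = share_identical_pages(pages)
print(len(backing))   # 2 distinct pages back 5 guest pages
print(mapping)        # [0, 1, 0, 0, 1]
```

In a real system shared pages would be mapped read-only and copied on write, which is why demand paging is a prerequisite: a write to a shared page must be able to fault and get a private copy.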
32. Big improvements for 3.3
◦ Cx/Px state management in the hypervisor
More could be done
◦ Better support for C3
◦ Deeper sleeps than C3
◦ Power-aware scheduling
◦ Schedule-aware Cx/Px governors
Client applications
◦ Pass through power information to console OS
◦ Collect power hints from guests
33. Key feature for graphical applications
Multiplex 3D hardware acceleration
VMGL / Blink
◦ Virtualizes OpenGL over Chromium-on-Ethernet
A new approach under investigation:
◦ Use Gallium3D interface as virtualisation interface
◦ Designed to be OS neutral and gfx hw neutral
◦ Designed for modern hardware (programmable
shaders, etc)
◦ Make use of planned work to build translation layers
from Direct3D and OpenGL
34. Native Hyper-V hypervisor interface
◦ Many hypercalls designed for Hyper-V’s CPU/MMU
virtualisation (e.g., simple shadow mode)
◦ Some have benefit for Xen too (e.g., TLB shootdowns)
◦ Measurements look very good
High availability
◦ VM replication (UBC’s Remus project)
◦ Machine-check support (Christoph Egger, Sun)
XenAPI/CIM management interfaces
◦ Meet full DMTF virtualization profile
35. Still plenty of cool stuff to work on!
The roadmap is not set in stone
Come talk to me about features you would like
to see (and implement!) in Xen 3.4