Xen Project: Hypervisor for Clouds
Russell Pavlicek
Xen Project Evangelist
Russell.Pavlicek@XenProject.org
@RCPavlicek
So Who’s the Fat Geek up Front?
• Linux user since 1995; Linux desktop since 1997
• Linux advocate before I ever saw the software
• Early Linux advocate at Digital Equipment Corporation, Compaq
• Former FOSS columnist for Infoworld, Processor magazines
• Former panelist on The Linux Show webcast
• Wrote book, Embracing Insanity: Open Source Software Development (2000)
• Speaker at 50+ Open Source conferences
• Became Xen Project Evangelist employed by Citrix in January 2013
• Formerly with Cassatt Corporation in San Jose, cloud startup (2004-2009)
About the Speaker...
About the Xen Project Stack
• The main components:
– Xen Project Hypervisor, the central FOSS project
– Xen Project API, the cloud-enabled subproject
• Better known as “XAPI”
– Xen Project is a Linux Foundation Collaborative Project
– These are the subjects of this talk
• And then there’s:
– XenServer, a popular Xen Project-based product
• Was partially closed source; open-sourced by Citrix in 2013
The Cloud “Problem”
IT Before the Cloud
• Stability is Paramount
– The value of IT to the corporation is consistent service availability
– Service capacity specified a year or more in advance
– What’s up, stays up
• Change is Bad
– Change to status quo is disruptive and dangerous
– Changes are beaten into submission until they become part of
the new status quo – and then they are no longer changes
IT Reinvented in the Cloud
• Availability of Services is Paramount
– The value of IT to the corporation is consistent service availability
at levels matching dynamic business demand
– Service capacity must move with business needs
– What’s up when depends on what’s needed when
• Change is Good
– Services must change to cover the needs of the moment
– Lack of change = lack of value
Cloud 101: Layers of the Cloud
[Diagram: the layers of the cloud stack: App, Operating System Layer, Cloud Orchestration Layer, Virtualization Layer.]
Virtualization in the Cloud
• It must be stable
• It must be secure
• It must be configurable on a large scale
– The “user at machine” paradigm does not work
– If it requires a mouse, you’re in trouble
• It must take orchestration (APIs, command line)
• It must be multi-tenant
• It must not lock you into one concept or provider of Cloud
Xen Project: Highly Stable
• Solid track record
– Amazon’s AWS cloud business uses Xen Project
– Verizon launched a new Xen Project-based cloud
• Linux Foundation Project Partners:
– Amazon, AMD, ARM, CA, Cisco, Citrix, Google, Intel, NetApp, Oracle, Rackspace, Verizon, and more
Xen Project: Highly Secure
• SELinux
• FLASK
– SELinux-like capabilities at the VM level, developed by the same team
• Disaggregation
– Segment device drivers into discrete VMs
• Architectural advantages of a Type-1 Hypervisor
– See the slides of my Advanced Security talk on XenProject.org or
join us on September 15 in New York City for User Summit
Xen Project: Configurable at Scale
• Toolstacks give rich API and command line capabilities
• Not GUI-centric
• Empowers orchestration via scripting, power tools (Puppet, Chef, etc.), GUIs (XenServer’s XenCenter, Xen Orchestra, etc.), and Cloud layers (OpenStack, CloudStack, OpenNebula, etc.)
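As a concrete illustration of that scripting path, here is a minimal sketch using the libvirt Python bindings against a local Xen host; the connection URI is the standard libvirt one for Xen, while the guest name is a placeholder rather than anything from the talk.

```python
import libvirt

# Open a privileged connection to the local Xen host through libvirt.
conn = libvirt.open("xen:///system")

# Enumerate all domains (running and defined) and report their state.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name():24s} {state}")

# Start a defined-but-stopped guest by name (placeholder name).
guest = conn.lookupByName("web-frontend-01")
if not guest.isActive():
    guest.create()  # boots the domain; no GUI or mouse required

conn.close()
```

The same calls work unchanged against libvirt's other drivers, which is part of why the toolstack layer matters for large-scale orchestration.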
Xen Project: Rich Toolstacks
[Diagram: toolstack choices layered on the Xen Project Hypervisor: the default xl toolstack and console (formerly xm), libvirt with virsh, and XAPI with xe. Functionality and integration with other components increase from one to the next, from single-host basic functions to multi-host additional functionality.]
Xen Project: Tools for Different Solutions
[Diagram: the same toolstack choices (default/xl, libvirt/virsh, XAPI/xe) shown with the products built on the Xen Project Hypervisor: Oracle VM, Huawei UVP, and XenServer.]
Xen Project: Tools for Different Clouds
[Diagram: the same toolstacks and products, annotated with the cloud orchestration layers that use each of them (“Used by …”).]
Xen Project: A Multi-tenant Solution
• Multiple groups share common resources securely
– Clouds require sharing common resources
– Organizations often need their VMs to be visible to each other,
but entirely invisible to all other VMs
– XAPI makes this happen
– Critical ability for hosting providers
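As a hedged sketch of what this looks like through XAPI, the snippet below uses the XenAPI Python bindings to log in to a pool and group VMs by a per-tenant tag. The host address, credentials, and the "tenant:" tag convention are illustrative assumptions, not something prescribed by XAPI.

```python
import XenAPI

# Connect to a XAPI host / pool master (placeholder address and credentials).
session = XenAPI.Session("https://xapi-host.example.com")
session.xenapi.login_with_password("root", "secret")

try:
    tenants = {}
    for ref, rec in session.xenapi.VM.get_all_records().items():
        # Skip templates and the control domain itself.
        if rec["is_a_template"] or rec["is_control_domain"]:
            continue
        # Assumed convention: each tenant's VMs carry a "tenant:<name>" tag.
        for tag in rec["tags"]:
            if tag.startswith("tenant:"):
                tenants.setdefault(tag, []).append(rec["name_label"])
    for tenant, vms in sorted(tenants.items()):
        print(tenant, "->", ", ".join(sorted(vms)))
finally:
    session.xenapi.session.logout()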
Xen Project: Doesn’t lock you in
• Xen Project does not force its view of the Cloud on you
• Xen Project does not force you to use a “favored” Cloud
solution
• This is one of the reasons why Cloud innovation happens in
the world of FOSS: It gives power to the Cloud, but allows
Cloud orchestration solutions to innovate
• There is no attempt to bend your efforts to the will of
some corporate business plan
XAPI: Orchestration Choices
[Diagram: XAPI and the xe CLI sit above the hypervisor, providing multi-host support and additional functionality for orchestration layers.]
Xen Project Healthcheck
• See the following teams on the new XenProject.org site:
– Hypervisor
– XAPI
– ARM Hypervisor (for Servers as well as Mobile Devices)
– Mirage OS
• Governance: a mixture of Linux kernel and Apache approaches
– Consensus decision making
– Sub-project life-cycle (aka incubator)
– PMC style structure for team leadership
2013: Xen Project Joins Linux Foundation
Xen Project Contributor Community is Diversifying
[Chart: contributions by organization as a share of the total, 2010–2012. Contributors include Citrix, UPC, SUSE, Amazon, AMD, GridCentric, NSA, Intel, Fujitsu, iWeb, Oracle, Spectralogic, the University of British Columbia, other universities, individuals, and miscellaneous others.]
• The number of “significant”
active vendors is increasing
• New feature development driving
new participation
More Xen Project Features…
• Unikernel development and support (Mirage OS, etc.)
• ARM hardware support
• Live Migration of VMs: XenMotion (via XAPI; see the sketch after this list)
• High Availability: Remus (& COLO for non-stop)
• Wide variety of Control Domains supported
• Even wider variety of Guest Domains
• Multiple virtualization modes improve performance
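For the live-migration item above, a hedged sketch of XenMotion driven through the XenAPI Python bindings might look like this; the pool address, credentials, VM and host names, and the option map are all placeholders, and the accepted option keys vary between XAPI versions.

```python
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")  # placeholder
session.xenapi.login_with_password("root", "secret")
try:
    # Look up the running VM and the destination host by name label.
    vm = session.xenapi.VM.get_by_name_label("web-frontend-01")[0]
    host = session.xenapi.host.get_by_name_label("xen-host-02")[0]

    # XenMotion: live-migrate the VM to another host in the same pool.
    session.xenapi.VM.pool_migrate(vm, host, {"live": "true"})
finally:
    session.xenapi.session.logout()
```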
Hypervisor Architecture
Hypervisor Architectures
Type 1: Bare metal Hypervisor
A pure Hypervisor that runs directly on the hardware and hosts Guest OS’s.
Provides partition isolation + reliability, higher security.
Type 2: OS ‘Hosted’
A Hypervisor that runs within a Host OS and hosts Guest OS’s inside of it, using the host OS services to provide the virtual environment.
Low cost, no additional drivers; ease of use & installation.
[Diagram: Type 1: the hypervisor (scheduler, MMU, device drivers/models) runs directly on the host hardware (memory, CPUs, I/O) and hosts VM0…VMn, each running a guest OS and apps. Type 2: a host OS with its device drivers runs a ring-0 VM monitor “kernel” plus a user-level VMM with device models, hosting VM0…VMn alongside ordinary user apps.]
Xen Project: Type 1 with a Twist
[Diagram sequence contrasting a classic Type 1 design with the Xen Project architecture. In a classic Type 1 hypervisor, the scheduler, MMU, and device drivers/models all live in the hypervisor running on the host hardware (memory, CPUs, I/O), hosting VM0…VMn with their guest OSes and apps. In the Xen Project architecture, the hypervisor on the hardware contains only the scheduler and MMU; device drivers and device models live in a control domain (dom0) running Linux or BSD, which sits alongside the guest VMs.]
Xen Project and Linux
• Xen Project Hypervisor is not in the Linux kernel
• BUT: everything needed to run the hypervisor is
• Xen Project packages are in all major distributions (not in RHEL 6, but available for CentOS 6 via Xen4CentOS)
– Install Control Domain (Dom0) Linux distribution
– Install Xen Project package(s) or meta package
– Reboot
– Configure stuff: set up disks, peripherals, etc.
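After the reboot, the xl toolstack should be able to talk to the running hypervisor. A quick sanity-check sketch, run as root in dom0; it only assumes the standard xl info and xl list subcommands:

```python
import subprocess

# "xl info" prints hypervisor and host details; "xl list" shows dom0 plus any guests.
for cmd in (["xl", "info"], ["xl", "list"]):
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)
```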
Basic Xen Project Concepts
[Diagram, built up across three slides (Basic Concepts, Toolstack+, Disaggregation): the hypervisor (scheduler, MMU, XSM) runs on the host hardware (memory, CPUs, I/O). The control domain (dom0), with its Dom0 kernel and the management toolstack, sits alongside guest VMs VM0…VMn and one or more driver, stub or service domains. The hypervisor and dom0 form the Trusted Computing Base; a console provides the interface to the outside world.]
Console
• Interface to the outside world
Control Domain aka Dom0
• Dom0 kernel with drivers
• Xen Project Management Toolstack
Guest Domains
• Your apps
Driver/Stub/Service Domain(s)
• A “driver, device model or control service in a box”
• De-privileged and isolated
• Lifetime: start, stop, kill
Xen Project: Types of Virtualization
Xen Project Virtualization Vocabulary
• PV – Paravirtualization
– Hypervisor provides an API used by the OS of the Guest VM
– Guest OS needs to be modified to use that API
• HVM – Hardware-assisted Virtual Machine
– Uses CPU VM extensions to handle Guest requests
– No modification to Guest OS
– But CPU must provide VM extensions
• FV – Full Virtualization (another name for HVM)
Xen Project Virtualization Vocabulary
• PVHVM – PV on HVM drivers
– Allows H/W virtualized guests to use PV disk and I/O drivers
– No modifications to guest OS
– Better performance than straight HVM
• PVH – PV in HVM Container (new in 4.4)
– Almost fully PV
– Uses HW extensions to eliminate PV MMU
– Eventually best mode for CPUs with virtual H/W extensions
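To see which of these modes an existing guest is using from a management script, the libvirt Python bindings can report each domain's OS type; a minimal sketch, assuming a local Xen host reachable via the standard xen:///system URI:

```python
import libvirt

# Read-only access is enough for queries.
conn = libvirt.openReadOnly("xen:///system")

for dom in conn.listAllDomains():
    # For Xen guests, OSType() typically reports "linux"/"xen" for PV domains
    # and "hvm" for hardware-assisted (HVM/PVHVM) domains.
    print(f"{dom.name():24s} os-type={dom.OSType()} active={bool(dom.isActive())}")

conn.close()
```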
The Virtualization Spectrum (Xen Project 4.4)

Fully Virtualized (FV)           VS   VS   VS   VH
FV with PV for disk & network    P    VS   VS   VH
PVHVM                            P    P    VS   VH
PVH                              P    P    P    VH
Fully Paravirtualized (PV)       P    P    P    P

Legend: VH = Virtualized (HW), VS = Virtualized (SW), P = Paravirtualized.
(Column headings were not preserved in the extracted slide text; each column covers one group of guest facilities. The slide also labels rows as HVM mode/domain or PV mode/domain and color-codes cells from poor performance through scope for improvement to optimal performance.)
Disaggregation
• Split Control Domain into Driver, Stub and Service Domains
– See: “Breaking up is hard to do” @ Xen Papers
– See: “Domain 0 Disaggregation for XCP and XenServer”
• Used today by Qubes OS and Citrix XenClient XT
• Prototypes for XAPI
[Screenshot: Qubes OS (see qubes-os.org); different windows run in different VMs.]
Benefits of Disaggregation
• More Security
• Increased serviceability and flexibility
• Better Robustness
• Better Performance
• Better Scalability
• Ability to safely restart parts of the system (e.g. just a 275ms outage from a failed Ethernet driver)
Next: XAPI Architecture Diagram
Before and After Disaggregation
[Diagram, before disaggregation: a single Dom0 on the Xen Project Hypervisor contains xapi, xenopsd, libxl, healthd, the domain manager, qemu instances, network drivers (vswitch, networkd), local and NFS/iSCSI storage drivers (tapdisk, blktap3, storaged), syslogd, and gntdev. Dom0 owns the hardware (CPUs, RAM, RAID storage, and NICs or SR-IOV VFs) and serves the user VMs’ PV network and block front ends (NF, BF) through the corresponding back ends (NB) and gntdev.]
[Diagram, after disaggregation: the same components are split out of Dom0 into separate domains on the Xen Project Hypervisor: a xapi domain (xapi, xenopsd, libxl, healthd, domain manager), a Qemu domain, network driver domains (vswitch, networkd), local and NFS/iSCSI storage driver domains (tapdisk, blktap3, storaged), and a logging domain (syslogd), with a much smaller Dom0 remaining. The domains communicate over dbus over v4v, and the user VMs still reach the NICs (or SR-IOV VFs) and RAID storage through PV front/back ends and gntdev.]
Xen Project Security Advantages
• Even without Advanced Security Features
– Well-defined trusted computing base (much smaller than on type-2 HV)
– Minimal services in hypervisor layer
• Xen Project Security Modules (or XSM) and FLASK
– XSM is Xen Project equivalent of LSM (Linux Security Modules)
– FLASK is Xen Project equivalent of SELinux
– Developed, maintained and contributed to Xen Project by NSA
– Compatible with SELinux (tools, architecture)
– XSM object classes map onto Xen Project features
More info: http://www.xenproject.org/component/allvideoshare/video/latest/lfnw2014-advanced-security-features-of-xen-project-hypervisor.html
Xen Project Security Modules: FLASK
• What does FLASK provide?
– Granular security
• Can a guest domain talk with other guest domains?
• Can a guest domain only communicate with the Control Domain?
• Can a Guest domain have memory which cannot be read by the Control Domain?
• What type of device model is used in this domain?
• The ability to define multiple security roles on the domain level
• User types can be defined and assigned roles
• Policy constraint logic
More info: http://wiki.xenproject.org/wiki/Xen_Security_Modules_:_XSM-FLASK
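Operationally, FLASK is driven from dom0 through the xl toolstack on hypervisors built with XSM enabled. The sketch below wraps a few of those subcommands from Python; the policy file path and the enforcing-mode argument are illustrative assumptions rather than values taken from the talk.

```python
import subprocess

def xl(*args):
    """Run an xl subcommand as root in dom0 and return its output."""
    return subprocess.run(["xl", *args], capture_output=True,
                          text=True, check=True).stdout

# Report whether FLASK is currently enforcing, permissive, or disabled.
print(xl("getenforce"))

# Load a compiled FLASK policy and switch to enforcing mode
# (path and argument are placeholders; adjust for your build).
xl("loadpolicy", "/boot/xenpolicy-4.4")
xl("setenforce", "1")
```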
[Diagram: the disaggregated architecture again (xapi, Qemu, network driver, storage driver and logging domains, a slimmed-down Dom0, and the user VMs on the Xen Project Hypervisor), with a FLASK policy restricting access between the domains.]
ARM Hypervisor
Xen Project for ARM Servers
• Fully functional for ARM v7 & v8
• ARM v7: Versatile Express, Arndale, Samsung Chromebook, Cortex A15, Allwinner A20/A31
• ARM v8: Fast Model, APM X-Gene “Mustang”
http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions
Xen Project + ARM = A Perfect Match
[Diagram sequence: ARM architecture features for virtualization on an ARM SoC: EL2 (hypervisor mode), EL1 (kernel mode), EL0 (user mode), GICv2, generic timers (GT), a 2-stage MMU, the HVC hypercall interface, and a Device Tree describing the I/O. The Xen Project Hypervisor runs in EL2; any Xen Project guest VM (including Dom0) runs its kernel in EL1 and its user space in EL0, entering the hypervisor via HVC. Hardware I/O is assigned to Dom0 only, and other guests reach it through PV front-end drivers talking to PV back ends in Dom0.]
One mode to rule them all

x86: PVHVM     P   P    VS   VH
x86: PVH       P   P    P    VH
ARM v7 & v8    P   VH   VH   VH

(Same legend as the virtualization spectrum above: P = Paravirtualized, VS = Virtualized (SW), VH = Virtualized (HW); the slide marks cells from scope for improvement to optimal performance and labels rows as HVM or PV mode/domains.)
Code Size of x86 and ARM Hypervisors

x86 Hypervisor                       100K-120K LOC   Any x86 CPU
ARM Hypervisor for mobile devices    60K LOC         ARM v5–v7 (no virtualization extensions; extra code for RT)
ARM Hypervisor for servers           17K LOC         ARM v7+ (with virtualization extensions)
Mirage OS
Library Operating Systems
• Application stacks only, running on Xen Project APIs
• Works on any Xen Project cloud or hosting service
• Examples:
– ErlangOnXen.org: Erlang
– HaLVM: Haskell
– Mirage OS: OCaml
– OSv: Java, C
• Benefits:
– Small footprint
– Low startup latency
– Extremely fast migration of VMs
[Diagram: a library OS embedded in the language run-time runs as its own guest VM directly on the Xen Project Hypervisor, alongside the control domain (dom0) with its Dom0 kernel, hardware drivers and PV back ends, and conventional guest VMs running full application stacks on the host hardware.]
Mirage OS
• Part of the Xen Project incubator
• V2.0 released July 2014
• Light and small like Docker, but with the full security of the Xen Project Hypervisor
• Clean-slate protocol implementations, e.g.
– TCP/IP, DNS, SSH, OpenFlow (switch/controller), HTTP, XMPP
More info: http://www.xenproject.org/developers/teams/mirage-os.html
What’s Next?
New in Xen Project 4.4 (April 2014)
• PVH mode is here!
• Updated and improved libvirt support
• Xen4CentOS: Xen Project for CentOS 6
• Experimental EFI support & nested virtualization
• Improved ARM, SPICE, GlusterFS support
See slides: http://www.xenproject.org/component/allvideoshare/video/latest/lf-collaboration-summit-xen-project-4-4-features-and-futures.html
Coming in Xen Project 4.5 (Dec 2014)
• PVH mode performance improvements
• More Mirage OS and unikernel support
• Even more ARM, libvirt improvements
• Remus reworked (COLO still in development)
• And much, much more…
See status: http://wiki.xenproject.org/wiki/Xen_Project_Hypervisor_Roadmap/4.5
What’s next (and already happening)
• Establish a shared test infrastructure
– Most major contributors are duplicating effort
• Usability and better distribution integration
• More focus on downstreams
– Examples: CloudStack and Xen Orchestra
• Xen Automotive
• XenGT (GPU Passthrough)
• Better libvirt and virt-manager integration
– Embed Xen Project more into the Linux ecosystem and provide benefits for the wider Linux community
Getting Started with Xen Project
• Document Days (monthly)
• Test Days (prior to release)
• Mailing Lists, IRC, Newsletter
• XenProject.org (sign up, it’s free!)
Hackathon: next one expected Spring 2015
Developer Summit: Aug 18-19 @ LinuxCon
User Summit: Sept 15 in New York City
Xen Project User Summit, Sept 15 in NYC
• We’ve got a great lineup of sessions!
• Topics include:
– Security, Cloud Integration, Unikernels, Orchestration, …
– SUSE Cloud, OpenStack, XenServer, CentOS, Xen Orchestra, OSv, HaLVM, COLO and more
• Regular price $79; use code below to register for half price!
CODE: XenMDMeetup
Thank You!
Slides available under CC-BY-SA 3.0
From www.slideshare.net/xen_com_mgr
@RCPavlicek
• News: blog.XenProject.org
• Web: XenProject.org
– Help for IRC, Mailing Lists, …
– Stackoverflow-like Q&A
• Wiki: wiki.XenProject.org
• Presentations & Videos: see XenProject.org