OSCON: From the Datacenter to the Cloud
                Featuring Xen and XCP

Steve Maresca (Zentific LLC)        Josh West (One.com)
George Dunlap (Xen.org)             Patrick F. Wilbur (PFW Research LLC)
Schedule

●   Unit 1:   09:00 - 09:45   Introducing Xen and XCP
●   Unit 2:   09:50 - 10:45   Devops
●   Break:    10:45 - 11:00
●   Unit 3:   11:00 - 11:55   XCP in the Enterprise
●   Unit 4:   12:00 - 12:30   Future of Xen
Unit 1
Introducing Xen and XCP
Unit 1 Overview

● Introduction & Xen vs. Xen Cloud Platform

● Xen/XCP Installation & Configuration

● XCP Concepts: pools, hosts, storage, networks, VMs
Introduction &
Xen vs. Xen Cloud Platform

    Xen, XCP, Project Kronos
Types of Virtualization

● Emulation
  Fully-emulate the underlying hardware architecture
● Full virtualization
  Simulate the base hardware architecture
● Paravirtualization
  Abstract the base architecture
● OS-level virtualization
  Shared kernel (and architecture), separate user spaces
Types of Virtualization

● Emulation
  Fully-emulate the underlying hardware architecture
● Full virtualization         - Xen does this!
  Simulate the base hardware architecture
● Paravirtualization          - Xen does this!
  Abstract the base architecture
● OS-level virtualization
  Shared kernel (and architecture), separate user spaces
What is Xen?

● Xen is a virtualization system supporting both
   paravirtualization and hardware-assisted full virtualization

● Initially created by University of Cambridge Computer
   Laboratory

● Open source (licensed under GPL)
What is Xen Cloud Platform (XCP)?

● Xen Cloud Platform (XCP) is a turnkey virtualization
   solution that provides out-of-the-box virtualization/cloud
   computing

● XCP includes:
   ○ Open-source Xen hypervisor
   ○ Enterprise-level XenAPI (XAPI) mgmt. tool stack
   ○ Support for Open vSwitch (open-source, standards-
      compliant virtual switch)
What is Project Kronos?

● Port of XCP's XenAPI toolstack to Deb & Ubuntu dom0

● Gives users the ability to install Debian or Ubuntu, then
   apt-get install xcp-xapi

● Provides Xen users with the option of using the same API
   and toolstack that XCP and XenServer provide

● Early adopters can try new changes to XenAPI before they
   get released in mainstream XCP & XenServer versions
Case for Virtualization

● Enterprise:
   ○ Rapid provisioning, recovery
   ○ Portability across pools of resources
   ○ Reduced phy resource usage = reduced costs

● Small business:
   ○ Rapid provisioning, recovery
   ○ Virt resources replace lack of phy res. to begin with!
Who Uses Xen?

● Debian Popularity Contest:
   ○ 3x more people have Xen vs. KVM installed
   ○ 3x more people have used Xen in the last 30 days
     compared to KVM
  ○ 19% of Debian users have Xen installed & 9% used it
     in last 30 days - how many Debian users exist?
● ~12% of Ubuntu Server users use Xen as a host
● Millions of users from a source that can't be named

      ... How many total users do you guess?
Who Uses Xen?

                 Believed to be at least
         10-12 MILLION open-source Xen users!

           (According to conservative assumptions about 
              big distros and information we know)

Of course:
● Overall Xen hosts must be much higher - 1/2 Million Xen
    hosts at Amazon alone 
● Number likely to be much higher considering commercial
    products & Xen clones (client virt., EmbeddedXen, etc.) 
Xen, XCP, and Various Toolstack Users
Who Uses Xen?

Some sources for reference:

● http://popcon.debian.org 
● http://www.zdnet.com/blog/open-source/amazon-ec2-cloud-is-made-up-of-almost-half-a-million-linux-servers/10620
● http://www.gartner.com/technology/reprints.do?id=1-1AVRXJO&ct=120612&st=sb
[Diagram: a Type 2 hypervisor runs guest OSes on a hypervisor that sits on a host OS on the PC; a Type 1 hypervisor such as Xen runs guest OSes directly on the hypervisor on the PC]

        Type 2 versus Type 1 Hypervisor
Security in Xen

● True Type 1 hypervisor:
   ○ Reduced size trusted computing base (TCB)
   ○ Versatile Dom0 (Linux, BSD, Solaris all possible)
   ○ Dom0 disaggregation (storage domains, stub domains,
     restartable management domain)
   ○ Inherent separation between VMs & system resources

● Best security, isolation, performance, scalability mix
The Case for Xen

● Xen is mature
 
● Open source (even XenAPI)
 
● XenAPI is better than libvirt, especially for enterprise
  use*




* Detailed by Ewan Mellor: http://wiki.openstack.org/XenAPI 
The Case for Xen

● Proven enterprise use
   (Citrix XenServer, Oracle VM, etc.)

● Hypervisor of choice for cloud
   (Amazon, Rackspace, Linode, Google, etc.)

● Hypervisor of choice for client
   (XenClient, Virtual Computer's NxTop, Qubes OS, etc.)
So, Why Xen?

● Open source

● Proven to be versatile

● Amazing community

● Great momentum in various directions
Xen Definitions

● Xen provides a virtual machine monitor (or
   hypervisor), which a physical machine runs to manage
   virtual machines

● There exist one or more virtual machines (or domains)
   running beneath the hypervisor

● The management virtual machine (called Domain0 or
   dom0) interacts with the hypervisor & runs device drivers

● Other virtual machines are called guests (guest domains)
Virtualization in Xen
Paravirtualization:
● Uses a modified Linux kernel
● Front-end and back-end virtual device model
● Cannot run Windows
● Guest "knows" it's a VM and cooperates with hypervisor

Hardware-assisted full virtualization (HVM):
● Uses the same, normal, OS kernel
● Guest contains grub and kernel
● Normal device drivers
● Can run Windows
● Guest doesn't "know" it's a VM; the hardware virtualization extensions manage it
Virtualization in Xen
Paravirtualization:
● High performance (claim to fame)
● High scalability
● Runs a modified operating system


Hardware-assisted full virtualization (HVM):
● "Co-evolution" of hardware & software on x86 arch
● Uses an unmodified operating system
Xen: Hypervisor Role

● Thin, privileged abstraction layer between the hardware
   and operating systems

● Defines the virtual machine that guest domains see instead
   of physical hardware:
   ○ Grants portions of physical resources to each guest
   ○ Exports simplified devices to guests
   ○ Enforces isolation among guests
Xen: Domain0 (dom0) Role

● Creates and manages guest VMs
   xl (Xen management tool)
         A client application to send commands to Xen, replaces xm

● Supplies device and I/O services:
   ○ Runs (backend) device drivers
   ○ Provides domain storage
Normal Linux Boot Process

BIOS
  ↓
Master Boot Record (MBR)
  ↓
GRUB
  ↓
Kernel (+ module)
  ↓
Linux
The Xen Boot Process

GRUB starts
  ↓  (kernel: the Xen hypervisor)
Hypervisor starts
  ↓  (module: the dom0 kernel)
Domain0 starts
  ↓  (xl command)
Guest domain starts
Guest Relocation (Migration) in Xen

● Cold Relocation

● Warm Migration

● Live Migration
Cold Relocation

Motivation:
  Moving guest between hosts without shared storage or
  with different architectures or hypervisor versions

Process:
  1. Shut down a guest on the source host
  2. Move the guest from one Domain0's file system to
     another's by manually copying the guest's disk image
     and configuration files
  3. Start the guest on the destination host
Cold Relocation

Benefits:
● Hardware maintenance with less downtime
● Shared storage not required
● Domain0s can be different
● Multiple copies and duplications

Limitations:
● More manual process
● Service will be down during copy
Warm Migration

Motivation:
  Move a guest between hosts when uptime is not critical

Result:
  1. Pauses a guest's execution
  2. Transfers guest's state across network to a new host
  3. Resumes guest's execution on destination host
Warm Migration

Benefits:
● Guest and its processes remain running
● Less data transfer than live migration

Limitations:
● For a short time, the guest is not externally accessible
● Requires shared storage
● Network connections to and from guest are interrupted and
   will probably time out
Live Migration

Motivation:
  Load balancing, hardware maintenance, and
  power management

Result:
  1. Begins transferring guest's state to new host
  2. Repeatedly copies dirtied guest memory (due to
     continued execution) until complete
  3. Re-routes network connections; guest continues
     executing with its network uninterrupted
Live Migration

Benefits:
● No downtime
● Network connections to and from guest remain active and
  uninterrupted
● Guest and its services remain available

Limitations:
● Requires shared storage
● Hosts must be on the same layer 2 network
● Sufficient spare resources needed on target machine
● Hosts must be configured similarly
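For reference, a live migration can also be driven through the XenAPI introduced later in this tutorial. A minimal sketch, assuming the Python bindings and that vm_ref and dest_host_ref have already been looked up:

# Hedged sketch: live-migrate a VM within a pool via the XenAPI Python bindings
session.xenapi.VM.pool_migrate(vm_ref, dest_host_ref, {"live": "true"})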
What's New in Xen 4.0+?

● Better performance and scalability
● blktap2 for virtual hard drive image support (snapshots,
    cloning)
●   Improved IOMMU PCI passthru
●   VGA primary graphics card GPU passthru for HVM
    guests
●   Memory page sharing (Copy-on-Write) between VMs
●   Online resize of guest disks
What's New in Xen 4.0+?

●   Remus Fault Tolerance (live VM synchronization)
●   Physical CPU/memory hotplug
●   libxenlight (libxl) replaces xend
●   PV-USB passthru
●   WHQL-certified Windows PV drivers (included in XCP)
What's New in XCP 1.5?

● Internal improvements
      (Xen 4.1, smaller dom0)
●   GPU pass through
      (for VMs serving high end graphics)
●   Performance and scalability
      (1 TB mem/host, 16 VCPUs/VM, 128 GB/VM)
●   Networking
      (Open vSwitch backend, Active-Backup NIC Bonding)
●   More guest OS templates
XCP 1.6 (available Sept/Oct '12)

● Xen 4.1.2, CentOS 5.7 w/ 2.6.32.43, Open vSwitch 1.4.1
● New format Windows drivers, installable by Windows
  Update Service
● Net: Better VLAN scalability, LACP bonding, IPv6
● More guest OS templates: Ubuntu Precise 12.04, RHEL,
  CentOS, Oracle Enterprise Linux 6.1 & 6.2, Windows 8
● Storage XenMotion:
  ○ Migrate VMs between hosts/pools w/o shared storage
  ○ Move a VM’s disks between storage repositories while
     VM is running
Xen/Xen Cloud Platform
Installation, Configuration

    Xen Light, XCP Installer
Installing Xen

Xen installation instructions, including from source: 
http://wiki.xen.org/wiki/Xen_Overview 

1. Install Linux distro
2. Install Xen hypervisor package
3. Install a dom0 kernel (pkgs available for many distros)
4. Modify GRUB config to boot Xen hypervisor instead
 
Result: A working Xen hypervisor and "Xen Light"
installation
Installing XCP

1. Download latest XCP ISO:
      http://xen.org/download/xcp/index.html

2. Boot from ISO and proceed through XCP installer




Result: A ready-to-go Xen hypervisor, dom0, XAPI
Xen Cloud Platform
         Concepts

Pools, hosts, storage, networks, VMs
Xen Cloud Platform (XCP)

● XCP was originally derived from Citrix XenServer (a free
   enterprise product), is open-source, and is free

● XCP promises to contain cutting-edge features that will
   drive future developments of Citrix XenServer
Xen Cloud Platform (XCP)

● Again, XCP includes:
   ○ Open-source Xen hypervisor
   ○ Enterprise-level XenAPI (XAPI) management tool
     stack
   ○ Support for Open vSwitch (open-source, standards-
     compliant virtual switch)
XCP Features

● Fully-signed Windows PV drivers

● Heterogeneous machine resource pool support

● Installation by templates for many different guest OSes
XCP XenAPI Mgmt Tool Stack

● VM lifecycle: live snapshots, checkpoint, migration
● Resource pools: live relocation, auto configuration,
  disaster recovery
● Flexible storage, networking, and power management
● Event tracking: progress, notification
● Upgrade and patching capabilities
● Real-time performance monitoring and alerting
XCP's xsconsole (SSH or Local)
XCP Command Line Interface

# xe template-list
     (or   # xe vm-import filename=lenny.xva )

# xe vm-install template=<template> new-name-label=<name>

# xe vm-param-set uuid=<uuid of new VM> other-config:install-repository=http://ftp.debian.org/

# xe network-list

# xe vif-create network-uuid=<network uuid from above> vm-uuid=<uuid of new VM> device=0

# xe vm-start vm=<name of VM>
Further Information

● http://pdub.net/2011/12/03/howto-install-xcp-in-kvm/
Unit 2: Nuts and Bolts
Steve Maresca
 ● Wearer of many hats
   ○ Security analyst at a top 20 public univ in the Northeast
   ○ Developer on the Zentific virtualization management
     suite with a team of developers
 ● Involved in the Xen world since 2005
Steve Maresca
● Why do I use Xen?
  ○ Original impetus: malware/rootkit research
  ○ Mature research community built around Xen
  ○ Flexibility of the architecture and codebase permits
    infinite variation
  ○ Using it today for infrastructure as well as continuing
    with security research
     ■ LibVMI, introspection
Unit 2: Overview

●   Structure of this presentation follows the general path we take
    while mentally approaching virtualization
     ○ Start simple, increase in level of sophistication
●   Overall flow:
     ○ Why Virtualization?
     ○ XCP Deployment
     ○ Management
     ○ VM Deployment
     ○ Monitoring
     ○ Advanced Monitoring and Automation
     ○ Best Practices
Why virtualization?

 ●   We're all familiar with the benefits
      ○ When the power bill drops by 25% and the server room is
        ten degrees cooler, everyone wins
 ●   Bottom line: more efficient resource utilization
      ○ Requires proper planning and resource allocation
      ○ Every industry publication, technical and otherwise, has made
        'cloud' a household term
      ○ Expectations set high, then reality arrives with different
        opinions
Why virtualization?

 ●   Many of us will have or have had difficulty making the leap
       ○ Growing pains: shared resources of virtualization hardware
         stretched thin
       ○ Recognition that it requires both capital and staffing
         investment
 ●   Certainly, you CAN use virtualization with traditional approaches
     used with real hardware
       ○ E.g.: VM creation wizard. upload ISO. attach iso, boot,
         install, configure. repeat.
          ■ almost everyone does this
       ○ Without much effort, you have consolidated 10 boxes into
         one or two; many organizations find success at this scale
 ●   ..but: we have much more flexibility at our disposal; use it!
Why virtualization?

● Virtualization provides the tools to avoid the endless
  parade of one-off installations and software deployments
● Repeatable and measurable efficiency is attainable
  ○ Why install apache 25 times when one well-tuned
      configuration meets your needs?
Unit 2: Nuts and Bolts

Deployment Methodologies for
  Infrastructure and Virtual
          Machines
Existing deployment methods
 ● Traditional deployment method: install from CD
    ○ still works for virtualization and new XCP hosts
    ○ If installing for the first time, this is the simplest way to
      get your feet wet
    ○ ISOs available at xen.org
    ○ For deploying 5-10 systems, this method is manageable
    ○ Don't fix what isn't broken: if it works for you, go for it
    ○ For deploying 10-50 systems, this hurts
 ● We've all installed from CD/DVD a thousand times
    ○ That's probably 950 times too many
    ○ But..there are alternatives, and better ones at that
Existing deployment methods

 ● XCP can be installed on a standard linux system thanks to
   Project Kronos
    ○ apt-get install xcp-xapi
    ○ Patrick discussed this earlier
 ● XCP can be installed via more advanced means
 ● Virtual machines can be deployed via templates and clones
    ○ Golden images
    ○ Snapshots
    ○ Linked clones
    ○ Templates
    ○ These methods are here to stay
Preboot Execution Environment
(PXE)
 ● Extraordinarily convenient mechanism to leverage network
   infrastructure to deploy client devices, often lacking any local
   disk
 ● Uses DHCP, TFTP; often uses NFS/HTTP after initial bootstrap
 ● Intel and partners produced spec in 1999
Preboot Execution Environment
(PXE)
 ● Most commonly encountered over the years for:
    ○ a remote firmware update tool
    ○ thin-client remote boot
    ○ LTSP (Linux Terminal Server Project)
    ○ Windows Deployment Services (Remote Installation Services)
    ○ Option ROMs on NICs
 ● Lightly used in many regards, foreign to many
 ● By no means a dead technology
Preboot Execution Environment
(PXE)
 ● To facilitate PXE:
    ○ early in its boot process, a PXE-capable device emits a DHCP
      request
    ○ this DHCP request is answered with extra fields indicating
      a PXE environment is available (typically, this is the 'next-server'
      option pointing the DHCP client at an adjacent TFTP
      server for the next steps)
       ■ PXE-unaware clients requesting an IP ignore the extra data
    ○ the DHCP client, having obtained an IP, obtains a small
      bootloader from the TFTP server
    ○ Additionally, a configuration file is downloaded with boot
      information (location of kernel, command line, etc)
PXE Architecture

[Diagram: new VMs on a Deployment VLAN and a Production VLAN reach DHCP, TFTP, and WDS servers through the network switches and routers]
PXE Architecture: Components

 ● DHCP
    ○ ISC-DHCP, Windows, almost anything works..
 ● TFTPd
    ○ TFTP is an extraordinarily simple protocol, so..
    ○ If it is a TFTP server, it's perfect
 ● Windows Deployment Services
 ● HTTP or FTP
    ○ Apache, nginx, lighttpd, IIS, a bash script, ..
    ○ Optional, but very useful for serving scripts,
      configuration files, etc
 ● Roll your own on one server with very modest resources
PXE Architecture: Components

 ● Purpose-built solutions
    ○ Cobbler
       ■ Fedora project, Red Hat supported
       ■ Supports KVM, Xen, VMware

   ○ LTSP (Linux Terminal Server Project)

   ○ Windows Deployment Services

   ○ FOG Project
So what does PXE buy us?

● Near zero-footprint deployment model
● Leverages services you almost certainly already have in
  place
● Guaranteed reproducible deployments
● Agnostic relative to Virtual/Physical, OS
● Goes where no USB key or optical drive even exists
Requirements for deployment via
PXE
 ● Server requires a NIC with a PXE ROM available
 ● NIC Enabled for booting
 ● Very nice if you're using a blade chassis or ILO; easy to
   reconfigure on the fly
 ● Requires an answer file prepped for the host
 ● Configured DHCP server
 ● Configured TFTP server
Mechanisms for automated install

 ● General concept is often called an "answer file"
   ○ Some file with a list of instructions is delivered to the
     OS installer with device configuration info, a list of
     packages to install, possibly including custom scripts,
     etc.
 ● Linux
   ○ Centos/RHEL: kickstart
   ○ Debian/Ubuntu: preseed (though kickstart files are
     gaining popularity in the Debian world)
 ● Windows
   ○ WAIK or Windows Automated Installation Kit
Example infrastructure setup

 ● Debian as the base OS

 ● ISC-DHCP as a means of advertising next-server DHCP
   option

 ● tftpd-hpa for a tftp daemon

 ● also running Apache for serving scripts and a variety of
   other files as installation helpers
Our configuration: ISC-DHCP
shared-network INSTALL {
    subnet 192.168.2.0 netmask 255.255.255.0 {
         option routers 192.168.2.1;
         range 192.168.2.2 192.168.2.254;
         allow booting;
         allow bootp;
         option domain-name "zentific";
         option subnet-mask 255.255.255.0;
         option broadcast-address 192.168.2.255;
         option domain-name-servers 4.2.2.1;
         option routers 192.168.2.1;
         next-server 192.168.2.1;
         filename "pxelinux.0";
    }
}
Deploying XCP via PXE

 ● Requires an "answer file" to configure the XCP system in
   an unattended fashion

 ● Also leverages HTTP to host the answer file and some
   installation media

 ● TFTP serves a pxeconfig referencing the answer file and
   providing basic configuration for the installer (console
   string, minimum RAM, etc)
Deploying XCP via PXE: pxeconfig

DEFAULT xcp
LABEL xcp
kernel mboot.c32
append /xcp/xen.gz dom0_max_vcpus=2 dom0_mem=2048M com1=115200,8n1 console=com1 --- /xcp/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://192.168.2.1/xcp_install/xcp_install_answerfile install --- /xcp/install.img
Deploying XCP via PXE:answerfile
<?xml version="1.0"?>
 <installation>
   <primary-disk>sda</primary-disk>
   <keymap>us</keymap>
   <root-password>pandas</root-password>
   <source type="url">http://192.168.2.1/xcp_install</source>
      <post-install-script type="url" stage="filesystem-populated">
      http://192.168.2.1/xcp_install/post.sh
      </post-install-script>
   <admin-interface name="eth0" proto="static">
      <ip>192.168.2.172</ip>
      <subnet-mask>255.255.255.0</subnet-mask>
      <gateway>192.168.2.1</gateway>
   </admin-interface>
   <nameserver>4.2.2.1</nameserver>
   <timezone>America/New_York</timezone>
 </installation>
Deploying XCP via PXE
Deploying XCP via PXE, complete
Unit 2: Nuts and Bolts

Deployment Methodologies for
      Virtual Machines
Existing deployment methods


 ● Again, traditional methods
    ○ VM creation wizard. upload ISO. attach iso, boot,
      install, configure. repeat.
    ○ almost everyone does this
 ● Virtual machines can be deployed via templates and clones
    ○ Golden images
    ○ Snapshots
    ○ Linked clones
    ○ Templates
    ○ These methods are here to stay
Existing deployment methods

 ● XCP makes deployment of VMs simple
    ○ templates:
       # xe template-list | grep name-label | wc -l
       84
    ○ clones: xe vm-clone
 ● Virtual machines can be deployed via templates and clones
    ○ Golden images
    ○ Snapshots
    ○ Linked clones
    ○ Templates
    ○ These methods are here to stay
Deploying Centos via PXE

 ● Customization via Kickstart

 ● Anaconda installer uses "one binary to rule them all",
   so customization at installation time is more
   restrictive than with other distributions

 ● Standard pxeconfig
Deploying Centos : PXE config

SERIAL 0 115200
CONSOLE 0
DEFAULT centos_5.6_x86_64_install
LABEL centos_5.6_x86_64_install
kernel centos/5.6/x86_64/vmlinuz
append vga=normal console=tty initrd=centos/5.6/x86_64/initrd.img syslog=192.168.1.2 loglevel=debug ksdevice=eth0 ks=http://192.168.2.1/centos-minimal.ks --
PROMPT 0
TIMEOUT 0
Deploying Centos : Kickstart
install
text
lang en_US.UTF-8
key --skip
skipx
logging --host=192.168.1.125
network --device eth0 --bootproto dhcp
url --url http://mirrors.greenmountainaccess.net/centos/5/os/x86_64
rootpw --iscrypted $1$j/VY6xJ6$xxxxxxxxx
firewall --enabled --port=22:tcp
authconfig --enableshadow --enablemd5
selinux --enforcing
timezone --utc America/New_York
zerombr
bootloader --location=mbr --driveorder=hda
clearpart --initlabel --all
autopart
reboot
Deploying Centos : Kickstart
●   Make a new VM using the "other" template
    ○ # SRDISKUUID refers to the identifier (UUID) of the storage repository
    ○ xe vm-install new-name-label=$VMNAME sr-uuid=$SRDISKUUID
       template="Other install media"
●   Set boot order: Network, DVD, Hard-Drive
    ○ xe vm-param-set uuid=$VMUUID HVM-boot-params:order="ndc"
Deploying Centos via PXE
Unit 2: Nuts and Bolts

  XCP: Modifying the OS
  Just a quick comment
Installing software
Or, Reminding XCP of its Linux
Heritage
 ● XCP is by no means a black box, forever sealed away
 ● It's only lightly locked down and easy to modify
     ○ Take care, it's not designed for significant upheaval
     ○ Very convenient to install utilities, SNMP, etc
 ● Just: yum --disablerepo=citrix --enablerepo=base install
   screen
 ● Helps a lot with additional monitoring utilities
Unit 2: Nuts and Bolts

 Monitoring and Automation
Automation and response
[Diagram: the XCP event publisher (XAPI) emits VM events onto a message bus (AMQP, IF-MAP, 0MQ, ...) consumed by an IDS, firewall, or middleware, which feed an adaptive feedback loop back to the VMs]
Exploring the XCP API
What it is

● The XCP API is the backbone of the platform
   ○ Provides the glue between components
   ○ Is the backend for all management applications

● Call it XAPI or XenAPI
   ○ when searching, XAPI is often the better term, since it
     differentiates from earlier work in traditional open
     source Xen deployments

● It's a XML-RPC style API, served via HTTPS
   ○ provided by a service on every XCP dom0 host
What it is
● API bindings are available for many languages
   ○ .NET
   ○ Java
   ○ C
   ○ Powershell
   ○ Python
● Documentation available via the Citrix Developers'
  Network (in this regard, XCP==Xenserver)
   ○ http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/api/
   ○ http://community.citrix.com/display/xs/Introduction+to+XenServer+XAPI
What it is

● Official API bindings not available for your language of
  choice? No problem

● Protocol choice of XML-RPC means that most languages
  can support the API natively

● Ease of integration is superb. Here's an example using
  python (but ignoring the official bindings)
What it is
import xmlrpclib
x=xmlrpclib.Server("https://localhost")
sessid=x.session.login_with_password("root","pass")['Value']
# go forth, that's all you needed to begin

allvms=x.VM.get_all_records(sessid)['Value']
What it is

● XAPI is available for use on any XenServer or XCP system

● In addition, as mentioned in our opening segment, XAPI is
  accessible on Ubuntu/Debian systems via the Kronos project
What XAPI isn't

● Not exactly 1:1 with the xe commands from the XCP
  command line
   ○ significant overlap, but not exact

● NOT an inflexible beast like some APIs
   ○ can be extended via plugins
   ○ and (of course) it is open source if you want to get
     your hands dirty
      ■ LGPL 2.1
Comparisons to other APIs in the
virtualization space
 ● Generally speaking
    ○ XAPI is well-designed and well-executed
    ○ XAPI makes it pleasantly easy to achieve quick
      productivity
    ○ Some SOAPy lovers of big XML envelopes and
      WSDLs scoff at XML-RPC, but it certainly gets the job
      done with few complaints
Comparisons to other APIs in the
virtualization space
 ● Amazon EC2
    ○ greater "surface area" than amazon EC2, which is a
      classic example of doing a lot with rather a little API
    ○ in particular, XAPI brings you closer to the virtual
      machine and underlying infrastructure than EC2
    ○ XAPI provides considerable introspection into the
      virtual machine itself
       ■ data reported by xen-aware tools within the guest is
         reported as part of VM metrics
       ■ Data can be injected into VM using the xenstore
Comparisons to other APIs in the
virtualization space
 ● Oracle VM (also xen based)
   ○ similar heritage; derives partly from the traditional
     XenAPI of which XAPI is a distant relative
   ○ generally speaking, the Oracle VM API is on par for
     typically needed features, but XAPI is more
     powerful (e.g., networking capabilities)
Comparisons to other APIs in the
virtualization space
 ● VMware
    ○ XAPI is far more tightly constructed than VMWare's
      huge (very capable, impressive) API
    ○ By nature of protocol construction, XAPI is XML-RPC
      vs heavier VMWare SOAP API. Measurably lower
      bandwidth requirements, parsing overhead.
    ○ VMware's API has a distinct feel of organic growth
      ("one of these things is not like the other" is a common
      tune whistled while working with it)
    ○ Speaking from a personal developer standpoint, sanity
      with XAPI in comparison is much higher. (We,
      Zentific, have worked very closely with both APIs)
API Architecture
API Architecture: General shape
and form
 ● All elements on the diagram just shown are called classes

 ● Note: The diagram omits another twenty or more minor
   classes
    ○ Visit the SDK documentation for documentation of all
      classes

 ● Classes are the objects XCP knows about and
   exposes through API bindings

 ● Each class has attributes called fields and functions
   called messages. We'll stick with 'attributes' and
   'functions.'
API Architecture: General shape
and form
 ● Class attributes can be read-only or read-write

 ● All class attributes are exposed via setter and
   accessor functions
    ○ e.g. for a class named C with attribute X: C.get_X
    ○ There's a corresponding C.set_X too if the attribute
      is read-write. Absent if read-only.
    ○ For mapping type attributes, there are C.add_to_X
      and C.remove_from_X for each key/pair
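A minimal sketch of that convention with the Python bindings (session comes from a login as shown earlier; vm_ref and the values are illustrative assumptions):

name = session.xenapi.VM.get_name_label(vm_ref)        # accessor for the name_label attribute
session.xenapi.VM.set_name_label(vm_ref, "web01")      # setter exists because name_label is read-write
# other_config is a mapping attribute, so it also gets add_to_/remove_from_ helpers:
session.xenapi.VM.add_to_other_config(vm_ref, "owner", "ops")
session.xenapi.VM.remove_from_other_config(vm_ref, "owner")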
API Architecture: General shape
and form
 ● Class functions are of two forms: implicit and explicit
    ○ Implicit class functions include:
       ■ a constructor (typically named "create")
       ■ a destructor (typically named "destroy")
       ■ Class.get_by_name_label
       ■ Class.get_by_uuid
       ■ Class.get_record
       ■ Class.get_all_records

   ○ Explicit class functions include every other
     documented function for the given class, which are
     generally quite specific to the intent of that class
      ■ e.g. VM.start
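A short sketch showing both kinds of call via the Python bindings (session is assumed from the earlier login example; "web01" is an illustrative VM name):

refs = session.xenapi.VM.get_by_name_label("web01")   # implicit: lookup, returns a list of refs
rec  = session.xenapi.VM.get_record(refs[0])          # implicit: full record for one object
session.xenapi.VM.start(refs[0], False, False)        # explicit: VM.start(vm, start_paused, force)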
API Architecture: General shape
and form
 ● A note on UUIDs and OpaqueRefs: multiple forms of unique
   identifier are used in XCP
   ○ Universally Unique Identifiers (UUIDs)
   ○ OpaqueRefs
   ○ Class-specific identifiers
   ○ name-labels

 ● Both UUIDs and OpaqueRefs can be encountered in API calls and xe
   commands
    ○ Conversion between UUIDs and OpaqueRefs will
      be commonly required
    ○ The parallel naming convention is an acknowledged odd
      consequence of development aiming at unique
      identifiers
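A minimal conversion sketch with the Python bindings (session is assumed; the UUID placeholder follows the <...> style used elsewhere in this deck):

vm_ref  = session.xenapi.VM.get_by_uuid("<uuid of VM>")   # UUID -> OpaqueRef
vm_uuid = session.xenapi.VM.get_uuid(vm_ref)              # OpaqueRef -> UUID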
API Architecture: Major Classes

 ● All major classes are shown in the inner circle of the API
   diagram
    ○ VM: A virtual machine
    ○ Host: A physical XCP host system
    ○ SR: Storage repository
    ○ VDI: Virtual disk image
    ○ PBD: physical block device through which an SR is
      accessed
    ○ VBD: Virtual block device
    ○ Network: A virtual network
    ○ VIF: A virtual network interface
    ○ PIF: A physical network interface
API Architecture: Minor Classes

 ● Minor classes are documented in the official Xenserver
   SDK documentation
    ○ pool: XCP host pool information and actions
    ○ event: Asynchronous event registrations
    ○ task: Used to track asynchronous operations with a long
      runtime
    ○ session: API session management (login, password
      changes, etc.)
API Architecture: Linking Classes

 ● Linking classes are those that create a conceptual bridge between
   a virtual object and the underlying physical entity

    ○ VDI<>VBD<>VM
       ■ VBD: Bridges the representation of a virtual machine's
         internal disk with the actual disk image used to provide it

    ○ Network<>VIF<>VM
       ■ VIF: Bridges the internal VM network interface with the
         physical network to which it is ultimately plumbed

 ● When building complex objects, it's often necessary to build the
   linkages too, or failure will occur
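For example, walking the VM<>VBD<>VDI linkage for one VM might look like this with the Python bindings (session and vm_ref are assumed):

for vbd in session.xenapi.VM.get_VBDs(vm_ref):        # VBDs link the VM to its disks
    if session.xenapi.VBD.get_type(vbd) == "Disk":    # skip virtual CD drives
        vdi = session.xenapi.VBD.get_VDI(vbd)         # the disk image behind this VBD
        print session.xenapi.VDI.get_name_label(vdi)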
API Architecture: Other Classes

 ● SM: storage manager plugin - for third-party storage
   integration (e.g. Openstack Glance)

 ● Tunnel: represents a tunnel interface between
   networks/hosts in a pool

 ● VLAN: assists in mapping a VLAN to a PIF, designating
   tagged/untagged interfaces. Each VLAN utilizes one PIF
API Architecture: Order of
Operations
 ● Using a correct order of operations for API calls is important,
   though not particularly well documented

 ● Example: deleting a disk
    ○ Resources must not be in use
    ○ If deleting a VDI, make certain that no VBDs currently
      reference it

 ● Generally, common sense dictates here in terms of the operations
   required

 ● When something is executed out of order, an exception is thrown
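Returning to the disk-deletion example, a rough sketch of respecting that order with the Python bindings (session and vdi_ref are assumed):

for vbd in session.xenapi.VDI.get_VBDs(vdi_ref):        # remove references to the VDI first
    if session.xenapi.VBD.get_currently_attached(vbd):
        session.xenapi.VBD.unplug(vbd)                  # hot-unplug from a running VM
    session.xenapi.VBD.destroy(vbd)
session.xenapi.VDI.destroy(vdi_ref)                     # now nothing references the VDI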
API Architecture: Target the right
destination
 ● When running calls against a standalone xcp system, no need for
   extra consideration

 ● When running operations against a pool, it's necessary to target
   the pool master
    ○ Otherwise an API exception will be thrown if you attempt to
      initiate an action against a slave (type XenAPI.Failure if using
      the provided Python bindings)

 ● It's reasonably easy to code around this problem (the pool master
   may rotate, after all):
   http://community.citrix.com/display/xs/A+pool+checking+plugin+for+nagios
API Architecture: Target the right
destination
import XenAPI
host="x"
user="y"
passwd="p"
try:
   session=XenAPI.Session('https://'+host)
   session.login_with_password(user, passwd)
except XenAPI.Failure, e:
   if e.details[0]=='HOST_IS_SLAVE':
      session=XenAPI.Session('https://'+e.details[1])
      session.login_with_password(user, passwd)
   else:
      raise
s=session.xenapi
XAPI is Extensible: Plugins

 ● Extensible API via plugins
    ○ These are scripts that you place in the XCP host.
       ■ Check out /etc/xapi.d/plugins/
    ○ Can be invoked via the api
       ■ See host.call_plugin(...)
 ● Affords huge flexibility for customization
 ● Used today by projects like Openstack to provide greater
   integration with XCP
 ● Example code
    ○ http://bazaar.launchpad.net/~nova-core/nova/github/files/head:/plugins/xenserver/xenapi/etc/xapi.d/plugins/
    ○ https://github.com/xen-org/xen-api/blob/master/scripts/examples/python/XenAPIPlugin.py
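A rough sketch of the plugin pattern, following the XenAPIPlugin example linked above (the plugin name 'hello' and its argument are illustrative):

# /etc/xapi.d/plugins/hello  -- must be executable on the XCP host
import XenAPIPlugin

def main(session, args):
    # args is a dict of string key/value pairs supplied by the caller
    return "hello " + args.get("who", "world")

if __name__ == "__main__":
    XenAPIPlugin.dispatch({"main": main})

A client would then invoke it with something like
session.xenapi.host.call_plugin(host_ref, "hello", "main", {"who": "xcp"}).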
Things to know
 ● To access VM console, a valid session ID must be appended to
   the request
     ○ See http://foss-boss.blogspot.com/2010/01/taming-xen-cloud-platform-consoles.html
 ● Metrics
     ○ ${class}_metrics are instantaneous values; this is an older
       XCP/XenServer style of providing such data
     ○ The same metrics provided via the RRD backend are historical and
       can show trending (rather than needing to aggressively poll for
       instantaneous values)
 ● It's possible to add xenstore values for a VM, which enables an agent in the
   VM to act upon that data
     ○ consider: root password reset via xenstore; directed actions
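For the xenstore point above, a minimal sketch using the Python bindings (session and vm_ref are assumed; the key name is illustrative):

# The value lands in the guest-visible xenstore tree, where an in-guest agent can watch for it
session.xenapi.VM.add_to_xenstore_data(vm_ref, "vm-data/admin-action", "reset-root-password")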
Unit 2: Nuts and Bolts

     Best Practices
Best Practices

These are primarily 'general' best practices

Common-sense best practices are especially critical for
virtualization given:
 ● the sharing of scarce resources (and the complex
    interplay thereof when it comes to performance)
 ● many eggs are in one basket: failures are felt very
    strongly
Best Practices: Less is more

 ●   Often, fewer vcpus per VM are better
      ○ Allocate only what's needed for the workload
      ○ If unknown, begin with 1 VCPU and work up as needed
 ●   Always account for the CPU needs of the hypervisor
 ●   Never allocate more VCPUs for a VM than the
      number of available PCPUs (even if you “can”)
 ●   Great video by George Dunlap for more guidance:
      http://www.citrix.com/tv/#videos/2930
Best Practices: Workload grouping

 ●   Group VMs logically based upon expected (or observed)
     workload and behavior
      ○ Workloads which are randomly 'bursty' from an IO or CPU
        standpoint
      ○ Regularly scheduled workloads demanding high CPU when
        running:
      ○ interleave schedule if possible so each VM has the maximal
        share of resources
Best Practices: Workload separation

 ●   Separate VMs logically based upon expected (or observed)
     workload and behavior
       ○ Workloads which always require the majority of what the
         hardware can provide for performance (like an I/O
         bottleneck on the network when the pipe is only so wide)
      ○ Workloads like databases that can be heavy on memory
        utilization and bandwidth
Best Practices: Resource allocation

 ●   If needed, guarantee resources for a workload
       ○ grant higher scheduling priority
       ○ VCPU pinning to physical cores
        ○ Balloon VM in anticipation of memory usage, then return
          memory to the pool
 ●   WARNING: use with caution
       ○ possible to reduce performance for adjacent workloads on
         the same host
       ○ possible to lock a VM to a host (migration becomes
         problematic)
Best Practices: Compartmentalize Risk

 ●   Segregate VMs operating in distinct security domains
      ○ a good practice no matter what the context
      ○ certainly your user-facing services don't need access to the
        same network that allows switch/router management.
        Applies similarly to VMs
 ●   Especially important if required by compliance/regulations
      ○ Example: PCI-DSS (Payment Card Industry Data Security
        Standard)
         ■ https://www.pcisecuritystandards.
            org/documents/Virtualization_InfoSupp_v2.pdf
      ○ Example: DOD regulations regarding data classification and
        separation of networks
         ■   Crossing the streams causes total protonic reversal
Best Practices: Monitor your environment!

 ●   Log aggregation AND analysis:
       ○ if you don't know how to identify when a problem is
          occurring, how can you circumvent/fix/prevent it?
 ●   Forecasting for the future
 ●   Virtual environments are dynamic enough that problems can
     sneak up on you
 ●   If you have a head start on hardware failure, you can migrate
     VMs from a failing host to a hot spare to enable
     repair/replacement (without downtime)
 ●   Don't forget to monitor hardware temperature. HVAC failures
     are not much fun.
        ○ The virtual fallout can be enormous: high power density -->
           high heat takes out high-visibility, high-value resources by the dozen
Best Practices: When not to virtualize

 ●   Knowing when to prefer real hardware over virtualization is as
     important as being able to recognize when virtualization will
     benefit
      ○ Virtualization is not a panacea
 ●   Problematic workloads
      ○ Highly parallel computations requiring many CPUs acting in
        concert
      ○ Heavy IO demands of network or storage
      ○ Tasks which require exceptionally stable clocks
        (nanosecond granularity)
 ●   But: technology is improving at breakneck speed
       ○ 10 Gb Ethernet at line rate is possible for a virtual machine
      ○ CPU improvements have improved or eliminated many
        bottlenecks (clock stability is much better, for example)
Best Practices: Resource Modeling

 ●   Build a simple model for your environment
      ○ Try to do so before virtualizing a service and afterward, then
         compare
      ○ Helps with cost management and expenditure justification
      ○ Measures success or failure of virtualization to solve a
         problem
 ●   E.g. $x/GB of RAM + $x/VCPU + $x/hr labor + $licensing/VM + VM
     importance factor
 ●   Calculate worst case perspective for model and then graph
     current state relative to that
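A toy version of such a model, with purely illustrative rates (every number here is an assumption):

def vm_cost(ram_gb, vcpus, labor_hr, licensing=0.0, importance=1.0,
            rate_ram=5.0, rate_vcpu=10.0, rate_labor=50.0):
    # $x/GB of RAM + $x/VCPU + $x/hr labor + licensing, weighted by VM importance
    return importance * (ram_gb * rate_ram + vcpus * rate_vcpu
                         + labor_hr * rate_labor + licensing)

print vm_cost(ram_gb=8, vcpus=2, labor_hr=0.5)   # 8*5 + 2*10 + 0.5*50 = 85.0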
OSCON: From the
Datacenter to the Cloud -
Featuring Xen and XCP
      XCP in the Enterprise
           Josh West
Table of Contents
● Introduction: XCP in the Enterprise

● Storage in Xen Cloud Platform

● Advanced Networking in Xen Cloud Platform

● Statistics & Monitoring in XCP

● Enterprise Cloud Orchestration
Introduction: XCP in the Enterprise
● The Xen hypervisor has already been proven a solid
  choice as a platform for IT systems:

   ● Amazon                    ● Oracle VM
   ● Rackspace                 ● dom0 Mainline
● No need to run Xen on your distribution flavor of choice and
  build it from the ground up just to host IT business
  systems.
● Many choices (VMware, RHEV, Oracle VM, Citrix
  XenServer).
So... Why use XCP?
● Excellent blend of enterprise quality code and next
  generation technologies.

● Developed by Citrix/XenSource.

● Enhanced by the open source community.

● Compatible with Citrix XenCenter for management.

● Rapid deployment:
  ○ PXEBOOT
  ○ Boot from SAN
XCP and Pools
● Pools allow you to combine multiple XCP hosts into one
   managed cluster.

   ○ Live migration.

   ○ Single API connection & management connection.

   ○ Single configuration.

   ○ Shared storage.

● Single master, multiple slaves.
XCP or Citrix XenServer?
Citrix XenServer:           Xen Cloud Platform:

● Professional Support      ● Community Support

● High Availability         ● DIY High Availability

● Advanced Storage          ● Standard Storage

● Cloudstack & Openstack    ● Cloudstack & Openstack

● Benefits from XCP         ● Benefits from Citrix
  Community contributions     developers & codebase
DIY? Roll Your Own
● Still not convinced? See Project Kronos.

● Benefits of XAPI in a *.deb Package.

● Run on Debian or Ubuntu dom0 with Xen Hypervisor.

● http://wiki.xen.org/wiki/Project_Kronos
Enough Promo!

Let's see the cool stuff!
Storage in XCP
Storage in XCP
● Supports major storage technologies & protocols

● Local storage, for standalone & scratch VM's.

● Centralized storage, for live migration & scaling:

   ○ LVMoISCSI and LVMoFC and LVMoAOE

      ■ Software iSCSI Initiator
      ■ HBA (Qlogic & Emulex)
      ■ Coraid has drivers for AOE

   ○ VHD on NFS
Under the Hood: VHD
● VDI's are stored in Virtual Hard Disk (VHD) format.*

● From Microsoft! (Connectix), under Microsoft Open
   Specification Promise.

● Types of VHDs:
   ○ Fixed hard disk image (Appliances).
   ○ Dynamic hard disk image (XCP).
   ○ Differencing hard disk image (Snapshots, Cloning).

● Tools from Microsoft & Virtualbox for
   working/converting.
Under the Hood: LVM on XCP
● LVM is used on all block storage in XCP.

● XCP organizes with a simple mapping:
   ○ Storage Repository (SR) = LVM Volume Group

   ○ Virtual Disk Image (VDI) = LVM Logical Volume

● Locking is not handled like cLVM.

● XCP Pool Master toggles access w/ lvchange -ay/an.
Under the Hood: LVM on XCP
● XCP uses VHD dynamic disk images on top of LVM.

● So we have VHDoLVMo(ISCSI|FC|AOE).

● And then all our VM's will probably use LVM:

● LVMoVHDoLVMo(ISCSI|FC|AOE). :-)

● VHD differencing disk images for VM/VDI snapshots,
   not LVM snapshots.
   ○ Portable between Storage Repository types.
   ○ No LVM snapshot performance issues.
Under the Hood: NFS on XCP
● NFSv3 w/ TCP is used for NFS based SR's.

● Mounted at /var/run/sr-mount/<SR UUID>/

● Mounted with 'sync' flag; no 'async' delayed operation
   as this would be unwise and unsafe for VM's.

● NFS lets you get closer to VHD's - they're stored as
   files.

● Perhaps could integrate better with your backup
   solution.
Under the Hood: NFS on XCP
● Choose NFS platform wisely for proper performance.

● Just a Linux box w/ NFS export not enough: ~32 MB/s.

● Need cache system on your NAS (e.g. NetApp PAM).

● DIY? Look into using SSD's or BBU NVRAM w/
   Facebook's Flashcache or upcoming Bcache.

● Gluster has NFS server and Gluster is tunable.
XCP Storage: Which to Choose?
● All good choices. Depends on your shop & experience.

● If you have an enterprise NAS/SAN, use it!
   ○ Caching for performance.
   ○ Enterprise support contracts.
   ○ Alerting and monitoring.

● No budget? No space left? No problem. You can build
   your own SAN for use with XCP.

● Test labs, recycling equipment, PoC, and small
   production deployments.
DIY H.A./F.T. SAN for XCP
● Easy to build a storage system (that actually performs
   well) for use with XCP:

   ○ Highly Available / Fault Tolerant.
   ○ Manageable / Not Too Complicated.

● XCP lets you connect to multiple SR's.

● If you outgrow your DIY SAN, or find it going from a test
   lab purpose to hosting production critical VM's, XCP will
   let you move VM's between SR's with ease.

● Just attach your expensive shiny SAN/NAS and move.
DIY H.A./F.T. SAN: What We'll Build
● Lightweight Linux-based, clustered SAN for XCP SR.

● Active/Standby with automatic failover & takeover.

● Synchronous storage replication between storage
   nodes.

● iSCSI presentation to XCP hosts.

● Built with two open source software projects:
   ○ DRBD
   ○ Pacemaker
TripAdvisor XCP + XSG Lab




● Built at TripAdvisor, with 19.33TB storage.

● Two Dell PowerEdge 1950's + Cisco 6513 Catalyst.
DIY H.A./F.T. SAN: Overview

[Diagram: XCP Storage Node 1 and Node 2 each connect eth0/eth1 to stacked switches carrying iSCSI traffic; eth2/eth3 are crossover links between the two nodes carrying Corosync/Pacemaker and DRBD replication]
Step 1: Hardware RAID
● Configure your hardware RAID controller.

● Use features such as Adaptive Read-Ahead and Write-
   Back, to enable caching.

● Battery backed up cache is important.

● Recommended: RAID 1, 5, or 6 for internal disks.

● Recommended: RAID 10, 50, or 60 for DAS shelves.
Step 2: ILO / DRAC / LOM
● Configure your dedicated ILO card.

● Using Dell Remote Access Controller (DRAC) in our
   example lab.

● Enable IPMI support. Needed for STONITH.

● Set & remember the credentials. Can test with ipmitool
   from external host.

● Dedicated NIC recommended!
Step 3: Install OS
● Install CentOS x86_64. Tested this with 5.8 & 6.0.

● Partition and configure accordingly.

● Leave space for attached storage.

● Partition the dedicated storage as LVM Physical
   Volume.

● Use GPT partitioning (e.g. with parted/gparted) if >2TB.
Step 4: Configure Networking
● Bond eth0 + eth1 front end interfaces w/ LACP (bond0).

● Crossover eth2 to eth2, eth3 to eth3 backend interfaces.
   ○ eth2: Dedicated for corosync + pacemaker.
   ○ eth3: Dedicated for DRBD replication.
                                  Storage Node 1     Storage Node 2
           Management   bond0      192.168.0.10       192.168.0.11
 Corosync + Pacemaker   eth2        10.168.0.10        10.168.0.11
                 DRBD   eth3        10.168.1.10        10.168.1.11
             *Floating iSCSI IP              192.168.0.20
Step 4: Configure Networking

[Diagram: both storage nodes (192.168.0.10 and 192.168.0.11) uplink eth0/eth1 to the stacked switches and share the floating iSCSI IP 192.168.0.20; the eth2 crossover carries 10.168.0.10 & 10.168.0.11 (Corosync/Pacemaker) and the eth3 crossover carries 10.168.1.10 & 10.168.1.11 (DRBD)]
Step 5: Configure LVM
● Setup dedicated storage partition:
       $ pvcreate /dev/sdb1
       $ vgcreate vg-xcp /dev/sdb1
       $ lvcreate -l 100%FREE -n lv-xcp vg-xcp


● Adjust /etc/lvm/lvm.conf filters and run vgscan:
     filter = [ "a|sd.*|", "r|.*|" ]


● XCP will put LVM on top of iSCSI LUN's (LVMoISCSI).

● SAN should not scan local DRBD resource content.
Step 6: Install DRBD
● Latest stable... Constantly in motion.
   $ yum install gcc kernel-devel rpm-build flex


● Fetch from http://oss.linbit.com/drbd/ (8.4.1)
   $ mkdir -p ~/redhat/{RPMS,SRPMS,SPECS,SOURCE,BUILD}
   $ tar -xvzf drbd-8.4.1.tar.gz
   $ cd drbd-8.4.1
   $ make rpm km-rpm
   $ yum install /usr/src/redhat/RPMS/x86_64/drbd*.rpm
   or
   $ yum install ~/redhat/RPMS/x86_64/drbd*.rpm
Step 7: Configure DRBD
● Four major sections to adjust:
   ○   syncer { ... }
   ○   net { ... }
   ○   disk { ... }
   ○   handlers { ... }

● See DRBD documentation for full details.

● http://www.drbd.org/docs/about
Step 7: global_common.conf
syncer {
    rate 1G;
    verify-alg "crc32c";
    al-extents 1087;
}

disk {
    on-io-error detach;
    fencing resource-only;
}

net {
    sndbuf-size 0;
    max-buffers 8000;
    max-epoch-size 8000;
    unplug-watermark 8000;
}

handlers {
    ... [ snip ] ...
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    ... [ snip ] ...
}
Step 7: d_xcp.res
resource d_xcp {
    net {
        allow-two-primaries;
    }
    on xsgnode1 {
        device     /dev/drbd0;
        disk       /dev/vg-xcp/lv-xcp;
        address    10.168.1.10:7000;
        meta-disk internal;
    }
    on xsgnode2 {
        device     /dev/drbd0;
        disk       /dev/vg-xcp/lv-xcp;
        address    10.168.1.11:7000;
        meta-disk internal;
    }
}
Review
● Two servers with equal storage space.

● First two NIC's bonded to network.

● Third NIC crossover, dedicated for
   corosync/pacemaker.

● Fourth NIC crossover, dedicated for DRBD.

● We've setup LVM and then DRBD on top.

● Now time to cluster and present to XCP.
Step 8: Corosync + Pacemaker
● Install Yum repo's from EPEL + Clusterlabs
   ○   EPEL is needed on CentOS/RHEL 5 and 6
   ○   Clusterlabs repo only needed on CentOS/RHEL 5
   ○   Red Hat now includes pacemaker :-)
   ○   http://fedoraproject.org/wiki/EPEL

● Installation & Configuration:
   ○ http://clusterlabs.org/wiki/Install
   ○ $ yum install pacemaker.x86_64 heartbeat.x86_64   corosync.x86_64
       iscsi-initiator-utils
   ○ http://clusterlabs.org/wiki/Initial_Configuration
Pacemaker Review
● Nodes                  ● Cluster Information Base

● Resource Agents        ● Master/Slave Sets (MS)

● Resources/Primitives   ● Constraints: Location

● Resource Groups        ● Constraints: Colocation

● CRM Shell              ● STONITH
Pacemaker CRM Shell
What Should Pacemaker Do?
● Manage floating IP address 192.168.0.20 - iSCSI target.

● Configure an iSCSI Target Daemon.

● Present an iSCSI LUN from iSCSI Target Daemon.

● Ensure DRBD is running, with Primary/Secondary.

● Ensure DRBD Primary is colocated with floating IP,
   iSCSI Target Daemon, and iSCSI LUN.

● Ordering: DRBD, iSCSI Target, iSCSI LUN, floating IP.
Step 9: Pacemaker Configuration
[Diagram: resource ordering. Start, bottom to top: DRBD Primary/Secondary, block iSCSI port, iSCSI Target, iSCSI LUN, floating IP, unblock iSCSI port; stop proceeds in the reverse direction]
Step 9: Pacemaker Configuration
property $id="cib-bootstrap-options" 
        dc-version="1.0.11-..." 
        cluster-infrastructure="openais" 
        expected-quorum-votes="2" 
        no-quorum-policy="ignore" 
        default-resource-stickiness="100" 
        stonith-enabled="false" 
        maintenance-mode="false" 
        last-lrm-refresh="1311719446" 

rsc_defaults $id="rsc-options" 
        resource-stickiness="100"
Step 9: Pacemaker Configuration
primitive res_ip_float ocf:heartbeat:IPaddr2 
    params ip="192.168.0.20" cidr_netmask="20" 
    op monitor interval="10s"

primitive res_portblock_xcp_block ocf:heartbeat:portblock 
    params action="block" portno="3260" ip="192.168.0.20" protocol="tcp"
primitive res_portblock_xcp_unblock ocf:heartbeat:portblock 
    params action="unblock" portno="3260" ip="192.168.0.20" protocol="tcp"

primitive res_drbd_xcp ocf:linbit:drbd 
    params drbd_resource="d_xcp"

ms ms_drbd_xcp res_drbd_xcp 
    meta master-max="1" master-node-max="1" 
        clone-max="2" clone-node-max="1" notify="true"
Step 9: Pacemaker Configuration
primitive res_target_xcp ocf:tripadvisor:iSCSITarget 
    params implementation="tgt" tid="1" 
        iqn="iqn.2011-12.com.example:storage.example.xsg" 
        incoming_username="target_xcp" incoming_password="target_xcp" 
        additional_parameters="MaxRecvDataSegmentLength=131072
            MaxXmitDataSegmentLength=131072" 
        op monitor interval="10s"

primitive res_lun_xcp_lun1 ocf:heartbeat:iSCSILogicalUnit 
    params target_iqn="iqn.2011-12.com.example:storage.example.xsg" 
        lun="1" 
        path="/dev/drbd/by-res/d_xcp" scsi_id="xcp_1" 
        op monitor interval="10s"
Step 9: Pacemaker Configuration
group rg_xcp 
    res_portblock_xcp_block 
    res_target_xcp 
    res_lun_xcp_lun1 
    res_ip_float 
    res_portblock_xcp_unblock

colocation c_xcp_on_drbd inf: rg_xcp ms_drbd_xcp:Master

order o_drbd_before_xcp inf: ms_drbd_xcp:promote rg_xcp:start
Step 9: Pacemaker Configuration
[Diagram: resource ordering. Start, bottom to top: DRBD Primary/Secondary, block iSCSI port, iSCSI Target, iSCSI LUN, floating IP, unblock iSCSI port; stop proceeds in the reverse direction]
Step 10: STONITH Configuration
primitive stonith-xsgnode1 stonith:external/ipmi 
    params hostname="xsgnode1.example.com" ipaddr="192.168.0.30" 
        userid="root" passwd="shootme"
primitive stonith-xsgnode2 stonith:external/ipmi 
    params hostname="xsgnode2.example.com" ipaddr="192.168.0.31" 
        userid="root" passwd="shootme"

location loc_stonith_xsgnode1 stonith-xsgnode1 -inf: xsgnode1.example.com
location loc_stonith_xsgnode2 stonith-xsgnode2 -inf: xsgnode2.example.com

property stonith-enabled="true"
Step 11: Review Pacemaker
● Make sure resources are OK: crm status

● Make sure floating IP configured: ip addr

● Make sure DRBD primary/secondary: drbd-overview

● Make sure iSCSI LUN's presented: tgt-admin -s
Step 12: Connect SR in XCP!
XCP and High Availability
● We've just shown how to build a highly-available / fault-
   tolerant SAN, using DRBD and Pacemaker.

● EXT4oVHDoLVMoISCSIoDRBDoLVM :-)

● We did this on CentOS 5.x (and 6.x).

● XCP is based on CentOS 5.x.

● XCP can use Pacemaker for H.A.!
XCP Storage Future
● XCP 1.6 will support Storage XenMotion
   ○ Migration of VM's and their storage, live!
   ○ Can evacuate a host with local SR attached VM's.

● Cluster Filesystems:
   ○   Citrix is looking into Gluster and Ceph.
   ○   Gluster client builds and works on XCP 1.5b.
   ○   Relatively easy for us to write a Gluster SR driver.
   ○   Ceph integration is a bit trickier.
Advanced Networking
with Xen Cloud Platform
Advanced Networking with XCP
● Bonding and VLAN's

● OpenvSwitch and OpenFlow

● Distributed Virtual Switch Controller

● GRE Tunnels & Private VM Networks
Advanced Networking with XCP
NIC Bonding review
● Means of combining multiple NIC's together for:
   ○ Failover
   ○ Load Balancing
   ○ More Bandwidth

● Available since Linux Kernel 2.0.x. Stable and proven.

● Many modes of bonding NIC's:
   ○ Active/Standby.
   ○ Active/Active.
NIC Bonding Modes
● mode = 1: active-backup    <--

● mode = 2: balance-xor

● mode = 3: broadcast

● mode = 4: 802.3ad (LACP)   <--

● mode = 5: balance-tlb

● mode = 6: balance-alb

● mode = 7: balance-slb      <--
XCP Bonding: Source Level
Balancing
● XCP + XenServer introduce optimized bonding for
   virtualization.

● mode = 7, aka balance-slb.

● Derived from balance-alb.

● Spread VIF's across PIF's.

● Provides load balancing and failover.

● Active/Active.
XCP Bonding: Source Level
Balancing
● New VIF source MAC's assigned a PIF w/ lowest util.

● Rebalances VIF's/MAC's across PIF's every 10 sec.
   ○ No GARP during rebalance necessary.
   ○ Switch will see new traffic and update tables.
   ○ Still need to connect PIF's to same/stacked switch.

● Up/Down delay of 31s/200ms.

● Failover on link down handled with GARP for fast
   updates.
XCP Bonding: Source Level
Balancing
● Limitation: 16 unbonded NIC's or 8 bonded.

● Limitation: Only 2 NIC's per bond in XenCenter.

● Can override with xe command line:

●   xe bond-create network-uuid=... pif-uuids=...,...,...


● Can override bonding mode if desired:

●   xe pif-param-set uuid=<bond pif uuid> 
        other-config:bond-mode=<active-backup, 802.3ad>
XCP VLAN's
● PIF but with a tag.

● Can apply to Ethernet NIC's and Bonds.

●   xe vlan-create network-uuid=... pif-uuid=... tag=...
Traditional Advanced Networking
● Manual configuration process.
   ○ Bonding? /etc/modprobe.conf and ifenslave

   ○ Bridges? brctl from bridge-utils

   ○ Vlans? vconfig

   ○ GRE? IPSEC? QoS/Rate Limiting?

● Distribution specific configuration files.
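For contrast, a rough sketch of the manual approach on a plain Linux host (file locations and flags vary by distribution):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=active-backup miimon=100

    ifconfig bond0 up                # bring the bond up...
    ifenslave bond0 eth0 eth1        # ...and enslave the physical NICs
    brctl addbr xenbr0               # create a bridge...
    brctl addif xenbr0 bond0         # ...and attach the bond to it
    vconfig add bond0 120            # tag VLAN 120 on top of the bond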
Virtualization and Advanced
Networking
● Virtualization brought network switching into the server
   itself.

● Systems & services no longer fixed.

● Nomadic... VM's move around w/o Network Admin
   knowing.

● SPAN ports for IDS? Netflow information for a specific
   VM? QoS and rate limiting? How is this handled?
OpenvSwitch
● Software switch like Cisco Nexus 1000V.

● Distribution agnostic. Plugs right into Linux kernel.

● Reuses existing Linux kernel network subsystems.

● Compatible with traditional userspace tools.

● Free and Open Source - hence the "open"... ;-)

● http://openvswitch.org/
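A few illustrative commands on an XCP host running the vswitch backend (bridge names such as xenbr0 are typical, not guaranteed):

    ovs-vsctl show                   # bridges, ports, and interfaces
    ovs-vsctl list-br                # just the bridge names
    ovs-ofctl dump-flows xenbr0      # OpenFlow flow table of one bridge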
Why use OpenvSwitch?
● Why use it in general?

● Why does XCP/XenServer use OpenvSwitch?
OpenvSwitch Centralized
Management
● Software Defined Networking. Keep data plane,
   centralize control plane.

● Distributed Virtual Switch Controller (DVSC):
   ○ OpenFlow
   ○ OVSDB Management Protocol

● Ensures sFLOW, QoS, SPAN, Security policies follow
   VM's as they move & migrate between XCP hosts.

● Citrix XenServer DVSC works with XCP.
Cross Server Private Networks
● Traditional Approach:
   ○ Use dedicated NIC's with separate switches.

   ○ Use a private dedicated non-routed VLAN.

● Management and scalability issues.

● Works for small deployments.
Cross Server Private Networks
● New Approach: GRE Tunnels

● GRE Tunnel between each XCP host.

● Build/Teardown as needed. Don't need to waste b/w.

● Administration nightmare?
   ○ Not if you had some sort of... controller... to manage
     it for you...?

   ○ Oh wait! We have one of those!
XCP Tunnel PIF
● Special PIF called "tunnel" in XCP.

● Commands: xe tunnel-*

● Placeholder for OpenvSwitch & DVSC to work with.
XCP Tunnel PIF
1. Create new network in XCP:
    xe network-create name-label="Cross Server Private Network"


2. Create tunnel PIF on each XCP host for use w/ this net:
    xe tunnel-create network-uuid=<uuid> pif-uuid=<uuid>


3. Add VIF's of VM's to this private network.

DVSC will handle the setup/teardown of GRE tunnels
between XCP hosts automatically as needed.
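For step 3, a hypothetical example of attaching a VM's interface to the private network:

    xe vif-create vm-uuid=<vm uuid> network-uuid=<private network uuid> device=1
    xe vif-plug uuid=<new vif uuid>      # hot-attach if the VM is already running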
Statistics and Monitoring
with Xen Cloud Platform
Statistics, Monitoring, Analysis
● Citrix XenCenter

● Existing Solutions (Hyperic, Nagios, Cacti, Observium)

● Programmable Means:
   ○ API

   ○ SSH

   ○ SNMP
Citrix XenCenter
● Built-in graphical presentation of all
  XenServer/XCP metrics.

● Live view of current activity.

● Memory allocation per host, per pool.

● Excellent way to get solid overview of XCP
  deployment.

● Windows-only app: run it in VirtualBox/Parallels/VMware if you're not on Windows
XCP and Nagios
● XCP == CentOS 5.x (+ Xen + Kernel + XAPI)

● Install NRPE on dom0.

● Monitor just like any other Linux box.
XCP and SNMP
● net-snmp installed on XCP.

● Simple steps to enable SNMP:
   a. Open UDP/161 in /etc/sysconfig/iptables

   b. Adjust /etc/snmp/snmpd.conf permissions

   c. chkconfig snmpd on && service snmpd start

● Standard Linux host metrics.
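A minimal sketch of steps (a) and (b); the community string and source network are placeholders, and the chain name follows the stock CentOS 5 layout, so adjust it to whatever chains your dom0's /etc/sysconfig/iptables actually defines:

    # /etc/sysconfig/iptables (add before the final REJECT rule)
    -A RH-Firewall-1-INPUT -m udp -p udp --dport 161 -j ACCEPT

    # /etc/snmp/snmpd.conf (read-only access from the management network)
    rocommunity public 192.168.0.0/24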
Monitoring XCP with the XenAPI
● Linux SNMP and Nagios NRPE only give basics.

● SR usage? Pool utilization?

● VM metrics? VIF/VBD rates?

● All of this information is available.
Monitoring XCP with the XenAPI
XenAPI and SR Metrics
>>> import XenAPI
>>> from pprint import pprint
>>> session = XenAPI.Session('http://127.0.0.1')
>>> session.login_with_password('root', 'secret')
>>> session.xenapi.SR.get_all()
['OpaqueRef:18c80a5d-cef6-c2e8-59d1-a03cfbed97e5',
'OpaqueRef:94f13ac8-6d8b-9bc0-2c71-fd29c9636f4e', ...]
>>>
>>> pprint(session.xenapi.SR.get_record(
...     'OpaqueRef:18c80a5d-cef6-c2e8-59d1-a03cfbed97e5'))
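Continuing that session, utilization can be derived from fields of the SR record (physical_size and physical_utilisation are standard XenAPI SR fields; real code should guard against SRs that report a zero size):

>>> ref = session.xenapi.SR.get_all()[0]
>>> rec = session.xenapi.SR.get_record(ref)
>>> int(rec['physical_utilisation']) * 100.0 / int(rec['physical_size'])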
XenAPI and Events
>>>   import XenAPI
>>>   from pprint import pprint
>>>   session = XenAPI.Session('http://127.0.0.1')
>>>   session.login_with_password('root', 'secret')
>>>   session.xenapi.event.register(["*"])
''
>>>   session.xenapi.event.next()


See examples on http://community.citrix.com/
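Continuing from the registration above, a minimal polling loop might look like this (each event record typically carries 'class', 'operation', and 'ref' fields):

>>> while True:
...     for ev in session.xenapi.event.next():   # blocks until new events arrive
...         pprint((ev['class'], ev['operation'], ev['ref']))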
Enterprise Cloud
Orchestration and XCP
Enterprise Cloud Orchestration
● Hypervisor Agnostic* approach to orchestrating your
   cloud(s).

● Suited for solving multi-tenancy requirements.

● Orchestrate vs Manage?

● I'm not a cloud provider. Why do I care?
   ○ Traditional approach.

   ○ Developer delegation
IaaS Orchestration & XCP




 OpenStack   http://www.openstack.org



 CloudStack http://www.cloudstack.org
OpenStack Overview
● Rackspace & NASA w/ other major contributors:
   ○   Intel & AMD
   ○   Red Hat, Canonical, SUSE
   ○   Dell, HP, IBM
   ○   Yahoo! & Cisco

● Hypervisor Support:
   ○   KVM & QEMU
   ○   LXC
   ○   Xen (via libvirt)
   ○   XenServer, Xen Cloud Platform, XenAPI (Kronos)
OpenStack Overview
● Language: Python

● Packages for Ubuntu and RHEL/CentOS (and more)

● MySQL and PostgreSQL (yay!) Database Support

● Larger project than CloudStack, encompassing many
   more functional areas:

   ○ Storage (swift, nova volume --> cinder)
   ○ Networking (nova network, quantum)
   ○ Load Balancing (Atlas)
OpenStack and XCP
● http://wiki.openstack.org/XenServer/GettingStarted

● http://wiki.openstack.org/XenServer/XenXCPAndXenServer

● Optimize for XenDesktop on Installation (EXT vs LVM)

● Plugins for XCP host: /etc/xapi.d/plugins

● Different way of thinking -- the Xen way

   ○ Run OpenStack services on host/dom0? No!
   ○ Each XCP host has a dedicated nova VM.
   ○ OpenStack VM will control XCP host via XenAPI
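That nova VM's compute service then points at the host's XenAPI; an illustrative nova.conf fragment from this era (flag names may differ between OpenStack releases):

    connection_type=xenapi
    xenapi_connection_url=https://<address of the XCP host>
    xenapi_connection_username=root
    xenapi_connection_password=<dom0 root password>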
OpenStack and XCP Pools
● XCP Pools / OpenStack Host Aggregates
   ○ http://wiki.openstack.org/host-aggregates
   ○ Informs OpenStack that the XCP hosts have a
     collection of shared resources.

   ○ Works but incomplete -- e.g. if pool master changes?

   ○ Recommended that you don't pool your XCP hosts
     when orchestrating via OpenStack, for now...

● Traditional vs Cloud Workloads
OpenStack and XCP Storage
● Optimize for XenDesktop on XCP installation.
   ○ Local SR uses EXT instead of LVM

● Plugins need raw access to VHD files on host/dom0.

● Can use NFS for instance image storage:
   ○ Switch default SR to an NFS SR.
   ○ nova.conf: sr_matching_filter="default-sr:true"

● OpenStack Cinder will use Storage XenMotion
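A sketch of pointing the pool's default SR at NFS so it matches that filter (server and path are placeholders):

    xe sr-create name-label="NFS for instances" type=nfs shared=true \
        device-config:server=nfs.example.com device-config:serverpath=/export/xcp
    xe pool-param-set uuid=<pool uuid> default-SR=<uuid printed by sr-create>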
CloudStack Overview
● VMOps, aka Cloud.com, acquired by Citrix in July 2011

● Hypervisor Support:
   ○   Citrix XenServer (thus XCP)
   ○   KVM
   ○   VMware vSphere
   ○   Oracle VM

● Multiple hypervisors in single deployment

● Languages: Java and C
CloudStack and XCP
● CloudStack doesn't provide its own storage service -- no nova-volume equivalent

● CloudStack uses existing SAN/NAS appliances:
   ○ Dell Equalogic (iSCSI)
   ○ NetApp (NFS and iSCSI)

● Primary and Secondary Storage (tiering)

● Supports use of additional XenServer SR's (e.g. FC)
   instead of NFS/iSCSI.
{Open,Cloud}Stack -- Which?
● Depends on your team, experience, and intentions.

● CloudStack:
   ○ Want a cloud *now*?
   ○ Very mature and full featured.
   ○ Integrates well w/ both traditional & cloud workloads.

● OpenStack:
   ○ Have some time?
   ○ Easily extendable to do new things (Python).
   ○ XS/XCP support needs work, but it's getting there.
Questions?
Unit 4
  The Future of Xen
Update from the Xen.org
         team
Outline
Xen.org development: Who / What?
Xen 4.2
Microsoft, UEFI secure boot, and Win8
Xen 4.3
Other activities
Xen.org development
Who develops Xen?
 7 full-time developers from Citrix
 Full-time devs from SUSE, Oracle
 Frequent contributions from Intel, AMD
What do we develop?
 Xen hypervisor, toolstack
 Linux
 qemu
Xen 4.2 features
pvops dom0 support
New toolstack: libxl/xl
cpupools
New scheduler: credit2
memory sharing, page swapping
nested virtualization
Live fail-over (Remus)
libxl/xl
The motivation:
  xend: Daemon, python
  xapi: duplicated low-level code
The solution
  libxl: lightweight library for basic tasks
  xl: lightweight, xm-compatible replacement
cpupools
The motivation
   Service model: rent CPUs, run as many VMs as you want
   Allow customers to use "weight"
The solution: cpupools
   pools can be created at run-time
   cpus added or removed from pools
   domains assigned to pools
   each pool has a separate scheduler
cpupools, cont'd
Uses
  New service model
  Different schedulers
  Stronger isolation
  NUMA-split
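A rough sketch with the xl toolstack (pool, domain, and CPU numbers are examples; check xl's cpupool documentation for the exact config syntax on your Xen version):

    # testpool.cfg
    name  = "testpool"
    sched = "credit2"
    cpus  = ["3"]

    xl cpupool-cpu-remove Pool-0 3         # free CPU 3 from the default pool first
    xl cpupool-create testpool.cfg         # create the new pool on that CPU
    xl cpupool-migrate mydomain testpool   # move a running domain into it
    xl cpupool-list                        # confirm pools and their schedulers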
UEFI secure boot
Microsoft, UEFI, and Windows 8 logo
What that means for Linux
Fedora's solution
Ubuntu's solution
What it means for Xen
Xen 4.3
Performance
NUMA issues
*BSD dom0 support
Memory sharing / hypervisor swap
ARM servers
blktap3
Other areas of focus
Distro integration
Doc days
Questions?
Closing Remarks
Useful Resources and References

Community:
● Xen Mailing List: http://www.xen.org/community/
● Xen Wiki: http://wiki.xen.org
● Xen Blog: http://blog.xen.org


Discussion:
● http://www.xen.org/community/xenpapers.html
● Abstracts, slides, and videos from Xen Summits
● http://pcisecuritystandards.org/organization_info/special_interest_groups.php
Image Credits

● http://en.wikipedia.org/wiki/File:Tux.png
● http://en.wikipedia.org/wiki/File:Intertec_Superbrain.jpg
● http://wiki.xen.org/wiki/Xen_Overview
Thank You!
Enjoy the rest of OSCON 2012!
XCP Architecture
Acknowledgments

This work is based upon many materials from the 2011 Xen Day Boston slides, by
Todd Deshane, Steve Maresca, Josh West, and Patrick F. Wilbur.

Portions of this work are derived from the 2010 Xen Training / Tutorial, by Todd
Deshane and Patrick F. Wilbur, which is derived from the 2009 Xen Training /
Tutorial as updated by Zach Shepherd and Jeanna Matthews from the original
version written by Zach Shepherd and Wenjin Hu, originally derived from
materials written by Todd Deshane and Patrick F. Wilbur. A mouthful!

Portions of this work are derived from Mike McClurg's The Xen Cloud Platform
slides from the July 2012 Virtual Build a Cloud Day.

Portions are based upon Jeremy Fitzhardinge's Pieces of Xen slides.

OSCON: Introducing Xen and XCP

  • 1. OSCON: From the Datacenter to the Cloud Featuring Xen and XCP Steve Maresca Josh West Zentific LLC One.com George Dunlap Patrick F. Wilbur Xen.org PFW Research LLC
  • 2. Schedule ● Unit 1: 09:00 - 09:45 Introducing Xen and XCP ● Unit 2: 09:50 - 10:45 Devops ● Break: 10:45 - 11:00 ● Unit 3: 11:00 - 11:55 XCP in the Enterprise ● Unit 4: 12:00 - 12:30 Future of Xen
  • 4. Unit 1 Overview ● Introduction & Xen vs. Xen Cloud Platform ● Xen/XCP Installation & Configuration ● XCP Concepts: pools, hosts, storage, networks, VMs
  • 5. Introduction & Xen vs. Xen Cloud Platform Xen, XCP, Project Kronos
  • 6. Types of Virtualization ● Emulation Fully-emulate the underlying hardware architecture ● Full virtualization Simulate the base hardware architecture ● Paravirtualization Abstract the base architecture ● OS-level virtualization Shared kernel (and architecture), separate user spaces
  • 7. Types of Virtualization ● Emulation Fully-emulate the underlying hardware architecture ● Full virtualization - Xen does this! Simulate the base hardware architecture ● Paravirtualization - Xen does this! Abstract the base architecture ● OS-level virtualization Shared kernel (and architecture), separate user spaces
  • 8. What is Xen? ● Xen is a virtualization system supporting both paravirtualization and hardware-assisted full virtualization ● Initially created by University of Cambridge Computer Laboratory ● Open source (licensed under GPL)
  • 9. What is Xen Cloud Platform (XCP)? ● Xen Cloud Platform (XCP) is a turnkey virtualization solution that provides out-of-the-box virtualization/cloud computing ● XCP includes: ○ Open-source Xen hypervisor ○ Enterprise-level XenAPI (XAPI) mgmt. tool stack ○ Support for Open vSwitch (open-source, standards- compliant virtual switch)
  • 10. What is Project Kronos? ● Port of XCP's XenAPI toolstack to Deb & Ubuntu dom0 ● Gives users the ability to install Debian or Ubuntu, then apt-get install xcp-xapi ● Provides Xen users with the option of using the same API and toolstack that XCP and XenServer provide ● Early adopters can try new changes to XenAPI before they get released in mainstream XCP & XenServer versions
  • 11. Case for Virtualization ● Enterprise: ○ Rapid provisioning, recovery ○ Portability across pools of resources ○ Reduced phy resource usage = reduced costs ● Small business: ○ Rapid provisioning, recovery ○ Virt resources replace lack of phy res. to begin with!
  • 12. Who Uses Xen? ● Debian Popularity Contest: ○ 3x more people have Xen vs. KVM installed ○ 3x more people have used Xen in the last 30 days compared to KVM ○ 19% of Debian users have Xen installed & 9% used it in last 30 days - how many Debian users exist? ● ~12% of Ubuntu Server users use Xen as a host ● Millions of users from a source that can't be named ... How many total users do you guess?
  • 13. Who Uses Xen? Believed to be at least 10-12 MILLION open-source Xen users! (According to conservative assumptions about  big distros and information we know) Of course: ● Overall Xen hosts must be much higher - 1/2 Million Xen hosts at Amazon alone  ● Number likely to be much higher considering commercial products & Xen clones (client virt., EmbeddedXen, etc.) 
  • 14. Xen, XCP, and Various Toolstack Users
  • 15. Who Uses Xen? Some sources for reference: ● http://popcon.debian.org  ● http://www.zdnet.com/blog/open-source/amazon-ec2- cloud-is-made-up-of-almost-half-a-million-linux- servers/10620  ● http://www.gartner.com/technology/reprints.do?id=1- 1AVRXJO&ct=120612&st=sb 
  • 16. Guest OSes Type 2 Hypervisor ? Host OS PC Type 2 versus Type 1 Hypervisor
  • 17. Guest OSes Type 2 Guest Hypervisor ? OSes Type 1 Host OS Hypervisor (Xen) PC PC Type 2 versus Type 1 Hypervisor
  • 18. Security in Xen ● True Type 1 hypervisor: ○ Reduced size trusted computing base (TCB) ○ Versatile Dom0 (Linux, BSD, Solaris all possible) ○ Dom0 disaggregation (storage domains, stub domains, restartable management domain) ○ Inherent separation between VMs & system resources ● Best security, isolation, performance, scalability mix
  • 19. The Case for Xen ● Xen is mature   ● Open source (even XenAPI)   ● XenAPI is better than libvirt, especially for enterprise use* * Detailed by Ewan Mellor: http://wiki.openstack.org/XenAPI 
  • 20. The Case for Xen ● Proven enterprise use (Citrix XenServer, Oracle VM, etc.) ● Hypervisor of choice for cloud (Amazon, Rackspace, Linode, Google, etc.) ● Hypervisor of choice for client (XenClient, Virtual Computer's NxTop, Qubes OS, etc.)
  • 21. So, Why Xen? ● Open source ● Proven to be versatile ● Amazing community ● Great momentum in various directions
  • 22. Xen Definitions ● Xen provides a virtual machine monitor (or hypervisor), which a physical machine runs to manage virtual machines ● There exist one or more virtual machines (or domains) running beneath the hypervisor ● The management virtual machine (called Domain0 or dom0) interacts with the hypervisor & runs device drivers ● Other virtual machines are called guests (guest domains)
  • 23. Virtualization in Xen Paravirtualization: ● Uses a modified Linux kernel ● Front-end and back-end virtual device model ● Cannot run Windows ● Guest "knows" it's a VM and cooperates with hypervisor Hardware-assisted full virtualization (HVM): ● Uses the same, normal, OS kernel ● Guest contains grub and kernel ● Normal device drivers ● Can run Windows ● Guest doesn't "know" it's a VM, so hardware manages it
  • 24. Virtualization in Xen Paravirtualization: ● High performance (claim to fame) ● High scalability ● Runs a modified operating system Hardware-assisted full virtualization (HVM): ● "Co-evolution" of hardware & software on x86 arch ● Uses an unmodified operating system
  • 25. Xen: Hypervisor Role ● Thin, privileged abstraction layer between the hardware and operating systems ● Defines the virtual machine that guest domains see instead of physical hardware: ○ Grants portions of physical resources to each guest ○ Exports simplified devices to guests ○ Enforces isolation among guests
  • 26. Xen: Domain0 (dom0) Role ● Creates and manages guest VMs xl (Xen management tool) A client application to send commands to Xen, replaces xm ● Supplies device and I/O services: ○ Runs (backend) device drivers ○ Provides domain storage
  • 27. Normal Linux Boot Process BIOS         Master Boot Record (MBR) GRUB   Kernel module Linux
  • 28. The Xen Boot Process GRUB starts                 Kernel Hypervisor starts                Module Domain0 starts                 xl command Guest domain starts                
  • 29. Guest Relocation (Migration) in Xen ● Cold Relocation ● Warm Migration ● Live Migration
  • 30. Cold Relocation Motivation: Moving guest between hosts without shared storage or with different architectures or hypervisor versions Process: 1. Shut down a guest on the source host 2. Move the guest from one Domain0's file system to another's by manually copying the guest's disk image and configuration files 3. Start the guest on the destination host
  • 31. Cold Relocation Benefits: ● Hardware maintenance with less downtime ● Shared storage not required ● Domain0s can be different ● Multiple copies and duplications Limitation: ● More manual process ● Service will be down during copy
  • 32. Warm Migration Motivation: Move a guest between hosts when uptime is not critical Result: 1. Pauses a guest's execution 2. Transfers guest's state across network to a new host 3. Resumes guest's execution on destination host
  • 33. Warm Migration Benefits: ● Guest and processes remains running ● Less data transfer than live migration Limitations: ● For a short time, the guest is not externally accessible ● Requires shared storage ● Network connections to and from guest are interrupted and will probably timeout
  • 34. Live Migration Motivation: Load balancing, hardware maintenance, and power management Result: 1. Begins transferring guest's state to new host 2. Repeatedly copies dirtied guest memory (due to continued execution) until complete 3. Re-routes network connections, and guest continues executing with execution and network uninterrupted
  • 35. Live Migration Benefits: ● No downtime ● Network connections to and from guest remain active and uninterrupted ● Guest and its services remain available Limitations: ● Requires shared storage ● Hosts must be on the same layer 2 network ● Sufficient spare resources needed on target machine ● Hosts must be configured similarly
  • 36. What's New in Xen 4.0+? ● Better performance and scalability ● blktap2 for virtual hard drive image support (snapshots, cloning) ● Improved IOMMU PCI passthru ● VGA primary graphics card GPU passthru for HVM guests ● Memory page sharing (Copy-on-Write) between VMs ● Online resize of guest disks
  • 37. What's New in Xen 4.0+? ● Remus Fault Tolerance (live VM synchronization) ● Physical CPU/memory hotplug ● libxenlight (libxl) replaces xend ● PV-USB passthru ● WHQL-certified Windows PV drivers (included in XCP)
  • 38. What's New in XCP 1.5? ● Internal improvements (Xen 4.1, smaller dom0) ● GPU pass through (for VMs serving high end graphics) ● Performance and scalability (1 TB mem/host, 16 VCPUs/VM, 128 GB/VM) ● Networking (Open vSwitch backend, Active-Backup NIC Bonding) ● More guest OS templates
  • 39. XCP 1.6 (available Sept/Oct '12) ● Xen 4.1.2, CentOS 5.7 w/ 2.6.32.43, Open vSwitch 1.4.1 ● New format Windows drivers, installable by Windows Update Service ● Net: Better VLAN scalability, LACP bonding, IPv6 ● More guest OS templates: Ubuntu Precise 12.04, RHEL, CentOS, Oracle Enterprise Linux 6.1 & 6.2, Windows 8 ● Storage XenMotion: ○ Migrate VMs between hosts/pools w/o shared storage ○ Move a VM’s disks between storage repositories while VM is running
  • 40. Xen/Xen Cloud Platform Installation, Configuration Xen Light, XCP Installer
  • 41. Installing Xen Xen installation instructions, including from source:  http://wiki.xen.org/wiki/Xen_Overview  1. Install Linux distro 2. Install Xen hypervisor package 3. Install a dom0 kernel (pkgs available for many distros) 4. Modify GRUB config to boot Xen hypervisor instead   Result: A working Xen hypervisor and "Xen Light" installation
  • 42. Installing XCP 1. Download latest XCP ISO: http://xen.org/download/xcp/index.html 2. Boot from ISO and proceed through XCP installer Result: A ready-to-go Xen hypervisor, dom0, XAPI
  • 43. Xen Cloud Platform Concepts Pools, hosts, storage, networks, VMs
  • 44. Xen Cloud Platform (XCP) ● XCP was originally derived from Citrix XenServer (a free enterprise product), is open-source, and is free ● XCP promises to contain cutting-edge features that will drive future developments of Citrix XenServer
  • 45. Xen Cloud Platform (XCP) ● Again, XCP includes: ○ Open-source Xen hypervisor ○ Enterprise-level XenAPI (XAPI) management tool stack ○ Support for Open vSwitch (open-source, standards- compliant virtual switch)
  • 46. XCP Features ● Fully-signed Windows PV drivers ● Heterogeneous machine resource pool support ● Installation by templates for many different guest OSes
  • 47. XCP XenAPI Mgmt Tool Stack ● VM lifecycle: live snapshots, checkpoint, migration ● Resource pools: live relocation, auto configuration, disaster recovery ● Flexible storage, networking, and power management ● Event tracking: progress, notification ● Upgrade and patching capabilities ● Real-time performance monitoring and alerting
  • 48. XCP's xsconsole (SSH or Local)
  • 49. XCP Command Line Interface # xe template-list (or # xe vm-import filename=lenny.xva ) # xe vm-install template=<template> new-name-label=<name> # xe vm-param-set uuid=<uuid of new VM> other-config:install- repository=http://ftp.debian.org/ # xe network-list # xe vif-create network-uuid=<network uuid from above> vm-uuid=<uuid of new VM> device=0 # xe vm-start vm=<name of VM>
  • 53. Unit 2: Nuts and Bolts
  • 54. Steve Maresca ● Wearer of many hats ○ Security analyst at a top 20 public univ in the Northeast ○ Developer for the Zentific virtualization management suite, with a team of developers ● Involved in the Xen world since 2005
  • 55. Steve Maresca ● Why do I use Xen? ○ Original impetus: malware/rootkit research ○ Mature research community built around Xen ○ Flexibility of the architecture and codebase permits infinite variation ○ Using it today for infrastructure as well as continuing with security research ■ LibVMI, introspection
  • 56. Unit 2: Overview ● Structure of this presentation follows the general path we take while mentally approaching virtualization ○ Start simple, increase in level of sophistication ● Overall flow: ○ Why Virtualization? ○ XCP Deployment ○ Management ○ VM Deployment ○ Monitoring ○ Advanced Monitoring and Automation ○ Best Practices
  • 57. Why virtualization? ● We're all familiar with the benefits ○ When the power bill drops by 25% and the server room is ten degrees cooler, everyone wins ● Bottom line: more efficient resource utilization ○ Requires proper planning and resource allocation ○ Every industry publication, technical and otherwise, has made 'cloud' a household term ○ Expectations set high, then reality arrives with different opinions
  • 58. Why virtualization? ● Many of us will have or have had difficulty making the leap ○ Growing pains: shared resources of virtualization hardware stretched thin ○ Recognition that it requires both capital and staffing investment ● Certainly, you CAN use virtualization with traditional approaches used with real hardware ○ E.g.: VM creation wizard, upload ISO, attach ISO, boot, install, configure; repeat ■ almost everyone does this ○ Without much effort, you have consolidated 10 boxes into one or two; many organizations find success at this scale ● ...but we have much more flexibility at our disposal; use it!
  • 59. Why virtualization? ● Virtualization provides the tools to avoid the endless parade of one-off installations and software deployments ● Repeatable and measurable efficiency is attainable ○ Why install apache 25 times when one well-tuned configuration meets your needs?
  • 60. Unit 2: Nuts and Bolts Deployment Methodologies for Infrastructure and Virtual Machines
  • 61. Existing deployment methods ● Traditional deployment method: install from CD ○ still works for virtualization and new XCP hosts ○ If installing for the first time, this is the simplest way to get your feet wet ○ ISOs available at xen.org ○ For deploying 5-10 systems, this method is manageable ○ Don't fix what isn't broken: if it works for you, go for it ○ For deploying 10-50 systems, this hurts ● We've all installed from CD/DVD a thousand times ○ That's probably 950 times too many ○ But there are alternatives, and better ones at that
  • 62. Existing deployment methods ● XCP can be installed on a standard linux system thanks to Project Kronos ○ apt-get install xcp-xapi ○ Patrick discussed this earlier ● XCP can be installed via more advanced means ● Virtual machines can be deployed via templates and clones ○ Golden images ○ Snapshots ○ Linked clones ○ Templates ○ These methods are here to stay
  • 63. Preboot Execution Environment (PXE) ● Extraordinarily convenient mechanism to leverage network infrastructure to deploy client devices, often lacking any local disk ● Uses DHCP, TFTP; often uses NFS/HTTP after initial bootstrap ● Intel and partners produced spec in 1999
  • 64. Preboot Execution Environment (PXE) ● Most commonly encountered over the years as: ○ a remote firmware update tool ○ thin-client remote boot ○ LTSP (Linux Terminal Server Project) ○ Windows Deployment Services (Remote Installation Services) ○ Option ROMs on NICs ● Lightly used in many regards, foreign to many ● By no means a dead technology
  • 65. Preboot Execution Environment (PXE) ● To facilitate PXE ○ early in its boot process, a PXE-capable device emits a DHCP request ○ This DHCP request is answered with extra fields indicating a PXE environment is available (typically the 'next-server' option, pointing the DHCP client at an adjacent TFTP server for the next steps) ■ PXE-unaware clients requesting an IP ignore the extra data ○ the DHCP client, having obtained an IP, fetches a small bootloader from the TFTP server ○ Additionally, a configuration file is downloaded with boot information (location of kernel, command line, etc)
  • 66. PXE Architecture (diagram: new VMs on a deployment VLAN and a production VLAN, connected through network switches and routers to DHCP, TFTP, and WDS servers)
  • 67. PXE Architecture: Components ● DHCP ○ ISC-DHCP, Windows, almost anything works.. ● TFTPd ○ TFTP is an extraordinarily simple protocol, so.. ○ If it is a TFTP server, it's perfect ● Windows Deployment Services ● HTTP or FTP ○ Apache, nginx, lighttpd, IIS, a bash script, .. ○ Optional, but very useful for serving scripts, configuration files, etc ● Roll your own on one server with very modest resources
  • 68. PXE Architecture: Components ● Purpose-built solutions ○ Cobbler ■ Fedora project, Red Hat supported ■ Supports KVM, Xen, VMware ○ LTSP (Linux Terminal Server Project) ○ Windows Deployment Services ○ FOG Project
  • 69. So what does PXE buy us? ● Near zero-footprint deployment model ● Leverages services you almost certainly already have in place ● Guaranteed reproducible deployments ● Agnostic relative to Virtual/Physical, OS ● Goes where no USB key or optical drive even exists
  • 70. Requirements for deployment via PXE ● Server requires a NIC with a PXE ROM available ● NIC Enabled for booting ● Very nice if you're using a blade chassis or ILO; easy to reconfigure on the fly ● Requires an answer file prepped for the host ● Configured DHCP server ● Configured TFTP server
  • 71. Mechanisms for automated install ● General concept is often called an "answer file" ○ Some file with a list of instructions is delivered to the OS installer with device configuration info, a list of packages to install, possibly including custom scripts, etc. ● Linux ○ Centos/RHEL: kickstart ○ Debian/Ubuntu: preseed (though kickstart files are gaining popularity in the Debian world) ● Windows ○ WAIK or Windows Automated Installation Kit
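  For a taste of the Debian/Ubuntu side, a minimal preseed sketch (real preseed files are much longer, exact keys depend on the installer version, and the values here are placeholders):
  d-i netcfg/choose_interface select auto
  d-i mirror/http/hostname string ftp.debian.org
  d-i mirror/http/directory string /debian
  d-i passwd/root-password password changeme
  d-i passwd/root-password-again password changeme
  d-i partman-auto/method string regular
  d-i pkgsel/include string openssh-server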
  • 72. Example infrastructure setup ● Debian as the base OS ● ISC-DHCP as a means of advertising next-server DHCP option ● tftpd-hpa for a tftp daemon ● also running Apache for serving scripts and a variety of other files as installation helpers
  • 73. Our configuration: ISC-DHCP
  shared-network INSTALL {
    subnet 192.168.2.0 netmask 255.255.255.0 {
      option routers 192.168.2.1;
      range 192.168.2.2 192.168.2.254;
      allow booting;
      allow bootp;
      option domain-name "zentific";
      option subnet-mask 255.255.255.0;
      option broadcast-address 192.168.2.255;
      option domain-name-servers 4.2.2.1;
      option routers 192.168.2.1;
      next-server 192.168.2.1;
      filename "pxelinux.0";
    }
  }
  • 74. Deploying XCP via PXE ● Requires an "answer file" to configure the XCP system in an unattended fashion ● Also leverages HTTP to host the answer file and some installation media ● TFTP serves a pxeconfig referencing the answer file and providing basic configuration for the installer (console string, minimum RAM, etc)
  • 75. Deploying XCP via PXE: pxeconfig
  DEFAULT xcp
  LABEL xcp
    kernel mboot.c32
    append /xcp/xen.gz dom0_max_vcpus=2 dom0_mem=2048M com1=115200,8n1 console=com1 --- /xcp/vmlinuz xencons=hvc console=hvc0 console=tty0 answerfile=http://192.168.2.1/xcp_install/xcp_install_answerfile install --- /xcp/install.img
  • 76. Deploying XCP via PXE: answerfile
  <?xml version="1.0"?>
  <installation>
    <primary-disk>sda</primary-disk>
    <keymap>us</keymap>
    <root-password>pandas</root-password>
    <source type="url">http://192.168.2.1/xcp_install</source>
    <post-install-script type="url" stage="filesystem-populated">
      http://192.168.2.1/xcp_install/post.sh
    </post-install-script>
    <admin-interface name="eth0" proto="static">
      <ip>192.168.2.172</ip>
      <subnet-mask>255.255.255.0</subnet-mask>
      <gateway>192.168.2.1</gateway>
    </admin-interface>
    <nameserver>4.2.2.1</nameserver>
    <timezone>America/New_York</timezone>
  </installation>
  • 85. Deploying XCP via PXE, complete
  • 86. Unit 2: Nuts and Bolts Deployment Methodologies for Virtual Machines
  • 87. Existing deployment methods ● Again, traditional methods ○ VM creation wizard. upload ISO. attach iso, boot, install, configure. repeat. ○ almost everyone does this ● Virtual machines can be deployed via templates and clones ○ Golden images ○ Snapshots ○ Linked clones ○ Templates ○ These methods are here to stay
  • 88. Existing deployment methods ● XCP makes deployment of VMs simple ○ templates: # xe template-list | grep name-label | wc -l 84 ○ clones: xe vm-clone ● Virtual machines can be deployed via templates and clones ○ Golden images ○ Snapshots ○ Linked clones ○ Templates ○ These methods are here to stay
  • 89. Deploying CentOS via PXE ● Customization via Kickstart ● The Anaconda installer uses "one binary to rule them all", so customization at installation time is more restrictive than with other distributions ● Standard pxeconfig
  • 90. Deploying CentOS: PXE config
  SERIAL 0 115200
  CONSOLE 0
  DEFAULT centos_5.6_x86_64_install
  LABEL centos_5.6_x86_64_install
    kernel centos/5.6/x86_64/vmlinuz
    append vga=normal console=tty initrd=centos/5.6/x86_64/initrd.img syslog=192.168.1.2 loglevel=debug ksdevice=eth0 ks=http://192.168.2.1/centos-minimal.ks --
  PROMPT 0
  TIMEOUT 0
  • 91. Deploying CentOS: Kickstart
  install
  text
  lang en_US.UTF-8
  key --skip
  skipx
  logging --host=192.168.1.125
  network --device eth0 --bootproto dhcp
  url --url http://mirrors.greenmountainaccess.net/centos/5/os/x86_64
  rootpw --iscrypted $1$j/VY6xJ6$xxxxxxxxx
  firewall --enabled --port=22:tcp
  authconfig --enableshadow --enablemd5
  selinux --enforcing
  timezone --utc America/New_York
  zerombr
  bootloader --location=mbr --driveorder=hda
  clearpart --initlabel --all
  autopart
  reboot
  • 92. Deploying CentOS: Kickstart ● Make a new VM using the "Other" template ○ # SRDISKUUID refers to the identifier of the storage repository ○ xe vm-install new-name-label=$VMNAME sr-uuid=$SRDISKUUID template="Other install media" ● Set the boot order (network first, then DVD, then hard drive): ● xe vm-param-set uuid=$VMUUID HVM-boot-params:order="ndc"
  • 94. Unit 2: Nuts and Bolts XCP: Modifying the OS Just a quick comment
  • 95. Installing software Or, Reminding XCP of its Linux Heritage ● XCP is by no means a black box, forever sealed away ● It's only lightly locked down and easy to modify ○ Take care, it's not designed for significant upheaval ○ Very convenient to install utilities, SNMP, etc ● Just: yum --disablerepo=citrix --enablerepo=base install screen ● Helps a lot with additional monitoring utilities
  • 96. Unit 2: Nuts and Bolts Monitoring and Automation
  • 97. Automation and response (diagram: the XCP event publisher (XAPI) feeding VMs, an IDS, a firewall, and middleware over AMQP, IF-MAP, or 0MQ in an adaptive feedback loop)
  • 99. What it is ● The XCP API is the backbone of the platform ○ Provides the glue between components ○ Is the backend for all management applications ● Call it XAPI or XenAPI ○ when searching, 'XAPI' is often the better term, to differentiate it from earlier work in traditional open-source Xen deployments ● It's an XML-RPC style API, served via HTTPS ○ provided by a service on every XCP dom0 host
  • 100. What it is ● API bindings are available for many languages ○ .NET ○ Java ○ C ○ Powershell ○ Python ● Documentation available via the Citrix Developers' Network (in this regard, XCP==XenServer) ○ http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/api/ ○ http://community.citrix.com/display/xs/Introduction+to+XenServer+XAPI
  • 101. What it is ● Official API bindings not available for your language of choice? No problem ● Protocol choice of XML-RPC means that most languages can support the API natively ● Ease of integration is superb. Here's an example using python (but ignoring the official bindings)
  • 102. What it is
  import xmlrpclib
  x = xmlrpclib.Server("https://localhost")
  sessid = x.session.login_with_password("root", "pass")['Value']
  # go forth, that's all you needed to begin
  allvms = x.VM.get_all_records(sessid)['Value']
  • 103. What it is ● XAPI is available for use on any XenServer or XCP system ● In addition, as mentioned in our opening segment, XAPI is accessible on Ubuntu/Debian systems via the Kronos project
  • 104. What XAPI isn't ● Not exactly 1:1 with the xe commands from the XCP command line ○ significant overlap, but not exact ● NOT an inflexible beast like some APIs ○ can be extended via plugins ○ and (of course) it is open source if you want to get your hands dirty ■ LGPL 2.1
  • 105. Comparisons to other APIs in the virtualization space ● Generally speaking ○ XAPI is well-designed and well-executed ○ XAPI makes it pleasantly easy to achieve quick productivity ○ Some SOAPy lovers of big XML envelopes and WSDLs scoff at XML-RPC, but it certainly gets the job done with few complaints
  • 106. Comparisons to other APIs in the virtualization space ● Amazon EC2 ○ greater "surface area" than amazon EC2, which is a classic example of doing a lot with rather a little API ○ in particular, XAPI brings you closer to the virtual machine and underlying infrastructure than EC2 ○ XAPI provides considerable introspection into the virtual machine itself ■ data reported by xen-aware tools within the guest is reported as part of VM metrics ■ Data can be injected into VM using the xenstore
  • 107. Comparisons to other APIs in the virtualization space ● Oracle VM (also xen based) ○ similar heritage; derives partly from the traditional XenAPI of which XAPI is a distant relative ○ generally speaking, the oracle VM api is on-par for typically needed features, but XAPI is more powerful (e.g., networking capabilities)
  • 107. Comparisons to other APIs in the virtualization space ● VMware ○ XAPI is far more tightly constructed than VMware's huge (very capable, impressive) API ○ By nature of protocol construction, XAPI is XML-RPC vs the heavier VMware SOAP API: measurably lower bandwidth requirements and parsing overhead ○ VMware's API has a distinct feel of organic growth ("one of these things is not like the other" is a common tune whistled while working with it) ○ Speaking from a personal developer standpoint, sanity with XAPI in comparison is much higher. (We, Zentific, have worked very closely with both APIs)
  • 110. API Architecture: General shape and form ● All elements on the diagram just shown are called classes ● Note: The diagram omits another twenty or more minor classes ○ Visit the SDK documentation for documentation of all classes ● Classes are the objects XCP knows about and exposes through API bindings ● Each class has attributes called fields and functions called messages. We'll stick with 'attributes' and 'functions.'
  • 111. API Architecture: General shape and form ● Class attributes can be read-only or read-write ● All class attributes are exposed via setter and accessor functions ○ e.g. for a class named C with attribute X: C.get_X ○ There's a corresponding C.set_X too if the attribute is read-write. Absent if read-only. ○ For mapping type attributes, there are C.add_to_X and C.remove_from_X for each key/pair
  • 112. API Architecture: General shape and form ● Class functions are of two forms: implicit and explicit ○ Implicit class functions include: ■ a constructor (typically named "create") ■ a destructor (typically named "destroy") ■ Class.get_by_name_label ■ Class.get_by_uuid ■ Class.get_record ■ Class.get_all_records ○ Explicit class functions include every other documented function for the given class, which are generally quite specific to the intent of that class ■ e.g. VM.start
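  Putting the attribute accessors and the implicit/explicit functions together, a typical exchange with the Python bindings looks roughly like this ('demo-vm', the host name, and the credentials are made up):
  import XenAPI
  session = XenAPI.Session('https://pool-master')
  session.login_with_password('root', 'secret')
  vm = session.xenapi.VM.get_by_name_label('demo-vm')[0]   # implicit lookup; returns a list of refs
  print session.xenapi.VM.get_name_label(vm)               # accessor for the name_label attribute
  session.xenapi.VM.set_name_label(vm, 'demo-vm-2')        # setter exists because the attribute is read-write
  session.xenapi.VM.start(vm, False, False)                # explicit function: start(start_paused, force)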
  • 113. API Architecture: General shape and form A note on UUIDs and OpaqueRefs ● Multiple forms of unique identifier are used in XCP ○ Universally Unique Identifiers (UUIDs) ○ OpaqueRefs ○ Class-specific identifiers ○ name-labels ● Both UUIDs and OpaqueRefs can be encountered in API calls and xe commands ○ Conversion between UUIDs and OpaqueRefs is commonly required ○ The parallel naming convention is an acknowledged odd consequence of development aiming at unique identifiers
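  Converting between the two forms is a one-liner in most bindings; continuing the logged-in session from the sketch above (the UUID is a placeholder):
  ref = session.xenapi.VM.get_by_uuid('0a1b2c3d-...')   # UUID -> OpaqueRef
  uuid = session.xenapi.VM.get_uuid(ref)                # OpaqueRef -> UUID
  # xe commands generally want the UUID; raw API calls generally want the OpaqueRef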
  • 114. API Architecture: Major Classes ● All major classes are shown in the inner circle of the API diagram ○ VM: A virtual machine ○ Host: A physical XCP host system ○ SR: Storage repository ○ VDI: Virtual disk image ○ PBD: Physical block device through which an SR is accessed ○ VBD: Virtual block device ○ Network: A virtual network ○ VIF: A virtual network interface ○ PIF: A physical network interface
  • 115. API Architecture: Minor Classes ● Minor classes are documented in the official XenServer SDK documentation ○ pool: XCP host pool information and actions ○ event: Asynchronous event registrations ○ task: Used to track asynchronous operations with a long runtime ○ session: API session management (login, password changes, etc.)
  • 116. API Architecture: Linking Classes ● Linking classes are those that create a conceptual bridge between a virtual object and the underlying physical entity ○ VDI<>VBD<>VM ■ VBD: Bridges the representation of a virtual machine's internal disk with the actual disk image used to provide it ○ Network<>VIF<>VM ■ VIF: Bridges the internal VM network interface with the physical network to which it is ultimately plumbed ● When building complex objects, it's often necessary to build the linkages too, or failure will occur
  • 117. API Architecture: Other Classes ● SM: storage manager plugin - for third-party storage integration (e.g. Openstack Glance) ● Tunnel: represents a tunnel interface between networks/hosts in a pool ● VLAN: assists in mapping a VLAN to a PIF, designating tagged/untagged interfaces. Each VLAN utilizes one PIF
  • 118. API Architecture: Order of Operations ● Using a correct order of operations for API calls is important, though not particularly well documented ● Example: deleting a disk ○ Resources must not be in use ○ If deleting a VDI, make certain that no VBDs currently reference it ● Generally, common sense dictates here in terms of the operations required ● When something is executed out of order, an exception is thrown
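  For example, tearing down a disk cleanly means walking its VBD links first; a sketch with the Python bindings (vdi_ref obtained earlier, e.g. via VDI.get_by_uuid; error handling omitted):
  # A VDI cannot be destroyed while VBDs still reference it
  for vbd in session.xenapi.VDI.get_VBDs(vdi_ref):
      if session.xenapi.VBD.get_currently_attached(vbd):
          session.xenapi.VBD.unplug(vbd)       # detach from the running VM first
      session.xenapi.VBD.destroy(vbd)          # then remove the linking object
  session.xenapi.VDI.destroy(vdi_ref)          # only now can the disk image itself go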
  • 119. API Architecture: Target the right destination ● When running calls against a standalone xcp system, no need for extra consideration ● When running operations against a pool, it's necessary to target the pool master ○ Otherwise an API exception will be thrown if you attempt to initiate an action against a slave (type XenAPI.Failure if using the provided Python bindings) ● It's reasonably easy to code around this problem (the pool master may rotate, after all): http://community.citrix.com/display/xs/A+pool+checking+plugin+for+nagios
  • 120. API Architecture: Target the right destination
  import XenAPI

  host = "x"
  username = "y"
  password = "p"

  try:
      session = XenAPI.Session('https://' + host)
      session.login_with_password(username, password)
  except XenAPI.Failure, e:
      # If we accidentally targeted a slave, the error details tell us where the master is
      if e.details[0] == 'HOST_IS_SLAVE':
          session = XenAPI.Session('https://' + e.details[1])
          session.login_with_password(username, password)
      else:
          raise
  s = session.xenapi
  • 121. XAPI is Extensible: Plugins ● Extensible API via plugins ○ These are scripts that you place in the XCP host. ■ Check out /etc/xapi.d/plugins/ ○ Can be invoked via the api ■ See host.call_plugin(...) ● Affords huge flexibility for customization ● Used today by projects like Openstack to provide greater integration with XCP ● Example code ○ http://bazaar.launchpad.net/~nova-core/nova/github/files/head:/plugins/xenserver/xenapi/etc/xapi.d/plugins/ ○ https://github.com/xen-org/xen-api/blob/master/scripts/examples/python/XenAPIPlugin.py
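  The convention (per the example code linked above) is a small Python script in /etc/xapi.d/plugins/ that dispatches named functions; a sketch, where the plugin name "hello" and its argument names are ours:
  #!/usr/bin/env python
  # /etc/xapi.d/plugins/hello  (must be executable)
  import XenAPIPlugin

  def hello(session, args):
      # args is a dict of string key/value pairs supplied by the caller
      return "hello " + args.get("name", "world")

  if __name__ == "__main__":
      XenAPIPlugin.dispatch({"hello": hello})
  # invoked remotely with something like:
  #   session.xenapi.host.call_plugin(host_ref, "hello", "hello", {"name": "oscon"})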
  • 122. Things to know ● To access VM console, a valid session ID must be appended to the request ○ See http://foss-boss.blogspot.com/2010/01/taming-xen-cloud-platform-consoles.html ● Metrics ○ ${class}_metrics are instantaneous values; this is an older XCP/Xenserver style of providing such data ○ Same metrics provided via RRD backend are historical and can show trending (rather than needing to aggressively poll for instantaneous values) ● It's possible to add xenstore values for a VM, enables an agent in VM to act upon that data ○ consider: root password reset via xenstore; directed actions
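  One way to seed such data from dom0 (the key under vm-data/ is arbitrary and $VMUUID is a placeholder):
  # write a key a guest agent can watch and act upon
  xe vm-param-set uuid=$VMUUID xenstore-data:vm-data/admin-pw-reset=1
  # inside the guest, the value appears under its xenstore tree
  # (e.g. xenstore-read vm-data/admin-pw-reset, once the xenstore tools are installed)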
  • 123. Unit 2: Nuts and Bolts Best Practices
  • 124. Best Practices These are primarily 'general' best practices Common-sense best practices are especially critical for virtualization given: ● the sharing of scarce resources (and the complex interplay thereof when it comes to performance) ● Many eggs are in one basket: failures are felt very strongly
  • 125. Best Practices: Less is more ● Often, fewer VCPUs per VM are better ○ Allocate only what's needed for the workload ○ If unknown, begin with 1 VCPU and work up as needed ● Always account for the CPU needs of the hypervisor ● Never allocate more VCPUs for a VM than the number of available PCPUs (even if you “can”) ● Great video by George Dunlap for more guidance: http://www.citrix.com/tv/#videos/2930
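  With xe, capping a VM at a single VCPU looks roughly like this ($VMUUID is a placeholder; set the maximum before the startup count):
  xe vm-param-set uuid=$VMUUID VCPUs-max=1
  xe vm-param-set uuid=$VMUUID VCPUs-at-startup=1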
  • 126. Best Practices: Workload grouping ● Group VMs logically based upon expected (or observed) workload and behavior ○ Workloads which are randomly 'bursty' from an IO or CPU standpoint ○ Regularly scheduled workloads demanding high CPU when running: interleave their schedules if possible so each VM has the maximal share of resources
  • 127. Best Practices: Workload separation ● Separate VMs logically based upon expected (or observed) workload and behavior ○ Workloads which always require the majority of what the hardware can provide (like an I/O bottleneck on the network when the pipe is only so wide) ○ Workloads like databases that can be heavy on memory utilization and bandwidth
  • 128. Best Practices: Resource allocation ● If needed, guarantee resources for a workload ○ grant higher scheduling priority ○ VCPU pinning to physical cores ○ Balloon the VM in anticipation of memory usage, then return memory to the pool ● WARNING: use with caution ○ possible to reduce performance for adjacent workloads on the same host ○ possible to lock a VM to a host (migration becomes problematic)
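  A couple of hedged xe examples of the knobs above ($VMUUID is a placeholder; weight 256 is the usual default share):
  # give the VM a larger share of CPU time under contention
  xe vm-param-set uuid=$VMUUID VCPUs-params:weight=512
  # pin the VM's VCPUs to physical cores 0 and 1
  xe vm-param-set uuid=$VMUUID VCPUs-params:mask=0,1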
  • 129. Best Practices: Compartmentalize Risk ● Segregate VMs operating in distinct security domains ○ a good practice no matter what the context ○ certainly your user-facing services don't need access to the same network that allows switch/router management. Applies similarly to VMs ● Especially important if required by compliance/regulations ○ Example: PCI-DSS (Payment Card Industry Data Security Standard) ■ https://www.pcisecuritystandards.org/documents/Virtualization_InfoSupp_v2.pdf ○ Example: DOD regulations regarding data classification and separation of networks ■ Crossing the streams causes total protonic reversal
  • 130. Best Practices: Monitor your environment! ● Log aggregation AND analysis: ○ if you don't know how to identify when a problem is occurring, how can you circumvent/fix/prevent it? ● Forecasting for the future ● Virtual environments are dynamic enough that problems can sneak up on you ● If you have a head start on hardware failure, you can migrate VMs from a failing host to a hot spare to enable repair/replacement (without downtime) ● Don't forget to monitor hardware temperature. HVAC failures are not much fun. ○ The virtual fallout can be enormous: high power density --> high heat takes out high-visibility, high-value resources by the dozen
  • 131. Best Practices: When not to virtualize ● Knowing when to prefer real hardware over virtualization is as important as being able to recognize when virtualization will benefit ○ Virtualization is not a panacea ● Problematic workloads ○ Highly parallel computations requiring many CPUs acting in concert ○ Heavy IO demands of network or storage ○ Tasks which require exceptionally stable clocks (nanosecond granularity) ● But: technology is improving at breakneck speed ○ 10 Gb Ethernet at line rate is possible for a virtual machine ○ CPU improvements have improved or eliminated many bottlenecks (clock stability is much better, for example)
  • 132. Best Practices: Resource Modeling ● Build a simple model for your environment ○ Try to do so before virtualizing a service and afterward, then compare ○ Helps with cost management and expenditure justification ○ Measures success or failure of virtualization to solve a problem ● E.g. $x/GB of RAM + $x/VCPU + $x/hr labor + $licensing/VM + VM importance factor ● Calculate a worst-case value for the model and then graph the current state relative to it
  • 134. OSCON: From the Datacenter to the Cloud - Featuring Xen and XCP XCP in the Enterprise Josh West
  • 135. Table of Contents ● Introduction: XCP in the Enterprise ● Storage in Xen Cloud Platform ● Advanced Networking in Xen Cloud Platform ● Statistics & Monitoring in XCP ● Enterprise Cloud Orchestration
  • 136. Introduction: XCP in the Enterprise ● The Xen hypervisor has already been proven as a solid platform for IT systems: ● Amazon ● Oracle VM ● Rackspace ● dom0 support in mainline Linux ● No need to run Xen on your distribution flavor of choice and build from the ground up just for hosting IT business systems. ● Many choices (VMware, RHEV, Oracle VM, Citrix XenServer).
  • 137. So... Why use XCP? ● Excellent blend of enterprise quality code and next generation technologies. ● Developed by Citrix/XenSource. ● Enhanced by the open source community. ● Compatible with Citrix XenCenter for management. ● Rapid deployment: ○ PXEBOOT ○ Boot from SAN
  • 138. XCP and Pools ● Pools allow you to combine multiple XCP hosts into one managed cluster. ○ Live migration. ○ Single API connection & management connection. ○ Single configuration. ○ Shared storage. ● Single master, multiple slaves.
  • 139. XCP or Citrix XenServer?
  Citrix XenServer:
  ● Professional Support
  ● High Availability
  ● Advanced Storage
  ● Cloudstack & Openstack
  ● Benefits from XCP community contributions
  Xen Cloud Platform:
  ● Community Support
  ● DIY High Availability
  ● Standard Storage
  ● Cloudstack & Openstack
  ● Benefits from Citrix developers & codebase
  • 140. DIY? Roll Your Own ● Still not convinced? See Project Kronos. ● Benefits of XAPI in a *.deb Package. ● Run on Debian or Ubuntu dom0 with Xen Hypervisor. ● http://wiki.xen.org/wiki/Project_Kronos
  • 141. Enough Promo! Let's see the cool stuff!
  • 143. Storage in XCP ● Supports major storage technologies & protocols ● Local storage, for standalone & scratch VM's. ● Centralized storage, for live migration & scaling: ○ LVMoISCSI and LVMoFC and LVMoAOE ■ Software iSCSI Initiator ■ HBA (Qlogic & Emulex) ■ Coraid has drivers for AOE ○ VHD on NFS
  • 144. Under the Hood: VHD ● VDIs are stored in Virtual Hard Disk (VHD) format.* ● From Microsoft! (Connectix), under the Microsoft Open Specification Promise. ● Types of VHDs: ○ Fixed hard disk image (Appliances). ○ Dynamic hard disk image (XCP). ○ Differencing hard disk image (Snapshots, Cloning). ● Tools from Microsoft & VirtualBox for working with/converting them.
  • 145. Under the Hood: LVM on XCP ● LVM is used on all block storage in XCP. ● XCP organizes with a simple mapping: ○ Storage Repository (SR) = LVM Volume Group ○ Virtual Disk Image (VDI) = LVM Logical Volume ● Locking is not handled like cLVM. ● XCP Pool Master toggles access w/ lvchange -ay/an.
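  You can see this mapping from dom0 with the ordinary LVM tools; roughly (the UUIDs below are made up, and the names follow XCP's VG_XenStorage-<SR UUID> / VHD-<VDI UUID> convention):
  $ vgs                              # one volume group per LVM-based SR
    VG_XenStorage-5f48c8e2-...   1  12 ...
  $ lvs VG_XenStorage-5f48c8e2-...   # one logical volume per VDI in that SR
    VHD-9a1b2c3d-...  VG_XenStorage-5f48c8e2-...  8.00g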
  • 148. Under the Hood: LVM on XCP ● XCP uses VHD dynamic disk images on top of LVM. ● So we have VHDoLVMo(ISCSI|FC|AOE). ● And then all our VM's will probably use LVM: ● LVMoVHDoLVMo(ISCSI|FC|AOE). :-) ● VHD differencing disk images for VM/VDI snapshots, not LVM snapshots. ○ Portable between Storage Repository types. ○ No LVM snapshot performance issues.
  • 149. Under the Hood: NFS on XCP ● NFSv3 w/ TCP is used for NFS based SR's. ● Mounted at /var/run/sr-mount/<SR UUID>/ ● Mounted with 'sync' flag; no 'async' delayed operation as this would be unwise and unsafe for VM's. ● NFS lets you get closer to VHD's - they're stored as files. ● Perhaps could integrate better with your backup solution.
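  Attaching an NFS SR from the CLI is a single command; a sketch (the server address and export path are placeholders):
  xe sr-create type=nfs shared=true content-type=user \
    name-label="NFS VHD store" \
    device-config:server=192.168.0.50 \
    device-config:serverpath=/export/xcp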
  • 150. Under the Hood: NFS on XCP ● Choose NFS platform wisely for proper performance. ● Just a Linux box w/ NFS export not enough: ~32 MB/s. ● Need cache system on your NAS (e.g. NetApp PAM). ● DIY? Look into using SSD's or BBU NVRAM w/ Facebook's Flashcache or upcoming Bcache. ● Gluster has NFS server and Gluster is tunable.
  • 151. XCP Storage: Which to Choose? ● All good choices. Depends on your shop & experience. ● If you have an enterprise NAS/SAN, use it! ○ Caching for performance. ○ Enterprise support contracts. ○ Alerting and monitoring. ● No budget? No space left? No problem. You can build your own SAN for use with XCP. ● Test labs, recycling equipment, PoC, and small production deployments.
  • 152. DIY H.A./F.T. SAN for XCP ● Easy to build a storage system (that actually performs well) for use with XCP: ○ Highly Available / Fault Tolerant. ○ Manageable / Not Too Complicated. ● XCP lets you connect to multiple SR's. ● If you outgrow your DIY SAN, or find it going from a test lab purpose to hosting production critical VM's, XCP will let you move VM's between SR's with ease. ● Just attach your expensive shiny SAN/NAS and move.
  • 153. DIY H.A./F.T. SAN: What We'll Build ● Lightweight Linux-based, clustered SAN for XCP SR. ● Active/Standby with automatic failover & takeover. ● Synchronous storage replication between storage nodes. ● iSCSI presentation to XCP hosts. ● Built with two open source software projects: ○ DRBD ○ Pacemaker
  • 154. TripAdvisor XCP + XSG Lab ● Built at TripAdvisor, with 19.33TB storage. ● Two Dell PowerEdge 1950's + Cisco 6513 Catalyst.
  • 156. DIY H.A./F.T. SAN: Overview (diagram: XCP Storage Node 1 and Node 2, each with eth0/eth1 carrying iSCSI to stacked switches, and eth2/eth3 crossover links carrying Corosync/Pacemaker and DRBD traffic)
  • 157. Step 1: Hardware RAID ● Configure your hardware RAID controller. ● Use features such as Adaptive Read-Ahead and Write-Back, to enable caching. ● Battery backed up cache is important. ● Recommended: RAID 1, 5, or 6 for internal disks. ● Recommended: RAID 10, 50, or 60 for DAS shelves.
  • 158. Step 2: ILO / DRAC / LOM ● Configure your dedicated ILO card. ● Using Dell Remote Access Controller (DRAC) in our example lab. ● Enable IPMI support. Needed for STONITH. ● Set & remember the credentials. Can test with ipmitool from external host. ● Dedicated NIC recommended!
  • 159. Step 3: Install OS ● Install CentOS x86_64. Tested this with 5.8 & 6.0. ● Partition and configure accordingly. ● Leave space for attached storage. ● Partition the dedicated storage as an LVM Physical Volume. ● Use gparted if >2TB.
  • 160. Step 4: Configure Networking
  ● Bond eth0 + eth1 front end interfaces w/ LACP (bond0).
  ● Crossover eth2 to eth2, eth3 to eth3 backend interfaces.
    ○ eth2: Dedicated for corosync + pacemaker.
    ○ eth3: Dedicated for DRBD replication.
                                 Storage Node 1    Storage Node 2
  Management (bond0)             192.168.0.10      192.168.0.11
  Corosync + Pacemaker (eth2)    10.168.0.10       10.168.0.11
  DRBD (eth3)                    10.168.1.10       10.168.1.11
  *Floating iSCSI IP: 192.168.0.20
  • 161. Step 4: Configure Networking (diagram: Node 1 at 192.168.0.10 and Node 2 at 192.168.0.11 uplinked via eth0/eth1 to stacked switches with floating iSCSI IP 192.168.0.20; eth2 crossover 10.168.0.10 & 10.168.0.11, eth3 crossover 10.168.1.10 & 10.168.1.11)
  • 162. Step 5: Configure LVM
  ● Set up the dedicated storage partition:
    $ pvcreate /dev/sdb1
    $ vgcreate vg-xcp /dev/sdb1
    $ lvcreate -l 100%FREE -n lv-xcp vg-xcp
  ● Adjust /etc/lvm/lvm.conf filters and run vgscan:
    filter = [ "a|sd.*|", "r|.*|" ]
  ● XCP will put LVM on top of iSCSI LUN's (LVMoISCSI).
  ● The SAN should not scan local DRBD resource content.
  • 163. Step 6: Install DRBD
  ● Latest stable... constantly in motion.
    $ yum install gcc kernel-devel rpm-build flex
  ● Fetch from http://oss.linbit.com/drbd/ (8.4.1)
    $ mkdir -p ~/redhat/{RPMS,SRPMS,SPECS,SOURCES,BUILD}
    $ tar -xvzf drbd-8.4.1.tar.gz
    $ cd drbd-8.4.1
    $ make rpm km-rpm
    $ yum install /usr/src/redhat/RPMS/x86_64/drbd*.rpm
      or
    $ yum install ~/redhat/RPMS/x86_64/drbd*.rpm
  • 164. Step 7: Configure DRBD ● Four major sections to adjust: ○ syncer { ... } ○ net { ... } ○ disk { ... } ○ handlers { ... } ● See DRBD documentation for full details. ● http://www.drbd.org/docs/about
  • 165. Step 7: global_common.conf
  syncer {
      rate 1G;
      verify-alg "crc32c";
      al-extents 1087;
  }
  disk {
      on-io-error detach;
      fencing resource-only;
  }
  net {
      sndbuf-size 0;
      max-buffers 8000;
      max-epoch-size 8000;
      unplug-watermark 8000;
  }
  handlers {
      ... [ snip ] ...
      fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
      after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      ... [ snip ] ...
  }
  • 166. Step 7: d_xcp.res
  resource d_xcp {
      net {
          allow-two-primaries;
      }
      on xsgnode1 {
          device /dev/drbd0;
          disk /dev/vg-xcp/lv-xcp;
          address 10.168.1.10:7000;
          meta-disk internal;
      }
      on xsgnode2 {
          device /dev/drbd0;
          disk /dev/vg-xcp/lv-xcp;
          address 10.168.1.11:7000;
          meta-disk internal;
      }
  }
  • 167. Review ● Two servers with equal storage space. ● First two NIC's bonded to network. ● Third NIC crossover, dedicated for corosync/pacemaker. ● Fourth NIC crossover, dedicated for DRBD. ● We've setup LVM and then DRBD on top. ● Now time to cluster and present to XCP.
  • 168. Step 8: Corosync + Pacemaker ● Install Yum repo's from EPEL + Clusterlabs ○ EPEL is needed on CentOS/RHEL 5 and 6 ○ Clusterlabs repo only needed on CentOS/RHEL 5 ○ Red Hat now includes pacemaker :-) ○ http://fedoraproject.org/wiki/EPEL ● Installation & Configuration: ○ http://clusterlabs.org/wiki/Install ○ $ yum install pacemaker.x86_64 heartbeat.x86_64 corosync.x86_64 iscsi-initiator-utils ○ http://clusterlabs.org/wiki/Initial_Configuration
  • 169. Pacemaker Review ● Nodes ● Cluster Information Base ● Resource Agents ● Master/Slave Sets (MS) ● Resources/Primitives ● Constraints: Location ● Resource Groups ● Constraints: Colocation ● CRM Shell ● STONITH
  • 171. What Should Pacemaker Do? ● Manage floating IP address 192.168.0.20 - iSCSI target. ● Configure an iSCSI Target Daemon. ● Present an iSCSI LUN from iSCSI Target Daemon. ● Ensure DRBD is running, with Primary/Secondary. ● Ensure DRBD Primary is colocated with floating IP, iSCSI Target Daemon, and iSCSI LUN. ● Ordering: DRBD, iSCSI Target, iSCSI LUN, floating IP.
  • 172. Step 9: Pacemaker Configuration (diagram: resource ordering — DRBD Primary/Secondary at the base, then block iSCSI port, iSCSI target, iSCSI LUN, floating IP, unblock iSCSI port; start flows up the stack, stop flows down)
  • 173. Step 9: Pacemaker Configuration
  property $id="cib-bootstrap-options" \
      dc-version="1.0.11-..." \
      cluster-infrastructure="openais" \
      expected-quorum-votes="2" \
      no-quorum-policy="ignore" \
      default-resource-stickiness="100" \
      stonith-enabled="false" \
      maintenance-mode="false" \
      last-lrm-refresh="1311719446"
  rsc_defaults $id="rsc-options" \
      resource-stickiness="100"
  • 174. Step 9: Pacemaker Configuration
  primitive res_ip_float ocf:heartbeat:IPaddr2 \
      params ip="192.168.0.20" cidr_netmask="20" \
      op monitor interval="10s"
  primitive res_portblock_xcp_block ocf:heartbeat:portblock \
      params action="block" portno="3260" ip="192.168.0.20" protocol="tcp"
  primitive res_portblock_xcp_unblock ocf:heartbeat:portblock \
      params action="unblock" portno="3260" ip="192.168.0.20" protocol="tcp"
  primitive res_drbd_xcp ocf:linbit:drbd \
      params drbd_resource="d_xcp"
  ms ms_drbd_xcp res_drbd_xcp \
      meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
  • 175. Step 9: Pacemaker Configuration
  primitive res_target_xcp ocf:tripadvisor:iSCSITarget \
      params implementation="tgt" tid="1" \
      iqn="iqn.2011-12.com.example:storage.example.xsg" \
      incoming_username="target_xcp" incoming_password="target_xcp" \
      additional_parameters="MaxRecvDataSegmentLength=131072 MaxXmitDataSegmentLength=131072" \
      op monitor interval="10s"
  primitive res_lun_xcp_lun1 ocf:heartbeat:iSCSILogicalUnit \
      params target_iqn="iqn.2011-12.com.example:storage.example.xsg" \
      lun="1" path="/dev/drbd/by-res/d_xcp" scsi_id="xcp_1" \
      op monitor interval="10s"
  • 176. Step 9: Pacemaker Configuration
  group rg_xcp res_portblock_xcp_block res_target_xcp res_lun_xcp_lun1 \
      res_ip_float res_portblock_xcp_unblock
  colocation c_xcp_on_drbd inf: rg_xcp ms_drbd_xcp:Master
  order o_drbd_before_xcp inf: ms_drbd_xcp:promote rg_xcp:start
  • 178. Step 10: STONITH Configuration
  primitive stonith-xsgnode1 stonith:external/ipmi \
      params hostname="xsgnode1.example.com" ipaddr="192.168.0.30" userid="root" passwd="shootme"
  primitive stonith-xsgnode2 stonith:external/ipmi \
      params hostname="xsgnode2.example.com" ipaddr="192.168.0.31" userid="root" passwd="shootme"
  location loc_stonith_xsgnode1 stonith-xsgnode1 -inf: xsgnode1.example.com
  location loc_stonith_xsgnode2 stonith-xsgnode2 -inf: xsgnode2.example.com
  property stonith-enabled="true"
  • 179. Step 11: Review Pacemaker ● Make sure resources are OK: crm status ● Make sure floating IP configured: ip addr ● Make sure DRBD primary/secondary: drbd-overview ● Make sure iSCSI LUN's presented: tgt-admin -s
  • 180. Step 12: Connect SR in XCP!
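  From the pool master, connecting to the floating iSCSI target built above looks roughly like this (the SCSIid comes from the probe output; CHAP credentials match the Pacemaker target configuration):
  # probe the target first -- the output lists available IQNs and LUNs/SCSIids
  xe sr-probe type=lvmoiscsi device-config:target=192.168.0.20
  # then create the shared SR against the discovered LUN
  xe sr-create type=lvmoiscsi shared=true name-label="DIY HA SAN" \
    device-config:target=192.168.0.20 \
    device-config:targetIQN=iqn.2011-12.com.example:storage.example.xsg \
    device-config:SCSIid=<SCSIid reported by sr-probe> \
    device-config:chapuser=target_xcp device-config:chappassword=target_xcp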
  • 181. XCP and High Availability ● We've just shown how to build a highly-available / fault- tolerant SAN, using DRBD and Pacemaker. ● EXT4oVHDoLVMoISCSIoDRBDoLVM :-) ● We did this on CentOS 5.x (and 6.x). ● XCP is based on CentOS 5.x. ● XCP can use Pacemaker for H.A.!
  • 182. XCP Storage Future ● XCP 1.6 will support Storage XenMotion ○ Migration of VM's and their storage, live! ○ Can evacuate a host with local SR attached VM's. ● Cluster Filesystems: ○ Citrix is looking into Gluster and Ceph. ○ Gluster client builds and works on XCP 1.5b. ○ Relatively easy for us to write a Gluster SR driver. ○ Ceph integration is a bit trickier.
  • 183. Advanced Networking with Xen Cloud Platform
  • 184. Advanced Networking with XCP ● Bonding and VLAN's ● OpenvSwitch and OpenFlow ● Distributed Virtual Switch Controller ● GRE Tunnels & Private VM Networks
  • 186. NIC Bonding review ● Means of combining multiple NIC's together for: ○ Failover ○ Load Balancing ○ More Bandwidth ● Available since Linux Kernel 2.0.x. Stable and proven. ● Many modes of bonding NIC's: ○ Active/Standby. ○ Active/Active.
  • 187. NIC Bonding Modes ● mode = 1: active-backup <-- ● mode = 2: balance-xor ● mode = 3: broadcast ● mode = 4: 802.3ad (LACP) <-- ● mode = 5: balance-tlb ● mode = 6: balance-alb ● mode = 7: balance-slb <--
  • 188. XCP Bonding: Source Level Balancing ● XCP + XenServer introduce optimized bonding for virtualization. ● mode = 7, aka balance-slb. ● Derived from balance-alb. ● Spread VIF's across PIF's. ● Provides load balancing and failover. ● Active/Active.
  • 189. XCP Bonding: Source Level Balancing ● New VIF source MAC's assigned a PIF w/ lowest util. ● Rebalances VIF's/MAC's across PIF's every 10 sec. ○ No GARP during rebalance necessary. ○ Switch will see new traffic and update tables. ○ Still need to connect PIF's to same/stacked switch. ● Up/Down delay of 31s/200ms. ● Failover on link down handled with GARP for fast updates.
  • 190. XCP Bonding: Source Level Balancing ● Limitation: 16 unbonded NIC's or 8 bonded. ● Limitation: Only 2 NIC's per bond in XenCenter. ● Can override with xe command line: ● xe bond-create network-uuid=... pif-uuids=...,...,... ● Can override bonding mode if desired: ● xe pif-param-set uuid=<bond pif uuid> other-config:bond-mode=<active-backup, 802.3ad>
  • 191. XCP VLAN's ● PIF but with a tag. ● Can apply to Ethernet NIC's and Bonds. ● xe vlan-create network-uuid=... pif-uuid=... tag=...
  • 192. Traditional Advanced Networking ● Manual configuration process. ○ Bonding? /etc/modprobe.conf and ifenslave ○ Bridges? brctl from bridge-utils ○ Vlans? vconfig ○ GRE? IPSEC? QoS/Rate Limiting? ● Distribution specific configuration files.
  • 193. Virtualization and Advanced Networking ● Virtualization brought network switching into the server itself. ● Systems & services no longer fixed. ● Nomadic... VM's move around w/o Network Admin knowing. ● SPAN ports for IDS? Netflow information for a specific VM? QoS and rate limiting? How is this handled?
  • 194. OpenvSwitch ● Software switch like Cisco Nexus 1000V. ● Distribution agnostic. Plugs right into Linux kernel. ● Reuses existing Linux kernel network subsystems. ● Compatible with traditional userspace tools. ● Free and Open Source - hence the "open"... ;-) ● http://openvswitch.org/
  • 195. Why use OpenvSwitch? ● Why use it in general? ● Why does XCP/XenServer use OpenvSwitch?
  • 196. OpenvSwitch Centralized Management ● Software Defined Networking. Keep data plane, centralize control plane. ● Distributed Virtual Switch Controller (DVSC): ○ OpenFlow ○ OVSDB Management Protocol ● Ensures sFLOW, QoS, SPAN, Security policies follow VM's as they move & migrate between XCP hosts. ● Citrix XenServer DVSC works with XCP.
  • 202. Cross Server Private Networks ● Traditional Approach: ○ Use dedicated NIC's with separate switches. ○ Use a private dedicated non-routed VLAN. ● Management and scalability issues. ● Works for small deployments.
  • 203. Cross Server Private Networks ● New Approach: GRE Tunnels ● GRE Tunnel between each XCP host. ● Build/Teardown as needed. Don't need to waste b/w. ● Administration nightmare? ○ Not if you had some sort of... controller... to manage it for you...? ○ Oh wait! We have one of those!
  • 204. XCP Tunnel PIF ● Special PIF called "tunnel" in XCP. ● Commands: xe tunnel-* ● Placeholder for OpenvSwitch & DVSC to work with.
  • 205. XCP Tunnel PIF
  1. Create new network in XCP:
     xe network-create name-label="Cross Server Private Network"
  2. Create tunnel PIF on each XCP host for use w/ this net:
     xe tunnel-create network-uuid=<uuid> pif-uuid=<uuid>
  3. Add VIF's of VM's to this private network.
  DVSC will handle the setup/teardown of GRE tunnels between XCP hosts automatically as needed.
  • 206. Statistics and Monitoring with Xen Cloud Platform
  • 207. Statistics, Monitoring, Analysis ● Citrix XenCenter ● Existing Solutions (Hyperic, Nagios, Cacti, Observium) ● Programmable Means: ○ API ○ SSH ○ SNMP
  • 208. Citrix XenCenter ● Built in graphical presentation of all XenServer/XCP metrics. ● Live view of current activity. ● Memory allocation per host, per pool. ● Excellent way to get solid overview of XCP deployment. ● VirtualBox/Parallels/Vmware + Windows
  • 212. XCP and Nagios ● XCP == CentOS 5.x (+ Xen + Kernel + XAPI) ● Install NRPE on dom0. ● Monitor just like any other Linux box.
  • 213. XCP and SNMP ● net-snmp installed on XCP. ● Simple steps to enable SNMP: a. Open UDP/161 in /etc/sysconfig/iptables b. Adjust /etc/snmp/snmpd.conf permissions c. chkconfig snmpd on && service snmpd start ● Standard Linux host metrics.
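  Those steps, spelled out as a sketch (the community string and source network are placeholders; XCP's dom0 uses the stock CentOS 5 firewall chain):
  # a. allow SNMP through the dom0 firewall (/etc/sysconfig/iptables, before the final REJECT)
  -A RH-Firewall-1-INPUT -p udp --dport 161 -j ACCEPT
  # b. grant read-only access in /etc/snmp/snmpd.conf
  rocommunity public 192.168.0.0/24
  # c. enable and start the daemon
  chkconfig snmpd on && service snmpd start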
  • 214. Monitoring XCP with the XenAPI ● Linux SNMP and Nagios NRPE only give basics. ● SR usage? Pool utilization? ● VM metrics? VIF/VBD rates? ● All of this information is available.
  • 216. XenAPI and SR Metrics
  >>> import XenAPI
  >>> from pprint import pprint
  >>> session = XenAPI.Session('http://127.0.0.1')
  >>> session.login_with_password('root', 'secret')
  >>> session.xenapi.SR.get_all()
  ['OpaqueRef:18c80a5d-cef6-c2e8-59d1-a03cfbed97e5', 'OpaqueRef:94f13ac8-6d8b-9bc0-2c71-fd29c9636f4e', ...]
  >>> pprint(session.xenapi.SR.get_record('OpaqueRef:18c80a5d-cef6-c2e8-59d1-a03cfbed97e5'))
  • 218. XenAPI and Events
  >>> import XenAPI
  >>> from pprint import pprint
  >>> session = XenAPI.Session('http://127.0.0.1')
  >>> session.login_with_password('root', 'secret')
  >>> session.xenapi.event.register(["*"])
  ''
  >>> session.xenapi.event.next()
  See examples on http://community.citrix.com/
  • 220. Enterprise Cloud Orchestration ● Hypervisor Agnostic* approach to orchestrating your cloud(s). ● Suited for solving multi-tenancy requirements. ● Orchestrate vs Manage? ● I'm not a cloud provider. Why do I care? ○ Traditional approach. ○ Developer delegation
  • 221. IaaS Orchestration & XCP OpenStack http://www.openstack.com CloudStack http://www.cloudstack.org
  • 222. OpenStack Overview ● Rackspace & NASA w/ other major contributors: ○ Intel & AMD ○ Red Hat, Canonical, SUSE ○ Dell, HP, IBM ○ Yahoo! & Cisco ● Hypervisor Support: ○ KVM & QEMU ○ LXC ○ Xen (via libvirt) ○ XenServer, Xen Cloud Platform, XenAPI (Kronos)
  • 223. OpenStack Overview ● Language: Python ● Packages for Ubuntu and RHEL/CentOS (and more) ● MySQL and PostgreSQL (yay!) Database Support ● Larger project than CloudStack, encompassing many more functional areas: ○ Storage (swift, nova volume --> cinder) ○ Networking (nova network, quantum) ○ Load Balancing (Atlas)
  • 224. OpenStack and XCP ● http://wiki.openstack.org/XenServer/GettingStarted ● http://wiki.openstack.org/XenServer/XenXCPAndXenServer ● Optimize for XenDesktop on Installation (EXT vs LVM) ● Plugins for XCP host: /etc/xapi.d/plugins ● Different way of thinking -- the Xen way ○ Run OpenStack services on host/dom0? No! ○ Each XCP host has a dedicated nova VM. ○ OpenStack VM will control XCP host via XenAPI
  • 226. OpenStack and XCP Pools ● XCP Pools / OpenStack Host Aggregates ○ http://wiki.openstack.org/host-aggregates ○ Informs OpenStack that the XCP hosts have a collection of shared resources. ○ Works but incomplete -- e.g. if pool master changes? ○ Recommended that you don't pool your XCP hosts when orchestrating via OpenStack, for now... ● Traditional vs Cloud Workloads
  • 227. OpenStack and XCP Storage ● Optimize for XenDesktop on XCP installation. ○ Local SR uses EXT instead of LVM ● Plugins need raw access to VHD files on host/dom0. ● Can use NFS for instance image storage: ○ Switch default SR to an NFS SR. ○ nova.conf: sr_matching_filter="default-sr:true" ● OpenStack Cinder will use Storage XenMotion
  • 228. CloudStack Overview ● VMOps aka Cloud.com ---> Citrix July 2011 ● Hypervisor Support: ○ Citrix XenServer (thus XCP) ○ KVM ○ VMware vSphere ○ Oracle VM ● Multiple hypervisors in single deployment ● Languages: Java and C
  • 229. CloudStack and XCP ● CloudStack doesn't provide storage -- no nova-volume ● CloudStack uses existing SAN/NAS appliances: ○ Dell Equalogic (iSCSI) ○ NetApp (NFS and iSCSI) ● Primary and Secondary Storage (tiering) ● Supports use of additional XenServer SR's (e.g. FC) instead of NFS/iSCSI.
  • 230. {Open,Cloud}Stack -- Which? ● Depends on your team, experience, and intentions. ● CloudStack: ○ Want a cloud *now*? ○ Very mature and full featured. ○ Integrates well w/ both traditional & cloud workloads. ● OpenStack: ○ Have some time? ○ Easily extendable to do new things (Python). ○ XS/XCP support needs work, but it's getting there.
  • 232. Unit 4 The Future of Xen Update from the Xen.org team
  • 233. Outline
  Xen.org development: Who / What?
  Xen 4.2
  Microsoft, UEFI secure boot, and Win8
  Xen 4.3
  Other activities
  • 234. Xen.org development
  Who develops Xen?
    7 full-time developers from Citrix
    Full-time devs from SuSE, Oracle
    Frequent contributions from Intel, AMD
  What do we develop?
    Xen hypervisor, toolstack
    Linux
    qemu
  • 235. Xen 4.2 features
  pvops dom0 support
  New toolstack: libxl/xl
  cpupools
  New scheduler: credit2
  Memory sharing, page swapping
  Nested virtualization
  Live fail-over (Remus)
  • 236. libxl/xl
  The motivation:
    xend: daemon, Python
    xapi: duplicated low-level code
  The solution:
    libxl: lightweight library for basic tasks
    xl: lightweight, xm-compatible replacement
  • 237. cpupools
  The motivation:
    Service model: rent CPUs, run as many VMs as you want
    Allow customers to use "weight"
  The solution: cpupools
    Pools can be created at run-time
    CPUs added or removed from pools
    Domains assigned to pools
    Each pool has a separate scheduler
  • 238. cpupools, cont'd
  Uses:
    New service model
    Different schedulers
    Stronger isolation
    NUMA-split
  • 239. UEFI secure boot
  Microsoft, UEFI, and the Windows 8 logo
  What that means for Linux
  Fedora's solution
  Ubuntu's solution
  What it means for Xen
  • 240. Xen 4.3
  Performance
  NUMA issues
  *BSD dom0 support
  Memory sharing / hypervisor swap
  ARM servers
  blktap3
  • 241. Other areas of focus Distro integration Doc days
  • 244. Useful Resources and References Community: ● Xen Mailing List: http://www.xen.org/community/ ● Xen Wiki: http://wiki.xen.org ● Xen Blog: http://blog.xen.org Discussion: ● http://www.xen.org/community/xenpapers.html ● Abstracts, slides, and videos from Xen Summits ● http://pcisecuritystandards.org/organization_info/special_interest_groups.php
  • 245. Image Credits ● http://en.wikipedia.org/wiki/File:Tux.png ● http://en.wikipedia.org/wiki/File:Intertec_Superbrain.jpg ● http://wiki.xen.org/wiki/Xen_Overview
  • 246. Thank You! Enjoy the rest of OSCON 2012!
  • 248. Acknowledgments This work is based upon many materials from the 2011 Xen Day Boston slides, by Todd Deshane, Steve Maresca, Josh West, and Patrick F. Wilbur. Portions of this work are derived from the 2010 Xen Training / Tutorial, by Todd Deshane and Patrick F. Wilbur, which is derived from the 2009 Xen Training / Tutorial as updated by Zach Shepherd and Jeanna Matthews from the original version written by Zach Shepherd and Wenjin Hu, originally derived from materials written by Todd Deshane and Patrick F. Wilbur. A mouthful! Portions of this work are derived from Mike McClurg's The Xen Cloud Platform slides from the July 2012 Virtual Build a Cloud Day. Portions are based upon Jeremy Fitzhardinge's Pieces of Xen slides.