Xen Virtualization
 

Presentation Transcript

  • Xen Virtualization -- Niagara Frontier LUG, May 2008 -- Erek Dyskant
  • Virtualization
    • Separation of administrative zones
    • Separation of software failure
    • Consolidation of hardware resources
    • Full utilization of hardware
    • Easier hardware provisioning -- Want a server? You’ve got a server.
    • Excellent test environments
  • What virtualization isn’t
    • Not an HA solution by itself
    • Naïve implementation (illustrated in the slide diagram below):
    • Not suitable for some secure applications
      • Timing attacks against private keys
      • Unknown -- Lots of new code
      • Host OS adds a new point of entry
    • May actually increase complexity
      • Adds Host OSes to manage
      • Adds to total number of points of management
      • Encourages “guerrilla” server projects
    (Slide diagram: two sets of Mail, Web, Directory, and Database servers)
  • Container Virtualization
    • Works at the kernel level, masking processes running on other partitions.
    • All guests share the same filesystem tree.
    • Same kernel on all machines
    • Unprivileged VMs can’t mount drives or change network settings
    • Native speeds, no emulation overhead
    • Any OS crash affects all guests
    • OpenVZ, Virtuozzo, Solaris Containers, FreeBSD Jails, Linux-VServer
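To make the container model concrete, here is a hedged sketch of the workflow with OpenVZ's vzctl (the container ID 101 and the template name are illustrative):

    # create a container from an OS template, then start it
    vzctl create 101 --ostemplate centos-5-x86_64
    vzctl start 101
    # run a command inside the container from the host
    vzctl exec 101 ps aux
    # drop into the container's environment
    vzctl enter 101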
  • Full Virtualization
    • Hardware Virtual Machines
      • VMWare, Xen HVM, KVM, Microsoft VM, Parallels
      • Runs unmodified guests
      • Generally the worst performance, but often acceptable
      • Simulates a BIOS; communicates with VMs through ACPI emulation, BIOS emulation, and sometimes custom drivers
      • Can sometimes virtualize across architectures, although this is out of fashion.
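HVM mode depends on hardware support (Intel VT-x or AMD-V). A quick sanity check on a Linux host, as a sketch:

    # any output means the CPU advertises VT-x (vmx) or AMD-V (svm)
    egrep '(vmx|svm)' /proc/cpuinfo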
  • VMWare Server
    • Very well-developed GUI
    • Decent Performance
    • Excellent Documentation
    • Backed by a single vendor
    • Free Version
      • Very Functional. Easy setup.
      • No server-server communication/failover or supported shared storage
    • Non-free Version
      • Shared Storage, centralized management, automated provisioning.
  • VMWare Server 2
  • Para-virtualization
    • Hypervisor runs on the bare metal. Handles CPU scheduling and memory compartmentalization.
    • Dom0, a modified Linux Kernel, handles networking and block storage for all guests.
      • Dom0 is also privileged to manage the VMs on the system.
    • DomU, the guest OS, sends some requests straight to the hypervisor and others to Dom0.
    • Because the kernel knows it's virtualized, features can be built into it: hot connection/disconnection of resources, friendly shutdown, serial console.
    • Other paravirtualization schemes: Sun Logical Domains, VMware (sometimes)
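The Dom0/DomU split is visible on any running Xen host; illustrative xm list output (names and numbers are made up):

    # xm list
    Name        ID   Mem VCPUs      State   Time(s)
    Domain-0     0   512     2     r-----    845.3
    DomU-1       1   512     2     -b----    112.7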
  • Elements of a Xen VM
    • Virtual Block Device
      • Image file
      • Real block device (either LVM or physical)
    • Network Bridges
      • Routed, terminates at the Dom0
      • Bridged, terminates at the network interface
    • Virtual Framebuffer
      • VNC Server
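As a hedged sketch, those elements map to config-file lines like the following (paths and device names are illustrative; the next slide shows a complete config):

    disk = [ "file:/var/lib/xen/images/guest.img,xvda,w" ]   # image-file block device
    disk = [ "phy:/dev/vg0/guest,xvda,w" ]                   # LVM or physical block device
    vif  = [ "bridge=xenbr0" ]                               # bridged networking
    vfb  = [ "type=vnc,vncunused=1" ]                        # virtual framebuffer over VNC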
  • Example VM Config
    • name = ”DomU-1"
    • maxmem = 512
    • memory = 512
    • vcpus = 2
    • bootloader = "/usr/bin/pygrub"
    • on_poweroff = "destroy"
    • on_reboot = "restart"
    • on_crash = "restart"
    • vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
    • disk = [ "tap:aio:/var/lib/xen/images/Centos5Image.img,xvda,w" ]
    • vif = [ "mac=00:16:3e:79:fd:8d,bridge=xenbr0" ]
  • xm -- Xen Manager
    • Command-line tool, run on the Dom0, for managing VMs.
    • Quick overview of options:
      • console -- attach to a device’s console
      • create -- boot a DomU from a config file
      • destroy -- immediately stop a DomU
      • list -- List running DomUs
      • migrate -- Migrate a DomU to another Dom0
      • pause/unpause -- akin to suspend; TCP connections will time out
      • shutdown -- Tell a DomU to shut down.
      • network-attach/network-detach
      • block-attach/block-detach
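A typical session might look like this sketch (the config path and domain name are illustrative):

    xm create /etc/xen/domu-1.cfg    # boot the guest from its config file
    xm list                          # confirm it is running
    xm console DomU-1                # attach to its console (Ctrl-] detaches)
    xm shutdown DomU-1               # ask the guest to shut down cleanly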
  • Red Hat/CentOS virt-manager
    • Simple Graphical Interface.
    • Basically does what xm does, plus:
      • Built in short-term performance graphing
      • Built in VNC client
    • Quick tour...
  • Main View
  • Create VM
  • Name Machine
  • Choose Method
  • Choose Media Location
  • Networking Config
  • Memory, CPU allocation
  • Confirmation Screen
  • VNC Window
  • Graph View
  • Benchmarks
  • More Benchmarks
  • Xen Live Migration
    • Migrate machines off during upgrades, or to balance load
    • Configure xend (/etc/xen/xend-config.sxp) to allow relocation from other Xen Dom0s -- see the sketch after this list.
    • Machine must reside on shared storage.
    • Must be on the same layer 2 network
    • xm migrate -l Machine dest.ip.addr.ess    # -l requests a live migration
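The relocation settings live in /etc/xen/xend-config.sxp on the receiving Dom0; a minimal sketch (the hosts-allow pattern is illustrative):

    (xend-relocation-server yes)
    (xend-relocation-port 8002)
    (xend-relocation-hosts-allow '^dom0-[0-9]+\\.example\\.com$')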
  • Shared Storage Options
    • NFS
      • Simple hardware failover
      • well-understood configuration
      • Spotty reliability history
    • Block level storage (iscsi or FC)
      • More complex configuration
      • Multipathing
      • Commercial solutions are expensive
      • We’re seeing traction for open iSCSI solutions lately.
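For the NFS option above, the simplest arrangement is mounting the same image store on every Dom0; an illustrative /etc/fstab line (server and export names are made up):

    filer:/vol/xen  /var/lib/xen/images  nfs  rw,hard,intr  0 0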
  • What to Look for In Storage
    • Redundant host connections
    • Snapshotting
    • Replication
    • Sensible Volume Management
    • Thin Provisioning
    • IP-based failover, esp. if x86 based
  • Storage Systems
      • OpenFiler
        • Nice frontend.
        • Replication with DRBD
        • iSCSI with the Linux iscsi-target
      • OpenSolaris/ZFS
        • Thin provisioning
        • Too many ZFS features to list
        • StorageTek AVS -- Replication in many forms
        • Complex configuration
      • NexentaStor
        • ZFS/AVS in Debian.
        • Rapidly Evolving
      • SAN/IQ
        • Failover, storage virtualization, n-way redundancy
        • Expensive and wickedly strict licensing
      • Too many proprietary hardware systems to list
  • Network Segmentation
    • 802.1q VLAN tagging
      • All VLANs operate on the same physical network, but packets carry an extra tag that indicates which network they belong in.
      • Create an interface and a bridge for each VLAN -- a command sketch follows this list.
      • Connect Xen DomUs to their appropriate VLAN
      • Configure the hosts’ switch ports as VLAN trunk ports.
      • Configure a router somewhere; a layer 3 switch is useful here.
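A sketch of building a per-VLAN bridge on the Dom0 with the vconfig and brctl tools of that era (VLAN 10 and the interface names are illustrative):

    modprobe 8021q                 # kernel support for VLAN tagging
    vconfig add eth0 10            # creates eth0.10 for VLAN 10
    ip link set eth0.10 up
    brctl addbr xenbr10            # a bridge dedicated to this VLAN
    brctl addif xenbr10 eth0.10
    ip link set xenbr10 up
    # then, in the DomU config: vif = [ "bridge=xenbr10" ]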
  • Commercial Xens
    • Citrix XenServer
    • Oracle VM
    • VirtualIron
      • Typical Features:
      • Resource QoS
      • Performance trending
      • Physical Machine Failure detection
      • Pretty GUI!
      • API for server provisioning
  • Recovery strategies
    • Mount virtual block device on Dom0
      • losetup /dev/loop0 XenVBlockImage.img (attach the image to a loop device)
      • losetup -a (verify the attachment)
      • kpartx -a /dev/loop0 (map the image’s partitions through device-mapper)
      • pvscan (if using LVM inside the VM)
      • vgchange -a y VolGroup00 (activate the guest’s volume group)
      • mount /dev/mapper/VolGroup00-LogVol00 /mnt/xen
      • chroot /mnt/xen (or whatever recovery steps you take next)
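When finished, tear the stack down in reverse order; a sketch:

    umount /mnt/xen
    vgchange -a n VolGroup00    # deactivate the guest's volume group, if LVM was used
    kpartx -d /dev/loop0        # remove the partition mappings
    losetup -d /dev/loop0       # detach the loop device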
  • Xen Recovery -- cont
    • Boot from recovery CD as HVM
      • disk = [ 'tap:aio:/home/xen/domains/damsel.img,ioemu:hda,w',
      • 'file:/home/jack/knoppix.iso,ioemu:hdc:cdrom,r' ]
      • builder="hvm"
      • extid=0
      • device_model="/usr/lib/xen/bin/qemu-dm"
      • kernel="/usr/lib/xen/boot/hvmloader"
      • boot="d"
      • vnc=1
      • vncunused=1
      • apic=0
      • acpi=1
    • Create a custom Xen-kernel OS image for rescues
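Booting such a rescue config is an ordinary xm create; a sketch (the file name is illustrative):

    xm create /etc/xen/rescue.cfg
    # then point a VNC client at the Dom0 to reach the rescue CD's environment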
  • Pitfalls
    • Failure to segregate network
      • 802.1q and iptables firewalls everywhere
    • Creating Single Points of Failure
      • Make sure that VMs are clustered
      • If they can’t be clustered, have them auto-started on another machine
      • Assess reliability of shared storage
    • Storage Bottlenecks
    • Not planning for extra points of management
      • cfengine, puppet, centralized authentication
    • Less predictable performance modeling
  • Maintaining HA
    • Hardware will fail
    • Individual VMs will crash
    • Cluster Multiple VMs for each application
    • Load Balancers can be VMs too.
  • HA -- Continued
    • Failure detection: make VMs restart on different machines if a host fails
    • Make VMs migrate off a host when you shut it down
    • Build your testing system into the VM scheme.
      • At least one testing system per type of host. Diligently do all changes on that before rolling out.
      • Have at least one development VM per VM cluster.
    • Make sure that networking equipment and storage is redundant too
    • If running web servers, keep a physical web server on hand to serve a “We’re sorry, come back later” page. For mail servers, an independent backup MX.