Xen Cloud Platform by Tim Mackey
  • Welcome to the XenServer Technical Presentation. In this presentation we’ll cover many of the core features of XenServer, with the option of diving a bit deeper into any areas of particular interest.
  • More and more organizations are choosing to host different workloads on different hypervisors, which not only improves the overall performance of their environment but also makes better use of their budget. Over 40% of companies in a recent Info-Tech study said they were using two or more server virtualization vendors within their datacenter, with almost half of those using Citrix and VMware together. The major challenge of this model is that day-to-day management tasks, such as live migration, are ideally completed through one management console. Currently both Citrix and Microsoft can manage each other's VMs as well as VMware's, and VMware is beginning to offer management of Microsoft VMs.
  • Since XenServer is based on the open source Xen project, it’s important to understand how Xen itself works. Xen is a bare-metal hypervisor which directly leverages the virtualization features present in most CPUs from Intel and AMD since approximately 2007. These CPUs feature the Intel VT-x or AMD-V instructions, which allow virtual guests to run without performance-robbing emulation (a quick way to check a host for these extensions is sketched after these notes). When Xen was first developed, the success of VMware ESX was largely based on a series of highly optimized emulation routines. Those routines were needed to address shortcomings in the original x86 instruction set which created obstacles to running multiple general-purpose “protected mode” operating systems such as Windows 2000 in parallel. With Xen, and XenServer, those obstacles were overcome through a combination of the hardware virtualization extensions and paravirtualization. Paravirtualization is a concept in which either the operating system itself, or specific drivers, are modified to become “virtualization aware”. Linux can optionally run fully paravirtualized, while Windows requires both hardware assistance and paravirtualized drivers to run at maximum potential on a hypervisor. These advances spurred early adoption of Xen-based platforms, whose performance outstripped ESX in many critical applications. Eventually VMware released ESXi to leverage hardware virtualization and paravirtualization, but it wasn’t until 2011 and vSphere 5 that ESXi became the only hypervisor for vSphere.
  • This slide shows a blowup of the Xen virtualization engine and the virtualization stack, “Domain 0”, alongside a Windows and a Linux virtual machine. The green arrows show memory and CPU access, which goes through the Xen engine down to the hardware; in many cases Xen will get out of the way of the virtual machine and allow it to go right to the hardware. Xen is a thin layer of software that runs directly on top of the hardware, only around 50,000 lines of code. The other lines show the path of I/O traffic on the server: storage and network I/O connect through a high-performance memory bus in Xen to the Domain 0 environment, and in Domain 0 these requests are sent through standard Linux device drivers to the hardware below.
  • Domain 0 is a Linux VM with higher-priority access to the hardware than the guest operating systems. Domain 0 manages the network and storage I/O of all guest VMs, and because it uses Linux device drivers, a broad range of physical devices is supported.
  • Linux VMs include paravirtualized kernels and drivers. Storage and network resources are accessed through Domain 0, while CPU and memory are accessed through Xen to the hardware. See http://wiki.xen.org/wiki/Mainline_Linux_Kernel_Configs
  • Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0. XenServer is designed to utilize the virtualization capabilities of Intel VT and AMD-V enabled processors. Hardware virtualization enables high-performance virtualization of the Windows kernel without using legacy emulation technology.
  • Since all these use cases depend on a solid datacenter platform, let’s start by exploring the features critical to successful enterprise virtualization.
  • Successful datacenter solutions require an easy-to-use management solution, and XenServer is no different. For XenServer this management solution is called XenCenter. If you’re familiar with vCenter for vSphere, you’ll see a number of common themes. XenCenter is the management console for all XenServer operations, and while there is a powerful CLI and API for XenServer (an illustrative XenAPI sketch follows these notes), the vast majority of customers perform daily management tasks from within XenCenter. These tasks range from starting and stopping VMs and managing core infrastructure such as storage and networks, through to configuring advanced features such as HA, workload placement and alerting. This single pane of glass also allows administrators to directly access the consoles of the virtual machines themselves. As you would expect, there is a fairly granular set of permissions which can be applied, and I’ll cover that topic in just a little bit.
  • What differentiates Live Storage Migration from Live VM Migration is that with Live Storage Migration the storage used for the virtual disks is moved from one storage location to another, while the VM itself may not change virtualization hosts. In XenServer, Live VM Migration is branded XenMotion, and logically Live Storage Migration became Storage XenMotion. With Storage XenMotion, live migration occurs using a shared-nothing architecture, which effectively means that other than having a reliable network connection between source and destination, no other elements of the virtualization infrastructure need be common. What this means is that with Storage XenMotion you can support a large number of storage agility tasks, all from within XenCenter (see the migration sketch after these notes). For example: upgrade a storage array; provide tiered storage arrays; upgrade a pool with VMs on local storage; rebalance VMs between XenServer pools, or CloudStack clusters.
  • One of the key problems facing virtualization admins is the introduction of newer servers into older resource pools. There are several ways vendors have chosen to solve this problem: they can “downgrade” the cluster to a known level (say Pentium Pro or Core 2), disallow mixed CPU pools, or level the pool to the lowest common feature set. The core issue when selecting the correct solution is to understand how workloads actually leverage the CPU of the host. When a guest has direct access to the CPU (in other words there is no emulation shim in place), that guest also has the ability to interrogate the CPU for its capabilities. Once those capabilities are known, the guest can optimize its execution to leverage the most advanced features it finds and thus maximize its performance. The downside is that if the guest is migrated to a host which lacks a given CPU feature, the guest is likely to crash in a spectacular way. Vendors which define a specific processor architecture as the “base” are effectively deciding that feature set in advance, hooking the CPU feature enumeration instruction, and returning that base set of features. The net result could be performance well below what is possible even with the least capable processor in the pool. XenServer takes a different approach: it looks at the feature set capabilities of the CPUs and leverages FlexMigration support within the CPU to create a feature mask (see the CPU feature sketch after these notes). The idea is to ensure that only the specific features present in the newer processor are disabled and that the resource pool runs at its maximum potential. This model ensures that live migrations are completely safe, regardless of the processor architectures, so long as the processors come from the same vendor.
  • The ability to overcommit memory in a hypervisor was born at a time when the ability to overcommit a CPU far outpaced the ability to populate physical memory in a server in a cost-effective manner. The end objective of overcommitting memory is to increase the number of VMs a given host can run. This led to multiple ways of extracting more memory from a virtualization host than is physically present. The four most common approaches are “transparent page sharing”, “memory ballooning”, “page swap” and “memory compression”. While each has the potential to solve part of the problem, using multiple solutions often yields the best outcome. Transparent page sharing seeks to share the 4K memory pages an operating system uses to store its read-only code. Memory ballooning introduces a “memory balloon” which appears to consume some of the system memory and effectively shares it between multiple virtual machines. Page swap is nothing more than placing memory pages which haven’t been accessed recently on a disk storage system, and memory compression seeks to compress memory (either swapped or resident) with the goal of creating additional free memory from commonalities in memory between virtual machines. Since this technology has been an evolutionary attempt to solve a specific problem, it stands to reason that several of the approaches offer minimal value in today’s environment. For example, transparent page sharing assumes that the read-only memory pages in an operating system are common across VMs, but the combination of large memory pages with memory page randomization and tainting has rendered its benefits largely ineffective. The same holds true for page swapping, whose performance overhead often far exceeds the benefit. What this means is that the only truly effective solutions today are memory ballooning and memory compression. XenServer currently implements a memory ballooning solution under the feature name “Dynamic Memory Control” (see the memory sketch after these notes). DMC leverages a balloon driver within the XenServer tools to present the guest with a known quantity of memory at system startup, and then modifies the amount of free memory seen by the guest in the event the host experiences memory pressure. It’s important to present the operating system with a known, fixed memory value at system startup, as that’s when the operating system defines key parameters such as cache sizes.
  • As today's hosts get more powerful, they are often tasked with hosting increasing numbers of virtual machines. Only a few years ago server consolidation efforts were generating consolidation ratios of 4:1 or even 8:1; today’s faster processors coupled with greater memory densities can easily support better than a 20:1 consolidation ratio without significantly overcommitting CPUs. This creates significant risk of application failure in the event of a single host failure. High availability within XenServer protects your investment in virtualization by ensuring critical resources are automatically restarted in the event of a host failure, and there are multiple restart options allowing you to precisely define what “critical” means in your environment (see the HA sketch after these notes).
  • When desktop virtualization is the target workload, the correct hypervisor solution is one which not only provides a high-performance platform with features designed to lower overall deployment costs and address critical use cases, but also offers flexibility in VM and host configurations while still delivering a cost-effective VM density. Since this is a classic case where the use case matters, take a look at the Cisco Validated Design for XenDesktop on UCS with XenServer: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_xenserver_ntap.pdf
  • It is through the use of SR-IOV and other cloud optimizations that the NetScaler SDX platform is able to provide its level of throughput, scalability and tenant isolation. The NetScaler SDX is a hardware Application Delivery Controller capable of sustained throughput over 50 Gbps, all powered by a stock cloud-optimized XenServer 6 hypervisor.
  • One of the most obvious comparisons is between vSphere and XenServer. A few years ago vSphere was the clear technical leader, but today the gap has closed considerably, and there are clear differences in overall strategy and market potential. Key areas in which XenServer had lagged, for example live storage migration or advanced network switching, are either being addressed or have already been addressed. Of course there will always be features which XenServer is unlikely to implement, such as serial port aggregation, or platforms it’s unlikely to support, such as legacy Windows operating systems, but for the majority of virtualization tasks both platforms are compelling solutions.
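
The sketches below are illustrative additions, not part of the original presentation. First, the notes above mention that Xen relies on the Intel VT-x / AMD-V hardware extensions. A minimal way to confirm that a host's CPU advertises those extensions is to look for the vmx or svm flags in /proc/cpuinfo (Linux only; run on the physical host, not inside a guest):

    def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
        """Return 'vmx' (Intel VT-x), 'svm' (AMD-V), or None if neither flag is present."""
        flags = set()
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        if "vmx" in flags:
            return "vmx"
        if "svm" in flags:
            return "svm"
        return None

    print(hw_virt_support() or "no hardware virtualization extensions reported")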
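
The XenCenter notes mention that XenServer/XCP also exposes a full API. A minimal sketch of driving that API from Python is shown below, assuming the XenAPI bindings that ship with XenServer/XCP are installed; the host address, credentials and VM name are placeholders:

    import XenAPI  # Python bindings shipped with XenServer / XCP

    session = XenAPI.Session("https://xenserver-host")       # placeholder address
    session.xenapi.login_with_password("root", "password")   # placeholder credentials
    try:
        # List every real (non-template, non-dom0) VM and its power state.
        for ref, rec in session.xenapi.VM.get_all_records().items():
            if rec["is_a_template"] or rec["is_control_domain"]:
                continue
            print(rec["name_label"], rec["power_state"])

        # Start a halted VM by name (hypothetical VM name).
        vm = session.xenapi.VM.get_by_name_label("demo-vm")[0]
        if session.xenapi.VM.get_power_state(vm) == "Halted":
            session.xenapi.VM.start(vm, False, True)   # start_paused=False, force=True
    finally:
        session.xenapi.session.logout()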
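
For XenMotion and Storage XenMotion, the same API exposes migration calls. The sketch below shows an intra-pool live migration with VM.pool_migrate; cross-pool and storage moves use VM.migrate_send paired with host.migrate_receive, whose exact arguments vary by release, so treat this as an outline to verify against your XenAPI version (names are placeholders):

    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm   = session.xenapi.VM.get_by_name_label("demo-vm")[0]        # placeholder VM
        host = session.xenapi.host.get_by_name_label("xenserver-2")[0]  # placeholder host

        # Live-migrate the running VM to another host in the same pool.
        session.xenapi.VM.pool_migrate(vm, host, {"live": "true"})
    finally:
        session.xenapi.session.logout()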
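
The heterogeneous pool notes describe masking newer CPU features down to a common set. As a rough way to see what each host reports, the API exposes per-host CPU information; the dictionary keys used below are assumptions based on recent XenServer releases, so inspect the returned data on your own hosts:

    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        for host in session.xenapi.host.get_all():
            info = session.xenapi.host.get_cpu_info(host)   # dict of strings
            print(session.xenapi.host.get_name_label(host),
                  info.get("vendor"),
                  info.get("modelname"),
                  info.get("features"))   # feature mask used when levelling the pool
    finally:
        session.xenapi.session.logout()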
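
Dynamic Memory Control is normally configured from XenCenter, but the same dynamic range can be set through the API. A minimal sketch follows; XenAPI passes 64-bit memory values as strings, the range must sit inside the VM's static minimum/maximum, and the VM name and sizes are placeholders:

    import XenAPI

    GiB = 1024 ** 3

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = session.xenapi.VM.get_by_name_label("demo-vm")[0]   # placeholder VM
        # Let DMC balloon the guest between 1 GiB and 4 GiB under host memory pressure.
        session.xenapi.VM.set_memory_dynamic_range(vm, str(1 * GiB), str(4 * GiB))
    finally:
        session.xenapi.session.logout()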
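
High availability is likewise driven by a couple of API calls: enable HA on the pool against a heartbeat storage repository, then give each critical VM a restart priority. The sketch below follows the XenServer 6.x calls as documented; the SR and VM names are placeholders, so treat it as an outline rather than a recipe:

    import XenAPI

    session = XenAPI.Session("https://xenserver-host")
    session.xenapi.login_with_password("root", "password")
    try:
        sr = session.xenapi.SR.get_by_name_label("shared-iscsi-sr")[0]  # placeholder SR

        # Enable HA, using the shared SR for heartbeat and metadata.
        session.xenapi.pool.enable_ha([sr], {})

        # Mark a VM as protected so it is restarted after a host failure.
        vm = session.xenapi.VM.get_by_name_label("demo-vm")[0]          # placeholder VM
        session.xenapi.VM.set_ha_restart_priority(vm, "restart")
    finally:
        session.xenapi.session.logout()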

Xen Cloud Platform by Tim Mackey – Presentation Transcript

  • Xen Cloud Platform – Build a Cloud Day, May 2013
  • What is XCP? XCP = Xen Cloud Platform, the open source version of Citrix’s XenServer, announced in 2009. Built from XenServer until XCP 1.5; XenServer 6.1 is built from XCP 1.6. Datacenter- and cloud-ready API. Complete virtualization stack: automation, resource pooling, event management.
  • Impact of XenServer • Free XenServer impact • 1,000,000+ downloads • 500,000+ servers activated • 150,000+ unique organizations • > 50% of Fortune 500 run XenServer. “Look closely at Citrix as an alternative to VMware. [Citrix] offers many of the same features as VMware with more flexibility and a lower price.” – Info-Tech Research Group, 2011. Magic Quadrant “Leader” – noted for features and price/performance – Gartner, 2011.
  • Leveraging Multiple Hypervisors (Source: Info-Tech Research Group; N = 71)
    How many server virtualization vendors are you using? 1: 58%, 2: 31%, 3: 7%, 4: 3%, 5: 1%
    What vendor are you using? VMware: 71%, Microsoft: 20%, Citrix: 7%, Red Hat: 2%
    What pair of vendors are you using? VMware/Citrix: 41%, VMware/Microsoft: 32%, VMware/Oracle: 9%, Microsoft/Oracle: 9%, Oracle/Red Hat: 4%, Microsoft/Red Hat: 5%
    The Benefits • Many organizations leverage a combination of VMware, for advanced management of critical workloads and apps, and Citrix or Microsoft for cost savings in non-critical systems. • Microsoft can also bring high performance for Microsoft apps like Exchange or SharePoint. • Citrix XenServer is often utilized to support Citrix’s XenDesktop.
    The Challenges • When possible, ensure one of your solutions can manage the other for day-to-day management tasks like live migration & P2V. • Microsoft & Citrix can manage VMware and each other. • VMware is beginning to offer management of Microsoft VMs.
  • A more open Xen → a stronger XenServer • Xen is a Linux Foundation Collaborative Project ᵒ http://xenproject.org • Supported by industry pillars ᵒ Amazon, Cisco, Google, Intel • Why the Linux Foundation? ᵒ Provide a trusted and neutral governance model • What about XenServer? ᵒ XenServer will see accelerated growth ᵒ XenServer continues to power XenDesktop, CloudPlatform and NetScaler
  • What’s so Great About Xen? • It’s robust ᵒ Native 64-bit hypervisor ᵒ Runs on bare metal ᵒ Directly leverages CPU hardware for virtualization • It’s widely deployed ᵒ Tens of thousands of organizations have deployed Xen • It’s advanced ᵒ Optimized for hardware-assisted virtualization and paravirtualization • It’s trusted ᵒ Open, resilient Xen security framework • It’s part of mainline Linux
  • Understanding Architectural Components: the Xen hypervisor and control domain (dom0) manage physical server resources among virtual machines
  • Understanding the Domain 0 Component: Domain 0 is a compact, specialized Linux VM that manages the network and storage I/O of all guest VMs … and isn’t the XenServer hypervisor
  • Understanding the Linux VM Component: Linux VMs include paravirtualized kernels and drivers, and Xen is part of mainline Linux 3.0
  • Understanding the Windows VM Component: Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0
  • Core Management
  • XenCenter – Simple XCP Management • Single pane of management glass • Manage XenServer hosts ᵒ Start/stop VMs • Manage XenServer resource pools ᵒ Shared storage ᵒ Shared networking • Configure advanced features ᵒ HA, WLB, reporting, alerting • Configure updates
  • Management Architecture Comparison: “The Other Guys” – traditional management architecture with a single backend management server; Xen Cloud Platform – distributed management architecture with a clustered management layer
  • Cloud Centric Features
  • XenMotion Live VM Migration (shared storage)
  • Live Storage XenMotion • Migrates VM disks from any storage type to any other storage type ᵒ Local, DAS, iSCSI, FC • Supports cross-pool migration ᵒ Requires compatible CPUs • Encrypted migration model • Specify management interface for optimal performance (diagram: a live virtual machine and its VDIs moving within a XenServer pool)
  • Heterogeneous Resource Pools – safe live migrations in mixed processor pools (diagram: a virtual machine migrating between XenServer 1 on an older CPU and XenServer 2 on a newer CPU, both exposing the same feature set)
  • Memory Overcommit • Feature name: Dynamic Memory Control • Ability to over-commit RAM resources • VMs operate in a compressed or balanced mode within a set range • Allows memory settings to be adjusted while the VM is running • Can increase the number of VMs per host
  • High Availability • Automatically monitors hosts and VMs • Easily configured within XenCenter • Relies on shared storage ᵒ iSCSI, NFS, HBA • Reports failure capacity for DR planning purposes
  • Cost Effective VM Densities • Supporting VMs with up to: ᵒ 16 vCPU per VM ᵒ 128 GB memory per VM • Supporting hosts with up to: ᵒ 1 TB physical RAM ᵒ 160 logical processors • Yielding up to 150 desktop images per host • Cisco Validated Design for XenServer on UCS
  • Distributed Virtual Network Switching • Virtual Switch ᵒ Open source: www.openvswitch.org ᵒ Provides a rich layer 2 feature set ᵒ Cross-host internal networks ᵒ Rich traffic monitoring options ᵒ OVS 1.4 compliant • DVS Controller ᵒ Virtual appliance ᵒ Web-based GUI ᵒ Can manage multiple pools ᵒ Can exist within the pool it manages
  • Switch Policies and Live Migration (diagram: per-VM switch policies such as “allow all traffic”, “allow SSH on eth0”, “allow HTTP on eth1”, “allow RDP and deny HTTP”, and “allow only SAP traffic, RSPAN to VLAN 26” following each VM as it migrates between hosts)
  • NetScaler SDX – Powered by XenServer • Complete tenant isolation • Complete independence • Partitions within instances • Optimized network: 50+ Gbps • Runs default XenServer 6
  • vSphere 5.1 and XCP 1.6 Quick Comparison
    Feature                               | XCP           | vSphere Edition
    Hypervisor high availability          | Yes           | Standard
    NetFlow                               | Yes           | Enterprise Plus
    Centralized network management        | Yes           | Enterprise Plus
    Distributed virtual network switching | Yes           | Enterprise Plus with Cisco Nexus 1000v
    Storage live migration                | Yes           | Standard
    Serial port aggregation               | Not Available | Standard
    Optimized for desktop workloads       | Yes           | Desktop Edition is repackaged Enterprise Plus
    Licensing                             | Free          | Processor based
  • Getting involved with XCP • Download it and use it • http://lists.xen.org/xen-api • https://github.com/xen-org • https://launchpad.net/xcp
  • Work better. Live better.