XenServer 6.1 Technical Sales Presentation

For more details:
http://condemalagueta.wordpress.com/
Twitter: @Nuno_Alves
Email: nuno.alves@lcs.com.br
LCS website: www.lcs.com.br

  • Welcome to the XenServer Technical Presentation. In this presentation we’ll be covering many of the core features of XenServer, and we’ll have the option of diving a bit deeper in areas which you may be interested in.
  • For those of you unfamiliar with XenServer, XenServer is a bare metal hypervisor which directly competes with vSphere, Hyper-V and KVM. It is derived from the open source Xen project, and has been in active development for over six years. In this section we’ll cover the core architectural items of Xen based deployments.
  • Since XenServer is based on the open source Xen project, it’s important to understand how Xen itself works. Xen is a bare metal hypervisor which directly leverages the virtualization features present in most Intel and AMD CPUs shipped since approximately 2007. These CPUs provide the Intel VT-x or AMD-V extensions, which allow virtual guests to run without performance-robbing emulation. When Xen was first developed, the success of VMware ESX was largely based on a series of highly optimized emulation routines. Those routines were needed to address shortcomings in the original x86 instruction set which created obstacles to running multiple general purpose “protected mode” operating systems, such as Windows 2000, in parallel. With Xen, and XenServer, those obstacles were overcome through the use of both hardware virtualization extensions and paravirtualization. Paravirtualization is a concept in which either the operating system itself, or specific drivers, are modified to become “virtualization aware”. Linux can optionally run fully paravirtualized, while Windows requires both hardware assistance and paravirtualized drivers to run at maximum potential on a hypervisor. These advances spurred early adoption of Xen based platforms, whose performance outstripped ESX in many critical applications. Eventually VMware released ESXi to leverage hardware virtualization and paravirtualization, but it wasn’t until 2011 and vSphere 5 that ESXi became the only hypervisor for vSphere. A quick check for these CPU extensions is sketched below.
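As a quick sanity check of the hardware-assistance point above, the sketch below (an illustrative helper, not part of XenServer) looks for the "vmx" (Intel VT-x) and "svm" (AMD-V) flags Linux exposes in /proc/cpuinfo; XenServer performs its own detection during installation.

```python
# Illustrative helper: report whether the host CPU advertises hardware
# virtualization extensions ("vmx" = Intel VT-x, "svm" = AMD-V).
def hw_virt_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        text = f.read()
    return {flag for flag in ("vmx", "svm") if flag in text}

if __name__ == "__main__":
    flags = hw_virt_flags()
    if flags:
        print("Hardware virtualization flags found:", ", ".join(sorted(flags)))
    else:
        print("No VT-x/AMD-V flags found (or virtualization is disabled in the BIOS)")
```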
  • This slide shows a blowup of the Xen virtualization engine and the virtualization stack (“Domain 0”) alongside a Windows and a Linux virtual machine. The green arrows show memory and CPU access, which goes through the Xen engine down to the hardware; in many cases Xen will get out of the way of the virtual machine and allow it to go right to the hardware. Xen is a thin layer of software that runs right on top of the hardware, at only around 50,000 lines of code. The other lines show the path of I/O traffic on the server: storage and network I/O connect through a high performance memory bus in Xen to the Domain 0 environment, where the requests are sent through standard Linux device drivers to the hardware below.
  • Domain 0 is a Linux VM with higher priority to the hardware than the guest operating systems. Domain 0 manages the network and storage I/O of all guest VMs, and because it uses Linux device drivers, a broad range of physical devices are supported
  • Linux VMs include paravirtualized kernels and drivers. Storage and network resources are accessed through Domain 0, while CPU and memory are accessed through Xen to the hardware. See http://wiki.xen.org/wiki/Mainline_Linux_Kernel_Configs. A quick in-guest check is sketched below.
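A complementary check from inside a Linux guest, sketched below and purely illustrative: on paravirtualization-aware kernels the sysfs node /sys/hypervisor/type reports "xen" when running on the Xen hypervisor.

```python
# Illustrative in-guest check: /sys/hypervisor/type reads "xen" when the
# Linux kernel is running on top of the Xen hypervisor.
def running_on_xen(path="/sys/hypervisor/type"):
    try:
        with open(path) as f:
            return f.read().strip() == "xen"
    except OSError:
        return False  # node absent: most likely not a Xen guest

print("Running on Xen:", running_on_xen())
```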
  • Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0. XenServer is designed to utilize the virtualization capabilities of Intel VT and AMD-V enabled processors. Hardware virtualization enables high performance virtualization of the Windows kernel without using legacy emulation technology
  • XenServer is designed to address the virtualization needs of three critical markets. Within the enterprise data center, XenServer solves the traditional server virtualization objectives of server consolidation and hardware independence while providing a high performance platform with a very straightforward management model. Since XenServer is a Citrix product, it only stands to reason that it can draw upon the vast experience Citrix has in optimizing the desktop experience and provide optimizations specific to desktop workloads. Lastly, with the emergence of mainstream cloud infrastructures, XenServer can draw upon the heritage of Amazon Web Services and Rackspace to provide a highly optimized platform for cloud deployments of any scale.
  • Since all these use cases depend on a solid data center platform, let’s start by exploring the features critical to successful enterprise virtualization
  • Successful datacenter solutions require an easy to use management solution, and XenServer is no different. For XenServer this management solution is called XenCenter. If you’re familiar with vCenter for vSphere, you’ll see a number of common themes. XenCenter is the management console for all XenServer operations, and while there is a powerful CLI and API for XenServer, the vast majority of customers perform daily management tasks from within XenCenter. These tasks range from starting and stopping VMs and managing the core infrastructure such as storage and networks, through to configuring advanced features such as HA, workload placement and alerting. This single pane of glass also allows administrators to directly access the consoles of the virtual machines themselves. As you would expect, there is a fairly granular set of permissions which can be applied, and I’ll cover that topic in just a little bit.
  • Of course any management solution which doesn’t have role based administration isn’t ready for the modern enterprise. XenServer fully supports granular access to objects and through the distributed management model ensures that access is uniformly applied across resource pools regardless of access method. In other words, the access available from within XenCenter is exactly the same access available via CLI or through API calls.
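To make that CLI/API parity concrete, here is a minimal sketch using the XenAPI Python bindings that ship with the XenServer SDK; the host address and credentials are placeholders. Any role-based restrictions applied to the logged-in account apply to these calls exactly as they would in XenCenter.

```python
# Minimal sketch (XenAPI Python bindings from the XenServer SDK): log in and
# list the VMs the account is allowed to see. Host and credentials are
# placeholders.
import XenAPI

session = XenAPI.Session("https://xenserver-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    for ref, rec in session.xenapi.VM.get_all_records().items():
        # Skip templates and the control domain (dom0).
        if not rec["is_a_template"] and not rec["is_control_domain"]:
            print(rec["name_label"], rec["power_state"])
finally:
    session.xenapi.session.logout()
```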
  • What differentiates Live Storage Migration from Live VM Migration is that with Live Storage Migration the storage used for the virtual disks is moved from one storage location to another, while the VM itself may not change virtualization hosts. In XenServer, Live VM Migration is branded XenMotion, and logically Live Storage Migration became Storage XenMotion. With Storage XenMotion, live migration occurs using a shared nothing architecture, which effectively means that other than a reliable network connection between source and destination, no other elements of the virtualization infrastructure need be common. What this means is that Storage XenMotion supports a large number of storage agility tasks, all from within XenCenter. For example: upgrade a storage array, provide tiered storage arrays, upgrade a pool with VMs on local storage, or rebalance VMs between XenServer pools or CloudStack clusters. A hedged API sketch follows below.
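A hedged sketch of the API flow, based on the VM.migrate_send / host.migrate_receive calls that back Storage XenMotion in XenServer 6.1; all names, refs and credentials are placeholders, and the exact call semantics should be checked against the 6.1 SDK documentation.

```python
# Hedged Storage XenMotion sketch: the destination pool issues a receive
# token, then the source pool calls VM.migrate_send with a VDI -> destination
# SR map. Placeholders throughout; verify against the XenServer 6.1 SDK.
import XenAPI

src = XenAPI.Session("https://source-pool-master.example.com")
src.xenapi.login_with_password("root", "password")
dst = XenAPI.Session("https://dest-pool-master.example.com")
dst.xenapi.login_with_password("root", "password")

vm = src.xenapi.VM.get_by_name_label("web01")[0]
dest_host = dst.xenapi.host.get_by_name_label("xs-host-2")[0]
dest_net = dst.xenapi.network.get_by_name_label("Pool-wide network associated with eth0")[0]
dest_sr = dst.xenapi.SR.get_by_name_label("Local storage")[0]

# Token describing where and how the destination will receive the VM.
token = dst.xenapi.host.migrate_receive(dest_host, dest_net, {})

# Map every non-empty virtual disk onto the destination SR (shared nothing).
vdi_map = {}
for vbd in src.xenapi.VM.get_VBDs(vm):
    if not src.xenapi.VBD.get_empty(vbd):
        vdi_map[src.xenapi.VBD.get_VDI(vbd)] = dest_sr

src.xenapi.VM.migrate_send(vm, token, True, vdi_map, {}, {})

src.xenapi.session.logout()
dst.xenapi.session.logout()
```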
  • One of the key problems facing virtualization admins is the introduction of newer servers into older resource pools. There are several ways vendors have chosen to solve this problem: they can either “downgrade” the cluster to a known level (say Pentium Pro or Core 2), disallow mixed CPU pools, or level the pool to the lowest common feature set. The core issue when selecting the correct solution is to understand how workloads actually leverage the CPU of the host. When a guest has direct access to the CPU (in other words, there is no emulation shim in place), that guest also has the ability to interrogate the CPU for its capabilities. Once those capabilities are known, the guest can optimize its execution to leverage the most advanced features it finds and thus maximize its performance. The downside is that if the guest is migrated to a host which lacks a given CPU feature, the guest is likely to crash in a spectacular way. Vendors which define a specific processor architecture as the “base” are effectively deciding the feature set in advance and then hooking the CPUID feature query to return that base set of features. The net result can be performance well below that possible with the “least capable” processor in the pool. XenServer takes a different approach: it looks at the feature set capabilities of the CPUs and leverages the FlexMigration instruction set within the CPU to create a feature mask. The idea is to ensure that only the specific features present in the newer processor are disabled and that the resource pool runs at its maximum potential. This model ensures that live migrations are completely safe regardless of processor architecture, so long as the processors come from the same vendor. The masking idea is sketched below.
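Conceptually the mask reduces to a bitwise AND across the hosts' CPU feature words; the sketch below is purely illustrative (hypothetical feature bits, not the actual FlexMigration mask format).

```python
# Illustrative only: the safe feature set for a mixed pool is the bitwise AND
# of every host's CPUID feature word, so bits present only on newer CPUs are
# masked off before guests can see them.
def pool_feature_mask(host_feature_words):
    mask = ~0  # start with all bits set
    for word in host_feature_words:
        mask &= word
    return mask

older_cpu = 0b10110111  # hypothetical feature bits of the older host
newer_cpu = 0b11111111  # hypothetical feature bits of the newer host
print(bin(pool_feature_mask([older_cpu, newer_cpu])))  # -> 0b10110111
```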
  • The ability to overcommit memory in a hypervisor was born at a time when the ability to overcommit a CPU far outpaced the ability to populate physical memory in a server in a cost effective manner. The objective of overcommitting memory is to increase the number of VMs a given host can run, and this led to multiple ways of extracting more memory from a virtualization host than is physically present. The four most common approaches are “transparent page sharing”, “memory ballooning”, “page swap” and “memory compression”. While each has the potential to solve part of the problem, using multiple solutions often yielded the best outcome. Transparent page sharing seeks to share the 4k memory pages an operating system uses to store its read-only code. Memory ballooning introduces a “memory balloon” which appears to consume some of the system memory and effectively shares it between multiple virtual machines. Page swap is nothing more than placing memory pages which haven’t been accessed recently on a disk storage system, and memory compression seeks to compress memory (either swapped or resident) with the goal of creating additional free memory from commonalities in memory between virtual machines. Since this technology has been an evolutionary attempt to solve a specific problem, it stands to reason that several of the approaches offer minimal value in today’s environment. For example, transparent page sharing assumes that the read-only memory pages in an operating system are common across VMs, but the combination of large memory pages, memory page randomization and tainting has rendered its benefits largely ineffective. The same holds true for page swapping, whose performance overhead often far exceeds the benefit. The only truly effective solutions today are memory ballooning and memory compression. XenServer implements a memory ballooning solution under the feature name “Dynamic Memory Control”. DMC leverages a balloon driver within the XenServer tools to present the guest with a known quantity of memory at system startup, and then modifies the amount of free memory seen by the guest in the event the host experiences memory pressure. It’s important to present the operating system with a known, fixed memory value at startup, because that’s when the operating system defines key parameters such as cache sizes. A hedged API sketch for configuring DMC follows.
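A hedged sketch of configuring DMC through the XenAPI bindings; names and credentials are placeholders, and 64-bit memory values are passed as strings because of the XML-RPC transport.

```python
# Hedged sketch: set a Dynamic Memory Control range for a VM via XenAPI.
# Placeholder host, credentials and VM name; values are bytes, passed as
# strings (64-bit integers over XML-RPC).
import XenAPI

GiB = 1024 ** 3
session = XenAPI.Session("https://xenserver-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("app01")[0]
    # The guest boots seeing its startup allocation; under host memory
    # pressure the balloon driver can squeeze it toward the dynamic minimum.
    session.xenapi.VM.set_memory_dynamic_range(vm, str(2 * GiB), str(4 * GiB))
finally:
    session.xenapi.session.logout()
```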
  • Managing a single virtual machine at a time works perfectly fine when you’re evaluating a hypervisor, or when you’re a small shop, but eventually you’re going to want to manage applications which span a group of servers as a single item. Within XenServer, this is accomplished using a vApp. At its highest level, a vApp is a container which includes one or more VMs and their associated settings. This container is manageable using all the standard XenServer management options, and importantly can participate in HA and disaster recovery planning as well as backup export operations.
  • VM Protection & Recovery. Goal: provide a way to automatically protect VM memory and disk against failures. Snapshot types: disk only; disk and memory. Snapshot frequency: hourly, daily, or weekly (multiple days), with a configurable start time. Snapshot retention: configurable (1-10). Archive frequency: after each snapshot, daily, or weekly (multiple days), with a configurable start time. Archive location: CIFS or NFS, with compressed export.
  • As today's hosts get more powerful, they are often tasked with hosting increasing numbers of virtual machines. Only a few years ago, server consolidation efforts were generating consolidation ratios of 4:1 or 8:1; today’s faster processors coupled with greater memory densities can easily support over a 20:1 consolidation ratio without significantly overcommitting CPUs. This creates significant risk of application failure in the event of a single host failure. High availability within XenServer protects your investment in virtualization by ensuring critical resources are automatically restarted in the event of a host failure. There are multiple restart options allowing you to precisely define what critical means in your environment. A minimal configuration sketch follows.
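A minimal configuration sketch, assuming the XenAPI pool.enable_ha and VM.set_ha_restart_priority calls; the heartbeat SR, VM name and priority strings are placeholders, so check the accepted values against the 6.1 SDK.

```python
# Hedged sketch: enable HA on a pool and protect one VM. Placeholders for
# host, credentials, SR and VM names.
import XenAPI

session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    heartbeat_sr = session.xenapi.SR.get_by_name_label("Shared iSCSI SR")[0]
    session.xenapi.pool.enable_ha([heartbeat_sr], {})

    vm = session.xenapi.VM.get_by_name_label("critical-app")[0]
    # "restart" = always restart; "best-effort" roughly corresponds to the
    # "restart if possible" option described in these notes.
    session.xenapi.VM.set_ha_restart_priority(vm, "restart")
finally:
    session.xenapi.session.logout()
```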
  • The features we’ve just covered form the foundation of a basic virtualized data center. Once your data center operations reach a scale with many admins, or multiple resource pools, some of the advanced data center automation components within XenServer will start to become valuable.
  • When looking at storage usage within virtualized environments, there is typically either a file based or a block based model, but regardless of the model the shared storage is essentially treated as if it were nothing more than a large dumb disk. Advanced features of the storage arrays aren’t used, and storage usage can be inefficient as a result. StorageLink uses specialized adapters which are designed for a given array; these adapters take full advantage of the feature set contained within the storage array. Key advantages of StorageLink over simple block based storage repositories include thin provisioning, deduplication and array based snapshot management. Note: Integrated StorageLink replaces the StorageLink Gateway technology used in previous editions. It uses a LUN-per-VDI model, leverages the array’s “smarts”, and does not require a (virtual) machine for running StorageLink components, which removes a single point of failure. Supported adapters: NetApp, Dell EqualLogic, EMC VNX.
  • When resource pools are small, and the number of VMs under management is similarly low, it’s not unreasonable for a virtualization admin to make acceptable decisions about where to place a given guest for optimal performance. Once the number of VMs reaches a critical point, typically between 20 and 30, placement decisions and interdependencies become so complex that humans aren’t going to place VMs in the most optimal location. This is why VMware and others have implemented resource placement services, and if you’re familiar with vSphere DRS, then XenServer Workload Balancing will look very familiar. Like DRS, WLB takes into account CPU and RAM utilization when determining the best host on which to start or rebalance a VM, but unlike DRS, WLB also includes key IO metrics such as disk reads and writes and network reads and writes in those computations. This allows WLB to ensure IO dominant applications are rarely placed on the same host, and that overall resource pool operations are optimized. In addition to performing workload placement, WLB is also directly integrated into XenServer power management to perform workload consolidation on a scheduled basis. This feature allows underutilized servers to be consolidated onto fewer hosts during evening hours, with the evacuated hosts powered down for the duration; when the morning schedule takes effect, the powered down hosts are automatically restarted and workloads rebalanced for optimal performance. Lastly, WLB incorporates a series of health and status reports suitable for both operations and audit purposes. Pool policy can be scheduled based on time of day needs. When starting guests, an option to “Start on optimal server” is available, and XenServer chooses the most appropriate server based on policy. Users have the ability to override policy, or specify guests or hosts that are excluded from policy (e.g. high-demand applications).
  • Planning for and supporting multi-site disaster recovery within a virtualized environment can be quite complex, but with XenServer’s integrated site recovery option, we’ve taken care of the hard parts. The key to site recovery is that we take care of the VM metadata, while your storage admins take care of the array replication piece. What this means is that every iSCSI or HBA storage solution on our HCL is supported for site recovery operations, provided that it either has built-in replication or can work with third party replication. When site recovery is enabled, the VM metadata corresponding to the VMs and/or vApps you wish to protect is written to the SR containing the disk images for the VMs. When the LUNs are replicated to the secondary site, the metadata required to reconfigure those VMs is also automatically replicated. Because we’re replicating the underlying VM disk images and associated metadata, Integrated Site Recovery can fully support active/active use models if VMs in the secondary site run from different LUNs; note that due to VM replication, active/active requires a minimum of two LUNs. Recovery from failure, failback and testing of failover are accomplished using a wizard within XenCenter. Each step of the wizard validates that the configuration is correct and that the system is in fact in a state of “failure”.
  • XenServer Web Console goals: enable XenServer management from a web based console, and offer VM level delegation so end users can manage their VMs. Web Self Service delivers remote management: IT admins have long wanted a means to manage VMs remotely via a browser based, non-Windows platform. End user self service: WSS also allows IT to delegate routine management tasks to the application/VM owner, which satisfies the more strategic goal of helping IT enable customer self service in the datacenter. Finally, WSS also provides a foundation for future innovation in the areas of web based management, self service and an open cloud director layer for cross-platform management.
  • Performing a snapshot of a running VM using live memory snapshot allows the full state of the VM to be captured during the snapshot, all with minimal impact to the running VM. Additionally, if the Volume Shadow Copy Service (VSS) is enabled within Windows VMs, any services which have registered themselves with VSS will automatically quiesce during the snapshot; examples of such services include SQL Server. XenServer supports parallel branches in snapshot chains, and will automatically coalesce chains if intermediate snapshots are deleted. Additionally, snapshots can be converted to custom templates. A hedged API sketch follows.
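The three snapshot flavours map onto three XenAPI calls; the hedged sketch below uses placeholder names and assumes the guest has the XenServer tools (and, for the quiesced case, the VSS provider) installed.

```python
# Hedged sketch: disk-only snapshot, disk+memory checkpoint, and a
# VSS-quiesced snapshot of a Windows guest. Placeholder host/VM names.
import XenAPI

session = XenAPI.Session("https://xenserver-host.example.com")
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("sql01")[0]
    disk_snap = session.xenapi.VM.snapshot(vm, "sql01-disk-only")
    live_snap = session.xenapi.VM.checkpoint(vm, "sql01-disk-and-memory")
    vss_snap = session.xenapi.VM.snapshot_with_quiesce(vm, "sql01-vss-quiesced")
finally:
    session.xenapi.session.logout()
```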
  • Desktop virtualization is a core topic in many organizations today, and while some vendors would have you believe that a general purpose hypervisor is the correct solution for desktop workloads, the reality is that desktop workloads present a very distinct usage pattern not seen with traditional server based workloads. This is one reason why when you look at Citrix XenDesktop you see it taking advantage of specific features of XenServer which are unique to desktop virtualization. In this section, we’ll cover what the Desktop Optimized XenServer looks like and what specific benefits XenServer has when XenDesktop is used as the desktop broker.
  • Within desktop virtualization there are two distinct classes of users, those who are using general purpose applications and those who are using graphics intensive applications. Supporting the former is readily accomplished using the traditional emulated graphics adapters found in hypervisors, but when you need the full power of a GPU for CAD, graphic design or video processing those emulated adapters are far from sufficient. This is why XenServer implemented the GPU Pass-through feature. With GPU pass-through users requiring high performance graphics can be assigned a dedicated GPU contained within the XenServer host making GPU pass-through the highest performing option on the market.
  • The traditional use case is shown on the left: each blade or workstation needed a GPU installed, and Windows was installed physically. On the right we have the GPU pass-through use case: we can install a number of GPUs in the XenServer host and assign them to virtual machines. The actual savings will be determined by the number of GPUs in the server, or the capabilities of the new “multi-GPU cards” coming from vendors such as NVIDIA.
  • One of the biggest areas of concern when deploying desktop virtualization isn’t the overall license cost, but the impact on shared storage. On paper, if you were considering a deployment requiring 1000 active desktops and assumed an average of 5GB per desktop, and you happened to have space for a 5 TB LUN on an existing storage array, you might be tempted to carve out that LUN and leverage it for the desktop project. Unfortunately, were you to do so you’d quickly find that while you had the space, you might not have the free IOPS to satisfy both the desktop load and whatever pre-existing users were leveraging the SAN. With XenServer, we recognized that this would be a barrier to XenDesktop adoption and implemented IntelliCache to leverage the local storage on the XenServer host as a template cache for the desktop images running on that host.
  • The key to IntelliCache is recognizing that with desktop virtualization the number of unique templates per host is minimal. In fact, to maximize the effect of IntelliCache, target the minimum number of templates at a given host. At the extreme, if the number of active VMs per template requires more than a single host, then dedicating a resource pool per template might be optimal.
  • This slide is hidden: due to citrix.com website optimizations at launch, the calculator will be offline.
  • When desktop virtualization is the target workload, the correct hypervisor solution is one which not only provides a high performance platform, has features designed to lower the overall deployment costs, and addresses critical use cases, but which also offers flexibility in VM and host configurations while still offering a cost effective VM density. Since this is a classic case of “use case matters”, take a look at the Cisco Validated Design for XenDesktop on UCS with XenServer: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_xenserver_ntap.pdf
  • As with desktop virtualization, there are unique characteristics of cloud workloads which make a general purpose hypervisor less than ideal. The vast experience Citrix has gained with cloud operators such as Amazon, Rackspace and SoftLayer over the years has allowed us to develop features which directly address the scalability and serviceability of cloud infrastructure.
  • When dealing with high VM density in cloud hosting, the standard 1Gb NICs of a few years ago simply don’t provide the level of network throughput needed by most hosting providers. This led to 10Gb NICs becoming commonplace, but the hypervisor overhead of processing packets for a 10Gb network artificially limited throughput as well, meaning that even with 10Gb cards, wire speed was hard to attain. SR-IOV is the answer to this type of problem. Through the use of specialized hardware, the physical NIC can be divided into virtual NICs at the hardware layer, and these virtual NICs, commonly referred to as virtual functions, are then presented directly into the hypervisor. The core objective of this PCI standard is to minimize hypervisor overhead in high performance networks. While SR-IOV can provide significant efficiencies with 10Gb networks, there are a few downsides to the technology today, but each of these limitations is being addressed as the technology matures. A small helper for spotting virtual functions is sketched below.
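As a small helper (illustrative only, not a XenServer tool), the sketch below lists the SR-IOV virtual functions visible to dom0 by filtering lspci output; device naming varies by NIC vendor.

```python
# Illustrative helper: list PCI devices whose description contains
# "Virtual Function", i.e. SR-IOV VFs exposed by the physical NIC.
import subprocess

def list_virtual_functions():
    out = subprocess.check_output(["lspci"]).decode()
    return [line for line in out.splitlines() if "Virtual Function" in line]

for vf in list_virtual_functions():
    print(vf)
```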
  • It is through the use of SR-IOV and other cloud optimizations that the NetScaler SDX platform is able to provide the level of throughput, scalability and tenant isolation that it does. The NetScaler SDX is a hardware Application Delivery Controller capable of sustained throughput over 50 Gbps, all powered by a stock Cloud Optimized XenServer 6 hypervisor.
  • As you would expect from Citrix, and our historical relationship with Microsoft, XenServer has a strong integration with System Center
  • CIMOM = Common Information Model Object Manager. XenServer uses the OpenPegasus CIMOM.
  • XenServer is available in a variety of product editions to meet your needs with price points ranging from Free, through “Included with purchase of management framework”, to standalone paid editions.
  • Platinum: integrated DR. Enterprise: adds IntelliCache for improved TCO of XenDesktop deployments, and adds a monitoring pack for System Center Operations Manager, which can now be used to manage XenServer. Advanced: adds automated VM protection and recovery to protect VM data in the event of an outage or failure. XenServer (free): improvements to capacity, networking, upgrading, and converting existing workloads. The “Desktop Optimized XenServer” is available with the purchase of XenDesktop, and the “Cloud Optimized XenServer” is available with the purchase of CloudStack.
  • One of the most obvious comparisons is between vSphere and XenServer. A few years ago vSphere was the clear technical leader, but today the gap has closed considerably and there are clear differences in overall strategy and market potential. Key areas in which XenServer had lagged, for example live migration or advanced network switching, are either being addressed or have already been addressed. Of course there will always be features which XenServer is unlikely to implement, such as serial port aggregation, or platforms it’s unlikely to support, such as legacy Windows operating systems, but for the majority of virtualization tasks both platforms are compelling solutions.
  • Platinum Edition: data protection and resiliency for enterprise-wide virtual environments. Enterprise Edition: automated, integrated, and production-ready offering for medium to large enterprise deployments. Advanced Edition: highly available and memory optimized virtual infrastructure for improved TCO and host utilization. Free Edition: free, enterprise-ready virtual infrastructure with management tools above and beyond alternatives.
  • More information on Citrix Subscription Advantage: http://www.citrix.com/lang/English/lp/lp_2317284.asp
  • Premier Support: http://www.citrix.com/lang/English/lp/lp_2321822.asp Premier Support Calculator: http://deliver.citrix.com/WWWB1201PREMIERSUPPORTCALCULATORINBOUND.html
  • The single vendor lock-in model only benefits the vendor. Choose the correct hypervisor for your workloads to ensure the best performance as well as extending your IT budget. Use POCs to measure how well each solution performs in your environment so you can truly gauge how much ROI you will get from a given implementation. Support is a valuable asset when deploying any environment, and understanding each vendor’s model will make sure you don’t get stuck with a costly services bill later on. Understand the requirements of each project so you can assess the best tool for the job, and know what features are needed for your applications so you can spend money on costly features wisely.
  • Key items to note: the GPU is attached to a VM at boot time, and stays attached as long as the VM is running. Mixing GPU and non-GPU workloads on a host will maximize VM density. The number of GPUs which can be installed in a host is limited.
  • *Requires the HP Graphics Expansion Blade module. Key items: there is a tight relationship between the host and GPU in this model, and that means a much more limited HCL. In other words, you can’t simply install a series of GPUs into a host and expect it to work; it might, but it might not. There are a lot of moving parts. Current list: http://hcl.xensource.com/GPUPass-throughDeviceList.aspx
  • Pretty much read this slide. It’s important stuff
  • Key items: if you haven’t told the host to use local storage for IntelliCache, it won’t.
  • Key items: the same idea applies when adding a host in Desktop Studio.
  • Key items: as you would expect, we did some testing and found that IntelliCache made a difference. These next three slides go together, and it’s important to pay attention to the vertical scale.
  • Key items: while the best results were achieved with SSDs, this really is a spindle story, so if you have a server which can host a number of high performance rotational disks, you still get a significant benefit from IntelliCache. Live migrating a VM which is backed by IntelliCache can be done, but it does require additional configuration; by default, since the disk is local, live migration won’t work.
  • Just read it
  • While not required for many private clouds, the concept of resource costing, billing and chargeback are core to determining the success of your cloud initiative. Eventually someone is going to be looking for usage stats, or better still capacity planning information. That information is going to be readily available in a solution which was designed to capture deployment details from the start. One important detail to bear in mind is that no billing solution is going to be perfect. Entire products are designed around the whole concept of “billing”, and XenServer isn’t such a product. Our approach is a bit different. It recognizes that there is going to be some requirement for external data (such as costing information), and that this information simply doesn’t belong in a billing system. What we’ve done is provide the billing primitives, and easy SQL data views to access the data. From this framework, custom billing and chargeback models can be developed without encumbering the cloud provisioning system with complex billing requirements.
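Purely as a hypothetical illustration of the split described above, the sketch below pulls raw usage records from an assumed SQL data view and applies a tariff supplied by the external billing side; the DSN, view name (vm_usage_view) and columns are invented placeholders, not a documented schema.

```python
# Hypothetical sketch only: price raw usage records from an assumed SQL view.
# "vm_usage_view", its columns and the DSN are invented placeholders.
import pyodbc

RATE_PER_VCPU_HOUR = 0.05  # assumed tariff, owned by the billing side

conn = pyodbc.connect("DSN=usage_db;UID=report;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT tenant, vm_name, vcpu_hours FROM vm_usage_view")
for tenant, vm_name, vcpu_hours in cursor.fetchall():
    print(tenant, vm_name, round(vcpu_hours * RATE_PER_VCPU_HOUR, 2))
conn.close()
```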
  • Key items: prior to XenServer 6, there was a feature known as Site Recovery Manager. That feature was implemented using the StorageLink Gateway and had a very limited HCL. We removed it and replaced it with Integrated Site Recovery starting in version 6 of XenServer, which allowed us to support any iSCSI or HBA array on the HCL.
  • Persist the following SR information in the SR: name_label, name_description, allocation, UUID. Persist the following VDI information for all VDIs in the SR: name_label, name_description, UUID, is_a_snapshot, snapshot_of, snapshot_time, type, vdi_type, read_only, managed, metadata_of_pool. The metadata is stored in a logical volume called “MGT” in each SR. Writes to this volume are O_DIRECT and block size aligned, so the metadata is always consistent.
  • Shows the entire flow of setup and failover.
  • Core terminology used in this section
  • Explain the problems we’re attempting to address with the new switch. Rich, flexible virtual switching for each host that: guarantees separation and isolation; participates in all standard switching protocol exchanges, just like a physical switch; provides full visibility into and control of the packet forwarding path, by and for multiple VM tenants; provides complete management of all switch features, just like a hardware switch, by and for multiple managing tenants; and is inherently aware of virtualization (VM ACLs are a property of the VM). Multi-tenancy: pooled state from multiple virtual switches in a virtual infrastructure permits the abstraction of a virtual port to be separated from a software virtual switch on a single server; building block of a multi-tenant virtual private data center overlay; preserves network state per VM as VMs migrate between physical servers; permits unified management, visibility into and control of VM traffic from a single point in the infrastructure; permits multi-tenant aware management of the distributed virtual switch; permits per-flow timescale decisions to be made for control of traffic on any virtual port; multi-tenant aware and secure.
  • Key items: access control is exactly the same as firewall rules in a traditional switch. What’s different here is that the definition of a virtual switch becomes analogous to a stack, which allows network admins to define rulesets that apply regardless of what the virtualization admin might change.
  • Key items: QoS with bursting
  • Key items: Doing port mirroring on a VM without DVS requires filtering traffic from other VMs. With DVS, a couple of clicks of a mouse and you’re done.
  • Key items: NetFlow is an industry standard for network monitoring, and DVS handles it out of the box. All you need to do is configure the collector address and you’re done.
  • Key items: Built in to the DVSC is a basic NetFlow collector and analyzer. Good for small installations, but can be disabled for larger enterprises.
  • Key items: DVS gives us jumbo frames. DVS allows us to create private networks using management interfaces, and those private networks are secured using GRE, which means there are no VLAN boundaries to worry about. See http://en.wikipedia.org/wiki/Generic_Routing_Encapsulation
  • Key items: “Restart” is used for the most critical services. “Restart if possible” is used when a restart is desired but the application design will ensure continued operation in the event the service can’t be restarted; this option also provides additional failure “head room”. Non-agile VMs cannot be guaranteed to restart.
  • New in XenServer 5.6: Dynamic Memory Control. Maximize investment in server hardware. Some configuration risk: VMs are not guaranteed to boot. Crashed VMs are automatically placed on best fit servers. Running VMs can be “squeezed” to free up RAM.
  • When dealing with resilient applications and HA, there is always the potential for creating single points of failure which the application deployment guide cautioned against. For example, if you have a SQL Server cluster made up of two nodes, and both of those nodes end up on the same host and that host fails, the resiliency of SQL Server won’t save you. Here’s how to avoid such situations in XenServer by using both HA and WLB: define a host in WLB to not participate in optimization and rebalancing activities; place one node of the SQL Server cluster on that host (node A); place the second node of the SQL Server cluster on any other host (node B); configure HA to protect the second node of the SQL Server cluster using “restart if possible”, but not the first node. Let’s explore the various automatic failure modes: if the host excluded from WLB activities fails, node A fails and does not restart, while node B continues to operate with no downtime. If the host running node B fails, node B will be restarted on any surviving host except for the host excluded by WLB; if the only host with capacity to start node B is the excluded host, then node B won’t be started, otherwise it will be restarted without breaking resiliency.
  • Embed multiple VMs in a single management framework. The package is managed as an entity (e.g. backup). VM start order and delays are contained in the package.
  • Says it all
  • Key items: when you overlay a file system to manage VMs, there are inherent features that file system imposes, and those features might not be compatible with what a given storage array can offer. The core objective of StorageLink is to maximize the potential of a storage array without artificially imposing virtualization concepts upon it.
  • So without StorageLink, you end up asking the storage admins for a LUN, and that LUN ends up being a storage repository in XenServer provisioned as LVM. LVM storage repositories are block based primitives which have the virtual disks contained within them, and while a VM is running, LVM effectively requires that virtual disk to be fully provisioned. Obviously, as you add more and more disks, there will come a point when the LUN is full even though the virtual disks themselves might not be fully used. The net result is that additional VM capacity requires a second storage repository, which in turn requires a new LUN.
  • With StorageLink, StorageLink manages the array directly and provisions a LUN for each virtual disk. Since StorageLink has direct access to the array, it can provision the LUNs using key features such as thin-provisioning and thus make more efficient utilization of the array. This model is known as LUN per VDI
  • In addition to LUN provisioning, since StorageLink has direct access to the array, it can also leverage the array’s native APIs to perform snapshots and clones. Without StorageLink, those snapshots live within the provisioned “fat LUN” and compete for storage space with the primary virtual disks. StorageLink effectively frees the snapshot mechanism to leverage the entire space of the array.
  • StorageLink uses an adapter based architecture where the XenServer host and control domain have a StorageLink Gateway bridge into which the adapters plug. Legacy NetApp and Dell EqualLogic adapters are still in the code, but mainly for users upgrading to XenServer 6 who are using the legacy adapter today. New SRs created from XenCenter will use the new integrated StorageLink adapter. Integrated StorageLink (iSL) supports NetApp, Dell EqualLogic and EMC VNX arrays.
  • The primary driving force behind SRIOV support is that with the advent of 10Gb ethernet, the existing PV driver model simply can’t sustain full throughput on these cards. So while 1Gb line rate is possible, dom0 saturation prevents 10Gb from attaining line rate.
  • Taking a bit of a step back, we see that the solution itself requires more than just SR-IOV, but rather a series of enhancements. Starting with VMDq, we create separate RX and TX queue pairs for each VM: the device has multiple RX queues; one RX queue is dedicated to a particular guest; the device is programmed to demultiplex incoming packets to the dedicated queue using the guest MAC address; and RX descriptors point to guest memory, so the device places received packets directly into guest memory, avoiding a data copy. With direct IO (VT-d) we can map the IO directly into a guest VM, which allows line rate to be attained on 10Gb. Moving past VT-d, we have SR-IOV, which carves virtual NICs out of the physical NIC to form virtual functions. Each virtual function can be mapped into a VM, and more importantly, since each VM can itself be performing high levels of IO, the aggregate rates can be extended further. This is precisely how the NetScaler SDX attains its high throughput using standard XenServer.
  • When looking at the key objectives of virtualization and hardware, you can see that direct hardware access has historically provided limited scalability due to the inability of most devices to natively share access. With SRIOV, this scalability limitation is largely overcome.
  • Of course, since you’re mapping a dedicated hardware resource to a VM, you’ve now prevented it from participating in live migration. With XenServer 6 we’ve introduced experimental support for Solarflare cards supporting SR-IOV with live migration. By default the guest VM will use the “fast path” for network traffic; however, a regular VIF backup path is available and the VM will fall back to this path during migration to a different host. If a Solarflare SR-IOV adapter is available on the target host, the guest will switch back to the “fast path” again after migration.

XenServer 6.1 Technical Sales Presentation Transcript

  • 1. XenServer 6.1 Technical Overview, September 2012
  • 2. What is XenServer?
  • 3. What’s so Great About Xen?• It’s robust ᵒNative 64-bit hypervisor ᵒRuns on bare metal ᵒDirectly leverages CPU hardware for virtualization• It’s widely-deployed ᵒTens of thousands of organizations have deployed Xen• It’s advanced ᵒOptimized for hardware-assisted virtualization and paravirtualization• It’s trusted ᵒOpen, resilient Xen security framework• It’s part of mainline Linux © 2012 Citrix | Confidential – Do Not Distribute
  • 4. Understanding Architectural Components: The Xen hypervisor and control domain (dom0) manage physical server resources among virtual machines. © 2012 Citrix | Confidential – Do Not Distribute
  • 5. Understanding the Domain 0 Component: Domain 0 is a compact, specialized Linux VM that manages the network and storage I/O of all guest VMs … and isn’t the XenServer hypervisor. © 2012 Citrix | Confidential – Do Not Distribute
  • 6. Understanding the Linux VM Component: Linux VMs include paravirtualized kernels and drivers, and Xen is part of Mainline Linux 3.0. © 2012 Citrix | Confidential – Do Not Distribute
  • 7. Understanding the Windows VM Component: Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0. © 2012 Citrix | Confidential – Do Not Distribute
  • 8. XenServer Meets All Virtualization Needs ᵒEnterprise Data Center: high performance, resilient virtualization platform; simple deployment and management model; host based licensing to control CAPEX ᵒDesktop Virtualization: optimized for high performance desktop workloads; storage optimizations to control VDI CAPEX ᵒCloud Infrastructure: platform for IaaS and Cloud Service Providers; powers the NetScaler SDX platform; fully supports Software Defined Networking © 2012 Citrix | Confidential – Do Not Distribute
  • 9. EnterpriseData Center Virtualization
  • 10. XenCenter – Simple XenServer Management• Single pane of management glass• Manage XenServer hosts ᵒ Start/Stop VMs• Manage XenServer resource pools ᵒ Shared storage ᵒ Shared networking• Configure advanced features ᵒ HA, WLB, Reporting, Alerting• Configure updates © 2012 Citrix | Confidential – Do Not Distribute
  • 11. Management Architecture Comparison: “The Other Guys” use a traditional management architecture with a single backend management server; Citrix XenServer uses a distributed management architecture with a clustered management layer. © 2012 Citrix | Confidential – Do Not Distribute
  • 12. Role-Based Administration• Provide user roles with varying permissions • Pool Admin • Pool Operator • VM Power Admin • VM Admin • VM Operator • Read-only• Roles are defined within a Resource Pool• Assigned to Active Directory users, groups• Audit logging via Workload Reports © 2012 Citrix | Confidential – Do Not Distribute
  • 13. XenMotion Live VM Migration Shared Storage More about XenMotion© 2012 Citrix | Confidential – Do Not Distribute
  • 14. Live Storage XenMotion • Migrates VM disks from any storage type to any other storage type ᵒLocal, DAS, iSCSI, FC • Supports cross pool migration ᵒRequires compatible CPUs • Encrypted migration model • Specify management interface for optimal performance (diagram: a live virtual machine and its VDIs moving within a XenServer pool) More about Storage XenMotion © 2012 Citrix | Confidential – Do Not Distribute
  • 15. Heterogeneous Resource Pools: safe live migrations across mixed processor pools (diagram: a virtual machine moving between XenServer 1 with an older CPU and XenServer 2 with a newer CPU, with features 1-4 levelled to a common set). © 2012 Citrix | Confidential – Do Not Distribute
  • 16. Memory Overcommit • Feature name: Dynamic Memory Control • Ability to over-commit RAM resources • VMs operate in a compressed or balanced mode within set range • Allow memory settings to be adjusted while VM is running • Can increase number of VMs per host© 2012 Citrix | Confidential – Do Not Distribute
  • 17. Virtual Appliances (vApp)• Support for “vApps” or Virtual Appliances ᵒOVF definition of Virtual Appliance• vApp contains one or more Virtual Machines• Enables grouping of VMs which can be utilized by ᵒXenCenter ᵒIntegrated Site Recovery ᵒAppliance Import and Export ᵒHA © 2012 Citrix | Confidential – Do Not Distribute
  • 18. Virtual Machine Protection and Recovery• Policy based snapshotting and archiving• Separate scheduling options for snapshot and archive ᵒSnapshot-only or Snapshot and Archive• Policy Configuration ᵒAdd multiple VMs to policy ᵒSearch filter available ᵒVM can only belong to 1 policy ᵒXenCenter or CLI © 2012 Citrix | Confidential – Do Not Distribute
  • 19. High Availability in XenServer • Automatically monitors hosts and VMs • Easily configured within XenCenter • Relies on Shared Storage ᵒiSCSI, NFS, HBA • Reports failure capacity for DR planning purposes More about HA© 2012 Citrix | Confidential – Do Not Distribute
  • 20. Advanced Data CenterAutomation
  • 21. Optimizing Storage – Integrated StorageLink: Virtualization can hinder the linkage between servers and storage, turning expensive storage systems into little more than “dumb disks”. Citrix StorageLink™ technology lets your virtual servers fully leverage all the power of existing storage systems. (Diagram: XenServer hosts, StorageLink, storage.) More about StorageLink © 2012 Citrix | Confidential – Do Not Distribute
  • 22. Workload Placement Services• Feature name: Workload Balancing• Automated guest start-up and management based on defined policy• Guests automatically migrate from one host to another based on resource usage• Power-on/off hosts as needed• Report on utilization of pool resources – by VM, by host, etc.More about WLB © 2012 Citrix | Confidential – Do Not Distribute
  • 23. Integrated Site Recovery• Supports LVM SRs• Replication/mirroring setup outside scope of solution ᵒFollow vendor instructions ᵒBreaking of replication/mirror also manual• Works with every iSCSI and FC array on HCL• Supports active-active DR More about Site Recovery © 2012 Citrix | Confidential – Do Not Distribute
  • 24. Delegated Web Based Administration• Enables: • IT delegation for administrators • VM level administration for end users• Support for multiple pools• Active Directory enabled• XenVNC and RDP console access © 2012 Citrix | Confidential – Do Not Distribute
  • 25. Live Memory Snapshot and Rollback• Live VM snapshot and revert ᵒBoth memory and disk state are captured ᵒOptional quiesce option via VSS provider (Windows guests) ᵒOne-click revert• Snapshot branches ᵒSupport for parallel subsequent checkpoints based on a previous common snapshot © 2012 Citrix | Confidential – Do Not Distribute
  • 26. Desktop Optimized XenServer
  • 27. Supporting High Performance Graphics• Feature name: GPU pass-through• Enables high-end graphics in VDI deployments with HDX 3D Pro• Optimal CAD application support with XenDesktop• More powerful than RemoteFX, virtual GPUs, or other general purpose graphics solutions © 2012 Citrix | Confidential – Do Not Distribute
  • 28. Benefits of GPU Pass-throughWithout GPU pass-through, each user With GPU pass-through, hardware requires their own Blade PC costs are cut up to 75% GPU cards XenServer Host More about GPU Pass Through© 2012 Citrix | Confidential – Do Not Distribute
  • 29. Controlling Shared Storage Costs – IntelliCache• Caching of XenDesktop 5 images• Leverages local storage• Reduce IOPS on shared storage• Supported since XenServer 5.6 SP2 © 2012 Citrix | Confidential – Do Not Distribute
  • 30. IntelliCache Fundamentals (XenDesktop, NFS based storage): 1. Master Image created through XenDesktop MCS 2. VM is configured to use Master Image 3. VM using Master Image is started 4. XenServer creates read cache object on local storage 5. Reads in the VM are done from the local cache 6. Additional reads are done from the SAN when required 7. Writes happen in a VHD child per VM 8. Local “write” cache is deleted when the VM is shut down/restarted 9. Additional VMs will use the same read cache © 2012 Citrix | Confidential – Do Not Distribute
  • 31. Cost Effective VM Densities• Supporting VMs with up to: ᵒ16 vCPU per VM ᵒ128GB Memory per VM• Supporting XenServer hosts with up to: ᵒ1TB Physical RAM ᵒ160 logical processors• Yielding up to 150 Desktop images per host• Included at no cost with all XenDesktop purchases• Cisco Validated Design for XenDesktop on UCS © 2012 Citrix | Confidential – Do Not Distribute
  • 32. Cloud Optimized XenServer
  • 33. Distributed Virtual Network Switching• Virtual Switch VM ᵒOpen source: www.openvswitch.org ᵒProvides a rich layer 2 feature set ᵒCross host internal networks VM ᵒRich traffic monitoring options ᵒovs 1.4 compliant• DVS Controller ᵒVirtual appliance VM ᵒWeb-based GUI VM ᵒCan manage multiple pools ᵒCan exist within pool it manages VM © 2012 Citrix | Confidential – Do Not Distribute
  • 34. Switch Policies and Live Migration (diagram: per-VM policies travel with the VM as it migrates, e.g. Windows VM: allow RDP and deny HTTP; SAP VM: allow only SAP traffic, RSPAN to VLAN 26; Linux VM1: allow all traffic; Linux VM2: allow SSH on eth0 and HTTP on eth1) More about DVSC © 2012 Citrix | Confidential – Do Not Distribute
  • 35. Single Root IO Virtualization (SR-IOV) • PCI specification for direct IO access ᵒHardware supports multiple PCI IDs ᵒPresents multiple virtual NICs from a single NIC • Virtual NICs (virtual functions) presented directly into guests ᵒMinimize hypervisor overhead in high performance networks • Not without downsides ᵒRequires specialized hardware ᵒCan not participate in DVS ᵒDoes not support live migration ᵒLimited number of virtual NICs (diagram: VF drivers in guest VMs bypass the dom0 vSwitch and physical driver) More about SRIOV © 2012 Citrix | Confidential – Do Not Distribute
  • 36. NetScaler SDX – Powered by XenServer• Complete tenant isolation• Complete independence• Partitions within instances• Optimized network: 50+ Gbps• Runs default XenServer 6 © 2012 Citrix | Confidential – Do Not Distribute
  • 37. System Center Integration
  • 38. Support for SCVMM• SCVMM communicates with CIMOM in XenServer which communicates with XAPI• Requires SCVMM 2012• Very easy to setup ᵒDelivered as Integration Suite Supplemental Pack ᵒAdd Resource Pool or host• Secure communication using certificates © 2012 Citrix | Confidential – Do Not Distribute
  • 39. Support for SCOM• Monitor XenServer hosts through System Center Operations Manager• Support for SCOM 2007 R2 and higher• Part of Integration Suite Supplemental Pack• Monitor various host information (considered Linux host) ᵒMemory usage ᵒProcess information ᵒHealth status © 2012 Citrix | Confidential – Do Not Distribute
  • 40. XenServer Editions
  • 41. Summary of Key Features and Packages • Integrated disaster recovery management • Provisioning services for physical and virtual workloads • Dynamic Workload Balancing and Power Management • Web Management Console with Delegated Admin • Monitoring pack for Systems Center Ops Manager • High Availability • Dynamic Memory Control • Shared nothing live storage migration • Resource pooling with shared storage • Centralized management console • No performance restrictions© 2012 Citrix | Confidential – Do Not Distribute
  • 42. vSphere 5.1 and XenServer 6.1 Quick Comparison (Feature: XenServer Edition / vSphere Edition) ᵒHypervisor high availability: Advanced / Standard ᵒNetFlow: Advanced / Enterprise Plus ᵒCentralized network management: Free / Enterprise Plus ᵒDistributed virtual network switching: Advanced / Enterprise Plus with Cisco Nexus 1000v ᵒStorage live migration: Advanced / Standard ᵒSerial port aggregation: Not Available / Standard ᵒNetwork based resource scheduling: Enterprise / Not Available ᵒDisk IO based resource scheduling: Enterprise / Not Available ᵒOptimized for desktop workloads: Yes / Desktop Edition is repackaged Enterprise Plus ᵒLicensing: Host based / Processor based © 2012 Citrix | Confidential – Do Not Distribute
  • 43. XenServer 6.1 – Product Edition Feature Matrix (editions: Free, Advanced, Enterprise, Platinum) ᵒAll editions: 64-bit Xen Hypervisor; Active Directory Integration; VM Conversion Utilities; Live VM Migration with XenMotion™; Multi-Server Management with XenCenter; Management Integration with Systems Center VMM ᵒAdvanced and above: Automated VM Protection and Recovery; Live Storage Migration with Storage XenMotion™; Distributed Virtual Switching; Dynamic Memory Control; High Availability; Performance Reporting and Alerting; Mixed Resource Pools with CPU Masking ᵒEnterprise and above: Dynamic Workload Balancing and Power Management; GPU Pass-Through for Desktop Graphics Processing; IntelliCache™ for XenDesktop Storage Optimization; Live Memory Snapshot and Revert; Provisioning Services for Virtual Servers; Role-Based Administration and Audit Trail; StorageLink™ Advanced Storage Management; Monitoring Pack for Systems Center Ops Manager; Web Management Console with Delegated Admin ᵒPlatinum only: Provisioning Services for Physical Servers; Site Recovery ᵒPrice: Free / $1000 per server / $2500 per server / $5000 per server © 2012 Citrix | Confidential – Do Not Distribute
  • 44. Subscription Advantage: Citrix Subscription Advantage entitles customers to upgrade to the latest software version of their product at no additional charge. Support not included. Renewal categories: ᵒCurrent (active memberships): Renewal SRP ᵒReinstatement (memberships expired 1 through 365 days): Renewal SRP + pro-rated renewal for time expired + 20% fee ᵒRecovery (memberships expired more than 365 days): Recovery SRP. Renewal SRP / Recovery SRP by edition: ᵒXenServer Platinum: $675.00 per server / $2,800.00 per server ᵒXenServer Enterprise: $325.00 per server / $1,400.00 per server ᵒXenServer Advanced: $130.00 per server / $560.00 per server © 2012 Citrix | Confidential – Do Not Distribute
  • 45. Support Options: Premier Support ᵒCost: 7% of license cost (SRP) ᵒProduct Coverage: XenServer Advanced, Enterprise and Platinum ᵒCoverage Hours: 24x7x365 ᵒIncidents: Unlimited ᵒNamed Contacts: Unlimited ᵒType of Access: Phone/Web/Email. Add-on Service Options: ᵒSoftware or Hardware TRM (200 hours/unlimited incidents/1 region): $40,000 ᵒAdditional TRM hours (100 hours): $20,000 ᵒFully Dedicated TRM (1600 hours/unlimited incidents/1 region): $325,000 ᵒOn-site Days (on-site technical support service): $2,000 per day ᵒAssigned Escalation (200 hours/1 region, must have TRM): $16,000 ᵒFully Dedicated Assigned Escalation (1600 hours): $480,000 © 2012 Citrix | Confidential – Do Not Distribute
  • 46. It’s Your Budget … Spend it Wisely ᵒSingle Vendor: vendor lock-in is great for the vendor; beware product lifecycles and tool set changes ᵒROI Can be Manipulated: ROI calculators always show the vendor author as best; use your own numbers ᵒUnderstand Support Model: over buying is costly, get what you need; support call priority with tiered models ᵒUse Correct Tool: some projects have requirements best suited to a specific tool; understand deployment and licensing impact ᵒLeverage Costly Features as Required: blanket purchases benefit only the vendor; chargeback to project for feature requirements © 2012 Citrix | Confidential – Do Not Distribute
  • 47. Work better. Live better.
  • 48. GPU Pass-through Details
  • 49. How GPU Pass-through Works• Identical GPUs in a host auto-create a GPU group• The GPU Group can be assigned to set of VMs – each VM will attach to a GPU at VM boottime• When all GPUs in a group are in use, additional VMs requiring GPUs will not start• GPU and non-GPU VMs can (and should) be mixed on a host• GPU groups are recognized within a pool ᵒIf Server 1, 2, 3 each have GPU type 1, then VMs requiring GPU type 1 can be started on any of those servers © 2012 Citrix | Confidential – Do Not Distribute
  • 50. GPU Pass-through HCL is Server Specific• Server ᵒHP ProLiant WS460c G6 Workstation series* ᵒIBM System x3650 M3 ᵒDell Precision R5500• GPU (1-4 per host) ᵒNVIDIA Quadro 2000, 4000, 5000, 6000 ᵒNVIDIA Tesla M2070-Q• Support for Windows guests only• Important: Combinations of servers + GPUs must be tested as a pair © 2012 Citrix | Confidential – Do Not Distribute
  • 51. Limitations of GPU Pass-through• GPU Pass-through binds the VM to host for duration of session ᵒRestricts XenMotion and WLB• Multiple GPU types can exist in a single server ᵒE.g. high performance and mid performance GPUs• VNC will be disabled, so RDP is required• Fully supported for XenDesktop, best effort for other windows workloads ᵒNot supported for Linux guests• HCL is very important © 2012 Citrix | Confidential – Do Not Distribute
  • 52. IntelliCache Details
  • 53. Enabling IntelliCache on XenServer Hosts• IntelliCache requires local EXT3 storage, to be selected during XenServer installation• If this is selected during installation the host is automatically enabled for IntelliCache• Manual steps in Admin guide © 2012 Citrix | Confidential – Do Not Distribute
  • 54. Enabling IntelliCache in XenDesktop• http://support.citrix.com/article/CTX129052• Use IntelliCache checkbox when adding a host in Desktop Studio• Supported from XenDesktop 5 FP1 © 2012 Citrix | Confidential – Do Not Distribute
  • 55. IOPS – 1000 Users – No IntelliCache (chart: NFS read and write ops over a ~45-minute boot storm, y-axis scaled to 18,000 NFS ops) © 2012 Citrix | Confidential – Do Not Distribute
  • 56. IOPS – 1000 Users – Cold Cache Boot (chart: NFS read and write ops over a ~41-minute boot storm, y-axis scaled to 3,000 NFS ops) © 2012 Citrix | Confidential – Do Not Distribute
  • 57. IOPS – 1000 Users – Hot Cache Boot (chart: NFS read and write ops over a ~45-minute boot storm, y-axis scaled to 35 NFS ops) © 2012 Citrix | Confidential – Do Not Distribute
  • 58. Limitations of IntelliCache• Best results achieved with local SSD drives ᵒSAS and SATA supported, but spinning disks are slower• XenMotion and WLB restrictions (pooled images)• Best-practice local cache sizing (worked example below) ᵒAssumes ~50% cache usage per user plus a daily log-off ᵒCache size = [real size of master image] + [# users per server] * [size of master image] * 0.5 ᵒThe cache disk size may vary according to the VM lifecycle definition (reboot cycle) © 2012 Citrix | Confidential – Do Not Distribute
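A quick worked example of that sizing rule; the 20 GB image and 100 VMs per host are illustrative numbers only, not recommendations.

```python
# IntelliCache local cache sizing, following the rule of thumb above
master_image_gb = 20      # real size of the master (gold) image
users_per_server = 100    # VMs expected per XenServer host
cache_factor = 0.5        # ~50% cache usage per user, assuming a daily log-off

cache_disk_gb = master_image_gb + users_per_server * master_image_gb * cache_factor
print(f"Plan for roughly {cache_disk_gb:.0f} GB of local cache")  # 20 + 100*20*0.5 = 1020 GB
```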
  • 59. IntelliCache Conclusions• Dramatic reduction of I/O for pooled desktops• Significant reduction of I/O for assigned desktops ᵒStill need IOPS for write traffic ᵒLocal write cache benefits• Storage investment much lower – and more appropriate• Overall TCO improvement of 15–30%• Continued evolution of features to yield better performance and TCO © 2012 Citrix | Confidential – Do Not Distribute
  • 60. Workload Balancing Details
  • 61. Components• Workload Balancing Components ᵒData Collection Manager service ᵒAnalysis Engine service ᵒWeb Service Host ᵒData Store ᵒXenServer ᵒXenCenter (architecture diagram: XenServer resource pools feed the Data Collection Manager, metrics are kept in the data store, the Analysis Engine generates recommendations, and the Web Service Host delivers them to XenCenter) © 2012 Citrix | Confidential – Do Not Distribute
  • 62. Placement Strategies• Maximize Performance ᵒDefault setting ᵒSpread workload evenly across all physical hosts in a resource pool ᵒThe goal is to minimize CPU, memory, and network pressure for all hosts• Maximize Density ᵒFit as many virtual machines as possible onto a physical host ᵒThe goal is to minimize the number of physical hosts that must be online © 2012 Citrix | Confidential – Do Not Distribute
  • 63. Critical Thresholds• Components included in WLB evaluation: ᵒCPU ᵒMemory ᵒNetwork Read ᵒNetwork Write ᵒDisk Read ᵒDisk Write• An optimization recommendation is triggered when a threshold is reached © 2012 Citrix | Confidential – Do Not Distribute
  • 64. Reports• Pool Health ᵒShows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds• Pool Health History ᵒDisplays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds• Host Health History ᵒSimilar to Pool Health History but filtered by a specific host• Optimization Performance History ᵒShows resource usage before and after executing optimization recommendations © 2012 Citrix | Confidential – Do Not Distribute
  • 65. Reports• Virtual Machine Motion History ᵒProvides information about how often virtual machines moved within a resource pool, including the name of the virtual machine that moved, the number of times it moved, and the physical hosts affected• Optimization Performance History ᵒShows resource usage before and after accepting optimization recommendations• Virtual Machine Performance History ᵒDisplays key performance metrics for all virtual machines that operated on a host during the specified timeframe © 2012 Citrix | Confidential – Do Not Distribute
  • 66. Workload Chargeback Reports• Billing codes and costs• Resources to be charged• Exportable data © 2012 Citrix | Confidential – Do Not Distribute
  • 67. Workload Balancing Virtual Appliance• Ready-to-use WLB virtual appliance• Up and running with WLB in minutes rather than hours• Small-footprint Linux virtual appliance ᵒ~150 MB © 2012 Citrix | Confidential – Do Not Distribute
  • 68. Installation• Download Virtual Appliance• Import Virtual Appliance• Start Virtual Appliance• Initial setup steps ᵒDefine steps• Enable WLB in XenCenter © 2012 Citrix | Confidential – Do Not Distribute
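Once the appliance is running, pointing the pool at it can also be scripted. The sketch below uses the XenAPI pool.initialize_wlb call; the appliance address, port and all credentials are placeholders, the parameter order should be checked against the SDK for your release, and the usual route is simply the Enable WLB dialog in XenCenter.

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # hypothetical pool master
session.xenapi.login_with_password("root", "password")
try:
    pool = session.xenapi.pool.get_all()[0]
    # Register the WLB appliance with the pool, then switch WLB on
    session.xenapi.pool.initialize_wlb(
        "wlb-appliance.example.local:8012",   # WLB appliance address:port (placeholder)
        "wlbuser", "wlbpassword",             # account the pool uses to reach WLB (placeholder)
        "root", "password")                   # account WLB uses to reach the pool (placeholder)
    session.xenapi.pool.set_wlb_enabled(pool, True)
finally:
    session.xenapi.session.logout()
```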
  • 69. Integrated Site Recovery Details
  • 70. Integrated Site Recovery• Replaces StorageLink Gateway Site Recovery• Decoupled from StorageLink adapters• Supports LVM SRs only in this release• Replication/mirroring setup outside scope of solution ᵒFollow vendor instructions ᵒBreaking of replication/mirror also manual• Works with every iSCSI and FC array on HCL• Supports active-active DR © 2012 Citrix | Confidential – Do Not Distribute
  • 71. Feature Set• Integrated in XenServer and XenCenter• Supports failover and failback• Supports grouping and startup order through vApp functionality• Failover pre-checks ᵒPower state of the source VM ᵒDuplicate VMs on the target pool ᵒSR connectivity• Ability to start VMs paused (e.g. for a dry run) © 2012 Citrix | Confidential – Do Not Distribute
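The pre-checks are conceptually simple. The sketch below expresses the same idea against the XenAPI for illustration only; all pool URLs, VM names and credentials are assumptions, and the built-in wizard performs the real checks.

```python
import XenAPI

def failover_prechecks(src_url, dst_url, vm_name, creds=("root", "password")):
    """Illustrative version of the Integrated Site Recovery failover pre-checks."""
    src = XenAPI.Session(src_url); src.xenapi.login_with_password(*creds)
    dst = XenAPI.Session(dst_url); dst.xenapi.login_with_password(*creds)
    try:
        problems = []
        # 1. The source VM should not still be running at the protected site
        for vm in src.xenapi.VM.get_by_name_label(vm_name):
            if src.xenapi.VM.get_power_state(vm) == "Running":
                problems.append("source VM is still running")
        # 2. No duplicate VM with the same name on the recovery pool
        if dst.xenapi.VM.get_by_name_label(vm_name):
            problems.append("duplicate VM already exists on target pool")
        # 3. Replicated SRs must be plugged (PBDs attached) on the recovery pool
        for pbd in dst.xenapi.PBD.get_all():
            if not dst.xenapi.PBD.get_currently_attached(pbd):
                problems.append("an SR on the target pool is not attached")
        return problems
    finally:
        src.xenapi.session.logout()
        dst.xenapi.session.logout()

print(failover_prechecks("https://prod-pool", "https://dr-pool", "erp-app-01"))
```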
  • 72. How it Works• Depends on “Portable SR” technology ᵒDifferent from the metadata backup/restore functionality• Creates a logical volume on the SR during setup• The logical volume contains ᵒSR metadata ᵒVDI metadata for all VDIs stored on the SR• The metadata is read during failover via sr-probe © 2012 Citrix | Confidential – Do Not Distribute
  • 73. Integrated Site Recovery - Screenshots© 2012 Citrix | Confidential – Do Not Distribute
  • 74. Distributed Virtual Switch Details
  • 75. Terminology• OpenFlow ᵒAn open standard that separates the control and data paths for switching devices• OpenFlow switch ᵒCan be physical or virtual ᵒIncludes packet processing and remote configuration/control support via OpenFlow• Open vSwitch ᵒAn open source, Linux-based implementation of an OpenFlow virtual switch ᵒMaintained at www.openvswitch.org• vSwitch Controller ᵒA commercial implementation of an OpenFlow controller ᵒProvides integration with XenServer pools © 2012 Citrix | Confidential – Do Not Distribute
  • 76. Core Distributed Switch Objectives• Extend network management to virtual networks• Provide network monitoring using standard protocols• Define network policies on virtual objects• Support multi-tenant virtual data centers• Provide cross host private networking without VLANs• Answer to VMware VDS and Cisco Nexus 1000v © 2012 Citrix | Confidential – Do Not Distribute
  • 77. Understanding Policies• Access control ᵒBasic Layer 3 firewall rules ᵒDefinable by pool/network/VM ᵒInheritance controls © 2012 Citrix | Confidential – Do Not Distribute
  • 78. Understanding Policies• Access control• QoS ᵒRate limits to control bandwidth © 2012 Citrix | Confidential – Do Not Distribute
  • 79. Understanding Policies• Access control• QoS• RSPAN ᵒTransparent monitoring of VM-level traffic © 2012 Citrix | Confidential – Do Not Distribute
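Access control and RSPAN policies are defined in the vSwitch Controller console, but the QoS idea above is also visible at the XenAPI level, where a rate limit can be set per virtual interface. A minimal sketch with hypothetical VM name, address and credentials:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder address
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("tenant-a-web01")[0]
    for vif in session.xenapi.VM.get_VIFs(vm):
        # Cap outbound bandwidth on each virtual NIC; kbps here is kilobytes per
        # second, so 6400 is roughly 50 Mbit/s. May require a VIF replug to apply.
        session.xenapi.VIF.set_qos_algorithm_type(vif, "ratelimit")
        session.xenapi.VIF.set_qos_algorithm_params(vif, {"kbps": "6400"})
finally:
    session.xenapi.session.logout()
```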
  • 80. What is NetFlow?• Layer 3 monitoring protocol• UDP/SCTP based• Broadly adopted solution• Implemented in three parts ᵒExporter (DVS) ᵒCollector ᵒAnalyzer• DVSC is NetFlow v5 based ᵒEnabled at pool level © 2012 Citrix | Confidential – Do Not Distribute
  • 81. Performance Monitoring• Enabled via NetFlow• Dashboard ᵒThroughput ᵒPacket flow ᵒConnection flow• Flow Statistics ᵒSlice and dice reports ᵒSee top VM traffic ᵒData goes back 1 week © 2012 Citrix | Confidential – Do Not Distribute
  • 82. Bonus Features• Jumbo Frames• Cross-Server Private Networks• LACP• 4-NIC bonds © 2012 Citrix | Confidential – Do Not Distribute
  • 83. High Availability Details
  • 84. Protecting Workloads • Not just for mission critical applications anymore • Helps manage VM density issues • "Virtual" definition of HA a little different than physical • Low cost / complexity option to restart machines in case of failure© 2012 Citrix | Confidential – Do Not Distribute
  • 85. High Availability Operation • Pool-wide settings • Failure capacity – the number of host failures the pool can absorb while still carrying out the HA plan • Uses network and storage heartbeats to verify servers © 2012 Citrix | Confidential – Do Not Distribute
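For reference, enabling HA and setting the failure capacity can be done through the XenAPI as well as XenCenter. A minimal sketch, assuming a shared heartbeat SR named "Shared Storage" and placeholder credentials:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder pool master
session.xenapi.login_with_password("root", "password")
try:
    pool = session.xenapi.pool.get_all()[0]
    heartbeat_sr = session.xenapi.SR.get_by_name_label("Shared Storage")[0]
    # Enable HA with the statefile on the shared SR ...
    session.xenapi.pool.enable_ha([heartbeat_sr], {})
    # ... and plan for one host failure (int64 values travel as strings over XML-RPC)
    session.xenapi.pool.set_ha_host_failures_to_tolerate(pool, "1")
finally:
    session.xenapi.session.logout()
```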
  • 86. VM Protection Options• Restart Priority ᵒDo not restart ᵒRestart if possible ᵒRestart• Start Order ᵒDefines a sequence and delay to ensure applications run correctly © 2012 Citrix | Confidential – Do Not Distribute
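Per-VM protection maps onto a handful of VM fields. A sketch with hypothetical VM names; the "restart" and "best-effort" values follow the XenServer 6.x conventions, and order/delay values are passed as strings:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder
session.xenapi.login_with_password("root", "password")
try:
    db = session.xenapi.VM.get_by_name_label("sql01")[0]
    app = session.xenapi.VM.get_by_name_label("app01")[0]
    # Protect both VMs: the database must restart, the app tier restarts if possible
    session.xenapi.VM.set_ha_restart_priority(db, "restart")
    session.xenapi.VM.set_ha_restart_priority(app, "best-effort")
    # Start order and delay ensure the database is up before the application starts
    session.xenapi.VM.set_order(db, "0")
    session.xenapi.VM.set_order(app, "1")
    session.xenapi.VM.set_start_delay(app, "60")   # seconds
finally:
    session.xenapi.session.logout()
```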
  • 87. HA Design – Hot Spares• Simple Design ᵒSimilar to a hot spare in a disk array ᵒGuaranteed available ᵒInefficient: idle resources• Failure Planning ᵒIf surviving hosts are fully loaded, VMs will be forced to start on the spare ᵒCould lead to restart delays due to resource plugs ᵒCould lead to performance issues if the spare is the pool master ᵒIf using WLB, exclude the spare from rebalancing © 2012 Citrix | Confidential – Do Not Distribute
  • 88. HA Design – Distributed Capacity• Efficient Design ᵒAll hosts utilized ᵒWLB can ensure optimal performance• Failure Planning ᵒImpacted VMs automatically placed for best fit ᵒRunning VMs undisturbed ᵒProvides efficient, guaranteed availability © 2012 Citrix | Confidential – Do Not Distribute
  • 89. HA Design – Impact of Dynamic Memory• Enhances Failure Planning ᵒDefine a reduced memory range that still meets the SLA ᵒOn restart, some VMs may “squeeze” their memory ᵒIncreases host efficiency © 2012 Citrix | Confidential – Do Not Distribute
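Dynamic memory is configured per VM as a minimum/maximum range; under pressure after a failover, XenServer can squeeze protected VMs toward the minimum. A minimal sketch with made-up sizes; the setter name follows the 6.x SDK and should be verified for your release:

```python
import XenAPI

GiB = 1024 * 1024 * 1024

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("app01")[0]
    # Allow the VM to run anywhere between 2 GiB (SLA floor) and 4 GiB (normal)
    session.xenapi.VM.set_memory_dynamic_range(vm, str(2 * GiB), str(4 * GiB))
finally:
    session.xenapi.session.logout()
```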
  • 90. HA Design - Preventing Single Point of Failure• HA recovery may create single points of failure• WLB host exclusion minimizes impact © 2012 Citrix | Confidential – Do Not Distribute
  • 91. HA Enhancements in XenServer 6• HA over NFS• HA with Application Packages ᵒDefine multi-VM services ᵒDefine VM startup order and delays ᵒApplication packages can be defined from running VMs• Auto-start VMs are removed ᵒTheir usage conflicted with HA failure planning ᵒCreated situations where the expected host recovery behavior was not met © 2012 Citrix | Confidential – Do Not Distribute
  • 92. High Availability – No Excuses• Shared storage the hardest part of setup ᵒSimple wizard can have HA defined in minutes ᵒMinimally invasive technology• Protects your important workloads ᵒReduce on-call support incidents ᵒAddresses VM density risks ᵒNo performance, workload, configuration penalties• Compatible with resilient application designs• Fault tolerant options exist through ecosystem © 2012 Citrix | Confidential – Do Not Distribute
  • 93. StorageLink Details
  • 94. Leverage Array Technologies• No file system overlay• Use best-of-breed array technologies ᵒThin Provisioning ᵒDeduplication ᵒCloning ᵒSnapshotting ᵒMirroring• Maximize array performance (diagram: the traditional approach layers a hypervisor filesystem with its own snapshotting, provisioning and cloning on top of the array OS; with Citrix StorageLink, VMs use the array OS's native provisioning, snapshotting and cloning directly) © 2012 Citrix | Confidential – Do Not Distribute
  • 95. No StorageLink – Inefficient LUN Usage (diagram: from 1 TB of array capacity a single 600 GB LUN is provisioned up front, leaving 400 GB stranded; the customer adds five 50 GB VM disks inside that LUN every 4 weeks, and by week 12 the LUN is exhausted and new storage capacity must be requested) © 2012 Citrix | Confidential – Do Not Distribute
  • 96. With StorageLink – Maximize Array Utilization (diagram: each VM disk is provisioned as its own 50 GB LUN directly on the array, so adding five VMs every 4 weeks draws the 1 TB of capacity down predictably from 1 TB free to 250 GB free over 12 weeks, with no stranded space) © 2012 Citrix | Confidential – Do Not Distribute
  • 97. StorageLink – Efficient Snapshot Management (diagram: without StorageLink, VM snapshot capacity is limited to the free space inside the 600 GB LUN, about 350 GB in the example; with StorageLink, snapshots can use the whole storage pool, about 750 GB free) © 2012 Citrix | Confidential – Do Not Distribute
  • 98. Integrated StorageLink Architecture (diagram: inside the XenServer host, the XAPI daemon calls the storage manager API (SMAPI), which hosts backends such as LVM, NFS, NetApp and the CSLG bridge; the bridge connects to array adapters such as EqualLogic, NetApp and SMI-S) © 2012 Citrix | Confidential – Do Not Distribute
  • 99. SR-IOV Details
  • 100. Network Performance for GbE with PV drivers• XenServer PV drivers can sustain peak throughput on GbE ᵒHowever, total throughput is limited to about 2.9 Gb/s• But XenServer uses significantly more CPU cycles than native Linux ᵒFewer cycles left for the application ᵒ10GbE networks: CPU saturation in dom0 prevents achieving line rate• Need to reduce I/O virtualization overhead in XenServer networking © 2012 Citrix | Confidential – Do Not Distribute
  • 101. I/O Virtualization Overview – Hardware Solutions• VMDq (Virtual Machine Device Queue) ᵒSeparate Rx and Tx queue pairs on the NIC for each VM, with a software “switch” (network only)• Direct I/O (VT-d) ᵒImproved I/O performance through direct assignment of an I/O device to an HVM or PV workload; the VM exclusively owns the device• SR-IOV (Single Root I/O Virtualization) ᵒChanges to I/O device silicon to support multiple PCI device IDs, so one I/O device can support multiple directly assigned guests (one device, multiple Virtual Functions); requires VT-d © 2012 Citrix | Confidential – Do Not Distribute
  • 102. Where Does SR-IOV Fit In?• Emulation ᵒEfficiency: low ᵒHardware abstraction: very high ᵒApplicability: all device classes ᵒScalability: high• Para-virtualization ᵒEfficiency: medium ᵒHardware abstraction: high (requires installing paravirtual drivers on the guest) ᵒApplicability: block, network ᵒScalability: high• Acceleration (VMDq) ᵒEfficiency: high ᵒHardware abstraction: medium (transparent to apps, may require device-specific accelerators) ᵒApplicability: network only, hypervisor dependent ᵒScalability: medium (for accelerated interfaces)• PCI Pass-through ᵒEfficiency: high ᵒHardware abstraction: low (explicit device plug/unplug, device-specific drivers) ᵒApplicability: all devices ᵒScalability: low (SR-IOV addresses these pass-through limitations) © 2012 Citrix | Confidential – Do Not Distribute
  • 103. XenServer Solarflare SR-IOV Implementation (diagram: in a typical SR-IOV implementation the guest's VF driver talks directly to a virtual function on the NIC, bypassing the dom0 vSwitch, which improves performance but loses services and management such as live migration; in the XenServer and Solarflare model a plug-in to the netfront driver uses the VF for the data path while the netback/vSwitch path in dom0 is retained, giving improved performance and full use of services and management) © 2012 Citrix | Confidential – Do Not Distribute
  • 104. XenMotion in Detail
  • 105. XenMotion – Live VM Migration• Requires systems that have compatible CPUs ᵒMust be the same manufacturer ᵒCan be different speeds ᵒMust support maskable features, or be of a similar type (e.g. 3450 and 3430)• Minimal downtime ᵒGenerally sub-200 ms, mostly due to network switches• Requires shared storage ᵒVM state moves between hosts; the underlying disks remain in their existing location © 2012 Citrix | Confidential – Do Not Distribute
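Within a pool, live migration is a single XenAPI call. A minimal sketch, with hypothetical VM and host names and placeholder credentials:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    target = session.xenapi.host.get_by_name_label("xs-host-02")[0]
    # XenMotion: memory and state move to the target host; disks stay on shared storage
    session.xenapi.VM.pool_migrate(vm, target, {"live": "true"})
finally:
    session.xenapi.session.logout()
```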
  • 106. Detailed XenMotion Example
  • 107. Pre-Copy Migration: Round 1 • Systems verify correct storage and network setup on the destination server • VM resources reserved on the destination server © 2012 Citrix | Confidential – Do Not Distribute
  • 108. Pre-Copy Migration: Round 1• While source VM is still running XenServer copies over memory image to destination server• XenServer keeps track of any memory changes during this process © 2012 Citrix | Confidential – Do Not Distribute
  • 109. Pre-Copy Migration: Round 1© 2012 Citrix | Confidential – Do Not Distribute
  • 110. Pre-Copy Migration: Round 1© 2012 Citrix | Confidential – Do Not Distribute
  • 111. Pre-Copy Migration: Round 1 • After first pass most of the memory image is now copied to the destination server • Any memory changes during initial memory copy are tracked© 2012 Citrix | Confidential – Do Not Distribute
  • 112. Pre-Copy Migration: Round 2 • XenServer now does another pass at copying over changed memory© 2012 Citrix | Confidential – Do Not Distribute
  • 113. Pre-Copy Migration: Round 2© 2012 Citrix | Confidential – Do Not Distribute
  • 114. Pre-Copy Migration: Round 2 • Xen still tracks any changes during the second memory copy • Second copy moves much less data • Also less time for memory changes to occur© 2012 Citrix | Confidential – Do Not Distribute
  • 115. Pre-Copy Migration: Round 2© 2012 Citrix | Confidential – Do Not Distribute
  • 116. Pre-Copy Migration • Xen will keep doing successive memory copies until there are only minimal differences between source and destination© 2012 Citrix | Confidential – Do Not Distribute
  • 117. XenMotion: Final • Source VM is paused and last bit of memory and machine state copied over • Master unlocks storage from source system and locks to destination system • Destination VM is unpaused and attached to storage and network resources • Source VM resources cleared© 2012 Citrix | Confidential – Do Not Distribute
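Putting the rounds together, the pre-copy logic described on the preceding slides can be summarized in a few lines. This is purely an illustrative sketch, not XenServer source code; in reality the hypervisor's dirty-page log does the tracking.

```python
def precopy_migrate(all_pages, send_page, get_dirtied_pages, small_enough=64, max_rounds=10):
    """Illustrative pre-copy loop: copy memory while the VM runs, re-copy what it
    dirtied, and only pause the VM for the final, small remainder."""
    to_copy = set(all_pages)                  # round 1: every page is "dirty"
    for _ in range(max_rounds):
        for page in to_copy:
            send_page(page)                   # VM keeps running during this copy
        to_copy = get_dirtied_pages()         # pages changed while we were copying
        if len(to_copy) <= small_enough:
            break                             # differences are now minimal
    # Final phase: pause the VM, send the last dirty pages and machine state,
    # switch storage and network over to the destination, then unpause there.
    for page in to_copy:
        send_page(page)
```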
  • 118. Storage XenMotion
  • 119. Live Storage XenMotion: Upgrading VMs from Local to Shared Storage (diagram: a live VM's VDIs move from a host's local storage to FC, iSCSI or NFS SAN storage within the XenServer pool) © 2012 Citrix | Confidential – Do Not Distribute
  • 120. Live Storage XenMotion: Moving VMs within a Pool with local-only storage (diagram: a live VM and its VDIs move between two hosts in the pool that each have only local storage) © 2012 Citrix | Confidential – Do Not Distribute
  • 121. Live Storage XenMotion: Moving or rebalancing VMs between Pools (Local → SAN) (diagram: a live VM and its VDIs move from local storage in pool 1 to FC, iSCSI or NFS SAN storage in pool 2) © 2012 Citrix | Confidential – Do Not Distribute
  • 122. Live Storage XenMotion: Moving or rebalancing VMs between Pools (Local → Local) (diagram: a live VM and its VDIs move from local storage in pool 1 to local storage in pool 2) © 2012 Citrix | Confidential – Do Not Distribute
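In the XenAPI, Storage XenMotion goes through host.migrate_receive on the destination and VM.migrate_send on the source; the VDI map says which SR each disk should land on. A rough sketch for the simplest case, moving a VM and its disks within one pool; all names and credentials are placeholders and the exact parameters should be checked against the 6.1 SDK:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")    # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    dest_host = session.xenapi.host.get_by_name_label("xs-host-02")[0]
    network = session.xenapi.network.get_by_name_label("Pool-wide network associated with eth0")[0]
    target_sr = session.xenapi.SR.get_by_name_label("Shared Storage")[0]

    # Build the vdi_map: every disk (non-CD) VBD's VDI is sent to the target SR
    vdi_map = {}
    for vbd in session.xenapi.VM.get_VBDs(vm):
        if session.xenapi.VBD.get_type(vbd) == "Disk":
            vdi_map[session.xenapi.VBD.get_VDI(vbd)] = target_sr

    dest = session.xenapi.host.migrate_receive(dest_host, network, {})
    session.xenapi.VM.migrate_send(vm, dest, True, vdi_map, {}, {})   # live=True
finally:
    session.xenapi.session.logout()
```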
  • 123. VHD Benefits• Many SRs implement VDIs as VHD trees• VHDs are a copy-on-write format for storing virtual disks• VDIs are the leaves of VHD trees• Interesting VDI operation: snapshot (implemented as VHD “cloning”) (diagram: after a snapshot, a read-only parent node holds the shared data, with the original VDI A and the snapshot VDI B as its leaves)• A: Original VDI• B: Snapshot VDI © 2012 Citrix | Confidential – Do Not Distribute
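Taking such a snapshot is a single call. A minimal sketch with a hypothetical VM name; VDI-level snapshots via VDI.snapshot work the same way for individual disks:

```python
import XenAPI

session = XenAPI.Session("https://xenserver-pool-master")   # placeholder
session.xenapi.login_with_password("root", "password")
try:
    vm = session.xenapi.VM.get_by_name_label("web01")[0]
    # Creates the VHD "clone": a new writable leaf for the running VM plus a snapshot leaf
    snap = session.xenapi.VM.snapshot(vm, "web01-pre-upgrade")
    print("snapshot ref:", snap)
finally:
    session.xenapi.session.logout()
```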
  • 124. VDI Mirroring Flow (diagram: blocks are copied from the source VDI tree to the destination while the VM stays live, with a mirror keeping newly written blocks in sync; legend: no color = empty, gradient = live mirror) © 2012 Citrix | Confidential – Do Not Distribute
  • 125. Benefits of VDI Mirroring• Optimization: start with the most similar VDI ᵒAnother VDI with the least number of different blocks ᵒOnly the blocks that differ are transferred• New VDI field: a Content ID for each VDI ᵒAn easy way to confirm that different VDIs have identical content ᵒPreserved across VDI copy, refreshed after the VDI is attached read-write• Worst case is a full copy (common in server virtualization)• Best case occurs when you use VM “gold images” (i.e. XenDesktop) © 2012 Citrix | Confidential – Do Not Distribute
  • 126. Work better. Live better.