DELL EMC XTREMIO X2 WITH CITRIX
XENDESKTOP 7.16
Abstract
This reference architecture evaluates the best-in-class performance and scalability delivered by Dell EMC XtremIO X2 for
Citrix XenDesktop 7.16 VDI on a VMware vSphere 6.5 infrastructure. We present data quantifying performance at scale for
thousands of desktops at each stage of the VDI lifecycle. Datacenter design elements, both hardware and software, that
combine to achieve optimal results are also discussed in detail.
March 2018
REFERENCE ARCHITECTURE
Contents
Abstract
Executive Summary
Business Case
Overview
Test Results
Summary
Deployment Performance Results
Citrix Machine Creation Services (MCS)
Citrix Provisioning Services (PVS)
MCS Full Clone Provisioning
MCS Linked Clone Provisioning
Production Use Performance Results
Boot Storms
LoginVSI Results
Solution's Hardware Layer
Storage Array: Dell EMC XtremIO X2 All-Flash Array
XtremIO X2 Overview
Architecture and Scalability
XIOS and the I/O Flow
System Features
XtremIO Management Server
Test Setup
Compute Hosts: Dell PowerEdge Servers
Storage Configuration
Zoning
Storage Volumes
Initiator Groups and LUN Mapping
Storage Networks
Solution's Software Layer
Hypervisor Management Layer
vCenter Server Appliance
Hypervisor
ESX Clusters
Network Configuration
Storage Configuration, EMC SIS and VSI
Virtual Desktop Management Layer: Citrix XenDesktop 7.16
Citrix XenDesktop
Citrix XenDesktop Components
Machine Creation Services (MCS)
Provisioning Services (PVS)
PVS Write Cache
Personal vDisk
Citrix XenDesktop 7.16 Configurations and Tuning
XenDesktop Delivery Controller
Microsoft Windows 10 Desktop Configuration and Optimization
Conclusion
References
Appendix A – Test Methodology
How to Learn More
Executive Summary
This paper describes a reference architecture for deploying a Citrix XenDesktop 7.16 Virtual Desktop Infrastructure (VDI)
environment and published applications using the Dell EMC XtremIO X2 storage array. It also discusses design
considerations for deploying such an environment. Based on the data presented herein, we firmly establish the value of
XtremIO X2 as a best-in-class all-flash array for Citrix XenDesktop Enterprise deployments. This reference architecture
presents a complete VDI solution for Citrix XenDesktop 7.16 delivering virtualized 32-bit Windows 10 desktops using MCS
and PVS technologies with applications such as Microsoft Office 2016, Adobe Reader 11, Java, IE and other common
desktop user applications. It discusses design considerations that will give you a reference point for successfully
deploying a VDI project using XtremIO X2, and describes tests performed by XtremIO to validate and measure the
operation and performance of the recommended solution.
Business Case
A well-known objective of virtualizing desktops is lowering the Total Cost of Ownership (TCO). TCO generally includes
capital expenditures from purchasing hardware such as storage, servers, networking switches and routers, in addition to
the software licensing and maintenance costs. The main goals in virtualizing desktops are to improve economics and
efficiency in desktop delivery, ease maintenance and management, and improve desktop security. In addition to these
goals, a key objective of a successful VDI deployment, and one that probably matters the most, is the end user
experience. It is imperative for VDI deployments to demonstrate parity with that of physical workstations when it comes to
the end user experience. The overwhelming value of virtualizing desktops in a software-defined datacenter and the need
to deliver a rich end-user experience compels us to select the best-of-breed infrastructure components for our VDI
deployment. Selecting a best-in-class, high-performing storage system that is also easy to manage helps to achieve our
long-term goal of lowering the TCO, and hence the storage system is a critical piece of the infrastructure.
The shared storage infrastructure in a VDI solution should be robust enough to deliver consistent performance and
scalability for thousands of desktops regardless of the desktop delivery mechanism (linked clones, full clones, etc.).
XtremIO brings tremendous value by providing consistent performance at scale with features such as always-on inline
deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with
VMware vSphere is achieved by using VMware APIs for Array Integration (VAAI). Dell EMC Solutions Integration Service
(SIS) and Virtual Storage Integrator's (VSI) ease of management make choosing this best-of-breed all-flash array even
more attractive for desktop virtualization applications.
XtremIO is a scale-out storage system that can grow in storage capacity, compute resources and bandwidth capacity
whenever the environment's storage requirements grow. With the advent of multi-core server systems with an
increasing number of CPU cores per processor (following Moore's law), we are able to consolidate a growing number of
desktops on a single enterprise-class server. When combined with XtremIO X2 All-Flash Array, we can consolidate vast
numbers of virtualized desktops on a single storage array, thereby achieving high consolidation at great performance from
a storage and a compute perspective.
The solution is based on Citrix XenDesktop 7.16 which provides a complete end-to-end solution delivering Microsoft
Windows virtual desktops or server-based hosted shared sessions to users on a wide variety of endpoint devices. Virtual
desktops are dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time they
log on.
Citrix XenDesktop 7.16 provides a complete virtual desktop delivery system by integrating several distributed components
with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure.
Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while
managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to users in any
location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve
productivity. With Citrix XenDesktop 7.16, IT can effectively control app and desktop provisioning while securing data
assets and lowering capital and operating expenses.
Overview
It is well known that implementing a complete VDI solution is a multi-faceted effort with nuances encompassing compute,
memory, network – and most importantly – storage. Our focus in this reference architecture is on XtremIO X2 capabilities
and benefits in such a solution; however, we intend to give a complete picture of a VDI solution.
An XtremIO X2 cluster provides sufficient storage capacity and adequate performance for servicing the I/O requests and
storage bandwidth required for a scale of thousands and tens of thousands of virtual desktops. This includes desktop
delivery, management operations, login and boot storms, and production use at scale. The XtremIO X2 Storage Array
provides top class performance when deploying virtual desktops and running management operations on them, as well as
when subjected to live user emulation tests using LoginVSI (Login Virtual Session Indexer – a software simulating user
workloads for Windows-based virtualized desktops).
The XtremIO All Flash Storage array is based upon a scale-out architecture. It is comprised of building blocks called
X-Bricks which can be clustered together to grow performance and capacity as required. An X-Brick is the basic building
block of an XtremIO cluster. Each X-Brick is a highly-available, high-performance unit that consists of dual Active-Active
Storage Controllers, with CPU and RAM resources, Ethernet, FC and iSCSI connections, and a Disk Array Enclosure
(DAE) containing the SSDs that hold the data. With XtremIO X2, a single X-Brick can service the storage capacity and
bandwidth requirements for 4000 desktops, with capacity to spare.
XtremIO X2 All-Flash Array is designed to provide high responsiveness for increasing data usage for thousands of users
and is extremely beneficial for VDI projects. In subsequent sections of this reference architecture, we will present
XtremIO's compounding returns for its data reduction capabilities and the high performance it provides to VDI
environments with thousands of desktops. We will see the benefits in terms of data reduction and storage performance in
deploying MCS Full Clone and MCS Linked Clone desktop pools, as well as PVS-provisioned desktops.
XtremIO's scale-out architecture allows scaling any environment, in our case VDI environments, in a linear way that
satisfies both the capacity and performance needs of the growing infrastructure. An XtremIO X2 cluster can start with any
number of required X-Bricks to service the current or initial loads and can grow linearly (up to 4 X-Bricks in a cluster) to
appropriately service the increasing environment (to be increased to 8 X-Bricks in the future, depending on the cluster's
type). With X2, in addition to its scale-out capabilities, an XtremIO storage array can scale-up by adding extra SSDs to an
X-Brick. An X-Brick can contain between 18 and 72 SSDs (in increments of 6) of fixed sizes (400GB or 1.92TB,
depending on the cluster's type, with future versions allowing 3.84TB sized SSDs).
In developing this VDI solution, we have selected VMware vSphere 6.5 update 1 as the virtualization platform, and Citrix
XenDesktop 7.16 for virtual desktop delivery and management. Windows 10 (32-bit) is the virtual desktops' operating
system. EMC VSI (Virtual Storage Integrator) 7.3 and the vSphere Web Client are used to apply best practices pertaining
to XtremIO storage Volumes and the general environment.
To some degree, data in subsequent sections of this reference architecture helps us quantify the end user experience for
a desktop user and also demonstrates the efficiency in management operations that a datacenter administrator may
achieve when deploying a VDI environment on XtremIO X2 all-flash array.
We begin the reference architecture by discussing test results, which are classified into the following categories:
• Management Operations – resource consumption and time to complete Citrix XenDesktop management
operations.
• Production Use – resource consumption patterns and time to complete a boot storm, and resource consumption
patterns and responsiveness when desktops in the pool are subjected to "LoginVSI Knowledge Worker"
workloads, emulating real users' workloads.
After presenting and analyzing the test results of our VDI environment, we will discuss the different elements of our
infrastructure, beginning with the hardware layer and moving up to the software layer, including the features and best
practices we recommend for the environment. This includes extensive details of the XtremIO X2 storage array, storage
network equipment and host details at the hardware level, and the VMware vSphere Hypervisor (ESXi), vCenter Server
and Citrix XenDesktop environment at the software level. The details of the virtual machine settings and LoginVSI
workload profile provide us with the complete picture of how all building blocks of a VDI environment function together.
Test Results
In this section, we elaborate on the tests performed on our VDI environment and their results. We start with a summary of
the results and related conclusions, and dive deeper into each test's detailed results and analyzed data and statistics
(including various storage and compute metrics such as bandwidth, latency, IOPS, and CPU and RAM utilization).
Summary
Citrix XenDesktop delivers virtual Windows desktops and applications as secure services on any device. It provides a
native touch-enabled look and feel that is optimized for the device type as well as the network.
A Citrix XenDesktop desktop pool has the following basic lifecycle stages:
• Provisioning
• Production work by active users
• Maintenance operations
We will show summary and detailed test results for these stages, divided into two types of lifecycle phases: Management
Operations (Provisioning and Maintenance operations) and Production Use.
From the perspective of datacenter administrators, operational efficiency is translated to time to complete management
operations. The less time it takes to provision desktops and perform maintenance operations, the faster the availability is
of VDI desktop pools for production. It is for this reason that the storage array's throughput performance deserves special
attention – the more throughput the system can provide, the faster those management operations will complete. The
storage array throughput is measured in terms of IOPS or bandwidth that manifest in terms of data transfer rate.
During production, desktops are in actual use by end users via remote sessions. Two events are tested to examine the
infrastructure's performance and ability to serve VDI users: Virtual desktops boot storm, and heavy workloads produced
by high percentage of users using their desktops. Boot storms are measured by time to complete, and heavy workloads
by the "user experience". The criteria dictating "user experience" is the applications' responsiveness and overall desktop
experience. We use the proven LoginVSI tests (explained further in this paper) to evaluate user experience, and track
storage latency during those LoginVSI tests.
Table 1 shows a summary of the test results for all stages of a VDI desktop pool lifecycle with 4000 desktops for MCS
Linked Clone, MCS Full Clone and PVS Clone desktops, when deployed on an XtremIO X2 cluster as the storage array. Note
that the deployment elapsed time is not applicable for PVS Clone desktops, since PVS provisioning is not bound by
storage performance (see Deployment Performance Results).
Table 1. VDI Performance Tests with XtremIO X2 – Results Summary
4000 DESKTOPS                MCS LINKED CLONES   MCS FULL CLONES   PVS CLONES
Elapsed Time – Deployment    50 Minutes          65 Minutes        N/A
LoginVSI Boot Storm          10 Minutes          10 Minutes        10 Minutes
LoginVSI – VSI Baseline      862                 864               841
LoginVSI – VSI Average       1122                1096              1071
LoginVSI – VSI Max           Not Reached         Not Reached       Not Reached
We notice the excellent results for deployment time, boot storm performance, and maintenance operation time, as well as
the accomplished LoginVSI results (detailed in the LoginVSI Results section) that emulate production work by active
users.
We suggest a scale-out approach for VDI environments, in which we add compute and memory resources (more ESX
hosts) as we scale up in the number of desktops. In our tests, we deployed Virtual Desktops with two vCPUs and 4GB of
RAM (not all utilized since we are using a 32-bit operating system) per desktop. After performing a number of tests to
understand the appropriate scaling, we concluded the appropriate scale to be 125 desktops per single ESX host (with the
given host configuration listed in Table 2). Using this scale, we deployed 4000 virtual desktops on 32 ESX hosts.
For storage volume sizing, the selected scale was 125 virtual desktops per XtremIO Volume of 3TB (the maximum number
of desktops per single LUN when provisioned with VAAI is 500). As we will see next, the total of 32 volumes and 96TB
were easily handled by our single X-Brick X2 cluster, both in terms of capacity and performance (IOPS, bandwidth and
latency).
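As a rough illustration of the sizing arithmetic above, the following minimal Python sketch computes the host and volume counts from the densities cited in this section (125 desktops per ESX host and 125 desktops per 3TB Volume); the function and constant names are ours, not part of any Dell EMC tooling.

import math

# Sizing densities taken from the text above; adjust for your own environment.
DESKTOPS_PER_ESX_HOST = 125      # measured scaling point for the host configuration in Table 2
DESKTOPS_PER_VOLUME = 125        # desktops per 3TB XtremIO Volume (VAAI limit is 500 per LUN)
VOLUME_SIZE_TB = 3

def vdi_sizing(total_desktops: int) -> dict:
    """Return the ESX host count, XtremIO Volume count and total provisioned TB
    needed for a given desktop count, using the densities above."""
    hosts = math.ceil(total_desktops / DESKTOPS_PER_ESX_HOST)
    volumes = math.ceil(total_desktops / DESKTOPS_PER_VOLUME)
    return {
        "esx_hosts": hosts,
        "xtremio_volumes": volumes,
        "provisioned_tb": volumes * VOLUME_SIZE_TB,
    }

print(vdi_sizing(4000))   # -> {'esx_hosts': 32, 'xtremio_volumes': 32, 'provisioned_tb': 96}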
In the rest of this section, we take a deeper look into the data collected in our storage array and other environment
components during each of the management operation tests, as well as during boot storms and LoginVSI's "Knowledge
worker" workload tests.
A data-driven understanding of our XtremIO X2 storage array's behavior provides us with evidence that assures a rich user
experience and efficiency in management operations when using this effective all-flash array. This is manifested by
providing performance-at-scale, for thousands of desktops. The data collected below includes statistics of storage
bandwidth, IOPS, I/O latency, CPU utilization and more.
Performance statistics were collected from the XtremIO Management Server (XMS) by using XtremIO RESTful API
(Representational State Transfer Application Program Interface). This API is a powerful feature that enables performance
monitoring while executing management operation tests and running LoginVSI workloads. These results provided a clear
view of the exceptional capabilities of XtremIO X2 for VDI environments.
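As an illustration of this collection method, the following is a minimal Python sketch that polls cluster-level counters over the XMS RESTful API. The XMS address and credentials are placeholders, and property names such as "iops", "bw" and "avg-latency" are assumptions that should be verified against the XtremIO RESTful API guide for your XMS version.

import time
import requests

XMS = "https://xms.example.local"            # placeholder XMS address
AUTH = ("monitoring_user", "password")       # placeholder read-only credentials

def poll_cluster_stats(interval_sec: int = 10, samples: int = 6):
    """Periodically read cluster-level counters from the XMS RESTful API.

    Property names such as 'iops', 'bw' and 'avg-latency' are examples only;
    confirm them against the REST API guide for your XIOS/XMS release.
    """
    for _ in range(samples):
        resp = requests.get(f"{XMS}/api/json/v2/types/clusters/1",
                            auth=AUTH, verify=False)   # verify=False only for lab self-signed certs
        resp.raise_for_status()
        content = resp.json()["content"]
        print({k: content.get(k) for k in ("iops", "bw", "avg-latency")})
        time.sleep(interval_sec)

if __name__ == "__main__":
    poll_cluster_stats()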
Deployment Performance Results
In this section, we take a deeper look at performance statistics from our XtremIO X2 array when used in a VDI
environment for management operations such as MCS Full Clone and MCS Linked Clone desktop provisioning.
PVS provisioning is performed synchronously, and the resources consumed are mostly the CPU and memory of the
hosts. Since it is not impacted by storage performance, it is not detailed in this section.
Citrix Machine Creation Services (MCS)
Machine Creation Services (MCS) is a centralized provisioning mechanism that is integrated with the XenDesktop
management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle.
MCS enables the management of several types of machines within a catalog in Citrix Studio. Desktop customization is
persistent for machines that use the Personal vDisk (PvDisk or PvD) feature, while non-Personal vDisk machines are
appropriate if desktop changes are discarded when the user logs off.
Desktops provisioned using MCS share a common base image within a catalog. Because of the XtremIO X2 architecture,
the base image is stored only once in the storage array, providing efficient data storage and maximizing the utilization of
flash disks, while providing exceptional performance and optimal I/O response time for the virtual desktops.
Figure 1. Logical Representation of an MCS Base Disk and Linked Clone
Citrix Provisioning Services (PVS)
• Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by
fundamentally changing the relationship between software and the hardware on which it runs.
• By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS lets
organizations reduce the number of disk images that they manage. As the number of machines continues
to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.
• Because machines stream disk data dynamically in real time from a single shared image, machine image
consistency is ensured. In addition, large pools of machines can completely change their configuration,
applications, and even the operating system during a reboot operation.
Figure 2. Boot Process of a PVS Target Device
MCS Full Clone Provisioning
The operational efficiency of datacenter administrators is determined mainly by the completion rate of desktop delivery
(provisioning) and management operations. It is critical for datacenter administrators that the provisioning and
maintenance operations on VDI desktops finish in a timely manner to be ready for production users. The time it takes to
provision the desktops is directly related to storage performance capabilities. As shown in Figure 3, XtremIO X2 is
handling storage bandwidths as high as ~20GB/s with over 100k IOPS (read + write) during a 4000 Full Clone desktops
provisioning phase, resulting in a quick and efficient desktop delivery (65 minutes for all 4000 Full Clone desktops).
Figure 3. XtremIO X2 IOPS and I/O Bandwidth – 4000 Full Clone Desktops Provisioning
It took 65 minutes for the system to finish the provisioning and OS customization of all 4000 desktops with our X2 array.
We can deduce that desktops were provisioned in our test at an excellent rate of about 62 desktops per minute, or one
desktop provisioned every second.
Figure 4 shows the block size distribution during the Full Clone provisioning process. We can see that most of the
bandwidth is consumed by 256KB and >1MB blocks, as these are the block sizes that were configured at the software level
(VMware) for use with our storage array.
Figure 4. XtremIO X2 Bandwidth by Block Size – 4000 Full Clone Desktops Provisioning
In Figure 5, we can see the IOPS and latency statistics during the Full Clone provisioning process of 4000 desktops. The
graph shows again that IOPS are well over 100K, yet the latency for all I/O operations remains less than 0.1 msec,
yielding the excellent performance and fast-paced provisioning of our virtual desktop environment.
Figure 5. XtremIO X2 Latency vs. IOPS – 4000 Full Clone Desktops Provisioning
Figure 6 shows the CPU utilization of our Storage Controllers during the Full Clone provisioning process. We can see that
the CPU utilization of the Storage Controllers normally remains at around 60%. We can also see the excellent synergy
across our X2 cluster, with all of our Active-Active Storage Controllers' CPUs sharing the load and effort, and CPU
utilization virtually equal across all Controllers for the entire process.
Figure 6. XtremIO X2 CPU Utilization – 4000 Full Clone Desktops Provisioning
Figure 7 shows XtremIO's incredible storage savings for the scenario of 4000 Full Clone desktops provisioned (each with
about 13.5GB of used space on its 40GB C: drive volume). Notice that the physical capacity footprint of the 4000
desktops after XtremIO deduplication and compression is 827.51GB, while the logical capacity is 51.95TB. This is a
direct result of an extraordinary data reduction factor reaching 65.5:1 (32.4:1 for deduplication and 2.0:1 for compression).
Thin provisioning further adds to storage efficiency, bringing the overall efficiency to 391.1:1.
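To make the relationship between these factors concrete, the short sketch below reproduces the arithmetic with the Figure 7 values; the small differences from the reported 65.5:1 and 391.1:1 ratios come from rounding in the reported figures.

# Worked example using the Figure 7 numbers reported above (values are rounded).
logical_tb = 51.95          # logical capacity written by the 4000 Full Clone desktops
physical_gb = 827.51        # physical footprint after deduplication and compression

dedupe_ratio = 32.4
compression_ratio = 2.0

data_reduction = dedupe_ratio * compression_ratio        # ~64.8:1 (reported as 65.5:1)
measured_reduction = (logical_tb * 1024) / physical_gb   # ~64.3:1 from the raw capacities above

# Thin provisioning multiplies the overall efficiency further (reported as 391.1:1), i.e.
# overall_efficiency ≈ data_reduction * thin_provisioning_savings.
thin_savings = 391.1 / 65.5                               # ≈ 6.0x from unwritten provisioned space

print(round(data_reduction, 1), round(measured_reduction, 1), round(thin_savings, 1))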
Figure 7. XtremIO X2 Data Savings – 4000 Full Clone Desktops Provisioning
MCS Linked Clone Provisioning
As with MCS Full Clones, we also examined storage statistics while provisioning 4000 Linked Clone desktops. As Figure
8 shows, our X2 array handles roughly 4K IOPS of mostly small I/O operations. This I/O pattern is a result of Linked
Clones' use of VMware snapshots, which means that almost no actual data is written to the array; instead, pointers and
VMware metadata are used. Unlike the process of deploying Linked Clones via VMware Horizon View, XenDesktop
creates the computer accounts in advance and associates them with the virtual desktops during their initial power-on.
This mechanism saves a lot of resources during the deployment of the pool, and the entire provisioning process for the
4000 desktops took 50 minutes. This is about 30% faster than the Full Clone provisioning rate (65 minutes for the same
number of desktops), translating to a rate of 80 desktops per minute.
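The rate comparison works out as follows (a trivial Python sketch using the elapsed times reported in this section):

# Provisioning-rate comparison for the two MCS clone types, using the elapsed times above.
desktops = 4000
linked_clone_minutes = 50
full_clone_minutes = 65

linked_rate = desktops / linked_clone_minutes      # 80.0 desktops per minute
full_rate = desktops / full_clone_minutes          # ~61.5 desktops per minute

speedup = linked_rate / full_rate - 1              # ~0.30, i.e. roughly 30% faster
print(f"{linked_rate:.0f}/min vs {full_rate:.0f}/min -> {speedup:.0%} faster")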
Figure 8. XtremIO X2 IOPS and I/O Bandwidth – 4000 Linked Clone Desktops Provisioning
Figure 9 shows the block size distribution during the Linked Clone provisioning process. We can see about 400MB/s of
512KB-block I/O operations, which are generated during the desktops' power-on.
Figure 9. XtremIO X2 Bandwidth by Block Size – 4000 Linked Clone Desktops Provisioning
Examining the IOPS and latency statistics during the Linked Clone provisioning process of the 4000 desktops, we can see
in Figure 10 a latency of mostly below 0.2 msec, with some higher-latency peaks that remain almost entirely under 0.4 msec.
These high-performance numbers are the reason for the excellent provisioning rate achieved in our test.
Figure 10. XtremIO X2 Latency vs. IOPS – 4000 Linked Clone Desktops Provisioning
Figure 11 shows the CPU utilization of the Storage Controllers during the Linked Clone provisioning process. This process
hardly loads the storage array, because significantly less data is written, as controlled by the Citrix platform. We
can see that the CPU utilization of the Storage Controllers normally stays at around 2%.
Figure 11. XtremIO X2 CPU Utilization – 4000 Linked Clone Desktops Provisioning
Figure 12 shows the incredible storage capacity efficiency achieved when using Linked Clones on XtremIO X2.
The 4000 provisioned Linked Clone desktops take up a logical footprint of 51.62TB, while the physical footprint is only
about 1.01TB, a result of an impressive data reduction factor of 51.4:1 (21.5:1 for deduplication and 2.4:1 for compression).
Thin provisioning is also a great saving factor, especially with Linked Clones (here reaching an overall efficiency of almost
848.6:1), as the desktops are merely VMware snapshots of an original parent machine and consume no space until changes are made.
Figure 12. XtremIO X2 Data Savings – 4000 Linked Clone Desktops Provisioning
Production Use Performance Results
This section examines how an XtremIO X2 single X-Brick cluster delivers the best-in-class user experience with high
performance during a boot storm and during the actual work of virtual desktop users, as emulated by the LoginVSI
"Knowledge Worker" workload, which represents more advanced users (details below).
Boot Storms
The rebooting of VDI desktops at a large scale is a process often orchestrated by administrators by invoking vSphere
tasks that reboot the virtual machines asynchronously (albeit issued sequentially), but reboots can also be initiated by the
end users. It is necessary, for instance, in scenarios where new applications or operating system updates are installed
and need to be deployed to the virtual desktops. Reboots are issued to desktops without waiting for previous ones to
finish booting up. As a result, multiple desktops boot up at the same time. The number of concurrent reboots is also
affected by the limit configured in the vCenter Server configuration. This setting can be altered after some
experimentation to determine how many concurrent operations a given vCenter Server is capable of handling.
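As an illustration only, the following pyVmomi sketch shows one way such an asynchronous reboot wave can be scripted against vCenter. The vCenter address, credentials and desktop-name prefix are placeholders, and a production script should also add error handling and respect the vCenter concurrency limits discussed above.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details for the lab vCenter.
ctx = ssl._create_unverified_context()            # lab-only: skip certificate validation
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Collect all powered-on VMs whose name marks them as pool desktops (the "W10-" prefix is illustrative).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
desktops = [vm for vm in view.view
            if vm.name.startswith("W10-")
            and vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]

# Issue resets without waiting for each task to finish -- this is what creates the boot storm.
tasks = [vm.ResetVM_Task() for vm in desktops]
print(f"Issued reset to {len(tasks)} desktops")

Disconnect(si)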
Figure 13 shows storage bandwidth consumption and IOPS while rebooting 4000 Linked Clone virtual desktops
simultaneously. The entire process took about 10 minutes when processed on a single X-Brick X2 cluster.
Figure 13. XtremIO X2 IOPS and I/O Bandwidth – 4000 Linked Clone Desktops Boot Storm
The 10 minutes it took to reboot the 4000 desktops translates to an amazing rate of 6.67 desktops every
second, or one desktop boot per 150 milliseconds. Looking closely at the figure above and at Table 1, we can see that
even though the Linked Clone boot storm required more IOPS at a lower bandwidth, it still completed in 10 minutes, the
same time required for the Full Clone and PVS Clone boot storms. We will explain this next using the block distribution
graphs and XtremIO X2's advanced Write Boost feature.
Figure 14 shows the block distribution during the 4000 Linked Clone desktops boot storm. We can see that the I/Os per
block size remain the same for most sizes during the operation.
Figure 14. XtremIO X2 Bandwidth by Block Size – 4000 Linked Clone Desktops Boot Storm
Figure 15 shows the CPU utilization for the 4000 Linked Clone boot storm. We can see that the CPU is well utilized, in a
range between 65% and 75%, mainly due to the increase in I/O operations and the use of Write Boost when booting up
Linked Clones.
Figure 15. XtremIO X2 CPU Utilization – 4000 MCS Linked Clone Desktops Boot Storm
LoginVSI Results
In this section, we present the LoginVSI "Knowledge Worker" workload results for the 4000 MCS Full Clone, MCS Linked
Clone and PVS Clone desktops. The "Knowledge Worker" profile of LoginVSI emulates user actions such as opening a Word document,
modifying an Excel spreadsheet, browsing a PDF document, web browsing or streaming a webinar. This emulates typical
"advanced" user behavior and helps characterize XtremIO's performance in such scenarios. While characterizing the user
experience in those scenarios, any I/O latency that is detected in the storage array is of the utmost importance. This is a
parameter that directly influences the end user experience. Other parameters impacting user experience are CPU and
memory usage on the ESX hosts and storage network bandwidth utilization.
Figure 16. LoginVSI's "Knowledge Worker" Workload Profile
We chose Microsoft Windows 10 build 1709 (32-bit) as the desktop operating system. The Office 2016 suite, Adobe Reader 11
and the latest Oracle JRE, Internet Explorer 11, and Doro PDF Printer were installed and used by LoginVSI's "Knowledge
Worker" workloads.
Figure 17, Figure 18 and Figure 19 show the LoginVSI results of our 4000 MCS Full Clone, MCS Linked Clone, and PVS Clone
desktops respectively. LoginVSI scores are determined by observing the average application latencies, highlighting the
speed at which user operations are completed. This helps quantify user experience, since the measurements considered
are at the application level. As a case in point, the blue line in each of the LoginVSI charts follows the progression of the
"VSI average" against the number of active sessions. This is an aggregated metric, using average application latencies as
more desktop sessions are added over time. The factor to be observed in these graphs is the VSImax threshold, which
represents the threshold beyond which LoginVSI's methodology indicates that the user experience has deteriorated to the
point where the maximum number of desktops that can be consolidated in a given VDI infrastructure has been reached.
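As a hedged illustration of how these scores are read, the sketch below applies the commonly cited Login VSI 4.x rule that the VSImax v4.1 threshold is the VSI baseline plus 1000 ms; confirm the exact method against the Login VSI documentation referenced in Appendix A.

# Hedged sketch of the VSImax-style check used to interpret the results above. The
# baseline + 1000 ms threshold is an assumption drawn from Login VSI 4.x documentation;
# verify it for your Login VSI version.

def vsimax_reached(vsi_baseline_ms: float, vsi_average_ms: float,
                   threshold_offset_ms: float = 1000.0) -> bool:
    """Return True when the average response time crosses the assumed VSImax threshold."""
    return vsi_average_ms >= vsi_baseline_ms + threshold_offset_ms

# Table 1 values for the three desktop types: (VSI baseline, VSI average).
results = {"MCS Linked Clones": (862, 1122),
           "MCS Full Clones": (864, 1096),
           "PVS Clones": (841, 1071)}

for pool, (baseline, average) in results.items():
    print(pool, "VSImax reached:", vsimax_reached(baseline, average))   # all False, matching Table 1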
Figure 17. LoginVSI's "Knowledge Worker" Results – 4000 MCS Full Clone Desktops
Figure 18. LoginVSI's "Knowledge Worker" Results – 4000 MCS Linked Clone Desktops
Figure 19. LoginVSI's "Knowledge Worker" Results – 4000 PVS Clone Desktops
From the averages shown in the three graphs (the blue lines), the quantified application latency is much lower than the VSImax
threshold watermark for the 4000 active users (~1100 average vs. a ~840 baseline). This demonstrates how XtremIO X2
all-flash single X-Brick cluster provides a best-in-class delivery of user experience for up to 4000 VDI users, with room to
scale further.
More details about LoginVSI test methodology can be found in Appendix A – Test Methodology and in the LoginVSI
documentation.
These LoginVSI results help us understand the user experience and are a testimony of the scalability and performance
that manifests into an optimal end user experience with XtremIO X2. The obvious reason, as highlighted by Figure
20, Figure 21 and Figure 22, is none other than the outstanding storage latency demonstrated by XtremIO X2.
Figure 20. XtremIO X2 Latency vs. IOPS – 4000 MCS Linked Clone Desktops In-Use
Figure 21. XtremIO X2 Latency vs. IOPS – 4000 MCS Full Clone Desktops In-Use
Figure 22. XtremIO X2 Latency vs. IOPS – 4000 PVS Clone Desktops In-Use
For all three desktop provisioning methods, we can see a steady and remarkable ~0.2msec latency for the entire LoginVSI
workload test. We see a small rise in latency numbers as IOPS accumulate, but never exceeding 0.3msec. These
numbers yield the great LoginVSI results described above, and provide a superb user experience for our VDI users.
Figure 23, Figure 24 and Figure 25 present total IOPS and bandwidth seen during the LoginVSI "Knowledge Worker"
profile workload on our 4000 MCS Linked Clone desktops, 4000 MCS Full Clone desktops, and 4000 PVS Clone
desktops respectively. In all cases, the bandwidth at the peak of the workload reaches approximately 1.5GB/s.
Figure 23. XtremIO X2 IOPS and I/O Bandwidth – 4000 MCS Linked Clone Desktops In-Use
Figure 24. XtremIO X2 IOPS and I/O Bandwidth – 4000 MCS Full Clone Desktops In-Use
Figure 25. XtremIO X2 IOPS and I/O Bandwidth – 4000 PVS Clone Desktops In-Use
Figure 26, Figure 27 and Figure 28 show the CPU utilization of our X2 storage array during the LoginVSI "Knowledge
Worker" profile workload for the 4000 MCS Full Clone, MCS Linked Clone and PVS Clone desktops. We can see that the CPU
utilization at the peak of the workload reaches about 30% and 20% in the two MCS scenarios, while it reaches about 13%
for PVS Clones. This emphasizes that, although they save much space and provide various advantages, MCS Linked
Clones are slightly heavier on the array than MCS Full Clones, since they are all based on the same master image and its
in-memory metadata. As for the PVS Clones, since some of the workload runs in memory, the CPU utilization is lower;
as a result, however, the memory utilization at the host level is higher.
Figure 26. XtremIO X2 CPU Utilization – 4000 MCS Full Clone Desktops In-Use
Figure 27. XtremIO X2 CPU Utilization – 4000 MCS Linked Clone Desktops In-Use
Figure 28. XtremIO X2 CPU Utilization – 4000 PVS Clone Desktops In-Use
Figure 29, Figure 30 and Figure 31 show the block size distribution for the 4000 MCS Linked Clone, MCS Full Clone and
PVS Clone desktops respectively during the LoginVSI "Knowledge Worker" profile workload. We can see that the I/Os per
block size remain the same for most sizes, while the bandwidth usage increases as more users log in to their virtual desktops.
Figure 29. XtremIO X2 Bandwidth by Block Size – 4000 MCS Linked Clone Desktops In-Use
Figure 30. XtremIO X2 Bandwidth by Block Size – 4000 MCS Full Clone Desktops In-Use
Figure 31. XtremIO X2 Bandwidth by Block Size – 4000 PVS Clone Desktops In-Use
Examining all the graphs collected during the LoginVSI "Knowledge Worker" profile workload test, we see that the X2
single X-Brick cluster is more than capable of managing and servicing 4000 VDI workstations, with room to serve additional
volumes and workloads.
We also took a deeper look at the ESXi hosts to see if our scaling fits from a compute-resources perspective as well.
Specifically, we checked both the CPU utilization of our ESX hosts and their Memory utilization (Figure 32) during the
LoginVSI "Knowledge Worker" profile workload test on the 4000 desktops.
Please note that using the RAM Write Cache for PVS Clones (described later) increases the memory utilization
drastically, since the storage workload is offloaded to RAM.
Figure 32. ESX Hosts CPU and Memory Utilization – 4000 MCS Linked Clone Desktops In-Use
We can see an approximate 65% utilization of both CPU and memory resources of the ESX hosts, indicating a well-
utilized environment and good resource consumption of the hosts, leaving room for extra VMs in the environment and
spare resources for vMotion of VMs (due to host failures, planned upgrades, etc.). In Figure 33 below, we see the
change in CPU utilization of a single ESX host in the environment as the LoginVSI "Knowledge Worker" profile workload
test progresses. The test creates logins and workloads on the virtual desktops cumulatively, emulating a typical
working environment in which users log in over a span of a few dozen minutes and not all at the same time. This
behavior is seen clearly in the figure below, as the CPU utilization of this ESX host increases as time passes, until all
virtual desktops on the host are in use and CPU utilization reaches about 70%.
Figure 33. A Single ESX Host CPU Utilization – 4000 Desktops In-Use
Solution's Hardware Layer
Based on the data presented above, it is evident that storage/virtualization administrators must strive to achieve an
optimal user experience for their VDI desktop end users. The following sections discuss how the hardware and software
synergize in order to achieve these goals.
We begin at the hardware layer, taking a wide look at our XtremIO X2 array and the features and benefits it provides to
VDI environments, continue by discussing the details of our ESX hosts, based on Dell PowerEdge servers, on which our
entire environment runs, and then review our storage configuration and networks that connect the servers to the storage
array, thereby encompassing all of the hardware components of the solution.
We follow this up with details of the software layer by providing configuration details for VMware vSphere, Citrix
XenDesktop 7.16, the Dell EMC plugins for VMware, and the configuration settings on the "parent" virtual machine from
which the VDI desktops are deployed.
Storage Array: Dell EMC XtremIO X2 All-Flash Array
Dell EMC's XtremIO is an enterprise-class scalable all-flash storage array that provides rich data services with high
performance. It is designed from the ground up to unlock flash technology's instant performance potential by uniquely
leveraging the characteristics of SSDs and using advanced inline data reduction methods to reduce the physical data that
must be stored on the disks.
XtremIO's storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled
levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use
interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient
storage system for their datacenters, requiring very little planning to set up before provisioning.
XtremIO leverages flash to deliver value across multiple dimensions:
• Performance – provides consistent low latency and up to millions of IOPS.
• Scalability – uses a scale-out and scale-up architecture.
• Storage Efficiency – uses data reduction techniques such as deduplication, compression and thin provisioning.
• Data Protection – uses a proprietary flash-optimized algorithm named XDP.
• Environment Consolidation – uses XtremIO Virtual Copies or VMware's XCOPY.
We will further review XtremIO X2 features and capabilities.
XtremIO X2 Overview
XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in
several aspects to the already proficient and high-performing previous-generation storage array. Features such as scale-up
for a more flexible system, Write Boost for an even more responsive and high-performing storage array, NVRAM for improved
data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance
statistics add the extra value and advancements required in the evolving world of computer infrastructure.
The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and
storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity
(scale-out). Each X-Brick can also grow individually in terms of capacity, with the option to grow to up to 72 SSDs per
brick.
XtremIO architecture is based on a metadata-centric, content-aware system, which helps streamline data operations
efficiently without requiring any movement of data post-write for any maintenance reason (data protection, data reduction,
etc. – all done inline). The system lays out the data uniformly across all SSDs in all X-Bricks in the system using unique
fingerprints of the incoming data and controls access using metadata tables. This contributes to an extremely balanced
system across all X-Bricks in terms of compute power, storage bandwidth and capacity.
Using the same unique fingerprints, XtremIO is equipped with exceptional always-on in-line data deduplication abilities,
which highly benefits virtualized environments. Together with its data compression and thin provisioning capabilities (both
also in-line and always-on), it achieves incomparable data reduction rates.
System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the
XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its
performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.
With its intelligent architecture, XtremIO provides a storage system that is easy to set-up, needs zero tuning by the client,
and does not require complex capacity or data protection planning. All this is handled autonomously by the system.
Architecture and Scalability
An XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster. This is the basic building
block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose
storage needs are more I/O intensive than capacity intensive, as they use smaller SSDs and less RAM. An effective use
of the X2-S is for environments that have high data reduction ratios (high compression ratio or a great deal of duplicated
data) which lower the capacity footprint of the data significantly. X2-R X-Bricks clusters are made for the capacity
intensive environments, with bigger disks, more RAM and a bigger expansion potential in future releases. The two X-Brick
types cannot be mixed together in a single system, so the decision which type is suitable for your environment must be
made in advance.
Each X-Brick is comprised of:
• Two 1U Storage Controllers (SCs), each with:
  - Two dual socket Haswell CPUs
  - 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
  - Two 1/10GbE iSCSI ports
  - Two user interface interchangeable ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
  - Two 56Gb/s InfiniBand ports
  - One 100/1000/10000 Mb/s management port
  - One 1Gb/s IPMI port
  - Two redundant power supply units (PSUs)
• One 2U Disk Array Enclosure (DAE) containing:
  - Up to 72 SSDs of sizes 400GB (for X2-S) or 1.92TB (for X2-R)
  - Two redundant SAS interconnect modules
  - Two redundant power supply units (PSUs)
Figure 34. An XtremIO X2 X-Brick
The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects.
An XtremIO storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO
array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage
Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA)
protocol for this back-end connectivity, ensuring a highly-available ultra-low latency network for communication between
all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but
include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO system is essentially a single shared-
memory space spanning all of its Storage Controllers.
The 1Gb/s management port is configured with an IPv4 address. The XMS, which is the cluster's management software,
communicates with the Storage Controllers via the management interface. Through this interface, the XMS communicates
with the Storage Controllers, and sends storage management requests such as creating an XtremIO Volume or mapping
a Volume to an Initiator Group.
The 1Gb/s IPMI port interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the
bounds of an X-Brick and is never connected to an IPMI port of a Storage Controller in another X-Brick in the cluster.
With X2, an XtremIO cluster has both scale-out and scale-up capabilities. Scale-out is implemented by adding X-Bricks to
an existing cluster. The addition of an X-Brick to an existing cluster linearly increases its compute power, bandwidth and
capacity. Each X-Brick that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM
and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity
provided by the cluster. Adding an X-Brick to scale-out an XtremIO cluster is intended for environments that grow both in
capacity and performance needs, such as in the case of an increase in the number of active users and their data, or a
database which grows in data and complexity.
An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to
up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will support up to 8 X-Bricks for X2-R
arrays.
Figure 35. Scale Out Capabilities – Single to Multiple X2 X-Brick Clusters
Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. This is intended for
environments that grow in capacity needs without need for extra performance. For example, this may occur when the
same number of users have an increasing amount of data to save, or when an environment grows in both capacity and
performance needs but has only reached its capacity limits with additional performance available with its current
infrastructure.
Each DAE can hold up to 72 SSDs and is divided into 2 groups of SSDs called Data Protection Groups (DPGs). Each
DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to the maximum of 36 SSDs. In other
words, 18, 24, 30 or 36 SSDs may be installed per DPG, where up to 2 DPGs can occupy a DAE.
SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers
to populate their X2-R clusters with 3.84TB sized drives, doubling the physical capacity available in their clusters.
Figure 36. Scale Up Capabilities – Up to 2 DPGs and 72 SSDs per DAE
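As a small illustration of these scale-up rules, the sketch below computes the raw (pre-data-reduction) capacity of each valid DPG population using the drive sizes quoted above; it is illustrative only and does not model XtremIO's usable-capacity or XDP overhead calculations.

# Valid SSD counts per Data Protection Group and current drive sizes, taken from the text above.
DPG_SIZES = [18, 24, 30, 36]
DRIVE_TB = {"X2-S": 0.40, "X2-R": 1.92}      # 3.84TB drives are planned for future X2-R releases

def dpg_raw_capacity(cluster_type: str) -> dict:
    """Raw (pre-data-reduction) capacity in TB for each valid DPG population."""
    return {ssds: round(ssds * DRIVE_TB[cluster_type], 2) for ssds in DPG_SIZES}

print(dpg_raw_capacity("X2-R"))   # e.g. {18: 34.56, 24: 46.08, 30: 57.6, 36: 69.12}
# A DAE holds up to two DPGs, so a fully populated X2-R DAE provides 72 x 1.92TB of raw flash.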
For more details on XtremIO X2, see the XtremIO X2 Specifications [2] and the XtremIO X2 Datasheet [3].
XIOS and the I/O Flow
Each Storage Controller within the XtremIO cluster runs a specially-customized lightweight Linux-based operating system
as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller
and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the
system's functional modules, RDMA communication, monitoring etc.
Figure 37. X-Brick Components
XIOS has a proprietary process scheduling and handling algorithm designed to meet the specific requirements of a
content-aware, low-latency, high-performing storage system. It provides efficient scheduling and data access, instant
exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub-
processes that run on different sockets.
The XtremIO Operating System gathers a variety of metadata tables on incoming data including data fingerprint, location
in the system, mappings and reference counts. The metadata is used as the fundamental reference for performing system
operations such as laying out incoming data uniformly, implementing inline data reduction services, and accessing data
on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY
and Microsoft ODX) to optimize integration with the storage system.
Regardless of which Storage Controller receives an I/O request from a host, multiple Storage Controllers on multiple X-
Bricks cooperate to process the request. The data layout in the XtremIO system ensures that all components share the
load and participate evenly in processing I/O operations.
An important functionality of XIOS is its data reduction capabilities. This is achieved by using inline data deduplication and
compression. Data deduplication and data compression complement each other. Data deduplication removes
redundancies, whereas data compression compresses the already deduplicated data before it is written to the flash
media. XtremIO is an always-on thin-provisioned storage system, which further realizes storage savings; the system
never writes a block of zeros to the disks.
XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service
hosts' I/O requests.
Details of the XIOS architecture and its data reduction capabilities are available in the Introduction to DELL EMC XtremIO X2 Storage Array document [4].
XtremIO Write I/O Flow
In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage
Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and
stores it in the cluster's mapping table. The mapping table maps the host Logical Block Addresses (LBA) to the block
fingerprints, and the block fingerprints to its physical location in the array (the DAE, SSD and offset the block is located
at). The fingerprint of a block has two objectives: to determine if the block is a duplicate of a block that already exists in
the array and to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among
Storage Controllers and assigns each its own fingerprint range. The mathematical process that calculates the fingerprints
results in a uniform distribution of fingerprint values and thus fingerprints and blocks are evenly distributed across all
Storage Controllers in the cluster.
A write operation works as follows (a conceptual sketch of this flow follows the list):
1. A new write request reaches the cluster.
2. The new write is broken into data blocks.
3. For each data block:
a. A fingerprint is calculated for the block.
b. An LBA-to-fingerprint mapping is created for this write request.
c. The fingerprint is checked to see if it already exists in the array.
d. If it exists, the reference count for this fingerprint is incremented by one.
e. If it does not exist:
1. A location is chosen on the array where the block will be written (distributed uniformly across the array
according to fingerprint value).
2. A fingerprint-to-physical location mapping is created.
3. The data is compressed.
4. The data is written.
5. The reference count for the fingerprint is set to one.
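To make the bookkeeping in these steps concrete, the following minimal Python sketch models the two mapping tables and the reference counting described above. It is a conceptual illustration of the documented flow, not XtremIO code; the fingerprint function, block size and in-memory dictionaries are simplifications chosen for the example.

```python
import hashlib
import zlib

BLOCK_SIZE = 16 * 1024  # illustrative block size

class WriteFlowModel:
    """Conceptual model of the LBA-to-fingerprint and fingerprint-to-physical tables."""
    def __init__(self):
        self.lba_to_fp = {}        # LBA -> fingerprint
        self.fp_to_physical = {}   # fingerprint -> (location, compressed block)
        self.refcount = {}         # fingerprint -> number of LBAs referencing it
        self._next_offset = 0

    def write(self, lba: int, block: bytes) -> None:
        fp = hashlib.sha1(block).hexdigest()      # 3a. fingerprint the block
        self.lba_to_fp[lba] = fp                  # 3b. LBA-to-fingerprint mapping
        if fp in self.fp_to_physical:             # 3c/3d. duplicate: bump the refcount only
            self.refcount[fp] += 1
            return
        location = self._next_offset              # 3e.1 choose a location for the new block
        self._next_offset += 1
        compressed = zlib.compress(block)         # 3e.3 compress unique data
        self.fp_to_physical[fp] = (location, compressed)  # 3e.2 fingerprint-to-physical mapping
        self.refcount[fp] = 1                     # 3e.5 reference count starts at one

model = WriteFlowModel()
model.write(0, b"A" * BLOCK_SIZE)
model.write(1, b"A" * BLOCK_SIZE)   # duplicate block: no new physical write
print(model.refcount)               # one fingerprint with a reference count of 2
```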
Deduplicated writes are naturally much faster than writes of new, unique data. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and increments the reference count for that fingerprint. No further data is written to the array and the operation completes quickly, an extra benefit of inline deduplication. Figure 38 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.
Figure 38. Incoming Data Stream Example with Duplicate blocks
As mentioned, fingerprints also help to decide where to write the block in the array. Figure 39 shows the incoming stream from Figure 38, after duplicates were removed, as it is written to the array. The blocks are distributed to their appointed Storage Controllers according to their fingerprint values, which ensures a uniform distribution of the data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) over the low-latency InfiniBand network.
Figure 39. Incoming Deduplicated Data Stream Written to the Storage Controllers
The actual write of the data blocks to the SSDs is carried out asynchronously. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic, and preserves the data in case of a system failure (power-related or any other). When enough blocks are collected in the buffer to fill a full stripe, the system writes them to the SSDs on the DAE. Figure 40 demonstrates the phase of writing the data to the DAEs after a full stripe of data blocks is collected in each Storage Controller.
Figure 40. Full Stripe of Blocks Written to the DAEs
XtremIO Read I/O Flow
In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The
fingerprint found is then looked up in the fingerprint-to-physical mapping and the data is retrieved from the right physical
location. Just as with writes, the read load is also evenly shared across the cluster, as blocks are evenly distributed, and
all volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system
performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the
application. A compressed data block is decompressed before it is delivered to the host.
XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint.
Blocks whose contents are more likely to be read are placed in the read cache for a fast retrieve.
A read operation works as follows (a conceptual sketch follows the list):
1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs for all data blocks and a buffer is created to hold the data.
3. For each LBA:
a. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
b. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data
blocks.
c. The requested data block is read from its physical location (the read cache or its location on an SSD) and transmitted, via RDMA over InfiniBand, to the buffer created in step 2 in the Storage Controller that processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the
host.
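Continuing the conceptual model used for the write flow, the sketch below resolves a read request through the two mapping tables and assembles the result in a buffer. Again, this is an illustration of the documented steps, not XtremIO code; the content-keyed read cache is simplified to a dictionary and the mapping tables are passed in directly.

```python
import zlib

class ReadFlowModel:
    """Minimal model of the read path over the two mapping tables described above."""
    def __init__(self, lba_to_fp, fp_to_physical):
        self.lba_to_fp = lba_to_fp            # LBA -> fingerprint
        self.fp_to_physical = fp_to_physical  # fingerprint -> compressed block
        self.read_cache = {}                  # fingerprint -> uncompressed block

    def read(self, lbas):
        buffer = []                           # step 2: buffer to hold the result
        for lba in lbas:                      # step 3: per requested LBA
            fp = self.lba_to_fp[lba]          # 3a. LBA -> fingerprint
            block = self.read_cache.get(fp)   # content-fingerprint-keyed read cache
            if block is None:
                block = zlib.decompress(self.fp_to_physical[fp])  # 3b/3c. physical read
                self.read_cache[fp] = block   # decompressed before delivery, then cached
            buffer.append(block)
        return b"".join(buffer)               # step 4: assemble and return to the host

# Three LBAs; LBA 0 and 1 reference the same deduplicated fingerprint.
fp_store = {"fpA": zlib.compress(b"A" * 16), "fpB": zlib.compress(b"B" * 16)}
model = ReadFlowModel({0: "fpA", 1: "fpA", 2: "fpB"}, fp_store)
print(model.read([0, 1, 2]))
```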
System Features
The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and
implementation of these features is unique to XtremIO and is designed around the capabilities and limitations of flash
media. We will list some key features included in the system.
Inline Data Reduction
XtremIO's unique Inline Data Reduction is achieved by two mechanisms: Inline Data Deduplication and Inline Data Compression.
Data Deduplication
Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash
media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The
deduplication is at a global level, meaning no duplicate blocks are written over the entire array. Being an inline and global
process, no resource-consuming background processes or additional reads and writes (which are mainly associated with
post-processing deduplication) are necessary for the feature's activity, thus increasing SSD endurance and eliminating
performance degradation.
As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow
on page 28). The fingerprints are also used for uniform distribution of data blocks across the array, thus providing inherent
load balancing for performance and enhancing flash wear-level efficiency, since the data never needs to be rewritten or
rebalanced.
XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The
system's unique content-aware storage architecture provides a substantially larger cache size with a small DRAM
allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" common in VDI
environments.
XtremIO has excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter,
flash longevity is maximized, logical storage capacity is multiplied (see Figure 7 and Figure 12 for examples) and total
cost of ownership is reduced.
Data Compression
Inline data compression is compression performed on data before it is written to the flash media. XtremIO automatically compresses data after all duplicates are removed, ensuring that compression is performed only on unique data blocks. The compression is performed in real time and not as a post-processing operation; this way, it does not over-use the SSDs or impact performance. Compressibility rates depend on the type of data written.
Data compression complements data deduplication in many cases and saves storage capacity by storing only unique data blocks in the most efficient manner. The benefits and capacity savings of the deduplication-compression combination are demonstrated in Figure 41, and real ratios from our tests are shown in Figure 7 and Figure 12 in the Test Results section.
Figure 41. Data Deduplication and Data Compression Demonstrated
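As a simple worked example of how the two mechanisms multiply, the sketch below computes the combined data reduction and the physical capacity actually consumed. The 3:1 and 2:1 ratios are the illustrative figures used in Figure 41, not measured results; the 60 TB input is an arbitrary example value.

```python
def total_data_reduction(dedupe_ratio: float, compression_ratio: float) -> float:
    """Combined reduction: deduplication and compression ratios multiply."""
    return dedupe_ratio * compression_ratio

def physical_capacity_needed(logical_tb: float, dedupe: float, compression: float) -> float:
    """Physical flash consumed for a given amount of logical (host-written) data."""
    return logical_tb / total_data_reduction(dedupe, compression)

print(total_data_reduction(3.0, 2.0))            # 6.0 -> 6:1 overall, as in Figure 41
print(physical_capacity_needed(60.0, 3.0, 2.0))  # 10.0 TB of flash for 60 TB written
```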
Thin Provisioning
XtremIO storage is natively thin provisioned, using a small internal block size. All volumes in the system are thin
provisioned, meaning that the system consumes capacity only when it is needed. No storage space is ever pre-allocated
before writing.
Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (with the metadata referring to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creep or garbage collection is necessary on XtremIO, volume fragmentation does not occur in the array, and defragmentation utilities are not needed.
This XtremIO feature enables consistent performance and data management across the entire life cycle of a volume,
regardless of the system capacity utilization or the write patterns of clients.
Integrated Copy Data Management
XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary
data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency.
XtremIO is one of a kind in its capabilities to consolidate multiple workloads and entire business processes safely and
efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides
consolidation, supporting on-demand copy operations at scale, and still maintains delivery of all performance SLAs in a
consistent and predictable way.
Consolidation of primary data and its copies in the same array has numerous benefits:
1. It can make development and testing activities up to 50% faster, creating copies of production code quickly for
development and testing purposes, and then refreshing the output back into production for the full cycle of code
upgrades in the same array. This dramatically reduces complexity and infrastructure needs, as well as development
risks, and increases the quality of the product.
2. Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in-
memory operation. Copies of the data are high performance and receive the same SLA as production copies without
compromising production SLAs. XtremIO offers this on-demand as both self-service and automated workflows for
both application and infrastructure teams.
3. Operations such as patches, upgrades and tuning tests can be made quickly using copies of production data.
Diagnosing problems of applications and databases can be done using these copies, and changes can be applied
and refreshed back to production. The same process can be used for testing new technologies and combining them in
production environments.
4. iCDM can also be used for data protection purposes, as it enables creating many copies at short point-in-time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection using different SLAs.
XtremIO Virtual Copies
XtremIO uses its own implementation of snapshots for all iCDM purposes, called XtremIO Virtual Copies (XVCs). XVCs
are created by capturing the state of data in volumes at a particular point in time and allowing users to access that data
when needed, regardless of the state of the source volume (even deletion). They allow any access type and can be taken
either from a source volume or another Virtual Copy.
XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system and
optimized for SSDs with a unique metadata tree structure that directs I/O to the right data timestamp. This allows efficient
copy creation that can sustain high performance, while maximizing the media endurance.
Figure 42. A Metadata Tree Structure Example of XVCs
When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the
system, making the operation very quick. This operation does not have any impact on the system and does not consume
any capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata
for each snapshot. Virtual Copy capacity consumption occurs only when changes are made to any copy of the data. Then,
the system updates the metadata of the changed volume to reflect the new write, and stores the blocks in the system
using the standard write flow process.
The system supports the creation of Virtual Copies on a single, as well as on a set, of volumes. All Virtual Copies of the
volumes in the set are cross-consistent and contain the exact same point-in-time. This can be done manually by selecting
a set of volumes for copying, or by placing volumes in a Consistency Group and making copies of that Group.
Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The
system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the
number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted.
Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters
the system.
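The copy-creation and deletion behavior described above maps naturally onto a pointer-based, reference-counted metadata model. The sketch below is a highly simplified illustration of that idea (it is not XtremIO's metadata tree; the dictionaries and names are invented for the example): creating a copy only duplicates pointers, capacity is consumed only when a copy diverges, and deleting a copy releases only the blocks whose reference count drops to zero.

```python
class VirtualCopyModel:
    """Toy model of pointer-based virtual copies with block reference counting."""
    def __init__(self):
        self.blocks = {}     # fingerprint -> data (the only real capacity)
        self.refcount = {}   # fingerprint -> number of references
        self.volumes = {}    # volume name -> {LBA: fingerprint}

    def write(self, vol: str, lba: int, data: bytes) -> None:
        fp = hash(data)                      # stand-in for a content fingerprint
        table = self.volumes.setdefault(vol, {})
        old = table.get(lba)
        if old is not None:
            self._release(old)               # overwriting releases the old reference
        self.blocks.setdefault(fp, data)
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        table[lba] = fp

    def create_copy(self, source: str, copy: str) -> None:
        # Only metadata pointers are duplicated; no data blocks are copied.
        self.volumes[copy] = dict(self.volumes[source])
        for fp in self.volumes[copy].values():
            self.refcount[fp] += 1

    def delete_volume(self, vol: str) -> None:
        for fp in self.volumes.pop(vol).values():
            self._release(fp)

    def _release(self, fp) -> None:
        self.refcount[fp] -= 1
        if self.refcount[fp] == 0:           # block no longer referenced anywhere
            del self.refcount[fp]
            del self.blocks[fp]

m = VirtualCopyModel()
m.write("prod", 0, b"base data")
m.create_copy("prod", "dev-copy")            # instantaneous, zero extra capacity
m.write("dev-copy", 0, b"changed data")      # capacity consumed only on change
m.delete_volume("dev-copy")                  # deletion releases only diverged blocks
print(len(m.blocks))                         # 1 -> only the production block remains
```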
With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities:
• Consistency Groups (CG) – Grouping of volumes to allow Virtual Copies to be taken on a group of volumes as a
single entity.
• Snapshot Sets – A group of Virtual Copy volumes taken together, either through a CG or from a group of manually chosen volumes.
• Protection Copies – Immutable read-only copies created for data protection and recovery purposes.
• Protection Scheduler – Used for local protection of a volume or a CG. It can be defined using intervals of
seconds/minutes/hours or can be set using a specific time of day or week. It has a retention policy based on the
number of copies needed or the permitted age of the oldest snapshot.
• Restore from Protection – Restore a production volume or CG from one of its descendant snapshot sets.
• Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access)
for alternating purposes.
• Refresh a Repurposing Copy – Refresh a Virtual Copy of a volume or a CG from the parent object or other
related copies with relevant updated data. It does not require volume provisioning changes for the refresh to take
effect, but only host-side logical volume management operations to discover the changes.
XtremIO Data Protection
XtremIO Data Protection (XDP) provides a "self-healing" double-parity data protection with very high efficiency to the
storage system. It requires very little capacity overhead and metadata space and does not require dedicated spare drives
for rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized
for failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single
drive rebuild. In the rare case of a double SSD failure, the second drive will be rebuilt only if there is enough space to
rebuild the second drive as well, or when one of the failed SSDs is replaced.
The XDP algorithm provides:
• N+2 drive protection.
• Capacity overhead of only 5.5%-11% (depends on the number of disks in the protection group).
• 60% more write-efficient than RAID1.
• Superior flash endurance to any RAID algorithm, due to the smaller number of writes and even distribution of
data.
• Automatic rebuilds that are faster than traditional RAID algorithms.
As shown in Figure 43, XDP uses a variation of N+2 row and diagonal parity which provides protection from two
simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs organized in two Data Protection Groups (DPGs).
XDP is managed independently on the DPG level. A DPG of 36 SSDs will result in capacity overhead of only 5.5% for its
data protection needs.
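The capacity overhead figures quoted above follow directly from the stripe geometry: each stripe carries two parity columns over the data columns in the DPG. Under the simplifying assumption that overhead is approximately 2 divided by the number of SSDs in the DPG, the short sketch below reproduces the quoted range; it is an approximation for illustration, not the exact XDP layout math.

```python
def xdp_overhead_estimate(ssds_in_dpg: int) -> float:
    """Approximate XDP capacity overhead: two parity columns per stripe across the DPG."""
    if ssds_in_dpg not in (18, 24, 30, 36):
        raise ValueError("A DPG holds 18, 24, 30 or 36 SSDs")
    return 2 / ssds_in_dpg

for n in (18, 24, 30, 36):
    print(f"{n} SSDs per DPG -> ~{xdp_overhead_estimate(n):.1%} overhead")
# 18 SSDs -> ~11.1%, 36 SSDs -> ~5.6%, matching the 5.5%-11% range quoted above
```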
Figure 43. N+2 Row and Diagonal Parity
Data at Rest Encryption
Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is removed from the
array, for customers in need of such security. XtremIO arrays utilize a high-performance inline encryption technique to
ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in
the event of theft or loss during transport, and makes it possible to return/replace failed components containing sensitive
data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and
government institutions.
At the heart of XtremIO's DARE solution is Self-Encrypting Drive (SED) technology. An SED has dedicated hardware
which is used to encrypt and decrypt data as it is written to or read from the drive. Offloading the encryption task to the
SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the
array. All XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning,
XtremIO Virtual Copies, etc.) are available on an encrypted cluster as well as on a non-encrypted cluster, and
performance is not impacted when using encryption.
A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at
any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only
authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the
Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.
Figure 44. Data at Rest Encryption in XtremIO
Write Boost
In the new X2 storage array, the write flow algorithm was significantly redesigned to improve array performance, keeping pace with the rise in compute power and disk speeds and accounting for common applications' I/O patterns and block sizes. As mentioned in the discussion of the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent once the changes are written to local and remote NVRAMs for protection; the blocks are written to disk only later, at a time that best optimizes the system's activity. In addition to the shortened path from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and increasing latency, especially for bigger I/O blocks. Examining customers' applications and I/O patterns, it was found that many I/Os from common applications arrive in small blocks, under 16KB, creating high loads on the storage array. Figure 45 shows the block size histogram from the entire XtremIO install base; the high percentage of blocks smaller than 16KB is clearly evident. The new algorithm addresses this by aggregating small writes into bigger blocks in the array before writing them to disk, making them less demanding on the system, which handles bigger I/Os faster. The improvement in latency is around 400% in several cases, allowing XtremIO X2 to address application requirements of 0.5 msec or lower latency.
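A minimal sketch of the aggregation idea described above follows. It is an illustration only, with an invented 64 KB target size and a simple list-based buffer; it is not the X2 implementation. Small incoming writes are accepted individually (acknowledgment after journaling is not modeled here) and are coalesced into larger chunks before being flushed.

```python
# Illustrative write-aggregation buffer: accept small writes and flush them
# to the back end in larger chunks.

AGGREGATE_TARGET = 64 * 1024  # invented target chunk size for the example

class WriteAggregator:
    def __init__(self, flush_fn, target=AGGREGATE_TARGET):
        self.flush_fn = flush_fn
        self.target = target
        self.pending = []
        self.pending_bytes = 0

    def submit(self, payload: bytes) -> None:
        """Accept a small write; flush once enough bytes have accumulated."""
        self.pending.append(payload)
        self.pending_bytes += len(payload)
        if self.pending_bytes >= self.target:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.flush_fn(b"".join(self.pending))   # one large back-end write
            self.pending, self.pending_bytes = [], 0

flushed_sizes = []
agg = WriteAggregator(lambda chunk: flushed_sizes.append(len(chunk)))
for _ in range(20):
    agg.submit(b"x" * 8192)      # twenty 8 KB writes ...
agg.flush()
print(flushed_sizes)             # ... leave the buffer as a few 64 KB-class chunks
```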
Figure 45. XtremIO Install Base Block Size Histogram
VMware APIs for Array Integration (VAAI)
VAAI was first introduced as VMware's improvement to host-based VM cloning. It offloads the workload of cloning a VM to the storage array, making cloning much more efficient. Instead of copying all blocks of a VM from the array and back to it to create a new cloned VM, the host lets the array perform the copy internally, utilizing the array's features and saving the host and network resources that would otherwise be involved in the actual cloning of data. This offload is backed by the X-copy (extended copy) command to the array, which is used when cloning large amounts of data.
XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated storage
vMotion, VM provisioning, and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy
efficiency even further by making the whole operation metadata driven. Due to its inline data reduction features and in-
memory metadata, no actual data blocks are copied during an X-copy command. The system only creates new pointers to
the existing data within the Storage Controllers' memory. Therefore, the operation saves host and network resources and
does not consume storage resources, leaving no impact on the system's performance, as opposed to other
implementations of VAAI and the X-copy command.
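The metadata-driven X-copy behavior described above can be illustrated with the same pointer and reference-count model used earlier. The sketch below is a toy example, not the array's implementation: cloning a range copies only LBA-to-fingerprint pointers and increments reference counts, so no data blocks move.

```python
def xcopy(src_table: dict, dst_table: dict, refcount: dict,
          src_lba: int, dst_lba: int, num_blocks: int) -> None:
    """Metadata-only clone: copy LBA-to-fingerprint pointers and bump refcounts."""
    for i in range(num_blocks):
        fp = src_table[src_lba + i]          # existing fingerprint of the source block
        dst_table[dst_lba + i] = fp          # new pointer to the same physical data
        refcount[fp] += 1                    # no data block is read or written

# Source VM occupies LBAs 0-3; clone it into a new volume's LBAs 0-3.
source_volume = {0: "fpA", 1: "fpB", 2: "fpC", 3: "fpD"}
clone_volume = {}
refcounts = {"fpA": 1, "fpB": 1, "fpC": 1, "fpD": 1}
xcopy(source_volume, clone_volume, refcounts, 0, 0, 4)
print(clone_volume)   # same fingerprints as the source
print(refcounts)      # every fingerprint now referenced twice
```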
Performance tests of XtremIO during X-copy operations, and a comparison between X1 and X2 with different block sizes, can be found in a dedicated post on XtremIO's CTO blog [9].
Figure 46 illustrates the X-copy operation when performed against an XtremIO storage array and shows the efficiency in
metadata-based cloning.
Figure 46. VAAI X-Copy with XtremIO
The XtremIO features for VAAI support include:
• Zero Blocks / Write Same – used for zeroing-out disk regions and providing accelerated volume formatting.
• Clone Blocks / Full Copy / X-Copy – used for copying or migrating data within the same physical array; an almost
instantaneous operation on XtremIO due to its metadata-driven operations.
• Record Based Locking / Atomic Test & Set (ATS) – used during creation and locking of files on VMFS volumes and during power-down and power-up of VMs.
• Block Delete / Unmap / Trim – used for reclamation of unused space using the SCSI UNMAP feature.
Other features of XtremIO X2 (some described in previous sections):
• Scalability (scale-up and scale-out)
• Even Data Distribution (uniformity)
• High Availability (no single points of failures)
• Non-disruptive Upgrade and Expansion
• RecoverPoint Integration (for replications to local or remote arrays)
XtremIO Management Server
The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is
preinstalled with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware
virtual machine.
The XMS manages the cluster via the management ports on both Storage Controllers of the first X-Brick in the cluster and
uses a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path and thus can be
disconnected from an XtremIO cluster without jeopardizing data I/O tasks. A failure on the XMS affects only monitoring
and configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such
failures.
The GUI is based on a new Web User Interface (WebUI), which is accessible from any browser and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the most useful features of the new WebUI are described below.
Dashboard
The Dashboard window presents an overview of the cluster. It has three panels:
1. Health – Provides an overview of the system's health status and alerts.
2. Performance (shown in Figure 47) – Provides an overview of the system's overall performance and top used
Volumes and Initiator Groups.
3. Capacity (shown in Figure 48) – Provides an overview of the system's physical capacity and data savings. Note that these figures show views available in the dashboard, not the test results presented in earlier figures.
Figure 47. XtremIO WebUI – Dashboard – Performance Panel
Figure 48. XtremIO WebUI – Dashboard – Capacity Panel
The main Navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options
related to XtremIO's management actions. The main menus contain options for the Dashboard, Notifications,
Configuration, Reports, Hardware and Inventory.
Notifications
In the Notifications menu, we can navigate to the Events window (shown in Figure 49) and the Alerts window, showing
major and minor issues related to the cluster's health and operations.
Figure 49. XtremIO WebUI – Notifications – Events Window
Configuration
The Configuration window displays the cluster's logical components: Volumes (shown in Figure 50), Consistency Groups,
Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. From this window we can create and modify these
entities by using the action panel on the top right.
Figure 50. XtremIO WebUI – Configuration
Reports
In the Reports menu, we can navigate to different windows to show graphs and data of different aspects of the system's
activities, mainly related to the system's performance and resource utilization. Menu options we can choose to view
include: Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage
or User Defined reports. We can view reports using different time resolutions and components. Entities to be viewed are
selected with the "Select Entity" option in the Report menu (shown in Figure 51). In addition, pre-defined or custom time
intervals can be selected for the report as shown in Figure 52.
The Test Result graphs shown earlier in this document were generated with these menu options.
Figure 51. XtremIO WebUI – Reports – Selecting Specific Entities to View
Figure 52. XtremIO WebUI – Reports – Selecting Specific Times to View
The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage
capacity information. The Performance window shows extensive performance reports which mainly include Bandwidth,
IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the
system. The Latency window (shown in Figure 53) shows Latency reports per block size and IOPS metrics. The CPU
Utilization window shows CPU utilization of all Storage Controllers in the system.
Figure 53. XtremIO WebUI – Reports – Latency Window
The Capacity window (shown in Figure 54) shows capacity statistics and the change in storage capacity over time. The
Savings window shows Data Reduction statistics and change over time. The Endurance window shows SSD's
endurance status and statistics. The SSD Balance window shows data balance and variance between the SSDs. The
Usage window shows Bandwidth and IOPS usage, both overall and separately for reads and writes. The User Defined
window allows users to define their own reports.
Figure 54. XtremIO WebUI – Reports – Capacity Window
Monitoring
Monitoring, managing and optimizing storage health are critical to ensuring the performance of a VDI infrastructure. Simplicity and ease of use have always been the design principles of the XtremIO Management Server (XMS). With XIOS 6.0, XMS delivers an HTML5 user interface for consumer-grade simplicity with enterprise-class features. The improved user interface includes:
• Contextual, automated workflow suggestions for management activities.
• Advanced reporting and analytics that make it easy to troubleshoot.
• Global search to quickly find that proverbial needle in the haystack.
The simple, yet powerful user interface drives efficiency by enabling administrators to manage, monitor, receive
notifications, and set alerts on the storage. With XMS, key system metrics are displayed in an easy-to-read graphical
dashboard. From the main dashboard, you can easily monitor the overall system health, performance and capacity
metrics and drill down to each object for additional details. This information allows you to quickly identify potential issues
and take corrective actions.
XtremIO X2 collects real-time and historical data (up to 2 years) for a rich set of statistics. These statistics are collected both at the cluster/array level and at the object level (Volumes, Initiator Groups, Targets, etc.). This data collection is available from day one, enabling XMS to provide advanced analytics for the storage environment running VDI infrastructures.
Figure 55. XtremIO WebUI – Blocks Distribution Windows
Advanced Analytics Reporting
VDI desktops' data access patterns vary based on many factors, such as desktop application behavior, boot storms, login storms, and OS updates. This greatly complicates storage sizing for VDI environments. XMS built-in reporting tracks data traffic patterns, significantly simplifying the sizing effort.
With the X2 release, XMS provides a built-in reporting widget that tracks the weekly data traffic pattern. You can easily discover the IOPS pattern for each day and hour of the week and understand whether the pattern is sporadic or consistent over a period of time.
Figure 56. XtremIO WebUI – Weekly Patterns Reporting Widget
The CHANGE button on the widget tracks and displays changes (increases or decreases) in the past week relative to the past 8 weeks. If there is no major change (i.e., the hourly pattern in the past week did not change relative to the past 8 weeks), no up/down arrow indication is shown. If there is an increase or decrease in this week's traffic relative to the past 8 weeks, a visual arrow indication appears.
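A minimal sketch of the kind of comparison the widget performs is shown below. It is purely illustrative: the 10% change threshold, the data layout and the function names are invented for the example and are not taken from XMS.

```python
from statistics import mean

def weekly_change_indicator(current_week, past_weeks, threshold=0.10):
    """Compare this week's hourly IOPS profile (168 samples) with the average of
    previous weeks and return 'up', 'down' or None (no arrow shown)."""
    baseline = [mean(samples) for samples in zip(*past_weeks)]  # per-hour average
    current_total, baseline_total = sum(current_week), sum(baseline)
    change = (current_total - baseline_total) / baseline_total
    if change > threshold:
        return "up"
    if change < -threshold:
        return "down"
    return None

# 168 hourly IOPS samples per week (invented flat profiles for the example).
past_8_weeks = [[1000.0] * 168 for _ in range(8)]
this_week = [1200.0] * 168                                 # ~20% busier than baseline
print(weekly_change_indicator(this_week, past_8_weeks))    # -> 'up'
```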
Figure 57. XtremIO WebUI – Weekly Patterns Reporting on Relative Changes in Data Pattern
Hardware
In the Hardware menu, a picture is provided of the physical cluster and the installed X-Bricks. When viewing the FRONT
panel, we can select and highlight any component of the X-Brick and view related detailed information in the panel on the
right. Figure 58 shows a hardware view of Storage Controller #1 in X-Brick #1 including installed disks and status LEDs.
We can further click on the "OPEN DAE" button to see a visual illustration of the X-Brick's DAE and its SSDs, and view
additional information on each SSD and Row Controller.
Figure 58. XtremIO WebUI – Hardware – Front Panel
Figure 59 shows the back panel view including physical connections to and within the X-Brick. This includes FC
connections, Power, iSCSI, SAS, Management, IPMI and InfiniBand. Connections can be filtered by the "Show
Connections" list at the top right.
Figure 59. XtremIO WebUI – Hardware – Back Panel – Show Connections
Inventory
In the Inventory menu, all components in the environment are shown together with related information. This includes:
XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups,
SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.
XMS Menus
The XMS Menus are global system menus that can be accessed in the top right tools of the interface. We can use them to
Search components in the system, view Health status of managed components, view major Alerts, view and configure
System Settings (shown in Figure 60) and use the User Menu to view login information (and logout), and support
options.
Figure 60. XtremIO WebUI – XMS Menus – System Settings
As mentioned, other interfaces are also available to monitor and manage an XtremIO cluster through the XMS server. The system's Command Line Interface (CLI) can be used for everything the GUI provides and more. A RESTful API is another pre-installed interface in the system, allowing clusters to be managed with HTTP-based commands. For Windows PowerShell console users, a PowerShell API module is also available for XtremIO management.
Test Setup
We used an XtremIO cluster with a single X2-S X-Brick as the storage array for our environment. The X-Brick had 36
drives of 400GB size each which, after leaving capacity for parity calculations and other needs, amounts to about 11.2TB
of physical capacity. As we saw in the Test Results section, this is more than enough capacity for our 4000 virtual
desktops. 36 drives are half the number that can fit in a single X-Brick. This means that, in terms of capacity, we can grow to a maximum of 8x the capacity of this test setup using the X2-S scale-up (up to 72 drives per X-Brick) and scale-out (up to 4 X-Bricks per cluster) capabilities. For X2-R, we currently provide drives that are about 5 times bigger, yielding a much higher capacity. X2-R drives will soon be 10 times bigger, and X2-R clusters can grow to up to 8 X-Bricks.
Performance-wise, we can also see from the Test Results section that our single X2-S X-Brick was enough to service our
VDI environment of 4000 desktops, with excellent storage traffic metrics (latency, bandwidth, IOPS) and resource
consumption metrics (CPU, RAM) throughout all of the VDI environment's processes. X2-R clusters would deliver even higher compute performance, as they have 3x the RAM of X2-S.
Compute Hosts: Dell PowerEdge Servers
The test setup includes a homogeneous cluster of 32 ESX servers for hosting the Citrix desktops and 2 ESX servers for
virtual appliances, which are used to manage the Citrix and vSphere infrastructure. We chose Dell's PowerEdge FC630
as our ESX hosts, as they have the compute power to deal with an environment at such a scale (125 virtual desktops per
ESX host) and are a good fit for virtualization environments. Dell PowerEdge servers work with the Dell OpenManage
systems management portfolio that simplifies and automates server lifecycle management, and can be integrated with
VMware vSphere with a dedicated plugin.
Table 2 lists the ESX host details for our environment.
Table 2. ESX Hosts Details Used for VDI Desktops and Infrastructure
PROPERTIES 2+32 ESX HOSTS
System make Dell
Model PowerEdge FC630
CPU cores 36 CPUs x 2.10GHz
Processor type Intel Xeon CPU E5-2695 v4 @ 2.10GHz
Processor Sockets 2
Cores per socket 18
Logical processors 72
Memory 524 GB
Ethernet NICs 4
Ethernet NICs type QLogic 57840 10Gb
iSCSI NICs 4
iSCSI NICs type QLogic 57840 10Gb
FC adapters 4
FC adapters type QLE2742 Dual Port 32Gb
On-board SAS controller 1
In our test, we used FC connectivity to attach XtremIO LUNs to the ESX hosts, but iSCSI connectivity could have been
used in the same manner.
It is highly recommended to select and purchase servers after verifying the vendor, make and model against VMware's hardware compatibility list (HCL). It is also recommended to install the latest firmware for the server and its adapters, and to use the latest GA release of VMware vSphere ESXi, including any of the latest update releases or express patches.
For more information on the Dell EMC PowerEdge FC630, see its specification sheet [12].
Storage Configuration
This section outlines the storage configuration in our test environment, highlighting zoning considerations, XtremIO
Volumes, Initiator Groups, and mapping between Volumes and Initiator Groups.
Zoning
In a single X-Brick cluster configuration, a host equipped with a dual port storage adapter may have up to four paths per
device. Figure 61 shows the logical connection topology for four paths. Each XtremIO Storage Controller has two Fibre
Channel paths that connect to the physical host, via redundant SAN switches.
Figure 61. Dual Port HBA on an ESX Host to a Single X2 X-Brick Cluster Zoning
As recommended in the EMC Host Connectivity Guide for VMware ESX Server [6], the following connectivity guidelines should be followed (a small zoning sketch follows the list):
• Use multiple HBAs on the servers.
• Use at least two SAN switches to provide redundant paths between the servers and the XtremIO cluster.
• Restrict zoning to four paths to the storage ports from a single host.
• Use a single-Target-per-single-Initiator (1:1) zoning scheme.
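The sketch below shows how those guidelines translate into zones for the single X-Brick topology in Figure 61. Names such as hba1/hba2 and the storage-port labels are invented placeholders; the code only enumerates the 1:1 initiator-target pairs and does not talk to a switch.

```python
# Enumerate single-initiator / single-target zones for a dual-port HBA host
# against a single X-Brick (two Storage Controllers, one FC port used per
# Storage Controller per fabric), keeping the total at four paths per host.

host_initiators = {"fabric-A": "hba1", "fabric-B": "hba2"}        # placeholder HBA aliases
storage_targets = {"fabric-A": ["X1-SC1-fc1", "X1-SC2-fc1"],      # placeholder port aliases
                   "fabric-B": ["X1-SC1-fc2", "X1-SC2-fc2"]}

zones = []
for fabric, initiator in host_initiators.items():
    for target in storage_targets[fabric]:
        zones.append((fabric, f"z_{initiator}_{target}", initiator, target))

for fabric, name, initiator, target in zones:
    print(f"{fabric}: zone {name} = [{initiator}, {target}]")
print(f"paths per host: {len(zones)}")   # 4 paths, one target per zone
```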
Storage Volumes
We provisioned XtremIO Volumes as follows:
• 1 Volume of 4TB for hosting all virtual machines providing management functions for the VDI environment.
• 32 x 3TB Volumes for hosting PVS/MCS Linked Clone desktops.
• 32 x 10TB Volumes for hosting MCS Full Clone desktops.
To provision multiple XtremIO Volumes, we highly recommend leveraging the capabilities of the EMC VSI plugin for the vSphere Web Client; a scripted alternative is sketched below.
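For environments that prefer scripting over the VSI plugin, bulk provisioning can also be driven through the XMS RESTful API mentioned earlier. The sketch below is a non-authoritative example that assumes a v2 JSON endpoint of the form /api/json/v2/types/volumes and a "vol-name"/"vol-size" request body; consult the XtremIO RESTful API guide for your XMS version before using it. The XMS address, credentials, cluster name and volume naming scheme are placeholders.

```python
# Hedged sketch: creating the 32 Linked Clone volumes through the XMS REST API.
# Endpoint and body fields are assumptions based on the XtremIO v2 JSON API;
# verify them against the RESTful API guide for your XMS release.
import requests

XMS = "https://xms.example.local"          # placeholder XMS address
AUTH = ("admin", "password")               # placeholder credentials
CLUSTER = "xbrick-x2s-01"                  # placeholder cluster name

def create_volume(name: str, size: str) -> None:
    body = {"vol-name": name, "vol-size": size, "cluster-id": CLUSTER}
    resp = requests.post(f"{XMS}/api/json/v2/types/volumes",
                         json=body, auth=AUTH, verify=False)  # lab only: self-signed cert
    resp.raise_for_status()

if __name__ == "__main__":
    for i in range(1, 33):
        create_volume(f"vdi-linked-clones-{i:02d}", "3t")     # 32 x 3TB volumes
```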
Initiator Groups and LUN Mapping
We configured a 1:1 mapping between Initiator Groups and ESX hosts in our test environment. Each of our ESX hosts
has a dual port FC HBA, thus each Initiator Group contains two Initiators mapped to the two WWNs of the FC HBA.
Altogether 34 Initiator Groups were created, as follows:
• 2 Initiator Groups for mapping volumes to 2 management servers.
• 32 Initiator Groups for mapping volumes to all 32 ESX hosts hosting VDI desktops.
The Initiator Groups and Volumes mapping was as follows:
• 1 Volume (size = 2TB) mapped to the 2 management infrastructure's Initiator Groups.
• 32 Volumes (3TB for PVS/ MCS Linked Clones, 10TB for MCS Full Clones) mapped to the 32 ESX hosts' Initiator
Groups hosting virtual desktops.
Storage Networks
We used FC connectivity between our X2 storage array and the ESX hosts to provision LUNs, but our environment was
also iSCSI-ready. For SAN fabric, we used Brocade G620 switches connecting the HBAs on the host to the Storage
Controllers on the X-Brick. Some important Brocade G620 details are summarized in Table 3. For more details on the FC
switch, refer to Brocade G620 Switch Datasheet.
Table 3. Brocade G620 FC Switch Details
Make/Model Brocade G620
Form factor 1U
FC Ports 64
Port Speed 32Gb
Maximum Aggregate Bandwidth 2048Gbps Full Duplex
Supported Media 128Gbps, 32Gbps, 16Gbps, 10Gbps
For iSCSI connectivity, we used Mellanox MSX1016 switches connecting host ports to the Storage Controllers on the X-
Brick. Some important Mellanox MSX1016 details are summarized in Table 4. For more details on the iSCSI switch, refer
to Mellanox MSX1016 Switch Product Brief.
Table 4. Mellanox MSX1016 10GbE Switch Details
Make/Model Mellanox MSX1016 10GbE
Form factor 1U
Ports 64
Port Speed 10G
Jumbo Frames Supported (9216 Byte size)
Supported Media 1GbE, 10GbE
We highly recommend installing the most recent FC and iSCSI switch firmware for datacenter deployments.
VMUG ISRAEL November 2012, EMC session by Itzik ReichVMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik Reich
 
Bca1931 final
Bca1931 finalBca1931 final
Bca1931 final
 
Vce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironmentsVce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironments
 
Emc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_finalEmc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_final
 

Recently uploaded

Data structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdfData structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdf
TIPNGVN2
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
Kari Kakkonen
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
KatiaHIMEUR1
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
DianaGray10
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
innovationoecd
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
James Anderson
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
SOFTTECHHUB
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with SlackLet's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
shyamraj55
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Vladimir Iglovikov, Ph.D.
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
Aftab Hussain
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Zilliz
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
Rohit Gautam
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
SOFTTECHHUB
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
Neo4j
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 

Recently uploaded (20)

Data structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdfData structures and Algorithms in Python.pdf
Data structures and Algorithms in Python.pdf
 
Climate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing DaysClimate Impact of Software Testing at Nordic Testing Days
Climate Impact of Software Testing at Nordic Testing Days
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with SlackLet's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
 
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIEnchancing adoption of Open Source Libraries. A case study on Albumentations.AI
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AI
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
 
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
GraphSummit Singapore | Neo4j Product Vision & Roadmap - Q2 2024
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 

Executive Summary

This paper describes a reference architecture for deploying a Citrix XenDesktop 7.16 Virtual Desktop Infrastructure (VDI) environment and published applications using the Dell EMC XtremIO X2 storage array. It also discusses design considerations for deploying such an environment. Based on the data presented herein, we firmly establish the value of XtremIO X2 as a best-in-class all-flash array for Citrix XenDesktop Enterprise deployments.

This reference architecture presents a complete VDI solution for Citrix XenDesktop 7.16, delivering virtualized 32-bit Windows 10 desktops using MCS and PVS technologies with applications such as Microsoft Office 2016, Adobe Reader 11, Java, IE and other common desktop user applications. It discusses design considerations that will give you a reference point for successfully deploying a VDI project using XtremIO X2, and describes tests performed by XtremIO to validate and measure the operation and performance of the recommended solution.
Business Case

A well-known objective of virtualizing desktops is lowering the Total Cost of Ownership (TCO). TCO generally includes capital expenditures from purchasing hardware such as storage, servers, networking switches and routers, in addition to the software licensing and maintenance costs. The main goals in virtualizing desktops are to improve economics and efficiency in desktop delivery, ease maintenance and management, and improve desktop security. In addition to these goals, a key objective of a successful VDI deployment, and one that probably matters the most, is the end user experience. It is imperative for VDI deployments to demonstrate parity with physical workstations when it comes to the end user experience.

The overwhelming value of virtualizing desktops in a software-defined datacenter and the need to deliver a rich end-user experience compel us to select best-of-breed infrastructure components for our VDI deployment. Selecting a best-in-class, performant storage system that is also easy to manage helps to achieve our long-term goal of lowering the TCO, and hence is a critical piece of the infrastructure. The shared storage infrastructure in a VDI solution should be robust enough to deliver consistent performance and scalability for thousands of desktops regardless of the desktop delivery mechanism (linked clones, full clones, etc.). XtremIO brings tremendous value by providing consistent performance at scale with features such as always-on inline deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with VMware vSphere is achieved by using VMware APIs for Array Integration (VAAI). Dell EMC Solutions Integration Service (SIS) and Virtual Storage Integrator's (VSI) ease of management make choosing this best-of-breed all-flash array even more attractive for desktop virtualization applications.

XtremIO is a scale-out storage system that can grow in storage capacity, compute resources and bandwidth capacity whenever storage requirements for the environment are enhanced. With the advent of multi-core server systems with an increasing number of CPU cores per processor (following Moore's law), we are able to consolidate a growing number of desktops on a single enterprise-class server. When combined with the XtremIO X2 All-Flash Array, we can consolidate vast numbers of virtualized desktops on a single storage array, thereby achieving high consolidation at great performance from both a storage and a compute perspective.

The solution is based on Citrix XenDesktop 7.16, which provides a complete end-to-end solution delivering Microsoft Windows virtual desktops or server-based hosted shared sessions to users on a wide variety of endpoint devices. Virtual desktops are dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time they log on. Citrix XenDesktop 7.16 provides a complete virtual desktop delivery system by integrating several distributed components with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure.

Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while managing cost, centralizing control, and enforcing corporate security policy.
Deploying Windows apps to users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve productivity. With Citrix XenDesktop 7.16, IT can effectively control app and desktop provisioning while securing data assets and lowering capital and operating expenses.
Overview

It is well known that implementing a complete VDI solution is a multi-faceted effort with nuances encompassing compute, memory, network – and most importantly – storage. Our focus in this reference architecture is on XtremIO X2 capabilities and benefits in such a solution; however, we intend to give a complete picture of a VDI solution.

An XtremIO X2 cluster provides sufficient storage capacity and adequate performance for servicing the I/O requests and storage bandwidth required for a scale of thousands and tens of thousands of virtual desktops. This includes desktop delivery, management operations, login and boot storms, and production use at scale. The XtremIO X2 Storage Array provides top-class performance when deploying virtual desktops and running management operations on them, as well as when subjected to live user emulation tests using LoginVSI (Login Virtual Session Indexer – software that simulates user workloads for Windows-based virtualized desktops).

The XtremIO All-Flash Storage Array is based upon a scale-out architecture. It is comprised of building blocks called X-Bricks, which can be clustered together to grow performance and capacity as required. An X-Brick is the basic building block of an XtremIO cluster. Each X-Brick is a highly available, high-performance unit that consists of dual Active-Active Storage Controllers, with CPU and RAM resources, Ethernet, FC and iSCSI connections, and a Disk Array Enclosure (DAE) containing the SSDs that hold the data. With XtremIO X2, a single X-Brick can service the storage capacity and bandwidth requirements for 4000 desktops, with capacity to spare.

XtremIO X2 All-Flash Array is designed to provide high responsiveness for increasing data usage for thousands of users and is extremely beneficial for VDI projects. In subsequent sections of this reference architecture, we will present XtremIO's compounding returns for its data reduction capabilities and the high performance it provides to VDI environments with thousands of desktops. We will see the benefits in terms of data reduction and storage performance in deploying a Full Clone desktop pool as well as in deploying a Linked Clone desktop pool.

XtremIO's scale-out architecture allows scaling any environment, in our case VDI environments, in a linear way that satisfies both the capacity and performance needs of the growing infrastructure. An XtremIO X2 cluster can start with any number of required X-Bricks to service the current or initial loads and can grow linearly (up to 4 X-Bricks in a cluster) to appropriately service the increasing environment (to be increased to 8 X-Bricks in the future, depending on the cluster's type). With X2, in addition to its scale-out capabilities, an XtremIO storage array can scale up by adding extra SSDs to an X-Brick. An X-Brick can contain between 18 and 72 SSDs (in increments of 6) of fixed sizes (400GB or 1.92TB, depending on the cluster's type, with future versions allowing 3.84TB sized SSDs); a short sketch of the resulting raw capacity options appears at the end of this overview.

In developing this VDI solution, we have selected VMware vSphere 6.5 Update 1 as the virtualization platform, and Citrix XenDesktop 7.16 for virtual desktop delivery and management. Windows 10 (32-bit) is the virtual desktops' operating system. EMC VSI (Virtual Storage Integrator) 7.3 and the vSphere Web Client are used to apply best practices pertaining to XtremIO storage Volumes and the general environment.
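To make the scale-up options just described concrete, the short sketch below enumerates the raw, pre-XDP and pre-data-reduction capacity of a single DAE for the supported SSD counts and the two drive sizes mentioned above. This is an illustrative calculation only; usable capacity is lower once XDP parity, spare space and metadata are accounted for.

```python
# Raw DAE capacity per X-Brick for the configurations described above:
# 18 to 72 SSDs in increments of 6, with 400 GB or 1.92 TB drives.
# Raw capacity only; it ignores XDP overhead and any data reduction.
SSD_SIZES_TB = {"400GB": 0.4, "1.92TB": 1.92}

for label, size_tb in SSD_SIZES_TB.items():
    options = {count: round(count * size_tb, 2) for count in range(18, 73, 6)}
    print(f"{label} SSDs -> raw TB per DAE: {options}")
```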
To some degree, data in subsequent sections of this reference architecture helps us quantify the end user experience for a desktop user and also demonstrates the efficiency in management operations that a datacenter administrator may achieve when deploying a VDI environment on the XtremIO X2 all-flash array. We begin the reference architecture by discussing test results, which are classified into the following categories:

• Management Operations – resource consumption and time to complete Citrix XenDesktop management operations.
• Production Use – resource consumption patterns and time to complete a boot storm, and resource consumption patterns and responsiveness when desktops in the pool are subjected to "LoginVSI Knowledge Worker" workloads, emulating real users' workloads.

After presenting and analyzing the test results of our VDI environment, we will discuss the different elements of our infrastructure, beginning with the hardware layer and moving up to the software layer, including the features and best practices we recommend for the environment. This includes extensive details of the XtremIO X2 storage array, storage network equipment and host details at the hardware level, and the VMware vSphere Hypervisor (ESXi), vCenter Server and Citrix XenDesktop environment at the software level. The details of the virtual machine settings and LoginVSI workload profile provide us with the complete picture of how all building blocks of a VDI environment function together.
Test Results

In this section, we elaborate on the tests performed on our VDI environment and their results. We start with a summary of the results and related conclusions, and dive deeper into each test's detailed results and analyzed data and statistics (including various storage and compute metrics such as bandwidth, latency, IOPS, and CPU and RAM utilization).

Summary

Citrix XenDesktop delivers virtual Windows desktops and applications as secure services on any device. It provides a native touch-enabled look and feel that is optimized for the device type as well as the network. A Citrix XenDesktop desktop pool has the following basic lifecycle stages:

• Provisioning
• Production work by active users
• Maintenance operations

We will show summary and detailed test results for these stages, divided into two types of lifecycle phases: Management Operations (Provisioning and Maintenance operations) and Production Use.

From the perspective of datacenter administrators, operational efficiency is translated to time to complete management operations. The less time it takes to provision desktops and perform maintenance operations, the faster VDI desktop pools become available for production. It is for this reason that the storage array's throughput performance deserves special attention – the more throughput the system can provide, the faster those management operations will complete. The storage array throughput is measured in terms of IOPS or bandwidth that manifest in terms of data transfer rate.

During production, desktops are in actual use by end users via remote sessions. Two events are tested to examine the infrastructure's performance and ability to serve VDI users: a virtual desktop boot storm, and heavy workloads produced by a high percentage of users using their desktops. Boot storms are measured by time to complete, and heavy workloads by the "user experience". The criteria dictating "user experience" are the applications' responsiveness and the overall desktop experience. We use the proven LoginVSI tests (explained further in this paper) to evaluate user experience, and track storage latency during those LoginVSI tests.

Table 1 shows a summary of the test results for all stages of a VDI desktop pool lifecycle with 4000 desktops for MCS Linked Clone, MCS Full Clone and PVS Clone desktops, when deployed on an XtremIO X2 cluster as the storage array. Note that the Recompose and Refresh maintenance operations are not applicable for Linked Clone desktops.

Table 1. VDI Performance Tests with XtremIO X2 – Results Summary

4000 DESKTOPS               MCS LINKED CLONES   MCS FULL CLONES   PVS CLONES
Elapsed Time – Deployment   50 Minutes          65 Minutes        N/A
Boot Storm                  10 Minutes          10 Minutes        10 Minutes
LoginVSI – VSI Baseline     862                 864               841
LoginVSI – VSI Average      1122                1096              1071
LoginVSI – VSI Max          Not Reached         Not Reached       Not Reached

We notice the excellent results for deployment time, boot storm performance, and maintenance operation time, as well as the accomplished LoginVSI results (detailed in the LoginVSI Results section) that emulate production work by active users.
We suggest a scale-out approach for VDI environments, in which we add compute and memory resources (more ESX hosts) as we scale up the number of desktops. In our tests, we deployed virtual desktops with two vCPUs and 4GB of RAM (not all utilized, since we are using a 32-bit operating system) per desktop. After performing a number of tests to understand the appropriate scaling, we concluded the appropriate scale to be 125 desktops per single ESX host (with the given host configuration listed in Table 2). Using this scale, we deployed 4000 virtual desktops on 32 ESX hosts. For storage volume size, the selected scale was 125 virtual desktops per XtremIO Volume of 3TB (the maximum number of desktops per single LUN when provisioned with VAAI is 500). As we will see next, the total of 32 volumes and 96TB was easily handled by our single X-Brick X2 cluster, both in terms of capacity and performance (IOPS, bandwidth and latency).

In the rest of this section, we take a deeper look into the data collected from our storage array and other environment components during each of the management operation tests, as well as during boot storms and LoginVSI's "Knowledge Worker" workload tests. A data-driven understanding of our XtremIO X2 storage array's behavior provides us with evidence that assures a rich user experience and efficiency in management operations when using this effective all-flash array. This is manifested by providing performance-at-scale for thousands of desktops. The data collected below includes statistics of storage bandwidth, IOPS, I/O latency, CPU utilization and more. Performance statistics were collected from the XtremIO Management Server (XMS) by using the XtremIO RESTful API (Representational State Transfer Application Program Interface). This API is a powerful feature that enables performance monitoring while executing management operation tests and running LoginVSI workloads (a hedged example of such a query is shown after Figure 1 below). These results provided a clear view of the exceptional capabilities of XtremIO X2 for VDI environments.

Deployment Performance Results

In this section, we take a deeper look at performance statistics from our XtremIO X2 array when used in a VDI environment for performing management operations such as MCS Full Clone and MCS Linked Clone desktop provisioning. PVS provisioning is performed synchronously, and the resources consumed are mostly the CPU and memory of the hosts. Since it is not impacted by storage performance, it is not detailed in this section.

Citrix Machine Creation Services (MCS)

Machine Creation Services (MCS) is a centralized provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle. MCS enables the management of several types of machines within a catalog in Citrix Studio. Desktop customization is persistent for machines that use the Personal vDisk (PvDisk or PvD) feature, while non-Personal vDisk machines are appropriate if desktop changes are discarded when the user logs off.

Desktops provisioned using MCS share a common base image within a catalog. Because of the XtremIO X2 architecture, the base image is stored only once in the storage array, providing efficient data storage and maximizing the utilization of flash disks, while providing exceptional performance and optimal I/O response time for the virtual desktops.
Figure 1. Logical Representation of an MCS-based Disk and Linked Clone
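As noted above, the performance statistics in this paper were pulled from the XMS over the XtremIO RESTful API. The sketch below illustrates the general approach: poll the cluster object periodically and log a few counters to CSV. The XMS address, credentials and exact counter names are placeholders and may differ between XMS versions; treat it as a starting point rather than a reference client.

```python
# Minimal sketch: polling cluster performance counters from the XtremIO
# Management Server (XMS) REST API during a test run. Hostname, credentials
# and the counter field names are illustrative assumptions.
import csv
import time
import requests

XMS = "https://xms.example.com"          # hypothetical XMS address
AUTH = ("rest_user", "rest_password")    # read-only API account (assumption)

def cluster_counters(session):
    # The v2 API exposes clusters under /api/json/v2/types/clusters;
    # index 1 is used here assuming a single managed cluster.
    resp = session.get(f"{XMS}/api/json/v2/types/clusters/1", verify=False)
    resp.raise_for_status()
    content = resp.json()["content"]
    # Example counter names; check your XMS version for the exact fields.
    return {k: content.get(k) for k in ("iops", "rd-iops", "wr-iops",
                                        "rd-bw", "wr-bw", "avg-latency")}

if __name__ == "__main__":
    with requests.Session() as s, open("xtremio_stats.csv", "w", newline="") as f:
        s.auth = AUTH
        writer = None
        for _ in range(60):                      # ~5 minutes at 5-second intervals
            sample = {"ts": time.time(), **cluster_counters(s)}
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=sample.keys())
                writer.writeheader()
            writer.writerow(sample)
            time.sleep(5)
```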
Citrix Provisioning Services (PVS)

• Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between software and the hardware on which it runs.
• By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS lets organizations reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.
• Because machines stream disk data dynamically in real time from a single shared image, machine image consistency is ensured. In addition, large pools of machines can completely change their configuration, applications, and even the operating system during a reboot operation.

Figure 2. Boot Process of a PVS Target Device

MCS Full Clone Provisioning

The operational efficiency of datacenter administrators is determined mainly by the completion rate of desktop delivery (provisioning) and management operations. It is critical for datacenter administrators that the provisioning and maintenance operations on VDI desktops finish in a timely manner, so that the desktops are ready for production users. The time it takes to provision the desktops is directly related to the storage performance capabilities. As shown in Figure 3, XtremIO X2 handles storage bandwidths as high as ~20GB/s with over 100K IOPS (read + write) during the provisioning of 4000 Full Clone desktops, resulting in a quick and efficient desktop delivery (65 minutes for all 4000 Full Clone desktops).

Figure 3. XtremIO X2 IOPS and I/O Bandwidth – 4000 Full Clone Desktops Provisioning

It took 65 minutes for the system to finish the provisioning and OS customization of all 4000 desktops with our X2 array. We can deduce that desktops were provisioned in our test at an excellent rate of about 62 desktops per minute, or roughly one desktop provisioned every second.
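The provisioning rate quoted above follows directly from the elapsed time:

```latex
\[
\frac{4000\ \text{desktops}}{65\ \text{minutes}} \approx 61.5\ \text{desktops/minute} \approx 1.03\ \text{desktops/second}
\]
```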
Figure 4 shows the block size distribution during the Full Clone provisioning process. We can see that most of the bandwidth is consumed by 256KB and >1MB blocks, as these are the block sizes that were configured at the software level (VMware) for use with our storage array.

Figure 4. XtremIO X2 Bandwidth by Block Size – 4000 Full Clone Desktops Provisioning

In Figure 5, we can see the IOPS and latency statistics during the Full Clone provisioning process of 4000 desktops. The graph shows again that IOPS are well over 100K, but that the latency for all I/O operations remains less than 0.1 msec, yielding the excellent performance and fast-paced provisioning of our virtual desktop environment.

Figure 5. XtremIO X2 Latency vs. IOPS – 4000 Full Clone Desktops Provisioning
Figure 6 shows the CPU utilization of our Storage Controllers during the Full Clone provisioning process. We can see that the CPU utilization of the Storage Controllers normally remains around 60%. We can also see the excellent synergy across our X2 cluster, as all of our Active-Active Storage Controllers' CPUs share the load and effort, with CPU utilization virtually equal between all Controllers for the entire process.

Figure 6. XtremIO X2 CPU Utilization – 4000 Full Clone Desktops Provisioning

Figure 7 shows XtremIO's incredible storage savings for the scenario of 4000 Full Clone desktops provisioned (each with about 13.5GB used space in its 40GB-sized C: drive volume). Notice that the physical capacity footprint of the 4000 desktops after XtremIO deduplication and compression is 827.51GB, while the logical capacity is 51.95TB. This is a direct result of an extraordinary data reduction factor reaching 65.5:1 (32.4:1 for deduplication and 2.0:1 for compression). Thin provisioning further adds to storage efficiency, aggregating it to 391.1:1.

Figure 7. XtremIO X2 Data Savings – 4000 Full Clone Desktops Provisioning
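As a quick consistency check on the figures above (the per-factor ratios displayed by the XMS are rounded, which is why their product lands slightly below the reported combined factor):

```latex
\[
\underbrace{32.4}_{\text{dedup}} \times \underbrace{2.0}_{\text{compression}} \approx 64.8 \;\;(\text{reported as } 65.5{:}1),
\qquad
\frac{51.95\ \text{TB (logical)}}{827.51\ \text{GB (physical)}} \approx 64{:}1
\]
```

The 391.1:1 overall efficiency figure additionally folds thin-provisioning savings on top of this data reduction factor.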
MCS Linked Clone Provisioning

As with MCS Full Clones, we also examined storage statistics for the provisioning of 4000 Linked Clone desktops. As Figure 8 shows, our X2 array handles around 4K IOPS of small I/O operations. This I/O pattern is a result of Linked Clones' use of VMware snapshots, which means that very little actual data is written to the array; instead, pointers and VMware metadata are used. Unlike the process of deploying Linked Clones via VMware Horizon View, XenDesktop creates the computer accounts in advance and associates them with the virtual desktops during their initial power-on. This mechanism saves a lot of resources during the deployment of the pool, and the entire provisioning process for the 4000 desktops took 50 minutes. This is about 30% faster than provisioning the Full Clone desktops (65 minutes), translating to a rate of 80 desktops per minute.

Figure 8. XtremIO X2 IOPS and I/O Bandwidth – 4000 Linked Clone Desktops Provisioning

Figure 9 shows the block size distribution during the Linked Clone provisioning process. We can see about 400MB/s of 512KB-block I/O operations, which are generated during the desktops' power-on.

Figure 9. XtremIO X2 Bandwidth by Block Size – 4000 Linked Clone Desktops Provisioning
Examining the IOPS and latency statistics during the Linked Clone provisioning process of the 4000 desktops, we can see in Figure 10 a latency of mostly below 0.2 msec, with some peaks of higher latency, almost entirely under 0.4 msec. These high-performance numbers are the reason for the excellent provisioning rate achieved in our test.

Figure 10. XtremIO X2 Latency vs. IOPS – 4000 Linked Clone Desktops Provisioning

Figure 11 shows the CPU utilization of the Storage Controllers during the Linked Clone provisioning process. This process hardly loads the storage array, due to the significantly smaller amount of data written, as controlled by the Citrix platform. We can see that the CPU utilization of the Storage Controllers normally stays at around 2%.

Figure 11. XtremIO X2 CPU Utilization – 4000 Linked Clone Desktops Provisioning

Figure 12 shows the incredible efficiency that is achieved in storage capacity when using Linked Clones on XtremIO X2. The 4000 Linked Clone desktops provisioned take up a logical footprint of 51.62TB, while the physical footprint is only 1.01TB, as a result of an impressive data reduction factor of 51.4:1 (21.5:1 for deduplication and 2.4:1 for compression). Thin provisioning is also a great saving factor, especially with Linked Clones (here with an almost 848.6:1 savings factor), as the desktops are merely VMware snapshots of an original parent machine, and consume no space until changes are made.

Figure 12. XtremIO X2 Data Savings – 4000 Linked Clone Desktops Provisioning
Production Use Performance Results

This section examines how an XtremIO X2 single X-Brick cluster delivers a best-in-class user experience with high performance during a boot storm and during the actual work of virtual desktop users, as emulated by LoginVSI's "Knowledge Worker" workload, which emulates more advanced users (details below).

Boot Storms

The rebooting of VDI desktops at a large scale is a process often orchestrated by administrators by invoking the vSphere task of rebooting virtual machines asynchronously (albeit sequentially), but it can also be performed by the end user (a minimal orchestration sketch appears after Figure 14 below). It is necessary, for instance, in scenarios where new applications or operating system updates are installed and need to be deployed to the virtual desktops. Desktops are issued a reboot without waiting for previous ones to finish booting up. As a result, multiple desktops boot up at the same time. The number of concurrent reboots is also affected by the limit configured in the vCenter Server configuration. This configuration can be altered after some experimentation to determine how many concurrent operations a given vCenter Server is capable of handling.

Figure 13 shows storage bandwidth consumption and IOPS for rebooting 4000 Linked Clone virtual desktops simultaneously. The entire process took about 10 minutes when processed on a single X-Brick X2 cluster.

Figure 13. XtremIO X2 IOPS and I/O Bandwidth – 4000 Linked Clone Desktops Boot Storm

The 10 minutes it took to reboot the 4000 desktops in each case translates to an amazing rate of 6.67 desktops every second, or one desktop boot per 150 milliseconds. Looking closely at the figures above, we can see that even though the process with Linked Clones required more IOPS at a lower bandwidth, it was still able to complete in 10 minutes, the same time required for the reboot of the Full Clones. We will explain this next using the block distribution graphs and XtremIO X2's advanced Write Boost feature.

Figure 14 shows the block distribution during the 4000 Linked Clone desktops boot storm. We can see that the I/Os per block size remain the same for most sizes during the operation.

Figure 14. XtremIO X2 Bandwidth by Block Size – 4000 Linked Clone Desktops Boot Storm
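For reference, here is a minimal sketch of the asynchronous reboot orchestration described above, using pyVmomi. The vCenter address, credentials and desktop name prefix are placeholders; vCenter's own concurrent-operations limits still govern how many resets actually run in parallel.

```python
# Minimal sketch: issue an asynchronous reset to every powered-on desktop
# matching a name prefix. Connection details and the prefix are illustrative.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def boot_storm(vcenter, user, pwd, name_prefix="W10-LC-"):
    ctx = ssl._create_unverified_context()           # lab setup only
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        tasks = []
        for vm in view.view:
            if vm.name.startswith(name_prefix) and vm.runtime.powerState == "poweredOn":
                tasks.append(vm.ResetVM_Task())      # returns immediately; vCenter queues it
        view.Destroy()
        return len(tasks)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    count = boot_storm("vcenter.example.com", "administrator@vsphere.local", "***")
    print(f"Issued reset tasks for {count} desktops")
```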
Figure 15 shows the CPU utilization during the 4000 Linked Clone boot storm. We can see that the CPU is well utilized, in a range between 65% and 75%, mainly due to the increase in I/O operations and the use of Write Boost when booting up Linked Clones.

Figure 15. XtremIO X2 CPU Utilization – 4000 MCS Linked Clone Desktops Boot Storm

LoginVSI Results

In this section, we present the LoginVSI "Knowledge Worker" workload results for the 4000 MCS Full Clone, MCS Linked Clone and PVS Clone desktops. The "Knowledge Worker" profile of LoginVSI emulates user actions such as opening a Word document, modifying an Excel spreadsheet, browsing a PDF document, web browsing or streaming a webinar. This emulates typical "advanced" user behavior and helps characterize XtremIO's performance in such scenarios. While characterizing the user experience in those scenarios, any I/O latency that is detected in the storage array is of the utmost importance, as this is a parameter that directly influences the end user experience. Other parameters impacting user experience are CPU and memory usage on the ESX hosts and storage network bandwidth utilization.

Figure 16. LoginVSI's "Knowledge Worker" Workload Profile

We chose Microsoft Windows 10 build 1709 (32-bit) as the desktop operating system. The Office 2016 suite, Adobe Reader 11, the latest Oracle JRE, Internet Explorer 11, and Doro PDF Printer were installed and used by LoginVSI's "Knowledge Worker" workloads.
Figure 17, Figure 18 and Figure 19 show the LoginVSI results of our 4000 MCS Full Clone, MCS Linked Clone, and PVS Clone desktops respectively. LoginVSI scores are determined by observing the average application latencies, highlighting the speed at which user operations are completed. This helps quantify user experience, since the measurements considered are at the application level. As a case in point, the blue line in each of the LoginVSI charts follows the progression of the "VSI average" against the number of active sessions. This is an aggregated metric, using average application latencies as more desktop sessions are added over time. The factor to be observed in these graphs is the VSImax threshold, which represents the threshold beyond which LoginVSI's methodology indicates that the user experience has deteriorated to the point where the maximum number of desktops that can be consolidated in a given VDI infrastructure has been reached.

Figure 17. LoginVSI's "Knowledge Worker" Results – 4000 MCS Full Clone Desktops

Figure 18. LoginVSI's "Knowledge Worker" Results – 4000 MCS Linked Clone Desktops
Figure 19. LoginVSI's "Knowledge Worker" Results – 4000 PVS Clone Desktops

From the averages shown in these graphs (the blue lines), the application latency quantified is much lower than the VSImax threshold watermark for the 4000 active users (~1100 average vs. a ~840 baseline). This demonstrates how an XtremIO X2 all-flash single X-Brick cluster provides a best-in-class delivery of user experience for up to 4000 VDI users, with room to scale further. More details about the LoginVSI test methodology can be found in Appendix A – Test Methodology and in the LoginVSI documentation.

These LoginVSI results help us understand the user experience and are a testimony to the scalability and performance that manifest into an optimal end user experience with XtremIO X2. The obvious reason, as highlighted by Figure 20, Figure 21 and Figure 22, is none other than the outstanding storage latency demonstrated by XtremIO X2.

Figure 20. XtremIO X2 Latency vs. IOPS – 4000 MCS Linked Clone Desktops In-Use
Figure 21. XtremIO X2 Latency vs. IOPS – 4000 MCS Full Clone Desktops In-Use

Figure 22. XtremIO X2 Latency vs. IOPS – 4000 PVS Clone Desktops In-Use

For all three desktop delivery methods, we can see a steady and remarkable ~0.2 msec latency for the entire LoginVSI workload test. We see a small rise in latency as IOPS accumulate, but it never exceeds 0.3 msec. These numbers yield the great LoginVSI results described above, and provide a superb user experience for our VDI users.
Figure 23, Figure 24 and Figure 25 present the total IOPS and bandwidth seen during the LoginVSI "Knowledge Worker" profile workload on our 4000 MCS Linked Clone desktops, 4000 MCS Full Clone desktops, and 4000 PVS Clone desktops respectively. In all cases, the bandwidth at the peak of the workload reaches approximately 1.5GB/s.

Figure 23. XtremIO X2 IOPS and I/O Bandwidth – 4000 MCS Linked Clone Desktops In-Use

Figure 24. XtremIO X2 IOPS and I/O Bandwidth – 4000 MCS Full Clone Desktops In-Use

Figure 25. XtremIO X2 IOPS and I/O Bandwidth – 4000 PVS Clone Desktops In-Use
Figure 26, Figure 27 and Figure 28 show the CPU utilization of our X2 storage array during the LoginVSI "Knowledge Worker" profile workload of 4000 MCS Full Clone, MCS Linked Clone and PVS Clone desktops. We can see that the CPU utilization at the peak of the workload reaches about 30% and 20% in the MCS scenarios respectively, while it reached 13% utilization for the PVS Clones. This emphasizes that although they save much space and provide various advantages, the MCS Linked Clones are slightly heavier than MCS Full Clones, since they are based on the same master image and its in-memory metadata. As for the PVS Clones, since some of the workload runs in memory, the CPU utilization is lower; but as a result, the memory utilization at the host level is higher.

Figure 26. XtremIO X2 CPU Utilization – 4000 MCS Full Clone Desktops In-Use

Figure 27. XtremIO X2 CPU Utilization – 4000 MCS Linked Clone Desktops In-Use

Figure 28. XtremIO X2 CPU Utilization – 4000 PVS Clone Desktops In-Use
Figure 29, Figure 30 and Figure 31 show the block size distribution of the 4000 MCS Linked Clone, MCS Full Clone and PVS Clone desktops respectively during the LoginVSI "Knowledge Worker" profile workload. We can see that the I/Os per block size remain the same for most sizes, while the bandwidth usage increases as more users log in to their virtual desktops.

Figure 29. XtremIO X2 Bandwidth by Block Size – 4000 MCS Linked Clone Desktops In-Use

Figure 30. XtremIO X2 Bandwidth by Block Size – 4000 MCS Full Clone Desktops In-Use

Figure 31. XtremIO X2 Bandwidth by Block Size – 4000 PVS Clone Desktops In-Use

Examining all the graphs collected during the LoginVSI "Knowledge Worker" profile workload test, we see that a single X-Brick X2 cluster is more than capable of managing and servicing 4000 VDI workstations, with room to serve additional volumes and workloads.
  • 22. 22 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
We also took a deeper look at the ESXi hosts to see if our scaling fits from a compute-resources perspective as well. Specifically, we checked both the CPU utilization of our ESX hosts and their memory utilization (Figure 32) during the LoginVSI "Knowledge Worker" profile workload test on the 4000 desktops. Note that using RAM Write Cache for PVS Clones (described later) increases memory utilization considerably, because part of the storage workload is offloaded to RAM.
Figure 32. ESX Hosts CPU and Memory Utilization – 4000 MCS Linked Clone Desktops In-Use
We can see an approximate 65% utilization of both CPU and memory resources of the ESX hosts, indicating a well-utilized environment and good resource consumption, leaving room for extra VMs in the environment and spare resources for vMotion of VMs (due to host failures, planned upgrades, etc.).
In Figure 33 below, we see the change in CPU utilization of a single ESX host in the environment as the LoginVSI "Knowledge Worker" profile workload test progresses. The test generates logins and workloads on the virtual desktops cumulatively, emulating a typical working environment in which users log in over a span of a few dozen minutes rather than all at the same time. This behavior is clearly seen in the figure below, as the CPU utilization of this ESX host increases as time passes, until all virtual desktops on the host are in use and CPU utilization reaches about 70%.
Figure 33. A Single ESX Host CPU Utilization – 4000 Desktops In-Use
  • 23. 23 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Solution's Hardware Layer
The data presented above makes clear the goal that storage and virtualization administrators must strive for: an optimal user experience for their VDI desktop end users. The following sections discuss how the hardware and software synergize in order to achieve this goal. We begin at the hardware layer, taking a wide look at our XtremIO X2 array and the features and benefits it provides to VDI environments, continue by discussing the details of our ESX hosts, based on Dell PowerEdge servers, on which our entire environment runs, and then review our storage configuration and the networks that connect the servers to the storage array, thereby encompassing all of the hardware components of the solution. We follow this up with details of the software layer by providing configuration details for VMware vSphere, the Citrix XenDesktop infrastructure, Dell and EMC plugins for VMware, and configuration settings on the "parent" virtual machine from which VDI desktops are deployed.
Storage Array: Dell EMC XtremIO X2 All-Flash Array
Dell EMC's XtremIO is an enterprise-class scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's instant performance potential by uniquely leveraging the characteristics of SSDs and using advanced inline data reduction methods to reduce the physical data that must be stored on the disks. XtremIO's storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their datacenters, requiring very little planning before provisioning.
XtremIO leverages flash to deliver value across multiple dimensions:
• Performance – provides consistent low latency and up to millions of IOPS.
• Scalability – uses a scale-out and scale-up architecture.
• Storage Efficiency – uses data reduction techniques such as deduplication, compression and thin provisioning.
• Data Protection – uses a proprietary flash-optimized algorithm named XDP.
• Environment Consolidation – uses XtremIO Virtual Copies or VMware's XCOPY.
We will further review XtremIO X2 features and capabilities.
XtremIO X2 Overview
XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in several areas to the already capable, high-performing previous-generation array. Features such as scale-up for a more flexible system, write boost for a more responsive and higher-performing array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance statistics add the extra value and advancements required in the evolving world of computer infrastructure.
The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to add up to 72 SSDs in each brick.
XtremIO architecture is based on a metadata-centric, content-aware system, which helps streamline data operations efficiently without requiring any movement of data post-write for any maintenance reason (data protection, data reduction, etc. – all done inline). The system lays out the data uniformly across all SSDs in all X-Bricks in the system using unique fingerprints of the incoming data and controls access using metadata tables. This contributes to an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth and capacity.
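To make the content-aware layout concrete, the following is a minimal, illustrative sketch (not XtremIO's implementation: the real fingerprint function and ownership scheme are proprietary) of how a content fingerprint can both identify a block and determine a uniform placement across Storage Controllers. SHA-256 and the modulo-based ownership rule are assumptions for illustration only.

```python
import hashlib

# Hypothetical cluster of 4 Storage Controllers (two X-Bricks), assumed for illustration.
STORAGE_CONTROLLERS = ["X1-SC1", "X1-SC2", "X2-SC1", "X2-SC2"]

def fingerprint(block: bytes) -> str:
    """Content fingerprint of a fixed-size data block (SHA-256 stands in for the proprietary function)."""
    return hashlib.sha256(block).hexdigest()

def owning_controller(fp: str) -> str:
    """Map a fingerprint into one controller's fingerprint range.
    Uniformly distributed fingerprints imply uniformly distributed blocks."""
    return STORAGE_CONTROLLERS[int(fp, 16) % len(STORAGE_CONTROLLERS)]

if __name__ == "__main__":
    blocks = [b"A" * 8192, b"B" * 8192, b"A" * 8192]   # third block duplicates the first
    for blk in blocks:
        fp = fingerprint(blk)
        print(fp[:8], "->", owning_controller(fp))
```

Because the third block has the same fingerprint as the first, it lands on the same owner and can be recognized as a duplicate, which is exactly the property the deduplication and load-balancing described next rely on.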
  • 24. 24 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication abilities, which highly benefit virtualized environments. Together with its data compression and thin provisioning capabilities (both also inline and always-on), it achieves incomparable data reduction rates.
System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.
With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client, and does not require complex capacity or data protection planning. All this is handled autonomously by the system.
Architecture and Scalability
An XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster. The X-Brick is the basic building block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose storage needs are more I/O intensive than capacity intensive, as it uses smaller SSDs and less RAM. An effective use of the X2-S is for environments that have high data reduction ratios (a high compression ratio or a great deal of duplicated data), which lower the capacity footprint of the data significantly. X2-R X-Brick clusters are made for capacity-intensive environments, with bigger disks, more RAM and a bigger expansion potential in future releases. The two X-Brick types cannot be mixed in a single system, so the decision as to which type suits your environment must be made in advance.
Each X-Brick is comprised of two 1U Storage Controllers (SCs), each with:
• Two Haswell CPUs (dual socket)
• 384GB RAM (for X2-S) or 1TB RAM (for X2-R)
• Two 1/10GbE iSCSI ports
• Two interchangeable host interface ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
• Two 56Gb/s InfiniBand ports
• One 100/1000/10000 Mb/s management port
• One 1Gb/s IPMI port
• Two redundant power supply units (PSUs)
and one 2U Disk Array Enclosure (DAE) containing:
• Up to 72 SSDs of 400GB (for X2-S) or 1.92TB (for X2-R)
• Two redundant SAS interconnect modules
• Two redundant power supply units (PSUs)
Figure 34. An XtremIO X2 X-Brick (4U: two 1U Storage Controllers and one 2U DAE)
  • 25. 25 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects. An XtremIO storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly-available ultra-low latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO system is essentially a single shared- memory space spanning all of its Storage Controllers. The 1GB port for management is configured with an IPv4 address. The XMS, which is the cluster's management software, communicates with the Storage Controllers via the management interface. Through this interface, the XMS communicates with the Storage Controllers, and sends storage management requests such as creating an XtremIO Volume or mapping a Volume to an Initiator Group. The second 1GB/s port for IPMI interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick and will never be connected to an IPMI port of a Storage Controller in another X-Brick in the cluster. With X2, an XtremIO cluster has both scale-out and scale-up capabilities. Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick to an existing cluster linearly increases its compute power, bandwidth and capacity. Each X-Brick that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster. Adding an X-Brick to scale-out an XtremIO cluster is intended for environments that grow both in capacity and performance needs, such as in the case of an increase in the number of active users and their data, or a database which grows in data and complexity. An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will support up to 8 X-Bricks for X2-R arrays. Figure 35. Scale Out Capabilities – Single to Multiple X2 X-Brick Clusters
  • 26. 26 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. This is intended for environments that grow in capacity needs without need for extra performance. For example, this may occur when the same number of users have an increasing amount of data to save, or when an environment grows in both capacity and performance needs but has only reached its capacity limits with additional performance available with its current infrastructure. Each DAE can hold up to 72 SSDs and is divided into 2 groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to the maximum of 36 SSDs. In other words, 18, 24, 30 or 36 SSDs may be installed per DPG, where up to 2 DPGs can occupy a DAE. SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB sized drives, doubling the physical capacity available in their clusters. Figure 36. Scale Up Capabilities – Up to 2 DPGs and 72 SSDs per DAE For more details on XtremIO X2, see the XtremIO X2 Specifications [2] and XtremIO X2 Datasheet [3] .
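The scale-up rules above (DPGs of 18 to 36 SSDs in increments of 6, up to two DPGs per DAE, 400GB drives for X2-S and 1.92TB drives for X2-R) are easy to sanity-check with a short calculation. The sketch below computes raw SSD capacity only; usable capacity is lower after XDP overhead and system reserves, and effective capacity is higher after data reduction.

```python
VALID_DPG_SIZES = (18, 24, 30, 36)          # SSDs per Data Protection Group
DRIVE_TB = {"X2-S": 0.4, "X2-R": 1.92}      # raw size per SSD, from the description above

def dae_raw_capacity_tb(model: str, dpg_sizes: tuple) -> float:
    """Raw capacity of one DAE for a given X-Brick model and up to two DPG populations."""
    assert model in DRIVE_TB and 1 <= len(dpg_sizes) <= 2
    assert all(size in VALID_DPG_SIZES for size in dpg_sizes)
    return sum(dpg_sizes) * DRIVE_TB[model]

# Examples: the 36-SSD X2-S brick used in this paper's test setup, and a fully
# populated X2-R brick (two DPGs of 36 SSDs each).
print(dae_raw_capacity_tb("X2-S", (36,)))        # 14.4 TB raw
print(dae_raw_capacity_tb("X2-R", (36, 36)))     # 138.24 TB raw
```

For reference, the 36 x 400GB X2-S configuration used later in the Test Setup section is about 14.4TB raw, which lines up with the ~11.2TB of physical capacity quoted there once parity and other reserves are taken out.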
  • 27. 27 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. XIOS and the I/O Flow Each Storage Controller within the XtremIO cluster runs a specially-customized lightweight Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring etc. Figure 37. X-Brick Components XIOS has a proprietary process scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency, and high-performing storage system. It provides efficient scheduling and data access, Instant exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub- processes that run on different sockets. The XtremIO Operating System gathers a variety of metadata tables on incoming data including data fingerprint, location in the system, mappings and reference counts. The metadata is used as the fundamental reference for performing system operations such as laying out incoming data uniformly, implementing inline data reduction services, and accessing data on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY and Microsoft ODX) to optimize integration with the storage system. Regardless of which Storage Controller receives an I/O request from a host, multiple Storage Controllers on multiple X- Bricks cooperate to process the request. The data layout in the XtremIO system ensures that all components share the load and participate evenly in processing I/O operations. An important functionality of XIOS is its data reduction capabilities. This is achieved by using inline data deduplication and compression. Data deduplication and data compression complement each other. Data deduplication removes redundancies, whereas data compression compresses the already deduplicated data before it is written to the flash media. XtremIO is an always-on thin-provisioned storage system, further realizing storage savings by the storage system, which never writes a block of zeros to the disks. XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service hosts' I/O requests. Details of the XIOS architecture and its data reduction capabilities are available in the Introduction to DELL EMC XtremIO X2 Storage Array document [4] .
  • 28. 28 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. XtremIO Write I/O Flow In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps the host Logical Block Addresses (LBA) to the block fingerprints, and the block fingerprints to its physical location in the array (the DAE, SSD and offset the block is located at). The fingerprint of a block has two objectives: to determine if the block is a duplicate of a block that already exists in the array and to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among Storage Controllers and assigns each its own fingerprint range. The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values and thus fingerprints and blocks are evenly distributed across all Storage Controllers in the cluster. A write operation works as follows: 1. A new write request reaches the cluster. 2. The new write is broken into data blocks. 3. For each data block: a. A fingerprint is calculated for the block. b. An LBA-to-fingerprint mapping is created for this write request. c. The fingerprint is checked to see if it already exists in the array. d. If it exists, the reference count for this fingerprint is incremented by one. e. If it does not exist: 1. A location is chosen on the array where the block will be written (distributed uniformly across the array according to fingerprint value). 2. A fingerprint-to-physical location mapping is created. 3. The data is compressed. 4. The data is written. 5. The reference count for the fingerprint is set to one.
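The write flow above reduces to a pair of metadata maps plus a reference count per unique block. Below is a minimal, conceptual sketch of that flow (an illustration only, not XIOS code; SHA-256 and zlib stand in for the array's proprietary fingerprinting and compression):

```python
import hashlib, zlib

lba_to_fp = {}   # LBA         -> fingerprint
fp_table  = {}   # fingerprint -> {"location": ..., "refs": n, "data": compressed bytes}

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def write_block(lba: int, block: bytes) -> None:
    fp = fingerprint(block)                      # a. fingerprint the block
    lba_to_fp[lba] = fp                          # b. LBA-to-fingerprint mapping
    if fp in fp_table:                           # c./d. duplicate: bump the reference count
        fp_table[fp]["refs"] += 1
        return                                   #      nothing else is written
    fp_table[fp] = {                             # e. new unique block:
        "location": ("dae", len(fp_table)),      #    choose a location (placeholder here)
        "refs": 1,                               #    reference count starts at one
        "data": zlib.compress(block),            #    compress, then write
    }
    # In the real array the host is acknowledged once the block is journaled to local
    # and remote NVRAM; the flush to SSD happens later, a full stripe at a time.

write_block(0, b"A" * 8192)
write_block(1, b"A" * 8192)   # duplicate: refs becomes 2, no new data is stored
print(len(fp_table), fp_table[fingerprint(b"A" * 8192)]["refs"])   # -> 1 2
```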
  • 29. 29 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Deduplicated writes are, of course, much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and updates the reference count for this fingerprint. No further data is written to the array and the operation completes quickly, adding an extra benefit of inline deduplication. Figure 38 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.
Figure 38. Incoming Data Stream Example with Duplicate Blocks
As mentioned, fingerprints also help to decide where to write the block in the array. Figure 39 shows the incoming stream demonstrated in Figure 38, after duplicates were removed, as it is being written to the array. The blocks are distributed to their designated Storage Controllers according to their fingerprint values, which ensures a uniform distribution of the data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand network.
Figure 39. Incoming Deduplicated Data Stream Written to the Storage Controllers (each Storage Controller owns a range of fingerprint values)
  • 30. 30 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
The actual write of the data blocks to the SSDs is carried out asynchronously. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency for I/O traffic, and preserves the data in case of system failure (power-related or any other). When enough blocks are collected in the buffer (to fill up a full stripe), the system writes them to the SSDs on the DAE. Figure 40 demonstrates the phase of writing the data to the DAEs after a full stripe of data blocks is collected in each Storage Controller.
Figure 40. Full Stripe of Blocks Written to the DAEs (each stripe includes two parity blocks, P1 and P2)
XtremIO Read I/O Flow
In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The fingerprint found is then looked up in the fingerprint-to-physical mapping and the data is retrieved from the right physical location. Just as with writes, the read load is also evenly shared across the cluster, as blocks are evenly distributed and all volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.
XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint. Blocks whose contents are more likely to be read are placed in the read cache for fast retrieval.
  • 31. 31 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. A read operation works as such: 1. A new read request reaches the cluster. 2. The read request is analyzed to determine the LBAs for all data blocks and a buffer is created to hold the data. 3. For each LBA: a. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read. b. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data blocks. c. The requested data block is read from its physical location (read cache or a place in the disk) and transmitted to the buffer created in step 2 in the Storage Controller that processes the request via RDMA over InfiniBand. 4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the host. System Features The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and implementation of these features is unique to XtremIO and is designed around the capabilities and limitations of flash media. We will list some key features included in the system. Inline Data Reduction XtremIO's unique Inline Data Reduction is achieved by these two mechanisms: Inline Data Deduplication and Inline Data Compression Data Deduplication Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is at a global level, meaning no duplicate blocks are written over the entire array. Being an inline and global process, no resource-consuming background processes or additional reads and writes (which are mainly associated with post-processing deduplication) are necessary for the feature's activity, thus increasing SSD endurance and eliminating performance degradation. As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow on page 28). The fingerprints are also used for uniform distribution of data blocks across the array, thus providing inherent load balancing for performance and enhancing flash wear-level efficiency, since the data never needs to be rewritten or rebalanced. XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture provides a substantially larger cache size with a small DRAM allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" common in VDI environments. XtremIO has excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter, flash longevity is maximized, logical storage capacity is multiplied (see Figure 7 and Figure 12 for examples) and total cost of ownership is reduced.
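Deduplication and the inline compression described in the next section multiply together into the overall data reduction ratio. A quick worked example, using the purely illustrative 3:1 deduplication and 2:1 compression ratios shown in Figure 41 (the ratios actually measured for this VDI environment are the ones in Figure 7 and Figure 12):

```python
def physical_tb(logical_tb: float, dedup_ratio: float, compression_ratio: float) -> float:
    """Physical flash consumed for a given logical capacity and data-reduction ratios."""
    total_reduction = dedup_ratio * compression_ratio   # e.g. 3:1 x 2:1 = 6:1
    return logical_tb / total_reduction

# Illustrative only: 100 TB written by hosts, at 3:1 deduplication and 2:1 compression,
# consumes roughly 16.7 TB of flash (a 6:1 total reduction).
print(round(physical_tb(100, 3.0, 2.0), 1))
```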
  • 32. 32 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Data Compression
Inline data compression is compression applied to data before it is written to the flash media. XtremIO automatically compresses data after all duplicates are removed, ensuring that compression is performed only on unique data blocks. The compression is performed in real time and not as a post-processing operation. This way, it does not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.
Data compression complements data deduplication in many cases and saves storage capacity by storing only unique data blocks in the most efficient manner. We can see the benefits and capacity savings of the deduplication-compression combination demonstrated in Figure 41, and some real ratios in the Test Results section in Figure 7 and Figure 12.
Figure 41. Data Deduplication and Data Compression Demonstrated (3:1 data deduplication combined with 2:1 data compression yields 6:1 total data reduction; only the reduced data is written to the flash media)
Thin Provisioning
XtremIO storage is natively thin provisioned, using a small internal block size. All volumes in the system are thin provisioned, meaning that the system consumes capacity only when it is needed. No storage space is ever pre-allocated before writing.
Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (with the metadata referring to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creeping or garbage collection is necessary on XtremIO, volume fragmentation does not occur in the array, and defragmentation utilities are not needed. This enables consistent performance and data management across the entire life cycle of a volume, regardless of the system capacity utilization or the write patterns of clients.
Integrated Copy Data Management
XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency. XtremIO is one of a kind in its ability to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supporting on-demand copy operations at scale, while still maintaining delivery of all performance SLAs in a consistent and predictable way.
  • 33. 33 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Consolidation of primary data and its copies in the same array has numerous benefits: 1. It can make development and testing activities up to 50% faster, creating copies of production code quickly for development and testing purposes, and then refreshing the output back into production for the full cycle of code upgrades in the same array. This dramatically reduces complexity and infrastructure needs, as well as development risks, and increases the quality of the product. 2. Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in- memory operation. Copies of the data are high performance and receive the same SLA as production copies without compromising production SLAs. XtremIO offers this on-demand as both self-service and automated workflows for both application and infrastructure teams. 3. Operations such as patches, upgrades and tuning tests can be made quickly using copies of production data. Diagnosing problems of applications and databases can be done using these copies, and changes can be applied and refreshed back to production. The same process can be used for testing new technologies and combining them in production environments. 4. iCDM can also be used for data protection purposes, as it enables creating many copies at low point-in-time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection, using different SLAs. XtremIO Virtual Copies XtremIO uses its own implementation of snapshots for all iCDM purposes, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in volumes at a particular point in time and allowing users to access that data when needed, regardless of the state of the source volume (even deletion). They allow any access type and can be taken either from a source volume or another Virtual Copy. XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system and optimized for SSDs with a unique metadata tree structure that directs I/O to the right data timestamp. This allows efficient copy creation that can sustain high performance, while maximizing the media endurance. Figure 42. A Metadata Tree Structure Example of XVCs When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. This operation does not have any impact on the system and does not consume any capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copy capacity consumption occurs only when changes are made to any copy of the data. Then, the system updates the metadata of the changed volume to reflect the new write, and stores the blocks in the system using the standard write flow process. The system supports the creation of Virtual Copies on a single, as well as on a set, of volumes. All Virtual Copies of the volumes in the set are cross-consistent and contain the exact same point-in-time. This can be done manually by selecting a set of volumes for copying, or by placing volumes in a Consistency Group and making copies of that Group.
  • 34. 34 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted. Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters the system. With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities: • Consistency Groups (CG) – Grouping of volumes to allow Virtual Copies to be taken on a group of volumes as a single entity. • Snapshot Sets – A group of Virtual Copies volumes taken together using CGs or a group of manually-chosen volumes. • Protection Copies – Immutable read-only copies created for data protection and recovery purposes. • Protection Scheduler – Used for local protection of a volume or a CG. It can be defined using intervals of seconds/minutes/hours or can be set using a specific time of day or week. It has a retention policy based on the number of copies needed or the permitted age of the oldest snapshot. • Restore from Protection – Restore a production volume or CG from one of its descendant snapshot sets. • Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access) for alternating purposes. • Refresh a Repurposing Copy – Refresh a Virtual Copy of a volume or a CG from the parent object or other related copies with relevant updated data. It does not require volume provisioning changes for the refresh to take effect, but only host-side logical volume management operations to discover the changes. XtremIO Data Protection XtremIO Data Protection (XDP) provides a "self-healing" double-parity data protection with very high efficiency to the storage system. It requires very little capacity overhead and metadata space and does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized for failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single drive rebuild. In the rare case of a double SSD failure, the second drive will be rebuilt only if there is enough space to rebuild the second drive as well, or when one of the failed SSDs is replaced. The XDP algorithm provides: • N+2 drive protection. • Capacity overhead of only 5.5%-11% (depends on the number of disks in the protection group). • 60% more write-efficient than RAID1. • Superior flash endurance to any RAID algorithm, due to the smaller number of writes and even distribution of data. • Automatic rebuilds that are faster than traditional RAID algorithms.
  • 35. 35 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
As shown in Figure 43, XDP uses a variation of N+2 row and diagonal parity which provides protection from two simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs organized in two Data Protection Groups (DPGs). XDP is managed independently on the DPG level. A DPG of 36 SSDs will result in capacity overhead of only 5.5% for its data protection needs.
Figure 43. N+2 Row and Diagonal Parity (illustrated for k = 5 data columns, D0–D4, with row parity P and diagonal parity Q)
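The 5.5%-11% overhead range quoted above follows directly from the N+2 scheme: roughly two drives' worth of parity capacity per Data Protection Group. A quick check, assuming overhead is approximately 2/k for a DPG of k SSDs:

```python
def xdp_overhead(ssds_in_dpg: int) -> float:
    """Approximate XDP capacity overhead for an N+2 protected DPG (2 parity units per k drives)."""
    return 2 / ssds_in_dpg

print(f"{xdp_overhead(36):.1%}")   # 36-SSD DPG      -> ~5.6%, matching the ~5.5% above
print(f"{xdp_overhead(18):.1%}")   # minimum 18-SSD DPG -> ~11.1%
```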
  • 36. 36 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Data at Rest Encryption Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is removed from the array, for customers in need of such security. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return/replace failed components containing sensitive data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and government institutions. At the heart of XtremIO's DARE solution is Self-Encrypting Drive (SED) technology. An SED has dedicated hardware which is used to encrypt and decrypt data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whenever encryption is enabled or disabled on the array. All XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on an encrypted cluster as well as on a non-encrypted cluster, and performance is not impacted when using encryption. A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data. Figure 44. Data at Rest Encryption in XtremIO
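The DEK/AK relationship can be pictured as simple key wrapping: the media is encrypted with a DEK that never leaves the drive, and the DEK itself is only usable once unwrapped with the Authentication Key held by the Storage Controller. The sketch below is purely conceptual (in XtremIO the DEK is generated and used inside the SED hardware, not in host software); Fernet from the cryptography package is used only as a convenient stand-in cipher:

```python
from cryptography.fernet import Fernet

# Conceptual model only; real SEDs generate and hold the DEK in drive hardware.
ak  = Fernet.generate_key()                 # Authentication Key, held by the Storage Controller
dek = Fernet.generate_key()                 # Data Encryption Key, created at drive manufacturing
wrapped_dek = Fernet(ak).encrypt(dek)       # the drive only ever exposes the wrapped form

ciphertext = Fernet(dek).encrypt(b"user data block")   # media writes are encrypted with the DEK

# Without the AK, wrapped_dek cannot be opened, so removed media is unreadable.
unwrapped = Fernet(ak).decrypt(wrapped_dek)
assert Fernet(unwrapped).decrypt(ciphertext) == b"user data block"
```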
  • 37. 37 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Write Boost In the new X2 storage array, the write flow algorithm was improved significantly to improve array performance, countering the rise in compute power and disk speeds, and accounting for common applications' I/O patterns and block sizes. As mentioned when discussing the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to a local and remote NVRAMs for protection, and are written to the disk only later, at a time that best optimizes the system's activity. In addition to the shortened procedure from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and influencing latency, especially on bigger I/O blocks. Examining customers' applications and I/O patterns, it was found that many I/Os from common applications come in small blocks, under than 16K pages, creating high loads on the storage array. Figure 45 shows the block size histogram from the entire XtremIO install base. The percentage of blocks smaller than 16KB is highly evident. The new algorithm takes care of this issue by aggregating small writes to bigger blocks in the array before writing them to disk, making them less demanding on the system, which is now more capable of handling bigger I/Os faster. The test results for the improved algorithm were amazing: the improvement in latency for several cases is around 400% and allows XtremIO X2 to address application requirements with 0.5msec or lower latency. Figure 45. XtremIO Install Base Block Size Histogram VMware APIs for Array Integration (VAAI) VAAI was first introduced as VMware's improvements to host-based VM cloning. It offloads the workload of cloning a VM to the storage array, making cloning much more efficient. Instead of copying all blocks of a VM from the array and back to it for the creation of a new cloned VM, the application lets the array do it internally, utilizing the array's features and saving host and network resources that are no longer involved in the actual cloning of data. This procedure of offloading the operation to the storage array is backed by the X-copy (extended copy) command to the array, which is used when cloning large amounts of complex data. XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated storage vMotion, VM provisioning, and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy efficiency even further by making the whole operation metadata driven. Due to its inline data reduction features and in- memory metadata, no actual data blocks are copied during an X-copy command. The system only creates new pointers to the existing data within the Storage Controllers' memory. Therefore, the operation saves host and network resources and does not consume storage resources, leaving no impact on the system's performance, as opposed to other implementations of VAAI and the X-copy command. Performance tests of XtremIO during X-copy operations and comparison between X1 and X2 with different block sizes can be found in a dedicated post written at XtremIO's CTO blog [9] .
  • 38. 38 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Figure 46 illustrates the X-copy operation when performed against an XtremIO storage array and shows the efficiency of metadata-based cloning: no data blocks are copied; new pointers to the existing data are simply created in Storage Controller memory. (A short conceptual sketch of this pointer-copy behavior follows the feature lists below.)
Figure 46. VAAI X-Copy with XtremIO
The XtremIO features for VAAI support include:
• Zero Blocks / Write Same – used for zeroing-out disk regions and providing accelerated volume formatting.
• Clone Blocks / Full Copy / X-Copy – used for copying or migrating data within the same physical array; an almost instantaneous operation on XtremIO due to its metadata-driven operations.
• Record Based Locking / Atomic Test & Set (ATS) – used during creation and locking of files on VMFS volumes and during power-down and power-up of VMs.
• Block Delete / Unmap / Trim – used for reclamation of unused space using the SCSI UNMAP feature.
Other features of XtremIO X2 (some described in previous sections):
• Scalability (scale-up and scale-out)
• Even Data Distribution (uniformity)
• High Availability (no single points of failure)
• Non-disruptive Upgrade and Expansion
• RecoverPoint Integration (for replication to local or remote arrays)
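As noted above, here is a minimal sketch of the metadata-only behavior behind X-Copy on XtremIO: cloning a volume copies (and reference-counts) address-to-fingerprint pointers in memory, while the data blocks on SSD are untouched. This illustrates the concept, not the array's implementation:

```python
# Fingerprint table shared by all volumes: fingerprint -> {"refs": n, ...}
fp_table = {"A": {"refs": 1}, "B": {"refs": 1}, "C": {"refs": 1}}

# A volume is just an address-to-fingerprint map held in Storage Controller memory.
vm1_volume = {0: "A", 1: "B", 2: "C"}

def xcopy_clone(src_volume: dict) -> dict:
    """Metadata-driven clone: copy the pointers, bump reference counts, move no data."""
    for fp in src_volume.values():
        fp_table[fp]["refs"] += 1
    return dict(src_volume)          # new pointer map for the cloned VM/volume

vm2_volume = xcopy_clone(vm1_volume)
print(vm2_volume)                    # {0: 'A', 1: 'B', 2: 'C'} -- same fingerprints, no data copied
print(fp_table["A"]["refs"])         # 2
```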
  • 39. 39 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. XtremIO Management Server The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is preinstalled with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware virtual machine. The XMS manages the cluster via the management ports on both Storage Controllers of the first X-Brick in the cluster and uses a standard TCPI/IP connection to communicate with them. It is not part of the XtremIO data path, thus can be disconnected from an XtremIO cluster without jeopardizing data I/O tasks. A failure on the XMS affects only monitoring and configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such failures. The GUI is based on a new Web User Interface (WebUI), which is accessible with any browser, and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the most useful features of the new WebUI are described following. Dashboard The Dashboard window presents an overview of the cluster. It has three panels: 1. Health – Provides an overview of the system's health status and alerts. 2. Performance (shown in Figure 47) – Provides an overview of the system's overall performance and top used Volumes and Initiator Groups. 3. Capacity (shown in Figure 48) – Provides an overview of the system's physical capacity and data savings. Note these figures represent views available in the dashboard and not test results shown in earlier figures. Figure 47. XtremIO WebUI – Dashboard – Performance Panel
  • 40. 40 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Figure 48. XtremIO WebUI – Dashboard – Capacity Panel The main Navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options related to XtremIO's management actions. The main menus contain options for the Dashboard, Notifications, Configuration, Reports, Hardware and Inventory. Notifications In the Notifications menu, we can navigate to the Events window (shown in Figure 49) and the Alerts window, showing major and minor issues related to the cluster's health and operations. Figure 49. XtremIO WebUI – Notifications – Events Window
  • 41. 41 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Configuration The Configuration window displays the cluster's logical components: Volumes (shown in Figure 50), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. From this window we can create and modify these entities by using the action panel on the top right. Figure 50. XtremIO WebUI – Configuration Reports In the Reports menu, we can navigate to different windows to show graphs and data of different aspects of the system's activities, mainly related to the system's performance and resource utilization. Menu options we can choose to view include: Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage or User Defined reports. We can view reports using different time resolutions and components. Entities to be viewed are selected with the "Select Entity" option in the Report menu (shown in Figure 51). In addition, pre-defined or custom time intervals can be selected for the report as shown in Figure 52. The Test Result graphs shown earlier in this document were generated with these menu options.
  • 42. 42 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Figure 51. XtremIO WebUI – Reports – Selecting Specific Entities to View Figure 52. XtremIO WebUI – Reports – Selecting Specific Times to View The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage capacity information. The Performance window shows extensive performance reports which mainly include Bandwidth, IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the system. The Latency window (shown in Figure 53) shows Latency reports per block size and IOPS metrics. The CPU Utilization window shows CPU utilization of all Storage Controllers in the system.
  • 43. 43 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Figure 53. XtremIO WebUI – Reports – Latency Window The Capacity window (shown in Figure 54) shows capacity statistics and the change in storage capacity over time. The Savings window shows Data Reduction statistics and change over time. The Endurance window shows SSD's endurance status and statistics. The SSD Balance window shows data balance and variance between the SSDs. The Usage window shows Bandwidth and IOPS usage, both overall and separately for reads and writes. The User Defined window allows users to define their own reports. Figure 54. XtremIO WebUI – Reports – Capacity Window
  • 44. 44 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Monitoring Monitoring, managing and optimizing storage health are critical to ensure performance of a VDI infrastructures. Simple and easy-to-use has always been the design principle for XtremIO Management Server (XMS). With XIOS 6.0, XMS delivers an HTML5 user interface for consumer-grade simplicity with enterprise-class features. The improved user interface includes: • Contextual, automated workflow suggestions for management activities. • Advance reporting and analytics that make it easy to troubleshoot. • Global search to quickly find that proverbial needle in the haystack. The simple, yet powerful user interface drives efficiency by enabling administrators to manage, monitor, receive notifications, and set alerts on the storage. With XMS, key system metrics are displayed in an easy-to-read graphical dashboard. From the main dashboard, you can easily monitor the overall system health, performance and capacity metrics and drill down to each object for additional details. This information allows you to quickly identify potential issues and take corrective actions. XtremIO X2 collects real time and historical data (up to 2 years) for a rich set of statistics. These statistics are collected at both the Cluster/Array level and also at the object level (Volumes, Initiator Groups, Targets, etc.). This data collection is available from day one, enabling XMS to provide advanced analytics to the storage environment running VDI infrastructures. Figure 55. XtremIO WebUI – Blocks Distribution Windows
  • 45. 45 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Advanced Analytics Reporting VDI desktops data access pattern varies based on many factors such as desktop applications behavior, boot storms, login storms, and OS updates. This greatly complicates storage sizing for VDI environments. XMS built-in reporting tracks data traffic patterns, thus significantly simplifies the sizing effort. With X2 release, XMS provides a built-in reporting widget that tracks weekly data traffic pattern. You can easily discover IOPs pattern on each day and hour of the week and understand if the pattern is sporadic or consistent over a period time. Figure 56. XtremIO WebUI – Weekly Patterns Reporting Widget The CHANGE button on the widget tracks and displays changes (increasing or decreasing) of the past week relative to the past 8 weeks. If there is no major change (i.e. that in the past week the hourly pattern did not change relative to the past 8 weeks), then there will be no up/down arrow indication. However, if there is an increase/decrease in the traffic of this week relative to the past 8 weeks, a visual arrow indication will appear. Figure 57. XtremIO WebUI – Weekly Patterns Reporting on Relative Changes in Data Pattern
  • 46. 46 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Hardware In the Hardware menu, a picture is provided of the physical cluster and the installed X-Bricks. When viewing the FRONT panel, we can select and highlight any component of the X-Brick and view related detailed information in the panel on the right. Figure 58 shows a hardware view of Storage Controller #1 in X-Brick #1 including installed disks and status LEDs. We can further click on the "OPEN DAE" button to see a visual illustration of the X-Brick's DAE and its SSDs, and view additional information on each SSD and Row Controller. Figure 58. XtremIO WebUI – Hardware – Front Panel Figure 59 shows the back panel view including physical connections to and within the X-Brick. This includes FC connections, Power, iSCSI, SAS, Management, IPMI and InfiniBand. Connections can be filtered by the "Show Connections" list at the top right. Figure 59. XtremIO WebUI – Hardware – Back Panel – Show Connections Inventory In the Inventory menu, all components in the environment are shown together with related information. This includes: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.
  • 47. 47 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. XMS Menus The XMS Menus are global system menus that can be accessed in the top right tools of the interface. We can use them to Search components in the system, view Health status of managed components, view major Alerts, view and configure System Settings (shown in Figure 60) and use the User Menu to view login information (and logout), and support options. Figure 60. XtremIO WebUI – XMS Menus – System Settings As mentioned, other interfaces are also available to monitor and manage an XtremIO cluster with the XMS server. The system's Command Line Interface (CLI) can be used for everything the GUI provides and more. A RESTful API is another pre-installed interface in the system which allows HTTP-based commands to manage clusters. And for Windows' PowerShell console uses, a PowerShell API Module is also available for XtremIO management. Test Setup We used an XtremIO cluster with a single X2-S X-Brick as the storage array for our environment. The X-Brick had 36 drives of 400GB size each which, after leaving capacity for parity calculations and other needs, amounts to about 11.2TB of physical capacity. As we saw in the Test Results section, this is more than enough capacity for our 4000 virtual desktops. 36 drives are half the amount that can fit in a single X-Brick. This means that in terms of capacity, we can grow to a maximum of x8 the capacity in this test setup with our scale-up (up to 72 drives per X-Brick) and scale-out (up to 4 X- Bricks per cluster) capabilities for X2-S. For X2-R, we currently provide drives which are about 5 times bigger, yielding a much higher capacity. X2-R drives will soon be 10 times bigger, and X2-R clusters could grow to up to 8 X-Bricks. Performance-wise, we can also see from the Test Results section that our single X2-S X-Brick was enough to service our VDI environment of 4000 desktops, with excellent storage traffic metrics (latency, bandwidth, IOPS) and resource consumption metrics (CPU, RAM) throughout all of the VDI environment's processes. X2-R clusters would have even higher compute performance as they have x3 the RAM of X2-S. Compute Hosts: Dell PowerEdge Servers The test setup includes a homogenous cluster of 32 ESX servers for hosting the Citrix desktops and 2 ESX servers for virtual appliances, which are used to manage the Citrix and vSphere infrastructure. We chose Dell's PowerEdge FC630 as our ESX hosts, as they have the compute power to deal with an environment at such a scale (125 virtual desktops per ESX host) and are a good fit for virtualization environments. Dell PowerEdge servers work with the Dell OpenManage systems management portfolio that simplifies and automates server lifecycle management, and can be integrated with VMware vSphere with a dedicated plugin.
  • 48. 48 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
Table 2 lists the ESX host details in our environment.
Table 2. ESX Host Details Used for VDI Desktops and Infrastructure (2+32 ESX hosts)
System make: Dell
Model: PowerEdge FC630
CPU cores: 36 x 2.10GHz
Processor type: Intel Xeon CPU E5-2695 v4 @ 2.10GHz
Processor sockets: 2
Cores per socket: 18
Logical processors: 72
Memory: 524 GB
Ethernet NICs: 4
Ethernet NICs type: QLogic 57840 10Gb
iSCSI NICs: 4
iSCSI NICs type: QLogic 57840 10Gb
FC adapters: 4
FC adapters type: QLE2742 Dual Port 32Gb
On-board SAS controller: 1
In our test, we used FC connectivity to attach XtremIO LUNs to the ESX hosts, but iSCSI connectivity could have been used in the same manner. It is highly recommended to select and purchase servers after verifying the vendor, make and model against VMware's hardware compatibility list (HCL). It is also recommended that the latest firmware be installed for the server and its adapters, and that the latest GA release of VMware vSphere ESXi, including any of the latest update releases or express patches, be used. For more information on Dell EMC PowerEdge FC630, see its specification sheet [12].
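With 4000 desktops spread across the 32 VDI hosts (125 desktops per host, as noted in the test setup), the per-desktop share of the FC630 resources in Table 2 is easy to derive. This is raw arithmetic only; it ignores hypervisor overhead and the memory behavior of the different clone types:

```python
DESKTOPS, VDI_HOSTS = 4000, 32
LOGICAL_PROCESSORS, MEMORY_GB = 72, 524      # per host, from Table 2

per_host = DESKTOPS // VDI_HOSTS             # 125 desktops per ESX host
print(per_host)
print(round(LOGICAL_PROCESSORS / per_host, 2), "logical processors per desktop")   # ~0.58
print(round(MEMORY_GB / per_host, 1), "GB RAM per desktop")                        # ~4.2
```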
  • 49. 49 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries. Storage Configuration This section outlines the storage configuration in our test environment, highlighting zoning considerations, XtremIO Volumes, Initiator Groups, and mapping between Volumes and Initiator Groups. Zoning In a single X-Brick cluster configuration, a host equipped with a dual port storage adapter may have up to four paths per device. Figure 61 shows the logical connection topology for four paths. Each XtremIO Storage Controller has two Fibre Channel paths that connect to the physical host, via redundant SAN switches. Figure 61. Dual Port HBA on an ESX Host to a Single X2 X-Brick Cluster Zoning As recommended in EMC Host Connectivity Guide for VMware ESX Server [6] , the following connectivity guidelines should be followed: • Use multiple HBAs on the servers. • Use at least two SAN switches to provide redundant paths between the servers and the XtremIO cluster. • Restrict zoning to four paths to the storage ports from a single host. • Use a single-Target-per-single-Initiator (1:1) zoning scheme. Storage Volumes We provisioned two sets of XtremIO Volumes as follows: • 1 Volume of 4TB for hosting all virtual machines for management functions of the VDI environment. • 32 X 3TB Volumes for hosting PVS/MCS Linked Clone desktops. • 32 X 10TB Volumes for hosting MCS Full Clone desktops. • We highly recommend leveraging the capabilities of EMC VSI plugin for vSphere Web client, to provision multiple XtremIO Volumes. Initiator Groups and LUN Mapping We configured a 1:1 mapping between Initiator Groups and ESX hosts in our test environment. Each of our ESX hosts has a dual port FC HBA, thus each Initiator Group contains two Initiators mapped to the two WWNs of the FC HBA. Altogether 34 Initiator Groups were created, as follows: • 2 Initiator Groups for mapping volumes to 2 management servers. • 32 Initiator Groups for mapping volumes to all 32 ESX hosts hosting VDI desktops.
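The zoning guidelines and the 1:1 Initiator Group scheme described in this section are mechanical enough to script. The sketch below emits a four-zone plan per host under one assumed (but typical) layout: HBA port 1 on fabric A zoned to one FC port on each Storage Controller, HBA port 2 on fabric B zoned to the other FC port on each Storage Controller, giving single-initiator-single-target zones and exactly four paths per host. All WWPNs are placeholders:

```python
# Placeholder target WWPNs for the single X-Brick's four FC ports (two per Storage Controller).
TARGETS = {
    "A": {"SC1-FC1": "50:00:00:00:00:00:01:01", "SC2-FC1": "50:00:00:00:00:00:02:01"},
    "B": {"SC1-FC2": "50:00:00:00:00:00:01:02", "SC2-FC2": "50:00:00:00:00:00:02:02"},
}

def zones_for_host(host: str, hba_wwpns: dict) -> list:
    """Four single-initiator-single-target zones per host = four paths to the X-Brick."""
    zones = []
    for fabric, init_wwpn in hba_wwpns.items():          # {"A": wwpn_port1, "B": wwpn_port2}
        for port_name, tgt_wwpn in TARGETS[fabric].items():
            zones.append((f"z_{host}_{port_name}", init_wwpn, tgt_wwpn, fabric))
    return zones

for zone in zones_for_host("esx01", {"A": "10:00:00:00:00:00:00:0a",
                                     "B": "10:00:00:00:00:00:00:0b"}):
    print(zone)
```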
  • 50. 50 | DELL EMC XtremIO X2 with Citrix XenDesktop 7.16 © 2018 Dell Inc. or its subsidiaries.
The Initiator Groups and Volumes mapping was as follows:
• 1 Volume (size = 2TB) mapped to the 2 management infrastructure Initiator Groups.
• 32 Volumes (3TB for PVS/MCS Linked Clones, 10TB for MCS Full Clones) mapped to the 32 Initiator Groups of the ESX hosts hosting virtual desktops.
Storage Networks
We used FC connectivity between our X2 storage array and the ESX hosts to provision LUNs, but our environment was also iSCSI-ready. For the SAN fabric, we used Brocade G620 switches connecting the HBAs on the hosts to the Storage Controllers on the X-Brick. Some important Brocade G620 details are summarized in Table 3. For more details on the FC switch, refer to the Brocade G620 Switch Datasheet.
Table 3. Brocade G620 FC Switch Details
Make/Model: Brocade G620
Form factor: 1U
FC Ports: 64
Port Speed: 32Gb
Maximum Aggregate Bandwidth: 2048Gbps Full Duplex
Supported Media: 128Gbps, 32Gbps, 16Gbps, 10Gbps
For iSCSI connectivity, we used Mellanox MSX1016 switches connecting host ports to the Storage Controllers on the X-Brick. Some important Mellanox MSX1016 details are summarized in Table 4. For more details on the iSCSI switch, refer to the Mellanox MSX1016 Switch Product Brief.
Table 4. Mellanox MSX1016 10GbE Switch Details
Make/Model: Mellanox MSX1016 10GbE
Form factor: 1U
Ports: 64
Port Speed: 10G
Jumbo Frames: Supported (9216 Byte size)
Supported Media: 1GbE, 10GbE
We highly recommend installing the most recent FC and iSCSI switch firmware for datacenter deployments.
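Provisioning 65 Volumes and 34 Initiator Groups by hand is tedious, and the XMS RESTful API (or the EMC VSI plugin mentioned earlier) lends itself to scripting the layout described above. The sketch below is a hedged illustration only: the /api/json/... resource paths and the payload field names are assumptions recalled from memory and must be verified against the XtremIO RESTful API guide for your XIOS version before use; the XMS address and credentials are placeholders.

```python
import requests
from requests.auth import HTTPBasicAuth

XMS = "https://xms.example.local"                      # hypothetical XMS address
AUTH = HTTPBasicAuth("admin", "password")              # placeholder credentials

def create_volume(name: str, size: str) -> None:
    # Resource path and field names are assumptions; check the XtremIO RESTful API guide.
    r = requests.post(f"{XMS}/api/json/v3/types/volumes",
                      json={"vol-name": name, "vol-size": size},
                      auth=AUTH, verify=False)
    r.raise_for_status()

def map_volume(vol_name: str, ig_name: str) -> None:
    r = requests.post(f"{XMS}/api/json/v3/types/lun-maps",
                      json={"vol-id": vol_name, "ig-id": ig_name},
                      auth=AUTH, verify=False)
    r.raise_for_status()

# 32 x 10TB volumes for MCS Full Clones, one mapped to each ESX host's Initiator Group.
for i in range(1, 33):
    create_volume(f"vdi-full-{i:02d}", "10T")
    map_volume(f"vdi-full-{i:02d}", f"ig-esx{i:02d}")
```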