WHITE PAPER

CONSOLIDATING AND PROTECTING VIRTUALIZED ENTERPRISE ENVIRONMENTS WITH DELL EMC XTREMIO X2

VMware Integrated Replication and Disaster Recovery with DELL EMC XtremIO X2, RecoverPoint, RP4VMS, AppSync and VMware SRM

Abstract
This white paper describes the components, design, functionality, and advantages of hosting a VMware-based multisite Virtual Server Infrastructure on the DELL EMC XtremIO X2 All-Flash array and protecting this environment with DELL EMC RecoverPoint, RP4VMS, AppSync and VMware SRM.

December 2017

© 2017 Dell Inc. or its subsidiaries.
Contents
Abstract
Executive Summary
Introduction
Business Case
Solution Overview
Dell EMC XtremIO X2 for VMware Environments
XtremIO X2 Overview
Architecture
Multi-dimensional Scaling
XIOS and the I/O Flow
XtremIO Write I/O Flow
XtremIO Read I/O Flow
System Features
Inline Data Reduction
Thin Provisioning
Integrated Copy Data Management
XtremIO Data Protection
Data at Rest Encryption
Write Boost
VMware APIs for Array Integration (VAAI)
Dashboard
Notifications
Configuration
Reports
Hardware
Inventory
XtremIO X2 Space Management and Reclamation in vSphere Environments
VMFS Datastores Reclamation
Asynchronous Reclamation of Free Space on VMFS 6 Datastores
Space Reclamation Granularity
In-Guest Space Reclamation for Virtual Machines
Space Reclamation for VMFS 6 Virtual Machines
Space Reclamation for VMFS 5 Virtual Machines
Space Reclamation Prerequisites
In-Guest Unmap Alignment Requirements
EMC VSI for VMware vSphere Web Client Integration with XtremIO X2
Setting Best Practices Host Parameters for the XtremIO X2 Storage Array
Provisioning VMFS Datastores
Provisioning RDM Disks
Setting Space Reclamation
Creating Native Clones on XtremIO VMFS Datastores
Working with XtremIO X2 XVCs
XtremIO X2 Storage Analytics for VMware vRealize Operations Manager
XtremIO X2 Content Pack for vRealize Log Insight
XtremIO X2 Workflows for VMware vRealize Orchestrator
Compute Hosts: Dell PowerEdge Servers
Compute Integration – Dell OpenManage
Firmware Update Assurances
Enabling Integrated Copy Data Management with XtremIO X2 & AppSync 3.5
Registering a New AppSync System
Restoring a Datastore from a Copy
Managing Virtual Machine Copies
File or Folder Restore with VMFS Datastores
RecoverPoint Snap-Based Replication for XtremIO X2
Snap-Based Replication Use Cases
XtremIO Virtual Copies (XVCs)
Replication Flow
XtremIO Volumes Configured on the Production Copy
XtremIO Volumes Configured on the Target Copy
Configuring RecoverPoint Consistency Groups
Registering vCenter Server
Configuring the Consistency Group for Management by SRM
Configuring Site Recovery with VMware vCenter Site Recovery Manager 6.6
Point-in-Time Recovery Images
Testing the Recovery Plan
Failover
RecoverPoint 5.1.1 for VMs
References
How to Learn More
Executive Summary
This white paper describes the components, design and functionality of a VMware-based multisite Virtual Server
Infrastructure (VSI), running consolidated, virtualized enterprise applications protected by DELL EMC RecoverPoint or
RP4VMs, all hosted on a DELL EMC XtremIO X2 All-Flash array.
This white paper highlights the advantages available to enterprise IT organizations that have already virtualized their applications, or that are considering hosting virtualized enterprise application deployments, on a DELL EMC XtremIO X2 All-Flash array. The primary topics examined in this white paper include:
• Performance of consolidated virtualized enterprise applications
• Business continuity and disaster recovery considerations
• Management and monitoring efficiencies
Introduction
The goal of this document is to showcase the benefits of deploying a multisite VMware-based virtualized enterprise
environment hosted on a DELL EMC XtremIO X2 All-Flash array. This document provides information and procedures
highlighting XtremIO's ability to consolidate multiple business-critical enterprise application workloads within a single
cluster, providing data efficiencies, consistent predictable performance and multiple integration vectors to assist in
disaster recovery and business continuity, as well as monitoring and management of the environment.
This document demonstrates how the integrated solution of a DELL EMC XtremIO X2 All-Flash array, coupled with
VMware-based virtualized infrastructure, is a true enabler for architecting and implementing a multisite virtual data center
to support Business Continuity and Disaster Recovery (BCDR) services during data center failover scenarios.
This document outlines a process for implementing a cost-effective BCDR solution to support the most common disaster
readiness scenarios for a VMware-based infrastructure hosted on a DELL EMC XtremIO X2 All-Flash array. It provides
reference material for data center architects and administrators creating a scalable, fault-tolerant and highly available
BCDR solution. This document demonstrates the advantages of RecoverPoint array-based replication and RecoverPoint
for VMs for XtremIO X2 and discusses examples of replication options relating to Recovery Point Objectives (RPO).
Combining XtremIO X2 with Dell EMC AppSync simplifies, orchestrates and automates the process of generating and
consuming copies of production data.
Among the benefits of this solution are ease of setup, linear scalability, consistent performance and data-storage
efficiencies, as well as the various integration capabilities available for a VMware-XtremIO-based environment. These
integration capabilities, across the various products used within this solution, provide customers increased management,
monitoring and business continuity options.
This document demonstrates that the DELL EMC XtremIO X2 All-Flash array, when paired with EMC RecoverPoint
replication technology, both physical and virtual, in support of a VMware-based virtualized data center architecture,
delivers an industry-leading ability to consolidate business-critical applications and provide an enterprise-level business
continuity solution as compared with today's alternative all-flash array offerings.
Business Case
A well-designed and efficiently orchestrated enterprise-class data center ensures that the organization meets the operational policies and objectives of the business through predictable performance and consistent availability of the business-critical applications that support the organization's goals. Given the significant cost of managing data layout across the entire infrastructure, scalability and management are additional important challenges for enterprise environments, with the main goal being the avoidance of contention between independent workloads competing for shared storage resources.
This document offers a solution design that delivers consistent performance for consolidated production applications without the risk of contention from organizational development activities, while still providing the storage efficiencies and dynamism demanded by modern test and development work. Together with a demonstration of XtremIO's ability to consolidate multiple concurrent enterprise application workloads onto a single platform without penalty, this solution highlights an innovative data protection scheme based on RecoverPoint's native integration with the XtremIO X2 platform. In this solution, the recovery point objective for protected virtual machines is reduced to less than sixty seconds, and space-efficient point-in-time (PiT) copies of production databases are available without penalty for BCDR and DevOps requirements.
XtremIO X2 brings tremendous value by providing consistent performance at scale by means of always-on inline deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with VMware vSphere by means of VMware APIs for Array Integration (VAAI), Dell EMC Solutions Integration Service (SIS) and the ease of management of Virtual Storage Integrator (VSI) makes choosing this best-of-breed all-flash array for server virtualization even more attractive.
XtremIO X2 is a scale-out and scale-up storage system capable of growing in storage capacity, compute resources and bandwidth as the environment's storage requirements grow. With the steady increase in the number of CPU cores per processor in modern multi-core servers, we can consolidate an increasing number of virtual workloads on a single enterprise-class server. When combined with the XtremIO X2 All-Flash Array, we can consolidate vast numbers of virtualized servers on a single storage array, achieving high consolidation with great performance from both a storage and a computational perspective.
Solution Overview
The solutions described in Figure 1 and Figure 2 represent a two-site virtualized, distributed data center environment. The
consolidated virtualized enterprise applications run on the production site. These include Oracle and Microsoft SQL
database workloads, as well as additional Data Warehousing profiles. These workloads make up our pseudo-organization's primary production workload. For the purposes of this proposed solution, these workloads are essential to the continued fulfillment of crucial business operational objectives. They must perform consistently, remain undisrupted, and, in the event of a disaster impacting the primary data center, be migrated to and resumed on a secondary site with minimal operational interruption.
We first describe the hardware layer of our solution and then take a closer look at the XtremIO X2 array and the features and benefits it provides to VMware environments. The software layer is discussed later in the document, including configuration details for VMware vSphere, VMware SRM and the Dell EMC plugins for VMware environments such as VSI, ESA and AppSync.
We follow this with details about our replication solutions - based on DELL EMC RecoverPoint and RP4VMS - that when
paired with XtremIO X2, deliver an industry-leading ability to consolidate business-critical applications and provide an
enterprise-level business continuity solution.
Figure 1. Physical Replication Architecture Topology - XtremIO X2 Combined with RecoverPoint and VMware SRM
Figure 2. Virtual Replication Architecture Topology - XtremIO X2 Combined with RecoverPoint for VMs
Table 1. Solution Hardware

HARDWARE | QUANTITY | CONFIGURATION | NOTES
DELL EMC XtremIO X2 | 2 | Two Storage Controllers (SCs), each with two dual-socket Haswell CPUs and 346GB RAM; DAE configured with 18 x 400GB SSDs | XtremIO X2-S, 18 x 400GB drives
DELL EMC RecoverPoint 5.1 | 4 | Gen 6 hardware | 1 RPA cluster per site, with 2 RPAs per cluster
Brocade 6510 SAN switch | 4 | 32 or 16 Gbps FC switches | 2 switches per site, dual FC fabric configuration
Mellanox MSX1016 10GbE | 2 | 10 or 1 Gbps Ethernet switches | Infrastructure Ethernet switch
PowerEdge FC630 | 16 | Intel Xeon CPU E5-2695 v4 @ 2.10GHz, 524 GB RAM | 2 for management cluster and 6 for workload cluster in each site
Table 2. Solution Software

SOFTWARE | QUANTITY | CONFIGURATION
vCenter Server Appliance 6.5 Update 1 VM | 2 | 16 vCPU, 32 GB memory, 100 GB VMDK
VMware Site Recovery Manager Server 6.6 VM | 2 | 4 vCPU, 16 GB memory, 40 GB VMDK
MSSQL Server 2017 VM | 2 | 8 vCPU, 16 GB memory, 100 GB VMDK
VSI for VMware vSphere 7.2 VM | 1 | 2 vCPU, 8 GB memory, 80 GB VMDK
RecoverPoint for VMs 5.1.1 | 4 | 4 vCPU, 16 GB memory, 40 GB VMDK
vRealize Operations Manager 6.6 VM | 1 | 4 vCPU, 16 GB memory, 256 GB VMDK
VMware Log Insight 4.5 VM | 1 | 4 vCPU, 8 GB memory, 256 GB VMDK
AppSync 3.5 VM | 1 | 4 vCPU, 16 GB memory, 40 GB VMDK
vRealize Orchestrator 7.3 | 1 | 2 vCPU, 4 GB memory, 32 GB VMDK
vSphere ESXi 6.5 Update 1 | 16 | N/A
ESA Plugin for vROps 4.4 | 1 | N/A
RecoverPoint Storage Replication Adapter 2.2.1 | 2 | N/A
Dell EMC XtremIO X2 for VMware Environments
Dell EMC's XtremIO X2 is an enterprise-class scalable all-flash storage array that provides rich data services with high
performance. It is designed from the ground up to unlock flash technology's full performance potential by uniquely
leveraging the characteristics of SSDs and uses advanced inline data reduction methods to reduce the physical data that
must be stored on the disks.
XtremIO X2’s storage system uses industry-standard components and proprietary intelligent software to deliver
unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple,
easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their data centers, requiring very little planning to set up before provisioning.
The XtremIO X2 storage system serves many use cases in the IT world, due to its high performance and advanced abilities. One major use case is virtualized environments and cloud computing. Figure 3 shows XtremIO X2's performance in an intensive live VMware production environment: extremely high IOPS (~1.6 million) handled by the XtremIO X2 storage array at latency mostly below 1 msec. In addition, we can see an impressive data reduction factor of 6.6:1 (2.8:1 for deduplication and 2.4:1 for compression), which lowers the physical footprint of the data.
Figure 3. Intensive VMware Production Environment Workload from the XtremIO X2 Array Perspective
XtremIO leverages flash to deliver value across multiple dimensions:
• Performance (consistent low-latency and up to millions of IOPS)
• Scalability (using a scale-out and scale-up architecture)
• Storage efficiency (using data reduction techniques such as deduplication, compression and thin-provisioning)
• Data Protection (with a proprietary flash-optimized algorithm named XDP)
• Environment Consolidation (using XtremIO Virtual Copies or VMware's XCOPY)
Figure 4. XtremIO Key Values for Virtualized Environments
XtremIO X2 Overview
XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in several areas over the already capable, high-performing previous generation. Features such as scale-up for a more flexible system, Write Boost for a more responsive and higher-performing array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance statistics add the extra value and advancements required in the evolving world of compute infrastructure.
The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and
storage resources. Each X-Brick can be clustered with additional X-Bricks to grow in both performance and capacity
(scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to expand to up to 72 SSDs per brick.
XtremIO architecture is based on a metadata-centric, content-aware system, which helps streamline data operations efficiently without requiring any post-write movement of data for maintenance reasons (data protection, data reduction,
etc. – all done inline). Using unique fingerprints of the incoming data, the system lays out the data uniformly across all
SSDs in all X-Bricks in the system, and controls access using metadata tables. This contributes to an extremely balanced
system across all X-Bricks in terms of compute power, storage bandwidth and capacity.
Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication abilities,
which highly benefits virtualized environments. Together with its data compression and thin provisioning capabilities (both
inline and always-on), it achieves incomparable data reduction rates.
System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the
XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its
performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.
With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client
and does not require complex capacity or data protection planning, as the system handles it on its own.
Architecture
An XtremIO X2 Storage System is comprised of a set of X-Bricks that form a cluster. This is the basic building block of an
XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose storage
needs are more I/O intensive than capacity intensive, as they use smaller SSDs and less RAM. An effective use of the
X2-S is for environments that have high data reduction ratios (high compression ratio or significant duplicated data) which
lowers the capacity footprint of the data significantly. X2-R X-Brick clusters are built for capacity-intensive environments, with larger disks, more RAM and greater expansion potential in future releases. The two X-Brick types cannot be mixed in a single system, so decide which type is suitable for your environment in advance.
Each X-Brick is comprised of:
• Two 1U Storage Controllers (SCs) with:
o Two dual socket Haswell CPUs
o 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
o Two 1/10GbE iSCSI ports
o Two interchangeable user-interface ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
o Two 56Gb/s InfiniBand ports
o One 100/1000/10000 Mb/s management port
o One 1Gb/s IPMI port
o Two redundant power supply units (PSUs)
• One 2U Disk Array Enclosure (DAE) containing:
o Up to 72 SSDs of sizes 400GB (for X2-S) or 1.92TB (for X2-R)
o Two redundant SAS interconnect modules
o Two redundant power supply units (PSUs)
Figure 5. An XtremIO X2 X-Brick
The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects.
An XtremIO X2 storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO
X2 array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between
Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access
(RDMA) protocol for this back-end connectivity, ensuring a highly-available ultra-low latency network for communication
between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster
types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO X2 system is essentially a
single shared-memory space spanning all of its Storage Controllers.
The 1Gb/s management port is configured with an IPv4 address. The XMS, which is the cluster's management software,
communicates with the Storage Controllers via the management interface. Through this interface, the XMS communicates
with the Storage Controllers and sends storage management requests such as creating an XtremIO X2 Volume, mapping
a Volume to an Initiator Group, etc.
The second 1Gb/s port, used for IPMI, interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the
bounds of an X-Brick and never connects to an IPMI port of a Storage Controller in another X-Brick in the cluster.
Multi-dimensional Scaling
With X2, an XtremIO cluster has both scale-out and scale-up capabilities, enabling a flexible growth capability adapted to
the customer's unique workload and needs. Scale-out is implemented by adding X-Bricks to an existing cluster. The
addition of an X-Brick to an existing cluster increases its compute power, bandwidth and capacity linearly. Each X-Brick
that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM and FC/iSCSI ports to
service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster.
Adding an X-Brick to scale-out an XtremIO cluster is for environments that grow both in capacity and in performance
needs, such as in the case of an increase in the number of active users and the data that they hold, or a database that
grows in data and complexity.
An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to
up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will allow up to 8 supported X-Bricks for
X2-R arrays.
Figure 6. Scale-Out Capabilities – Single to Multiple X2 X-Brick Clusters
Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. Adding SSDs to existing
DAEs to scale-up an XtremIO cluster is for environments that currently grow in capacity needs and have no need for extra
performance. This occurs, for example, when the same number of users has an increasing amount of data to save, or
when an environment grows in both capacity and performance needs, but has only reached its capacity limits with room to
grow in performance with its current infrastructure.
Each DAE can hold up to 72 SSDs and is divided into up to 2 groups of SSDs called Data Protection Groups (DPGs).
Each DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to a maximum of 36 SSDs. In
other words, 18, 24, 30 or 36 are the possible numbers of SSDs per DPG. Up to 2 DPGs can occupy a DAE.
SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers
to populate their X2-R clusters with 3.84TB sized drives, doubling the physical capacity available in their clusters.
Figure 7. Multi-Dimensional Scaling
XIOS and the I/O Flow
Each Storage Controller within the XtremIO cluster runs a specially customized lightweight Linux-based operating system
as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller
and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the
system's functional modules, RDMA communication, monitoring, etc.
Figure 8. X-Brick Components
XIOS has a proprietary process-scheduling-and-handling algorithm designed to meet the specific requirements of a
content-aware, low-latency and high-performing storage system. It provides efficient scheduling and data access, full
exploitation of CPU resources, optimized inter-sub-process communication and minimized dependency between sub-
processes that run on different sockets.
The XtremIO Operating System gathers a variety of metadata tables on incoming data that includes data fingerprint, its
location in the system, mappings and reference counts. The metadata is used as the fundamental insight for performing
system operations, such as laying out incoming data uniformly, implementing inline data reduction services and accessing
the data on read requests. The metadata is also involved in communication with external applications (such as VMware
XCOPY and Microsoft ODX) to optimize integration with the storage system.
Regardless of which Storage Controller receives an I/O request from the host, multiple Storage Controllers on multiple
X-Bricks cooperate to process the request. The data layout in the XtremIO X2 system ensures that all components share
the load and participate evenly in processing I/O operations.
An important XIOS function is data reduction, achieved through inline data deduplication and compression. The two complement each other: deduplication removes redundancies, while compression compresses the already deduplicated data before it is written to the flash media. XtremIO is also an always-on thin-provisioned storage system, which further increases storage savings, as the system never writes a block of zeros to the disks.
XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service
hosts' I/O requests.
XtremIO Write I/O Flow
In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage
Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and
stores it in the cluster's mapping table. The mapping table maps the host's Logical Block Addresses (LBAs) to the blocks' fingerprints, and each block's fingerprint to its physical location in the array (the DAE, SSD and offset where the block is stored). The fingerprint of a block has two objectives: (1) to determine whether the block is a duplicate of a block that already exists in
the array and (2) to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among
Storage Controllers in the array and gives each Storage Controller a range of fingerprints to manage. The mathematical
process that calculates the fingerprints results in a uniform distribution of fingerprint values. As a result, fingerprints and
blocks are evenly spread across all Storage Controllers in the cluster.
A write operation works as such:
1. A new write request reaches the cluster.
2. The new write is broken into data blocks.
3. For each data block:
1. A fingerprint is calculated for the block.
2. An LBA-to-fingerprint mapping is created for this write request.
3. The fingerprint is checked to see if it already exists in the array.
• If it exists:
o The reference count for this fingerprint is incremented by one.
• If it does not exist:
1. A location is chosen on the array where the block is written (distributed uniformly across the array
according to fingerprint value).
2. A fingerprint-to-physical location mapping is created.
3. The data is compressed.
4. The data is written.
5. The reference count for the fingerprint is set to one.
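To make this flow concrete, here is a minimal Python sketch of the metadata handling described above. It is purely illustrative and not XtremIO code: the block size, the SHA-256 fingerprint function and the in-memory dictionaries standing in for the mapping tables and flash media are all assumptions chosen for readability.

```python
import hashlib

BLOCK_SIZE = 8 * 1024   # illustrative block size, not the array's actual internal granularity

lba_to_fp = {}          # logical block address -> content fingerprint
fp_to_physical = {}     # fingerprint -> (simulated) physical location
fp_refcount = {}        # fingerprint -> number of logical references
backing_store = {}      # stand-in for the flash media, keyed by physical location

def write(lba_start: int, data: bytes) -> None:
    """Model of the deduplicating write path described above."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for offset, block in enumerate(blocks):
        fp = hashlib.sha256(block).hexdigest()   # 3.1: fingerprint the block
        lba_to_fp[lba_start + offset] = fp       # 3.2: LBA-to-fingerprint mapping
        if fp in fp_to_physical:                 # 3.3: duplicate -> metadata update only
            fp_refcount[fp] += 1
        else:                                    # unique -> place, "compress", write, count
            location = len(backing_store)
            fp_to_physical[fp] = location
            backing_store[location] = block      # compression omitted for brevity
            fp_refcount[fp] = 1

# Example: writing identical content to two different LBAs stores it physically only once.
write(0, b"A" * BLOCK_SIZE)
write(100, b"A" * BLOCK_SIZE)
print(len(backing_store), fp_refcount)           # 1 stored block, reference count of 2
```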
Deduplicated writes are naturally much faster than writes of unique data. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping and the reference count for that fingerprint. No additional data is written to the array and the operation completes quickly, an extra benefit of inline
deduplication. Figure 9 shows an example of an incoming data stream which contains duplicate blocks with identical
fingerprints.
Figure 9. Incoming Data Stream Example with Duplicate Blocks
As mentioned, fingerprints also help to decide where to write the block in the array. Figure 10 shows the incoming stream
after duplicates were removed, as it is being written to the array. The blocks are distributed to their appointed Storage Controllers according to their fingerprint values, ensuring a uniform distribution of the data across the cluster. The blocks
are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency
InfiniBand network.
Figure 10. Incoming Deduplicated Data Stream Written to the Storage Controllers
The actual write of the data blocks to the SSDs is asynchronous. At the time of the application write, the system places
the data blocks in the in-memory write buffer and protects it using journaling to local and remote NVRAMs. Once it is
written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host.
This guarantees a quick response to the host, ensures low-latency of I/O traffic and preserves the data in case of system
failure (power-related or any other). When enough blocks are collected in the buffer (to fill up a full stripe), the system
writes them to the SSDs on the DAE. Figure 11 demonstrates the phase of writing the data to the DAEs after a full stripe
of data blocks is collected in each Storage Controller.
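The journal-then-destage behavior described above can be sketched as follows. This is a conceptual model only: the stripe width, the list-based buffers and the placeholder functions are assumptions, and the real system performs these steps across redundant Storage Controllers.

```python
STRIPE_WIDTH = 4    # illustrative number of data blocks in a full stripe, not the real geometry

write_buffer = []   # in-memory blocks waiting to be destaged
local_nvram = []    # simulated local NVRAM journal
remote_nvram = []   # simulated journal replica on a peer Storage Controller

def host_write(block: bytes) -> str:
    """Acknowledge the host once the block is journaled; destage to SSD later."""
    write_buffer.append(block)
    local_nvram.append(block)    # journal locally ...
    remote_nvram.append(block)   # ... and replicate to a remote NVRAM before acking
    if len(write_buffer) >= STRIPE_WIDTH:
        destage_full_stripe()
    return "ack"                 # the host sees the write as complete once journaled

def destage_full_stripe() -> None:
    stripe = [write_buffer.pop(0) for _ in range(STRIPE_WIDTH)]
    write_stripe_to_dae(stripe)          # one large, parity-protected write to the SSDs
    del local_nvram[:STRIPE_WIDTH]       # journal entries are no longer needed
    del remote_nvram[:STRIPE_WIDTH]

def write_stripe_to_dae(stripe) -> None:
    pass  # placeholder for the actual full-stripe write protected by XDP

# Example: four buffered writes trigger one full-stripe destage.
for i in range(4):
    host_write(bytes([i]) * 512)
```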
Figure 11. Full Stripe of Blocks Written to the DAEs
XtremIO Read I/O Flow
In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The
found fingerprint is then located in the fingerprint-to-physical mapping and the data is retrieved from the right physical
location. In the same fashion as write operations, the read load is also evenly shared across the cluster, blocks are evenly
distributed and all Volumes are accessible across all X-Bricks. If the requested block size is larger than the data block
size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before
returning them to the application. A compressed data block is decompressed before it is delivered to the host.
XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint.
Blocks whose contents are more likely to be read are placed in the read cache for faster retrieval.
A read operation works as such:
1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs for all data blocks and a buffer is created to hold the data.
3. For each LBA:
1. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
2. The fingerprint-to-physical location mapping is checked to find the physical location of each of the data
blocks.
3. The requested data block is read from its physical location (the read cache or its place on SSD) and transmitted via RDMA over InfiniBand to the buffer created in step 2 on the Storage Controller that processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the
host.
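Continuing the write-path sketch from the previous section, the read path can be modeled as shown below. Again, this is an illustration of the logic only: the content-addressed read cache is simplified to a dictionary and decompression is omitted.

```python
read_cache = {}  # content fingerprint -> block data; simplified stand-in for the content-aware cache

def read(lba_start: int, num_blocks: int) -> bytes:
    """Model of the read path: LBA -> fingerprint -> physical location, cache first."""
    buffer = []
    for lba in range(lba_start, lba_start + num_blocks):
        fp = lba_to_fp[lba]                  # step 3.1: look up the block's fingerprint
        if fp in read_cache:                 # serve frequently read content from cache ...
            block = read_cache[fp]
        else:                                # ... or fetch it from its physical location (steps 3.2-3.3)
            block = backing_store[fp_to_physical[fp]]   # decompression omitted for brevity
            read_cache[fp] = block
        buffer.append(block)
    return b"".join(buffer)                  # step 4: assemble and return to the host

# Example (continuing the write sketch): read back the block written at LBA 0.
print(len(read(0, 1)))
```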
System Features
The XtremIO X2 Storage Array provides a wide range of built-in features that require no special license. The architecture and implementation of these features are unique to XtremIO and are designed around the capabilities and limitations of flash media. Some of the key features included in the system are described below.
Inline Data Reduction
XtremIO's unique Inline Data Reduction is achieved by two mechanisms: Inline Data Deduplication and Inline Data Compression.
Data Deduplication
Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash
media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The
deduplication is at a global level, meaning no duplicate blocks are written over the entire array. Being an inline and global
process, no resource-consuming background processes or additional reads and writes (which are mainly associated with
post-processing deduplication) are necessary for the feature's activity, which increases SSD endurance and eliminates
performance degradation.
As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow
on page 14). The fingerprints are also used for uniform distribution of data blocks across the array. This provides inherent
load balancing for performance and enhances flash wear-level efficiency, since the data never needs to be rewritten or
rebalanced.
XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The
system's unique content-aware storage architecture enables achieving a substantially larger cache size with a small
DRAM allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" that are
common in VSI environments.
XtremIO has excellent data deduplication ratios, especially for virtualized environments. SSD usage is smarter, flash
longevity is maximized, the logical storage capacity is multiplied and total cost of ownership is reduced.
Figure 12 shows the CPU utilization of our Storage Controllers during a VMware production workload. When new blocks are written to the system, the hash calculation is distributed across all Storage Controllers. The figure illustrates the excellent synergy across our X2 cluster: all of the Active-Active Storage Controllers' CPUs share the load, and CPU utilization is virtually equal across them for the entire workload.
Figure 12. XtremIO X2 CPU Utilization
Data Compression
Inline data compression is the compression of data prior to writing the data to the flash media. XtremIO automatically
compresses data after all duplications are removed, ensuring that the compression is performed only for unique data
blocks. The compression is performed in real-time and not as a post-processing operation. As a result, compression does
not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.
Data compression complements data deduplication in many cases and saves storage capacity by storing only unique data blocks in the most efficient manner. Data compression is always inline and never performed as a post-processing activity; XtremIO therefore always writes the data only once, which increases the overall endurance of the flash array's SSDs. In a VSI environment, deduplication dramatically reduces the required capacity for the virtual servers, while compression further reduces the unique user data. As a result, a single X-Brick can manage an increased number of virtual servers with less physical capacity required to store the data, increasing the storage array's efficiency and dramatically reducing the $/GB cost of storage, even when compared to hybrid storage systems.
We can see the benefits and capacity savings for the deduplication-compression combination demonstrated in Figure 13.
Figure 13. Data Deduplication and Data Compression Demonstrated
In the above example, the twelve data blocks written by the host are first deduplicated to four data blocks, demonstrating
a 3:1 data deduplication ratio. Following the data compression process, the four data blocks are then each compressed,
by a ratio of 2:1, resulting in a total data reduction ratio of 6:1.
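As a quick check of the arithmetic in Figure 13, the following snippet computes the combined reduction ratio; the block counts simply mirror the example above.

```python
# Worked example matching Figure 13: 12 logical blocks, 4 unique blocks, 2:1 compression.
logical_blocks = 12
unique_blocks = 4
compression_ratio = 2.0

dedup_ratio = logical_blocks / unique_blocks        # 3.0 -> "3:1 deduplication"
total_reduction = dedup_ratio * compression_ratio   # 6.0 -> "6:1 total data reduction"
physical_block_equivalents = unique_blocks / compression_ratio  # 2.0 blocks' worth on flash

print(f"{dedup_ratio:.0f}:1 dedup x {compression_ratio:.0f}:1 compression "
      f"= {total_reduction:.0f}:1 total")
```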
Thin Provisioning
XtremIO storage is natively thin provisioned, using a small internal block size. All Volumes in the system are thin
provisioned, meaning the system only consumes capacity as needed. No storage space is ever pre-allocated before
writing.
XtremIO's content-aware architecture permits blocks to be stored at any location in the system (with metadata used to refer to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creep or garbage collection is necessary on XtremIO, Volume fragmentation does not occur in the array, and no defragmentation utilities are needed.
This XtremIO feature enables consistent performance and data management across the entire life cycle of a Volume,
regardless of the system capacity utilization or the write patterns of clients.
This characteristic allows both manual and frequent automatic reclamation of unused space directly from VMFS datastores and virtual machines (a simple in-guest example is sketched after this list), which has the following benefits:
• The allocated disks can be used optimally, and the actual space reports are more accurate.
• Snapshots (called XVCs - XtremIO Virtual Copies) are more efficient, because blocks that are no longer needed are not protected by additional snapshots.
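As a hedged illustration of in-guest reclamation, the snippet below runs fstrim inside a Linux guest; fstrim issues TRIM/UNMAP requests for unused filesystem blocks so that, with a thin virtual disk on an UNMAP-capable datastore, the freed space can eventually be returned to the array. The mount point and the manual invocation (rather than the distribution's periodic fstrim timer or the automatic reclamation covered later in this paper) are assumptions for this example.

```python
import subprocess

def reclaim_guest_free_space(mount_point: str = "/") -> None:
    """Trigger in-guest space reclamation on a Linux VM by running fstrim.

    Assumes a thin virtual disk on an UNMAP-capable datastore; the freed blocks
    can then be reclaimed by the array. Mount point and invocation are examples.
    """
    subprocess.run(["fstrim", "-v", mount_point], check=True)

if __name__ == "__main__":
    reclaim_guest_free_space("/")
```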
Integrated Copy Data Management
XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary
data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency.
XtremIO is unique in its ability to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO supports on-demand copy operations at scale while maintaining delivery of all performance SLAs in a consistent and predictable way.
Consolidation of primary data and its copies in the same array has numerous benefits:
• It can make development and testing activities up to 50% faster, creating copies of production code quickly for
development and testing purposes, then refreshing the output back into production for the full cycle of code
upgrades in the same array. This dramatically reduces complexity and infrastructure needs, as well as
development risks, and increases the quality of the product.
• Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in-
memory operation. Copies of the data are high performance and can get the same SLA as production copies
without compromising production SLAs. XtremIO offers this on-demand as both self-service and automated
workflows for both application and infrastructure teams.
• Operations such as patches, upgrades and tuning tests can be quickly performed using copies of production data.
Diagnosing problems of applications and databases can be done using these copies, and applying the changes
back to production can be done by refreshing copies back. The same goes for testing new technologies and
combining them in production environments.
• iCDM can also be used for data protection purposes, as it enables creating many copies at low point-in-time
intervals for recovery. Application integration and orchestration policies can be set to auto-manage data
protection, using different SLAs.
XtremIO Virtual Copies
XtremIO uses its own implementation of snapshots for all iCDM purposes, called XtremIO Virtual Copies (XVCs). XVCs
are created by capturing the state of data in Volumes at a particular point in time, allowing users to access that data when needed regardless of the state of the source Volume (even after its deletion). XVCs support any access type and can be taken either from a source Volume or from another Virtual Copy.
XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, optimized
for SSDs, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows efficient
copy creation that can sustain high performance, while maximizing the media endurance.
Figure 14. A Metadata Tree Structure Example of XVCs
When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the
system, making the operation very quick. This operation does not have any impact on the system and does not consume
any capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata
for each snapshot. Virtual Copies capacity consumption occurs only when changes are made to any copy of the data.
Then, the system updates the metadata of the changed Volume to reflect the new write, and stores its blocks in the
system using the standard write flow process.
The system supports the creation of Virtual Copies on a single, as well as on a set, of Volumes. All Virtual Copies of the
Volumes in the set are cross-consistent and contain the exact same point in time for them all. This can be done manually
by selecting a set of Volumes for copying, or by placing Volumes in a Consistency Group and making copies of that
Consistency Group.
Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The
system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the
number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted.
Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters
the system.
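The pointer-and-counter behavior of XVCs can be modeled with a short, self-contained Python sketch. It is conceptual only: the Volume class, the dictionaries standing in for the metadata tables and the SHA-256 fingerprints are assumptions, not XtremIO internals.

```python
import hashlib

fp_refcount = {}   # content fingerprint -> number of logical references across all copies
block_store = {}   # fingerprint -> stored block (stand-in for the physical flash)

class Volume:
    """Conceptual model of an XtremIO Virtual Copy: copies share metadata, not data."""

    def __init__(self, lba_to_fp=None):
        self.lba_to_fp = dict(lba_to_fp or {})   # this copy's logical view

    def write(self, lba, block):
        fp = hashlib.sha256(block).hexdigest()
        if fp not in fp_refcount:                # unique content is stored exactly once
            fp_refcount[fp] = 0
            block_store[fp] = block
        fp_refcount[fp] += 1
        old = self.lba_to_fp.get(lba)
        if old is not None:
            release(old)
        self.lba_to_fp[lba] = fp

    def snapshot(self):
        """Create a virtual copy: duplicate pointers and bump reference counts only."""
        for fp in self.lba_to_fp.values():
            fp_refcount[fp] += 1
        return Volume(self.lba_to_fp)

def release(fp):
    """Deleting a copy, or overwriting one of its blocks, only decrements counters."""
    fp_refcount[fp] -= 1
    if fp_refcount[fp] == 0:                     # no copy references the block any more
        del fp_refcount[fp]
        del block_store[fp]                      # the block can be reclaimed

# A snapshot consumes no data capacity until one of the copies diverges.
prod = Volume()
prod.write(0, b"app data")
snap = prod.snapshot()
prod.write(0, b"new app data")                   # production diverges; the snapshot keeps the old block
print(len(block_store))                          # 2 unique blocks stored in total
```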
With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities:
• Consistency Groups (CG) – Grouping of Volumes to allow Virtual Copies to be taken on a group of Volumes as a
single entity.
• Snapshot Sets – A group of Virtual Copies of Volumes taken together using CGs or a group of manually chosen
Volumes.
• Protection Copies – Immutable read-only copies created for data protection and recovery purposes.
• Protection Scheduler – Used for local protection of a Volume or a CG. It can be defined using intervals of seconds/minutes/hours, or can be set to a specific time of day or week. It has a retention policy based on the number of copies wanted or the permitted age of the oldest XVC (see the illustrative policy sketch after this list).
• Restore from Protection – Restore a production Volume or CG from one of its descendant Snapshot Sets.
• Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access)
for alternating purposes.
• Refresh a Repurposing Copy – Refresh a Virtual Copy of a Volume or a CG from the parent object or other
related copies with relevant updated data. It does not require Volume provisioning changes for the refresh to take
effect, but only host-side logical Volume management operations to discover the changes.
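As an illustration of how a Protection Scheduler policy might be described, the sketch below models a policy object with an interval and a retention rule. The field names and the class itself are hypothetical; actual policies are configured through the XMS, not through code like this.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProtectionPolicy:
    """Hypothetical model of a local Protection Scheduler policy for a Volume or CG."""
    target: str                          # Volume or Consistency Group name
    interval_seconds: int                # how often a protection copy (XVC) is taken
    max_copies: Optional[int] = None     # retain at most this many copies, or ...
    max_age_hours: Optional[int] = None  # ... discard copies older than this

# Example: protect a Consistency Group every 30 minutes, keeping 96 copies (two days' worth).
policy = ProtectionPolicy(target="SQL-Prod-CG", interval_seconds=1800, max_copies=96)
print(policy)
```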
XtremIO Data Protection
XtremIO Data Protection (XDP) provides a "self-healing" double-parity data protection with very high efficiency to the
storage system. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives
for rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized
for failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single
drive rebuild. In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough space to rebuild
the second drive or when one of the failed SSDs is replaced.
The XDP algorithm provides:
• N+2 drives protection
• Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group)
• 60% more write-efficient than RAID1
• Superior flash endurance to any RAID algorithm, due to the smaller number of writes and even distribution of data
• Automatic rebuilds that are faster than traditional RAID algorithms
As shown in Figure 15, XDP uses a variation of N+2 row and diagonal parity that provides protection from two
simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs organized in two Data Protection Groups (DPGs).
XDP is managed independently on the DPG level. A DPG of 36 SSDs will result in capacity overhead of only 5.5% for its
data protection needs.
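A rough sanity check of the quoted overhead range, assuming the overhead is simply the two parity columns per Data Protection Group:

```python
# Two parity columns per Data Protection Group (N+2), so overhead ~= 2 / (SSDs in the DPG).
for ssds_in_dpg in (18, 24, 30, 36):
    overhead = 2 / ssds_in_dpg
    print(f"{ssds_in_dpg} SSDs per DPG -> ~{overhead:.1%} capacity overhead")
# Prints ~11.1% for 18 SSDs down to ~5.6% for 36 SSDs, matching the 5.5%-11% range above.
```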
Figure 15. N+2 Row and Diagonal Parity
Data at Rest Encryption
Data at Rest Encryption (DARE) provides a solution to securing critical data even when the media is removed from the
array, for customers in need of such security. XtremIO arrays utilize a high-performance inline encryption technique to
ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in
the event of theft or loss during transport, and makes it possible to return/replace failed components containing sensitive
data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and
government institutions.
At the heart of XtremIO's DARE solution lies Self-Encrypting Drive (SED) technology. An SED has dedicated hardware that encrypts and decrypts data as it is written to or read from the drive. Offloading the
encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or
disabled on the array. All XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin
Provisioning, XtremIO Virtual Copies, etc.) are available on an encrypted cluster as well as on a non-encrypted cluster,
and performance is not impacted when using encryption.
A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at
any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only
authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the
Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.
Figure 16. Data at Rest Encryption in XtremIO
Write Boost
In the new X2 storage array, the write flow algorithm was significantly improved to raise array performance, keeping pace with the rise in compute power and disk speeds and taking into account common applications' I/O patterns and block sizes. As mentioned in the discussion of the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to local and remote NVRAMs for protection; the blocks are written to disk only later, at a time that best optimizes the system's activity.

In addition to the shortened path from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and affecting latency, especially for larger I/O blocks. Examination of customers' applications and I/O patterns shows that many I/Os from common applications arrive in small blocks, under 16KB, creating high load on the storage array. Figure 17 shows the block size histogram from the entire XtremIO install base; the proportion of blocks smaller than 16KB is clearly evident. The new algorithm addresses this by aggregating small writes into bigger blocks in the array before writing them to disk, making them less demanding on the system, which is then better able to handle bigger I/Os faster. The test results for the improved algorithm are impressive: latency improves by around 400% in several cases, allowing XtremIO X2 to address application requirements of 0.5 msec or lower latency.
Figure 17. XtremIO Install Base Block Size Histogram
VMware APIs for Array Integration (VAAI)
VAAI was first introduced with VMware's improvements to host-based VM cloning. It offloads the work of cloning a VM
to the storage array, making cloning much more efficient. Instead of copying all blocks of a VM from the array to the host
and back again to create the new clone, vSphere lets the array perform the copy internally. This leverages the array's
capabilities and saves host and network resources, which are no longer involved in the actual movement of data. Offloading
the operation to the storage array is backed by the X-copy (extended copy) SCSI command, which is used when cloning
large amounts of data.
XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated storage
vMotion, VM provisioning and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy
efficiency even further by making the whole operation metadata driven. Due to its inline data reduction features and in-
memory metadata, no actual data blocks are copied during an X-copy command and the system only creates new
pointers to the existing data. This is all done inside the Storage Controllers' memory. Therefore, the operation saves host
and network resources and does not consume storage resources, leaving no impact on the system's performance, as
opposed to other implementations of VAAI and the X-copy command.
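As an operational aside, VAAI support can be verified from the ESXi command line before relying on offloaded operations. The following is a minimal sketch; the device identifier is a placeholder, and the esxcli namespaces shown are the standard ones in ESXi 6.x:

    # Confirm the VAAI primitives are enabled on the host (1 = enabled)
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

    # Check VAAI primitive status (ATS, Clone, Zero, Delete) for a specific XtremIO device
    esxcli storage core device vaai status get -d naa.514f0c5xxxxxxxxx

A value of 1 for the three advanced settings indicates the corresponding primitive is enabled, and the per-device output reports the ATS, Clone, Zero and Delete status claimed for the XtremIO device.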
Figure 18 illustrates the X-copy operation when performed against an XtremIO storage array and shows the efficiency in
metadata-based cloning.
Figure 18. VAAI X-Copy with XtremIO
The XtremIO features for VAAI support include:
• Zero Blocks / Write Same – Used for zeroing-out disk regions and provides accelerated Volume formatting.
• Clone Blocks / Full Copy / X-Copy – Used for copying or migrating data within the same physical array, an almost
instantaneous operation on XtremIO due to its metadata-driven operations.
• Record Based Locking / Atomic Test & Set (ATS) – Used during the creation and locking of files on VMFS Volumes,
such as during power-down and power-up of VMs.
• Block Delete / Unmap / Trim – Used for reclamation of unused space using the SCSI unmap feature.
Figure 19 shows the exceptional performance during multiple VMware cloning operations. X2 handles storage
bandwidth as high as ~160GB/s with over 220K IOPS (read+write), resulting in quick and efficient production delivery.
Figure 19. Multiple VMware Cloning Operations (X-Copy) from XtremIO X2 Perspective
Other features of XtremIO X2 (some of which are described in the next sections) include:
• Even Data Distribution (uniformity)
• High Availability (no single point of failure)
• Non-disruptive Upgrade and Expansion
• RecoverPoint Integration (for replication to local or remote arrays)
XtremIO Management Server
The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is
preinstalled with the CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a
VMware virtual machine.
The XMS manages the cluster through the management ports on both Storage Controllers of the first X-Brick in the
cluster, using a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path and can
therefore be disconnected from an XtremIO cluster without jeopardizing I/O. A failure of the XMS affects only
monitoring and configuration activities, such as creating and attaching Volumes. A virtual XMS is naturally less vulnerable
to such failures.
The GUI is based on a new Web User Interface (WebUI), which is accessible via any browser, and provides easy-to-use
tools for performing most system operations (certain management operations must be performed using the CLI). Some of
the useful features of the new WebUI are described in the following sections.
Dashboard
The Dashboard window presents a main overview of the cluster. It has three panels:
• Health - the main overview of the system's health status, alerts, etc.
• Performance (shown in Figure 20) – the main overview of the system's overall performance and top used
Volumes and Initiator Groups
• Capacity (shown in Figure 21) – the main overview of the system's physical capacity and data savings
Figure 20. XtremIO WebUI – Dashboard – Performance Panel
Figure 21. XtremIO WebUI – Dashboard – Capacity Panel
The main Navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options
pertaining to XtremIO's management actions. The main menus contain the Dashboard, Notifications, Configuration,
Reports, Hardware and Inventory.
Notifications
In the Notifications menu, we can navigate to the Events window (shown in Figure 22) and the Alerts window, showing
major and minor issues related to the cluster's health and operations.
Figure 22. XtremIO WebUI – Notifications – Events Window
Configuration
The Configuration window displays the cluster's logical components: Volumes (shown in Figure 23), Consistency Groups,
Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. Through this window, we can create and modify
these entities, using the action panel on the top right side.
Figure 23. XtremIO WebUI – Configuration
Reports
In the Reports menu, we can navigate to different windows showing graphs and data for different aspects of the system's
activity, mainly related to performance and resource utilization. The available views include Overview, Performance,
Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage and User-defined reports. Reports
can be viewed at different resolutions of time and components: specific entities can be selected with the "Select Entity"
option that appears at the top of the Reports windows (shown in Figure 24), and predefined or custom days and times can
be selected for the report period (shown in Figure 25).
Figure 24. XtremIO WebUI – Reports – Selecting Specific Entities to View
Figure 25. XtremIO WebUI – Reports – Selecting Specific Times to View
The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage
capacity information. The Performance window shows extensive performance reports that mainly include Bandwidth,
IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the
system. The Latency window (shown in Figure 26) shows Latency reports, including latency as a function of block sizes
and IOPS metrics. The CPU Utilization window shows CPU utilization of all Storage Controllers in the system.
Figure 26. XtremIO WebUI – Reports – Latency Window
The Capacity window (shown in Figure 27) shows capacity statistics and the change in storage capacity over time. The
Savings window shows data reduction statistics and their change over time. The Endurance window shows SSD
endurance status and statistics. The SSD Balance window shows how evenly data is distributed across the SSDs and the
variance between them. The Usage window shows Bandwidth and IOPS usage, both overall and split into reads and
writes. The User-defined window allows users to define their own reports.
Figure 27. XtremIO WebUI – Reports – Capacity Window
Hardware
In the Hardware menu, we can view the cluster and its X-Bricks through visual illustrations. When viewing the FRONT
panel, we can select and highlight any component of the X-Brick and view information about it in the Information panel
on the right. Figure 28 shows extended information on Storage Controller 1 in X-Brick 1, but information can also be
viewed at a more granular level, such as local disks and status LEDs. Clicking the "OPEN DAE" button shows a visual
illustration of the X-Brick's DAE and its SSDs, with additional information on each SSD and Row Controller.
Figure 28. XtremIO WebUI – Hardware – Front Panel
In the BACK panel, we can view an illustration of the back of the X-Brick and see every physical connection to the X-Brick
and inside of it, including FC connections, Power, iSCSI, SAS, Management, IPMI and InfiniBand, filtered by the "Show
Connections" list at the top right. An example of this view is seen in Figure 29.
Figure 29. XtremIO WebUI – Hardware – Back Panel – Show Connections
Inventory
In the Inventory menu, we can see all components of our environment with information about them, including: XMS,
Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs,
DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, Infiniband Switches and NVRAMs.
As mentioned earlier, other interfaces are available to monitor and manage an XtremIO cluster through the XMS server.
The system's Command Line Interface (CLI) provides all the functionality of the GUI, as well as additional functionality. A
RESTful API is another pre-installed interface that allows clusters to be managed with HTTP-based commands. A
PowerShell API module is also available for administering XtremIO clusters from the Windows PowerShell console.
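As an illustration of the RESTful API, the following minimal sketch queries the XMS for its Volume objects over HTTPS. It assumes read-only XMS credentials and the v2 JSON endpoint layout; the exact paths, API version and filter parameters (such as cluster-name on a multi-cluster XMS) should be verified against the XtremIO RESTful API Guide for the installed XMS release:

    # List Volume objects known to the XMS (returns JSON)
    curl -k -u admin:<password> "https://<xms-ip>/api/json/v2/types/volumes"

    # Retrieve details of a single Volume by name (cluster-name qualifies the object on a multi-cluster XMS)
    curl -k -u admin:<password> "https://<xms-ip>/api/json/v2/types/volumes?name=<volume-name>&cluster-name=<cluster>"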
XtremIO X2 Space Management and Reclamation in vSphere Environments
VMFS file systems are managed by the ESXi hosts. Because of this, block storage arrays have no visibility inside a VMFS
Volume, so when data is deleted by vSphere the array is unaware of it and the space remains allocated on the array. On an
XtremIO storage array, all LUNs are thin provisioned, so reclaimed space can be immediately allocated to another
device/application or simply returned to the pool of available storage. Space consumed by files that have been deleted or
moved is referred to as "dead space".
Reclaiming the dead space from an XtremIO X2 storage array frequently has the following benefits:
• The allocated disks can be used optimally and the actual space reports are more accurate.
• More space is available for use of the virtual environment.
• More efficient replication when using RecoverPoint since it will not replicate blocks that are no longer needed.
The feature used to reclaim this space is called Space Reclamation, and it relies on the SCSI unmap command. Unmap can
be issued to underlying thin-provisioned devices to inform the array that certain blocks are no longer needed by the host
and can be "reclaimed". The array can then return those blocks to the pool of free storage.
A VMFS 6 datastore can send the space reclamation command automatically. With a VMFS5 datastore, space
reclamation can be performed manually via an esxcli command or via the VSI plugin, which is detailed later in this document.
Storage space inside the VMFS datastore can be freed by deleting or migrating a VM, consolidating an XVC and so on.
Inside the virtual machine, storage space is freed when files are deleted on a thin virtual disk. These operations leave
blocks of unused space on the storage array. However, when the array is not aware that the data was deleted from the
blocks, the blocks remain allocated by the array until the datastore releases them. VMFS uses the SCSI unmap command
to indicate to the array that the storage blocks contain deleted data, so that the array can deallocate these blocks.
Figure 30. Unmap Process
Dead space can be reclaimed using one of the following options:
• Space Reclamation Requests from VMFS Datastores - Deleting or removing files from a VMFS datastore frees
space within the file system. This free space is mapped to a storage device until the file system releases or
unmaps it. ESXi supports reclamation of free space, which is also called the unmap operation.
• Space Reclamation Requests from Guest Operating Systems - ESXi supports the unmap commands issued
directly from a guest operating system to reclaim storage space. The level of support and requirements depend
on the type of datastore where your virtual machine resides.
VMFS Datastores Reclamation
Asynchronous Reclamation of Free Space on VMFS 6 Datastore
On VMFS 6 datastores, ESXi supports the automatic asynchronous reclamation of free space. VMFS 6 can run the
unmap command to release free storage space in the background on thin-provisioned storage arrays that support unmap
operations.
Asynchronous unmap processing has several advantages:
• Unmap requests are sent at a constant rate, which helps to avoid any instant load on the backing array.
• Freed regions are batched and unmapped together.
• Unmap processing and truncate I/O paths are disconnected, so I/O performance is not impacted.
Space Reclamation Granularity
Granularity defines the minimum size of a released space sector that an underlying storage can reclaim. Storage cannot
reclaim sectors that are smaller in size than the specified granularity.
For VMFS 6, reclamation granularity equals the block size. When the block size is 1 MB, the granularity is also 1 MB;
storage sectors smaller than 1 MB are not reclaimed.
Automatic unmap is an asynchronous task, so reclamation does not occur immediately and typically takes 12 to 24
hours to complete. Each ESXi 6.5 host has an unmap "crawler" that works in tandem with the others to reclaim space on
all VMFS 6 Volumes it has access to.
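The automatic reclamation behavior of a VMFS 6 datastore can be inspected and adjusted per datastore with esxcli. A minimal sketch, assuming a datastore labeled "XtremIO_DS01" (the label is a placeholder):

    # Show the current space reclamation settings of a VMFS 6 datastore
    esxcli storage vmfs reclaim config get --volume-label=XtremIO_DS01

    # Set the reclamation priority (none disables automatic unmap; low is the default)
    esxcli storage vmfs reclaim config set --volume-label=XtremIO_DS01 --reclaim-priority=low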
Figure 31. Space Reclamation Priority
Manual Reclamation of Free Space on VMFS5 Datastore
VMFS5 and earlier file systems do not unmap free space automatically. We recommend using the esxcli storage vmfs
unmap command to reclaim space manually with the parameter --reclaim-unit=20000, indicating the number of VMFS
blocks to unmap per iteration.
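For example, the following command, run from the ESXi shell or over SSH, reclaims dead space on a VMFS5 datastore in iterations of 20,000 VMFS blocks; the datastore label is a placeholder:

    # Manually unmap free space on a VMFS5 datastore, 20,000 blocks per iteration
    esxcli storage vmfs unmap --volume-label=XtremIO_DS01 --reclaim-unit=20000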
Figure 32. Esxcli Command for Manual Space Reclamation
Using the space reclamation feature in VSI, you can reclaim unused storage on datastores, hosts, clusters, folders and
storage folders on XtremIO storage arrays. It allows us to schedule space reclamation on a daily basis, or run it once, for
a specific datastore or on all datastores under the same datastore cluster.
Figure 33. Setting Space Reclamation Scheduler via VSI Plugin
Figure 34 shows the logical space in use before and after space reclamation.
Figure 34. Logical Space in Use Before and After Space Reclamation
In-Guest Space Reclamation for Virtual Machines
Space Reclamation for VMFS 6 Virtual Machines
Inside a virtual machine, storage space is freed when, for example, you delete files on a thin virtual disk. The guest
operating system notifies VMFS about freed space by sending the unmap command. The unmap command sent from the
guest operating system releases space within the VMFS datastore. The command then proceeds to the array, so that the
array can reclaim the freed blocks of space.
VMFS 6 generally supports automatic space reclamation requests generated from guest operating systems and passes
these requests to the array. Many guest operating systems can send the unmap command and do not require any
additional configuration. Guest operating systems that do not support automatic unmap might require user
intervention.
Generally, guest operating systems send the unmap commands based on the unmap granularity they advertise. VMFS 6
processes unmap requests from the guest OS only when the space to reclaim equals 1 MB or is a multiple of 1 MB. If the
space is less than 1 MB or is not aligned to 1 MB, the unmap requests are not processed.
Space Reclamation for VMFS5 Virtual Machines
Typically, the unmap command generated by the guest operating system on VMFS5 cannot be passed directly to the
array. You must run the esxcli storage vmfs unmap command to trigger unmaps on the array.
However, for a limited number of guest operating systems, VMFS5 supports automatic space reclamation requests.
Space Reclamation prerequisites
To send the unmap requests from the guest operating system to the array, the virtual machine must meet the following
prerequisites:
• The virtual disk must be thin-provisioned.
• Virtual machine hardware must be of version 11 (ESXi 6.0) or later.
• The advanced setting EnableBlockDelete must be set to 1 (a command sketch follows this list).
• The guest operating system must be able to identify the virtual disk as thin.
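A minimal sketch of checking and enabling the EnableBlockDelete advanced setting from the ESXi shell (the esxcli namespace shown is the standard host-level advanced settings interface):

    # Check the current value of the EnableBlockDelete advanced setting
    esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

    # Enable in-guest unmap pass-through on the host (1 = enabled)
    esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1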
ESXi 6.5 expands support for in-guest unmap to additional guest types; with ESXi 6.0, in-guest unmap is supported only for
Windows Server 2012 R2 and later. ESXi 6.5 introduces support for Linux operating systems. The underlying reason is
that ESXi 6.0 and earlier exposed only SPC-2 level SCSI support to the guest. Windows issues unmap against SPC-2
devices and could therefore take advantage of this feature set, while Linux requires SPC-4 and could not. In ESXi 6.5,
VMware enhanced its virtual SCSI support up to SPC-4, which allows Linux-based guests to issue unmap commands that
they could not issue before.
In-Guest Unmap Alignment Requirements
VMware ESXi requires that any unmap request sent down by a guest be aligned to 1 MB. For a variety of reasons,
not all unmap requests are aligned this way, and in ESXi 6.5 and earlier a large percentage failed. In ESXi 6.5 P1, ESXi
has been altered to be more tolerant of misaligned unmap requests. See the VMware patch information here:
https://kb.vmware.com/kb/2148989
Prior to this, any unmap request that was even partially misaligned would fail entirely, leading to no reclamation. In
ESXi 6.5 P1, any aligned portion of an unmap request is accepted and passed along to the underlying array.
Misaligned portions are accepted but not passed down; instead, the affected blocks to which the misaligned unmaps
refer are zeroed out with WRITE SAME. The benefit of this behavior on XtremIO X2 is that zeroing is identical in
behavior to unmap, so all of the space is reclaimed regardless of any misalignment.
In-Guest Unmap in Windows OS
Starting with ESXi 6.0, In-Guest unmap is supported with Windows 2012 R2 and later Windows-based operating systems.
For a full report of unmap support with Windows, refer to Microsoft documentation. NTFS supports automatic unmap by
default—this means (assuming the underlying storage supports it) Windows will issue unmap to the blocks a file
consumed once the file has been deleted or moved.
Windows also supports manual unmap, which can be run on demand or on a schedule. This is performed using the Disk
Optimizer tool. Thin virtual disks can be identified in the tool as Volumes with a media type of "thin provisioned drive".
These are the Volumes that support unmap.
Figure 35. Manual Space Reclamation using Optimize Drives Utility Inside a Windows Virtual Machine
In-Guest Unmap in Linux OS
Starting with ESXi 6.5, In-Guest unmap is supported with Linux-based operating systems and most common file systems.
To enable this behavior, it is necessary to use Virtual Machine Hardware Version 13 or later. Linux supports both
automatic and manual methods of unmap.
Linux file systems do not support automatic unmap by default—this behavior needs to be enabled during the mount
operation of the file system. This is achieved by mounting the file system with the "discard" option.
Figure 36. Mounting Drive Using the Discard Option
When mounted with the discard option, Linux will issue unmap to the blocks a file consumed once the file has been
deleted or moved.
With vSphere 6.5, SPC-4 is fully supported, so space reclamation can be run inside the Linux OS either manually from the
CLI or via a cron job. To check that the Linux OS does indeed support space reclamation, run the "sg_vpd" command as
seen in Figure 37 and look for the LBPU:1 output. Running the sg_inq command shows whether SPC-4 is enabled at
the Linux OS level.
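A minimal sketch of the corresponding in-guest commands on a Linux VM (the device name, mount point and cron schedule are placeholders):

    # Verify that the virtual disk reports unmap support (look for "Unmap command supported (LBPU): 1")
    sg_vpd --page=lbpv /dev/sdb

    # Verify the SCSI standard level the virtual disk reports to the guest (SPC-4 is required for in-guest unmap)
    sg_inq /dev/sdb

    # Option 1: automatic reclamation - mount the file system with the discard option
    mount -o discard /dev/sdb1 /data

    # Option 2: manual or scheduled reclamation - trim free space on demand or from cron
    fstrim -v /data
    # Example weekly cron entry (Sunday 02:00):
    # 0 2 * * 0 /sbin/fstrim -v /data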
Figure 37. Running sg_vpd and sg_inq Commands to Verify Support for Space Reclamation
Figure 38 shows the I/O pattern during an in-guest unmap process. The unmap commands appear to be sent from ESXi
in 100 MB chunks until the space reclamation process completes.
Figure 38. In-Guest Space Reclamation Pattern from XtremIO Perspective
EMC VSI for VMware vSphere Web Client Integration with XtremIO X2
EMC Solutions Integration Service 7.2 (EMC SIS) provides us with unique storage integration capabilities between
VMware vSphere 6.5 and EMC XtremIO X2 (XMS 6.0.0 and above). The EMC VSI (Virtual Storage Integrator) 7.2 plugin
for VMware vSphere web client can be registered via EMC SIS.
The plugin enables VMware administrators to view, manage and optimize EMC storage for their ESX/ESXi servers. It
consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication
and access to XtremIO array(s).
The VSI plugin allows the users to interact with their XtremIO array directly from the vCenter web client. This provides
VMware administrators with the capabilities to monitor, manage and optimize their XtremIO hosted storage from a single
GUI. For example, a user can provision VMFS datastores and RDM Volumes, create full clones using XtremIO Virtual
Copy technology, view on-array used logical capacity of datastores and RDM Volumes, extend datastore capacity, and do
bulk provisioning of datastores and RDM Volumes.
Incorporating the VSI plugin into an existing vSphere infrastructure involves deploying a free-to-use, pre-packaged OVA,
and then registering the connection from the VSI Solutions Integration Service (SIS) to both the vCenter Server and the
XtremIO cluster. Installation requires a minimum of 2.7GB of storage capacity if thin provisioned, and a maximum of 80GB
if thick provisioned.
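As an alternative to deploying the appliance interactively through the vSphere Web Client, the OVA can be deployed with VMware ovftool. The following is a hedged sketch only; the OVA file name, source network name, credentials, inventory path, datastore and port group are all environment-specific placeholders:

    # Deploy the VSI/SIS appliance OVA thin-provisioned to a target cluster
    ovftool --acceptAllEulas --name=VSI-SIS --diskMode=thin \
      --datastore=XtremIO_DS01 --net:"VM Network"="VM Network" \
      ./VSI-7.2.ova "vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/Cluster01"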
Figure 39. VSI Plugin OVF Deployment
After the VSI virtual application is powered on and the SIS becomes available, the vCenter server should be first
registered with the VSI plugin. Following this action, the SIS instance can then be registered within the vCenter server via
the web client.
Figure 40. Registering VSI Solutions Integration Service Within the vCenter Server Web Client
From the vCenter Inventory listing within the web client, we can register the XtremIO X2 system with the vCenter Server
by specifying the XMS details.
Figure 41. Registering XtremIO Storage System Within the vCenter Server Web Client
Setting Best Practices Host Parameters for XtremIO X2 Storage Array
The VSI plugin can be used for modifying ESXi host/cluster storage-related settings, setting multipath management and
policies and for invoking space reclamation operations from an ESX server or from a cluster.
The VSI plugin is the most convenient way to enforce the following XtremIO-recommended best practices on ESXi
servers (an equivalent manual esxcli sketch follows the list):
• Enable VAAI.
• Set the queue depth on the FC HBA to 256.
• Set the multipathing policy to Round Robin on each of the XtremIO SCSI disks.
• Set the Round Robin I/O path-switching parameter (IOPS) to 1.
• Set the limit of outstanding I/O requests to 256.
• Set the "SchedQuantum" parameter to 64.
• Set the maximum disk I/O size (Disk.DiskMaxIOSize) to 4096.
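Where the VSI plugin is not available, a similar result can be achieved manually from the ESXi shell. The following is a hedged sketch only; the HBA module name, device identifier and exact values should be validated against the XtremIO Host Configuration Guide for the ESXi version in use:

    # Enable the VAAI primitives (1 = enabled)
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 1

    # HBA queue depth (example for a QLogic FC HBA; Emulex uses the lpfc module and its own parameter)
    esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=256"

    # Claim rule: Round Robin with a path-switching limit of 1 I/O for newly discovered XtremIO devices
    esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
      -M XtremIO -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO
    # (devices already claimed can be switched individually with: esxcli storage nmp device set -d <naa> -P VMW_PSP_RR)

    # Per-device limit of outstanding I/O requests with competing worlds (device ID is a placeholder)
    esxcli storage core device set -d naa.514f0c5xxxxxxxxx -O 256

    # Disk scheduler quantum and maximum disk I/O size (KB)
    esxcli system settings advanced set -o /Disk/SchedQuantum -i 64
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096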
Figure 42. Configuring XtremIO X2 Recommended Settings using the VSI Plugin
Provisioning VMFS Datastores
New VMFS datastores can be created using the VSI plugin, backed by XtremIO Volumes, at the click of a button.
The VSI plugin interacts with EMC XtremIO to create Volumes of the required size, map them to the appropriate Initiator
Groups and create a VMFS datastore in vSphere, ready for use.
When VMFS datastores start to run out of free space, you can add more storage space by extending them, using the VSI
plugin.
Figure 43. Create a Datastore using the EMC VSI Plugin
Provisioning RDM Disks
RDM disks can be provisioned directly from XtremIO at the virtual machine level.
The process creates a LUN on the XtremIO storage array, maps it to the ESXi cluster where the virtual machine resides
and attaches it to the virtual machine as a physical or virtual RDM disk.
Figure 44. Provisioning RDM Disks
Setting Space Reclamation
Using the space reclamation feature in VSI, we can reclaim unused storage on datastores, hosts, clusters, folders and
storage folders on XtremIO storage arrays. We can schedule space reclamation on a daily basis, or run it once, for a
specific datastore or on all datastores under the same datastore cluster.
Figure 45. Setting Space Reclamation Scheduler via VSI Plugin
Creating Native Clones on XtremIO VMFS Datastores
The Native Clone feature uses the VMware Native Clone API to create a clone of a virtual machine in a VMFS datastore.
This function is especially useful for cloning a large number of machines, while specifying various options such as
containing folder, destination datastore, cluster, naming pattern, customization specification and more.
Figure 46. Creating Native Clones
Working with XtremIO X2 XVCs
The following actions for XtremIO XVCs (XtremIO Virtual Copies) can be performed directly from the VSI plugin, providing
maximum protection for critical virtual machines and datastores, backed up by XtremIO X2 XVC technology:
• Creating XVCs of XtremIO datastores
• Viewing XtremIO XVCs generated for virtual machine restore
• Mounting a datastore from an XVC
• Creating a writable or read-only XVC
• Creating and managing XVC schedules
• Restoring virtual machines and datastores from XtremIO XVCs
Figure 47. Managing XtremIO XVC (Snapshot) Schedules
XtremIO X2 Storage Analytics for VMware vRealize Operations Manager
VMware vRealize Operations Manager is a software product that collects performance and capacity data from monitored
software and hardware resources. It provides users with real-time information about potential problems in their
infrastructure. vRealize Operations Manager presents data and analysis in several ways:
• Through alerts that warn of potential or occurring problems.
• In configurable dashboards and predefined pages that show commonly needed information.
• In predefined reports.
EMC Storage Analytics links vRealize Operations Manager with the EMC Adapter.
EMC Storage Analytics (ESA) is a management pack for VMware vRealize Operations Manager that enables the
collection of analytical data from EMC resources. ESA complies with VMware management pack certification
requirements and has received the VMware Ready certification.
The XtremIO X2 Adapter is bundled with a connector that enables vRealize Operations Manager to collect performance
metrics on an X2 array. The adapter is installed with the vRealize Operations Manager user interface. EMC Storage
Analytics uses the power of existing vCenter features to aggregate data from multiple sources and process the data with
proprietary analytic algorithms.
XtremIO X2 Storage Analytics solution provides a single, end-to-end view of virtualized infrastructures (servers to storage)
powered by the VMware vRealize Operations Manager analytics engine. EMC Storage Analytics (ESA) delivers
actionable performance analysis and proactively facilitates increased insight into storage resource pools to help detect
capacity and performance issues, so they can be corrected before they cause a major impact. ESA provides increased
visibility, metrics and a rich collection of storage analytics and metrics for XtremIO X2 for clusters, Data Protection
Groups, XVCs, SSD Disks, Storage Controllers, Volumes and X-Bricks.
XtremIO X2 Storage Analytics further extends the integration capabilities across EMC and VMware solutions to provide
out-of-the-box analytics and visualization across the physical and virtual infrastructure. Storage Analytics provides
preconfigured, customizable dashboards so users can optimally manage their storage environment.
The preconfigured dashboards include:
1. Performance - Provides greater visibility across the VMware and storage domains in terms of end-to-end mapping.
Mappings include storage system components, storage system objects and vCenter objects. It enables health scores and
alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects,
such as LUNs, datastores and VMs.
Figure 48. XtremIO Performance Dashboard
2. Overview - Populates heat maps that show administrators the health of their system and reflect which workloads are
stressed.
Figure 49. XtremIO Overview Dashboard
3. Metrics - Provides metrics based on the "normal" behavior of an application workload, which is learned over a period of
time and then used to analyze the collected data and point out anomalies in behavior. This dashboard displays resources
and metrics for storage systems, along with graphs of resource metrics.
Figure 50. XtremIO Metrics Dashboard
XtremIO X2 Content Pack for vRealize Log Insight
VMware vRealize Log Insight delivers automated log management through log analytics, aggregation and search. An
integrated cloud operations management approach provides the operational intelligence and enterprise-wide visibility
needed to proactively enable service levels and operational efficiency in dynamic hybrid cloud environments.
VMware vRealize Log Insight provides real-time log administration for heterogeneous environments that span across
physical, virtual and cloud environments. Log Insight provides:
• Universal Log Collection
• Powerful Log Analytics
• Enterprise-class Scalability
• Ease of Use and Deployment
• Built-in vSphere Knowledge
The Dell EMC XtremIO X2 Content Pack, when integrated into VMware vRealize Log Insight, provides predefined
dashboards and user-defined fields specifically for XtremIO arrays to enable administrators to conduct problem analysis
and analytics on their array(s).
The vRealize Log Insight Content Pack with dashboards, alerts and chart widgets generated from XtremIO logs,
visualizes log information generated by XtremIO X2 devices to ensure a clear insight into the performance of the XtremIO
X2 flash storage connected to the environment.
The XtremIO X2 Content Pack includes 3 predefined dashboards, over 20 widgets, and alerts for understanding the logs
and graphically representing the operations, critical events and faults of the XtremIO X2 storage array.
The XtremIO X2 Content Pack can be installed directly from the Log Insight Marketplace. Once installed, the Content
Pack uses the syslog protocol to send remote syslog data from an XtremIO X2 array to the Log Insight server. The Log
Insight IP address should be added to the list of Targets on the XtremIO console under Administration > Notification >
Syslog Configuration.
Figure 51. XtremIO Content Pack
The XtremIO Management Server dashboard collects all events sent from the XMS over time and allows searching and
graphical display of all the events of the X-Bricks managed by this XMS.
Figure 52. XtremIO Management Server Dashboard
The XtremIO Errors dashboard collects all errors and faults sent from the XMS over time and allows searching and
graphical display of all the errors and faults of the X-Bricks managed by this XMS.
Figure 53. XtremIO Errors Dashboard
XtremIO X2 Workflows for VMware vRealize Orchestrator
VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks
across both VMware and third-party applications. XtremIO workflows for vRealize Orchestrator facilitate the automation
and orchestration of tasks that involve the XtremIO X2 Storage Array. It augments the capabilities of VMware’s vRealize
Orchestrator solution by providing access to XtremIO X2 Storage Array-specific management workflows.
The XtremIO workflows for VMware vRealize Orchestrator contain both basic and high-level workflows.
A basic workflow is a workflow that allows for the management of a discrete piece of XtremIO functionality, such as
Consistency Groups, Clusters, Initiator Groups, Protection Schedulers, Snapshot Sets, Tags, Volumes, RecoverPoint and
XMS Management.
A high-level workflow is a collection of basic workflows put together in such a way as to achieve a higher level of
automation, simplicity and efficiency than what is available from the available basic workflows. The high-level workflows in
the XtremIO Storage Management and XtremIO VMware Storage Management folders combine both XtremIO and
VMware specific functionality into a set of high-level workflows.
The workflows in the XtremIO Storage Management folder allow for rapid provisioning of datastores to ESXi hosts and
VMDKs/RDMs to VMs. The VM Clone Storage workflow, for instance, allows rapid cloning of datastores associated with a
set of source VMs to a set of target VMs accompanied by automatic VMDK reattachment to the set of target VMs.
Another example is the Host Expose Storage workflow in the XtremIO VMware Storage Management folder, which allows
a user to create Volumes, create any necessary Initiator Groups and map those Volumes to a host, all from one workflow.
All the input needed for this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows
that are utilized.
The XtremIO workflows for VMware vRealize Orchestrator allow the vRealize architect to either rapidly design and
deploy high-level workflows from the rich set of supplied basic workflows, or to utilize the pre-existing XtremIO high-level
workflows to automate the provisioning, backup and recovery of XtremIO storage in a VMware vCenter environment.
Figure 54. VRO and XtremIO X2 Integration Architecture
Figure 55. XtremIO X2 Workflows for VMware vRealize Orchestrator
Compute Hosts: Dell PowerEdge Servers
For our environment, we set up two homogeneous clusters at each site: one cluster with 6 ESXi servers hosting the VSI
servers, and a second cluster with 2 ESXi servers for the virtual platforms used to manage the VSI infrastructure.
We used Dell PowerEdge FC630 servers as our ESXi hosts, as they have the compute power to handle an environment at
this scale and are a good fit for virtualization environments. Dell PowerEdge servers work with the Dell OpenManage
systems management portfolio, which simplifies and automates server lifecycle management and can be integrated with
VMware vSphere via a dedicated plugin.
Compute Integration – Dell OpenManage
Dell OpenManage is a program providing simplicity and automation of hardware management tasks and monitoring for
both Dell and multi-vendor hardware systems. Among its capabilities are:
• Rapid deployment of PowerEdge servers, operating systems and agent-free updates
• Maintenance of policy-based configuration profiles
• Streamlined template-driven network setup and management for Dell Modular Infrastructure
• Providing a "geographic view" of Dell-related hardware
Dell OpenManage can integrate with VMware vCenter using the OpenManage Integration for VMware vCenter (OMIVV),
which gives VMware vCenter the ability to manage a data center's entire server infrastructure, both physical and
virtual. It can assist with monitoring the physical environment, send system alerts to the user, roll out firmware updates to
an ESXi cluster, and more. The integration is most beneficial when Dell PowerEdge servers are used as the ESXi hosts of
the VMware environment.
Figure 56 shows an example of a cluster's hardware information provided by the OpenManage Integration for VMware
vCenter.
Figure 56. Dell Cluster Information Menu Provided by the Dell OpenManage Plugin for VMware
The OpenManage Integration enables users to schedule firmware updates for clusters from within the VMware vCenter
web client, including at a future time. This helps users perform firmware updates during a scheduled maintenance window
without having to be present personally. The capability reduces complexity by natively integrating the key management
functions into the VMware vSphere Client console, and it minimizes risk with hardware alarms, streamlined firmware
updates and deep visibility into inventory, health and warranty details.
Firmware Update Assurances
• Sequential execution: To make sure not all the hosts are brought down to perform firmware updates, the firmware
update is performed sequentially, one host at a time.
• Single failure stoppage: If an update job fails on a server being updated, the existing jobs for that server
continue; however, the firmware update task stops and does not update any remaining servers.
• One firmware update job for each vCenter: To avoid the possibility of multiple update jobs interacting with a
server or cluster, only one firmware update job for each vCenter is allowed. If a firmware update is scheduled or
running for a vCenter, a second firmware update job cannot be scheduled or invoked on that vCenter.
• Entering Maintenance Mode: If an update requires a reboot, the host is placed into maintenance mode prior to the
update being applied. Before a host can enter maintenance mode, VMware requires that you power off or migrate
guest virtual machines to another host. This can be performed automatically when DRS is set to fully automated
mode.
• Exiting Maintenance Mode: Once the updates for a host have completed, the host will be taken out of
maintenance mode, if a host was in maintenance mode prior to the updates.
Figure 57. Applying Firmware Update Directly from vSphere Web Client
White Paper: Best Practices for Data Replication with EMC Isilon SyncIQ
 
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage SystemsTechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
TechBook: IMS on z/OS Using EMC Symmetrix Storage Systems
 
Intelligent storage management solution using VMware vSphere 5.0 Storage DRS:...
Intelligent storage management solution using VMware vSphere 5.0 Storage DRS:...Intelligent storage management solution using VMware vSphere 5.0 Storage DRS:...
Intelligent storage management solution using VMware vSphere 5.0 Storage DRS:...
 
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
Reference Architecture: EMC Infrastructure for VMware View 5.1 EMC VNX Series...
 
White Paper: DB2 and FAST VP Testing and Best Practices
White Paper: DB2 and FAST VP Testing and Best Practices   White Paper: DB2 and FAST VP Testing and Best Practices
White Paper: DB2 and FAST VP Testing and Best Practices
 
White Paper: DB2 and FAST VP Testing and Best Practices
White Paper: DB2 and FAST VP Testing and Best Practices   White Paper: DB2 and FAST VP Testing and Best Practices
White Paper: DB2 and FAST VP Testing and Best Practices
 
BOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system zBOOK - IBM Z vse using db2 on linux for system z
BOOK - IBM Z vse using db2 on linux for system z
 
Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313Pivotal gem fire_twp_distributed-main-memory-platform_042313
Pivotal gem fire_twp_distributed-main-memory-platform_042313
 
Ref arch for ve sg248155
Ref arch for ve sg248155Ref arch for ve sg248155
Ref arch for ve sg248155
 
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
White Paper - EMC IT's Oracle Backup and Recovery-4X Cheaper, 8X Faster, and ...
 
H10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wpH10986 emc its-oracle-br-wp
H10986 emc its-oracle-br-wp
 
Fasg02 mr
Fasg02 mrFasg02 mr
Fasg02 mr
 
Best practices for running Microsoft sql server on xtremIO X2_h16920
Best practices for running Microsoft sql server on xtremIO X2_h16920Best practices for running Microsoft sql server on xtremIO X2_h16920
Best practices for running Microsoft sql server on xtremIO X2_h16920
 
H4160 emc solutions for oracle database
H4160 emc solutions for oracle databaseH4160 emc solutions for oracle database
H4160 emc solutions for oracle database
 
White Paper: EMC Backup and Recovery for Microsoft Exchange and SharePoint 20...
White Paper: EMC Backup and Recovery for Microsoft Exchange and SharePoint 20...White Paper: EMC Backup and Recovery for Microsoft Exchange and SharePoint 20...
White Paper: EMC Backup and Recovery for Microsoft Exchange and SharePoint 20...
 

More from Itzik Reich

DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed ReviewDELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed ReviewItzik Reich
 
Dell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White PaperDell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White PaperItzik Reich
 
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16Itzik Reich
 
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White PaperMicrosoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White PaperItzik Reich
 
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...Itzik Reich
 
VMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik ReichVMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik ReichItzik Reich
 
Vce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironmentsVce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironmentsItzik Reich
 
Emc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_finalEmc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_finalItzik Reich
 

More from Itzik Reich (9)

DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed ReviewDELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
DELL EMC XTREMIO X2 CSI PLUGIN INTEGRATION WITH PIVOTAL PKS A Detailed Review
 
Dell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White PaperDell EMC XtremIO & Stratoscale White Paper
Dell EMC XtremIO & Stratoscale White Paper
 
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
Reference architecture xtrem-io-x2-with-citrix-xendesktop-7-16
 
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White PaperMicrosoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
Microsoft Hyper-V 2016 with Dell EMC XtremIO X2 White Paper
 
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
Itzik Reich-EMC World 2015-Best Practices for running virtualized workloads o...
 
VMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik ReichVMUG ISRAEL November 2012, EMC session by Itzik Reich
VMUG ISRAEL November 2012, EMC session by Itzik Reich
 
Bca1931 final
Bca1931 finalBca1931 final
Bca1931 final
 
Vce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironmentsVce vdi reference_architecture_knowledgeworkerenvironments
Vce vdi reference_architecture_knowledgeworkerenvironments
 
Emc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_finalEmc world svpg68_2011_05_06_final
Emc world svpg68_2011_05_06_final
 

Recently uploaded

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfngoud9212
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 

Recently uploaded (20)

Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Bluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdfBluetooth Controlled Car with Arduino.pdf
Bluetooth Controlled Car with Arduino.pdf
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 

consolidating and protecting virtualized enterprise environments with Dell EMC Xtremio x2 h16821

Executive Summary

This white paper describes the components, design and functionality of a VMware-based multisite Virtual Server Infrastructure (VSI), running consolidated, virtualized enterprise applications protected by DELL EMC RecoverPoint or RP4VMs, all hosted on a DELL EMC XtremIO X2 All-Flash array.

This white paper discusses and highlights the advantages presented to enterprise IT operations and applications already virtualized, or considering hosting virtualized enterprise application deployments, on a DELL EMC XtremIO X2 All-Flash array. The primary issues examined in this white paper include:

• Performance of consolidated virtualized enterprise applications
• Business continuity and disaster recovery considerations
• Management and monitoring efficiencies

Introduction

The goal of this document is to showcase the benefits of deploying a multisite VMware-based virtualized enterprise environment hosted on a DELL EMC XtremIO X2 All-Flash array. This document provides information and procedures highlighting XtremIO's ability to consolidate multiple business-critical enterprise application workloads within a single cluster, providing data efficiencies, consistent and predictable performance, and multiple integration vectors to assist in disaster recovery and business continuity, as well as in monitoring and managing the environment.

This document demonstrates how the integrated solution of a DELL EMC XtremIO X2 All-Flash array, coupled with a VMware-based virtualized infrastructure, is a true enabler for architecting and implementing a multisite virtual data center to support Business Continuity and Disaster Recovery (BCDR) services during data center failover scenarios.

This document outlines a process for implementing a cost-effective BCDR solution to support the most common disaster readiness scenarios for a VMware-based infrastructure hosted on a DELL EMC XtremIO X2 All-Flash array. It provides reference material for data center architects and administrators creating a scalable, fault-tolerant and highly available BCDR solution.

This document demonstrates the advantages of RecoverPoint array-based replication and RecoverPoint for VMs for XtremIO X2 and discusses examples of replication options relating to Recovery Point Objectives (RPO). Combining XtremIO X2 with Dell EMC AppSync simplifies, orchestrates and automates the process of generating and consuming copies of production data.

Among the benefits of this solution are ease of setup, linear scalability, consistent performance and data-storage efficiencies, as well as the various integration capabilities available for a VMware-XtremIO-based environment. These integration capabilities, across the various products used within this solution, provide customers with increased management, monitoring and business continuity options.

This document demonstrates that the DELL EMC XtremIO X2 All-Flash array, when paired with EMC RecoverPoint replication technology, both physical and virtual, in support of a VMware-based virtualized data center architecture, delivers an industry-leading ability to consolidate business-critical applications and provide an enterprise-level business continuity solution as compared with today's alternative all-flash array offerings.
Business Case

A well-designed and efficiently orchestrated enterprise-class data center ensures that the organization meets the operational policies and objectives of the business through predictable performance and consistent availability of the business-critical applications supporting the actualization of the organization's goals. Due to the significant cost required to manage data layout across the entire infrastructure, scalability and management are additional, important challenges for enterprise environments, with the main goal being the avoidance of contention between independent workloads competing for shared storage resources.

This document offers a solution design that allows for consistent performance of consolidated production applications without the possibility of contention from organizational development activities, while providing the storage efficiencies and dynamism demanded by modern-day test and development activities. Together with a demonstration of XtremIO's ability to consolidate multiple concurrent enterprise application workloads onto a single platform without penalty, this solution highlights an innovative data protection scheme involving RecoverPoint native integration with the XtremIO X2 platform. In this solution, the recovery point objective for protected virtual machines is reduced to less than sixty seconds, and space-efficient point-in-time (PiT) copies of production databases are available, without penalty, for BCDR and DevOps requirements.

XtremIO X2 brings tremendous value by providing consistent performance at scale by means of always-on inline deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with VMware vSphere by means of VMware APIs for Array Integration (VAAI), Dell EMC Solutions Integration Service (SIS) and Virtual Storage Integrator (VSI) ease of management makes choosing this best-of-breed all-flash array for desktop virtualization purposes even more attractive. XtremIO X2 is a scale-out and scale-up storage system capable of growing in storage capacity, compute resources and bandwidth capacity whenever the environment's storage requirements increase.

With the advent of multi-core server systems and the growing number of CPU cores per processor (following Moore's law), we are able to consolidate an increasing number of virtual workloads on a single enterprise-class server. When combined with the XtremIO X2 All-Flash Array, we can consolidate vast numbers of virtualized servers on a single storage array, thereby achieving high consolidation at great performance from both a storage and a computational perspective.

Solution Overview

The solutions described in Figure 1 and Figure 2 represent a two-site virtualized, distributed data center environment. The consolidated virtualized enterprise applications run on the production site. These include Oracle and Microsoft SQL database workloads, as well as additional Data Warehousing profiles. These workloads make up our pseudo-organization's primary production workload. For the purposes of this proposed solution, these workloads are essential to the continued fulfillment of crucial business operational objectives. They should behave as expected consistently, remain undisrupted, and, in the course of a disaster event impacting the primary data center, be migrated and resumed on a secondary site with minimal operational interruption.

We first describe the hardware layer of our solution. Later, we take a closer look at our XtremIO X2 array and the features and benefits it provides to VMware environments. The software layer is also discussed later in the document, including the configuration details for VMware vSphere, VMware SRM and the Dell EMC plugins for VMware environments such as VSI, ESA and AppSync. We follow this with details about our replication solutions - based on DELL EMC RecoverPoint and RP4VMS - that, when paired with XtremIO X2, deliver an industry-leading ability to consolidate business-critical applications and provide an enterprise-level business continuity solution.
Figure 1. Physical Replication Architecture Topology - XtremIO X2 Combined with RecoverPoint and VMware SRM

Figure 2. Virtual Replication Architecture Topology - XtremIO X2 Combined with RecoverPoint for VMs
Table 1. Solution Hardware

HARDWARE | QUANTITY | CONFIGURATION | NOTES
DELL EMC XtremIO X2 | 2 | Two Storage Controllers (SCs) with two dual-socket Haswell CPUs and 346 GB RAM; DAE configured with 18 x 400 GB SSD drives | XtremIO X2-S, 400 GB, 18 drives
DELL EMC RecoverPoint 5.1 | 4 | Gen 6 hardware | 1 RPA cluster per site, with 2 RPAs per cluster
Brocade 6510 SAN switch | 4 | 32 or 16 Gbps FC switches | 2 switches per site, dual FC fabric configuration
Mellanox MSX1016 10GbE | 2 | 10 or 1 Gbps Ethernet switches | Infrastructure Ethernet switch
PowerEdge FC630 | 16 | Intel Xeon CPU E5-2695 v4 @ 2.10GHz, 524 GB | 2 for the management cluster and 6 for the workload cluster in each site

Table 2. Solution Software

SOFTWARE | QUANTITY | CONFIGURATION
vCenter Server Appliance VM 6.5 update 1 | 2 | 16 vCPU, 32 GB memory, 100 GB VMDK
VMware Site Recovery Manager Server 6.6 VM | 2 | 4 vCPU, 16 GB memory, 40 GB VMDK
MSSQL Server 2017 VM | 2 | 8 vCPU, 16 GB memory, 100 GB VMDK
VSI for VMware vSphere 7.2 VM | 1 | 2 vCPU, 8 GB memory, 80 GB VMDK
RecoverPoint for VMs 5.1.1 | 4 | 4 vCPU, 16 GB memory, 40 GB VMDK
vRealize Operations Manager 6.6 VM | 1 | 4 vCPU, 16 GB memory, 256 GB VMDK
VMware Log Insight 4.5 VM | 1 | 4 vCPU, 8 GB memory, 256 GB VMDK
AppSync 3.5 VM | 1 | 4 vCPU, 16 GB memory, 40 GB VMDK
vRealize Orchestrator 7.3 | 1 | 2 vCPU, 4 GB memory, 32 GB VMDK
vSphere ESXi 6.5 update 1 | 16 | N/A
ESA Plugin for VROPS 4.4 | 1 | N/A
RecoverPoint Storage Replication Adapter 2.2.1 | 2 | N/A
Dell EMC XtremIO X2 for VMware Environments

Dell EMC's XtremIO X2 is an enterprise-class scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's full performance potential by uniquely leveraging the characteristics of SSDs, and it uses advanced inline data reduction methods to reduce the physical data that must be stored on the disks.

The XtremIO X2 storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their data centers, requiring very little planning to set up before provisioning.

The XtremIO X2 storage system serves many use cases in the IT world, due to its high performance and advanced abilities. One major use case is virtualized environments and cloud computing. Figure 3 shows XtremIO X2's performance during an intensive live VMware production environment. We can see extremely high IOPS (~1.6M) handled by the XtremIO X2 storage array with latency mostly below 1 msec. In addition, we can see an impressive data reduction factor of 6.6:1 (2.8:1 for deduplication and 2.4:1 for compression), which lowers the physical footprint of the data.

Figure 3. Intensive VMware Production Environment Workload from the XtremIO X2 Array Perspective

XtremIO leverages flash to deliver value across multiple dimensions:

• Performance (consistent low latency and up to millions of IOPS)
• Scalability (using a scale-out and scale-up architecture)
• Storage efficiency (using data reduction techniques such as deduplication, compression and thin provisioning)
• Data Protection (with a proprietary flash-optimized algorithm named XDP)
• Environment Consolidation (using XtremIO Virtual Copies or VMware's XCOPY)
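The data reduction factor quoted above combines two mechanisms multiplicatively. The following short calculation is purely illustrative (the logical capacity used is hypothetical, and the per-mechanism ratios shown in Figure 3 are rounded, so the product differs slightly from the reported 6.6:1 figure); it simply shows how the deduplication and compression ratios relate to the overall reduction factor and the resulting physical footprint.

# Worked example (illustrative only): combining the deduplication and compression
# ratios reported for the workload in Figure 3 into an overall reduction factor.

dedupe_ratio = 2.8          # logical data : deduplicated data
compression_ratio = 2.4     # deduplicated data : compressed (physical) data

overall = dedupe_ratio * compression_ratio
print(f"Overall data reduction ~ {overall:.1f}:1")   # ~6.7:1 (reported as 6.6:1 after rounding)

logical_tb = 100.0          # hypothetical logical capacity written by the hosts
physical_tb = logical_tb / overall
print(f"{logical_tb:.0f} TB logical -> ~{physical_tb:.1f} TB physical")   # ~14.9 TB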
Figure 4. XtremIO Key Values for Virtualized Environments

XtremIO X2 Overview

XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in several aspects to the already proficient and high-performing former generation of the storage array. Features such as scale-up for a more flexible system, write boost for a more responsive and high-performing storage array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance stats add the extra value and advancements required in the evolving world of computer infrastructure.

The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and storage resources, and can be clustered with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to add up to 72 SSDs in each brick.

The XtremIO architecture is based on a metadata-centric, content-aware system, which helps streamline data operations efficiently without requiring any movement of data post-write for any maintenance reason (data protection, data reduction, etc. - all done inline). Using unique fingerprints of the incoming data, the system lays out the data uniformly across all SSDs in all X-Bricks in the system, and controls access using metadata tables. This contributes to an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth and capacity.

Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication abilities, which highly benefit virtualized environments. Together with its data compression and thin provisioning capabilities (both inline and always-on), it achieves incomparable data reduction rates.

System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.

With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client and does not require complex capacity or data protection planning, as the system handles these on its own.
Architecture

An XtremIO X2 Storage System is comprised of a set of X-Bricks that form a cluster. This is the basic building block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose storage needs are more I/O intensive than capacity intensive, as they use smaller SSDs and less RAM. An effective use of the X2-S is for environments that have high data reduction ratios (a high compression ratio or significant duplicated data), which lowers the capacity footprint of the data significantly. X2-R X-Brick clusters are made for capacity-intensive environments, with larger disks, more RAM and a larger expansion potential in future releases. The two X-Brick types cannot be mixed together in a single system. Therefore, decide which type is suitable for your environment in advance.

Each X-Brick is comprised of:

• Two 1U Storage Controllers (SCs) with:
  o Two dual-socket Haswell CPUs
  o 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
  o Two 1/10GbE iSCSI ports
  o Two user interface interchangeable ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
  o Two 56Gb/s InfiniBand ports
  o One 100/1000/10000 Mb/s management port
  o One 1Gb/s IPMI port
  o Two redundant power supply units (PSUs)
• One 2U Disk Array Enclosure (DAE) containing:
  o Up to 72 SSDs of size 400GB (for X2-S) or 1.92TB (for X2-R)
  o Two redundant SAS interconnect modules
  o Two redundant power supply units (PSUs)

Figure 5. An XtremIO X2 X-Brick (4U total: two 1U Storage Controllers and one 2U DAE)

The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects. An XtremIO X2 storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO X2 array, using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly available, ultra-low latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO X2 system is essentially a single shared-memory space spanning all of its Storage Controllers.
The 1Gb/s management port is configured with an IPv4 address. The XMS, which is the cluster's management software, communicates with the Storage Controllers via this management interface, sending storage management requests such as creating an XtremIO X2 Volume, mapping a Volume to an Initiator Group, etc. The second 1Gb/s port, for IPMI, interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick and never connects to an IPMI port of a Storage Controller in another X-Brick in the cluster.

Multi-dimensional Scaling

With X2, an XtremIO cluster has both scale-out and scale-up capabilities, enabling flexible growth adapted to the customer's unique workload and needs.

Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick to an existing cluster increases its compute power, bandwidth and capacity linearly. Each X-Brick that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster. Adding an X-Brick to scale out an XtremIO cluster suits environments that grow both in capacity and in performance needs, such as in the case of an increase in the number of active users and the data that they hold, or a database that grows in data and complexity.

An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will allow up to 8 supported X-Bricks for X2-R arrays.

Figure 6. Scale-Out Capabilities - Single to Multiple X2 X-Brick Clusters
Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. Adding SSDs to existing DAEs to scale up an XtremIO cluster suits environments that are growing in capacity needs with no need for extra performance. This occurs, for example, when the same number of users has an increasing amount of data to save, or when an environment grows in both capacity and performance needs but has only reached its capacity limits, with room to grow in performance on its current infrastructure.

Each DAE can hold up to 72 SSDs and is divided into up to 2 groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow by increments of 6 SSDs up to a maximum of 36 SSDs. In other words, 18, 24, 30 or 36 are the possible numbers of SSDs per DPG, and up to 2 DPGs can occupy a DAE.

SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB drives, doubling the physical capacity available in their clusters.

Figure 7. Multi-Dimensional Scaling
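The DPG sizing rules above lend themselves to a simple check. The following Python sketch is illustrative only - the function and constant names are our own, not part of any XtremIO tool - and simply encodes the increments described in this section (18 SSDs minimum, growth in steps of 6 up to 36 per DPG, at most 2 DPGs per DAE) while reporting the raw capacity of a proposed DAE population.

# Illustrative sketch of the X2 DAE population rules described above.
# Not an XtremIO utility - just the sizing constraints restated as code.

VALID_DPG_SIZES = (18, 24, 30, 36)   # 18 SSDs minimum, +6 increments, 36 maximum
MAX_DPGS_PER_DAE = 2                 # up to 72 SSDs per DAE in total

SSD_SIZE_TB = {"X2-S": 0.4, "X2-R": 1.92}   # per-drive capacity by cluster type

def validate_dae(dpg_ssd_counts, cluster_type="X2-S"):
    """Check a proposed DAE layout and return its raw capacity in TB."""
    if not 1 <= len(dpg_ssd_counts) <= MAX_DPGS_PER_DAE:
        raise ValueError("A DAE holds one or two Data Protection Groups")
    for count in dpg_ssd_counts:
        if count not in VALID_DPG_SIZES:
            raise ValueError(f"{count} SSDs is not a valid DPG size {VALID_DPG_SIZES}")
    return sum(dpg_ssd_counts) * SSD_SIZE_TB[cluster_type]

# Example: the X2-S bricks in Table 1 start with a single 18-drive DPG ...
print(validate_dae([18], "X2-S"))        # 7.2 TB raw
# ... and could later scale up to two fully populated DPGs (72 drives).
print(validate_dae([36, 36], "X2-S"))    # 28.8 TB raw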
XIOS and the I/O Flow

Each Storage Controller within the XtremIO cluster runs a specially customized, lightweight Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of this Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring, etc.

Figure 8. X-Brick Components

XIOS has a proprietary process-scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency and high-performing storage system. It provides efficient scheduling and data access, full exploitation of CPU resources, optimized inter-sub-process communication and minimized dependency between sub-processes that run on different sockets.

The XtremIO Operating System gathers a variety of metadata tables on incoming data, including the data's fingerprint, its location in the system, mappings and reference counts. The metadata is used as the fundamental insight for performing system operations, such as laying out incoming data uniformly, implementing inline data reduction services and accessing the data on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY and Microsoft ODX) to optimize integration with the storage system.

Regardless of which Storage Controller receives an I/O request from the host, multiple Storage Controllers on multiple X-Bricks cooperate to process the request. The data layout in the XtremIO X2 system ensures that all components share the load and participate evenly in processing I/O operations.

An important functionality of XIOS is its data reduction capability, achieved through inline data deduplication and compression. Data deduplication and data compression complement each other: data deduplication removes redundancies, whereas data compression compresses the already deduplicated data before writing it to the flash media. XtremIO is an always-on, thin-provisioned storage system, further realizing storage savings by never writing a block of zeros to the disks.

XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service hosts' I/O requests.
XtremIO Write I/O Flow

In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps the host's Logical Block Addresses (LBAs) to the blocks' fingerprints, and the blocks' fingerprints to their physical locations in the array (the DAE, SSD and offset at which the block is located).

The fingerprint of a block has two objectives: (1) to determine whether the block is a duplicate of a block that already exists in the array, and (2) to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among the Storage Controllers in the array and gives each Storage Controller a range of fingerprints to manage. The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values. As a result, fingerprints and blocks are evenly spread across all Storage Controllers in the cluster.

A write operation works as follows:

1. A new write request reaches the cluster.
2. The new write is broken into data blocks.
3. For each data block:
   1. A fingerprint is calculated for the block.
   2. An LBA-to-fingerprint mapping is created for this write request.
   3. The fingerprint is checked to see if it already exists in the array.
      • If it exists: the reference count for this fingerprint is incremented by one.
      • If it does not exist:
        1. A location is chosen on the array where the block will be written (distributed uniformly across the array according to the fingerprint value).
        2. A fingerprint-to-physical location mapping is created.
        3. The data is compressed.
        4. The data is written.
        5. The reference count for the fingerprint is set to one.
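To make the flow above concrete, here is a minimal, self-contained Python sketch of a content-addressed write path. It is purely illustrative - the hash function, table names and owner-selection rule are our own assumptions, not XtremIO internals - but it captures the three bookkeeping steps the text describes: LBA-to-fingerprint mapping, fingerprint-to-physical mapping, and reference counting with deduplication.

import hashlib
import zlib

# Illustrative metadata tables (in XtremIO these are distributed, in-memory structures).
lba_to_fp = {}        # host LBA          -> block fingerprint
fp_to_physical = {}   # block fingerprint -> (owner controller, compressed payload)
fp_refcount = {}      # block fingerprint -> number of LBAs referencing the block

CONTROLLERS = ["X1-SC1", "X1-SC2", "X2-SC1", "X2-SC2"]

def fingerprint(block: bytes) -> str:
    """Content fingerprint of a data block (SHA-1 here; the real hash is proprietary)."""
    return hashlib.sha1(block).hexdigest()

def owner_of(fp: str) -> str:
    """Map a fingerprint range to a Storage Controller for uniform distribution."""
    return CONTROLLERS[int(fp, 16) % len(CONTROLLERS)]

def write_block(lba: int, block: bytes) -> None:
    fp = fingerprint(block)
    lba_to_fp[lba] = fp                      # step 2: LBA-to-fingerprint mapping
    if fp in fp_refcount:                    # step 3: duplicate - metadata update only
        fp_refcount[fp] += 1
        return
    fp_to_physical[fp] = (owner_of(fp), zlib.compress(block))   # choose location, compress, write
    fp_refcount[fp] = 1

# Two identical 8 KB blocks written to different LBAs: only one physical copy is stored.
payload = b"A" * 8192
write_block(0, payload)
write_block(1, payload)
print(len(fp_to_physical), fp_refcount)      # -> 1 physical block, reference count == 2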
Deduplicated writes are much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and updates the reference count for this fingerprint. No data is additionally written to the array and the operation completes quickly, adding an extra benefit of inline deduplication. Figure 9 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.

Figure 9. Incoming Data Stream Example with Duplicate Blocks

As mentioned, fingerprints also help to decide where to write the block in the array. Figure 10 shows the incoming stream, after duplicates were removed, as it is being written to the array. The blocks are divided among their appointed Storage Controllers according to their fingerprint values, ensuring a uniform distribution of the data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand network.

Figure 10. Incoming Deduplicated Data Stream Written to the Storage Controllers

The actual write of the data blocks to the SSDs is asynchronous. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic and preserves the data in case of system failure (power-related or any other). When enough blocks are collected in the buffer (to fill a full stripe), the system writes them to the SSDs on the DAE. Figure 11 demonstrates the phase of writing the data to the DAEs after a full stripe of data blocks has been collected in each Storage Controller.
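The acknowledge-then-destage behavior described above can be sketched as a small buffering loop. The sketch below is a simplification under our own assumptions (a fixed stripe width, journals modeled as plain lists); it only illustrates the ordering the text calls out: journal to local and remote NVRAM, acknowledge the host, and destage to SSD only once a full stripe has accumulated.

# Simplified model of the buffered, journaled write path described above.
# Assumptions: stripe width is fixed; NVRAM journals are modeled as plain lists.

STRIPE_WIDTH = 4          # blocks per full stripe (illustrative value only)

write_buffer = []         # in-memory buffer of blocks awaiting destage
local_nvram = []          # journal copy on the local Storage Controller
remote_nvram = []         # journal copy replicated to a peer Storage Controller
ssd_stripes = []          # full stripes destaged to the DAE

def acknowledged_write(block: bytes) -> str:
    """Journal the block, acknowledge the host, and destage when a stripe is full."""
    local_nvram.append(block)            # 1. protect in local NVRAM
    remote_nvram.append(block)           # 2. replicate the journal entry to remote NVRAM
    write_buffer.append(block)           # 3. stage in the in-memory write buffer
    ack = "ack-to-host"                  # 4. host sees a low-latency completion here
    if len(write_buffer) >= STRIPE_WIDTH:
        ssd_stripes.append(tuple(write_buffer))   # 5. write a full stripe to the SSDs
        write_buffer.clear()
        local_nvram.clear()              # journal entries are no longer needed
        remote_nvram.clear()
    return ack

for i in range(6):
    acknowledged_write(f"block-{i}".encode())

print(len(ssd_stripes), len(write_buffer))   # -> 1 destaged stripe, 2 blocks still buffered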
Figure 11. Full Stripe of Blocks Written to the DAEs

XtremIO Read I/O Flow

In a read operation, the system first looks up the logical address in the LBA-to-fingerprint mapping. The resulting fingerprint is then located in the fingerprint-to-physical mapping, and the data is retrieved from the correct physical location. As with write operations, the read load is evenly shared across the cluster: blocks are evenly distributed, and all Volumes are accessible through all X-Bricks. If the requested I/O size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into larger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.

XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint; blocks whose contents are more likely to be read are placed in the read cache for faster retrieval.

A read operation works as follows:
1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs of all data blocks, and a buffer is created to hold the data.
3. For each LBA:
   1. The LBA-to-fingerprint mapping is checked to find the fingerprint of the data block to be read.
   2. The fingerprint-to-physical location mapping is checked to find the physical location of the data block.
   3. The data block is read from its physical location (read cache or SSD) and transmitted, via RDMA over InfiniBand, to the buffer created in step 2 on the Storage Controller that processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the host.
  • 17. 17 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. System Features The XtremIO X2 Storage Array provides and offers a wide range of built-in features that require no special license. The architecture and implementation of these features are unique to XtremIO and are designed around the capabilities and limitations of flash media. We will list some key features included in the system. Inline Data Reduction XtremIO's unique Inline Data Reduction is achieved by these two mechanisms: Inline Data Deduplication and Inline Data Compression Data Deduplication Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash media. XtremIO inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is at a global level, meaning no duplicate blocks are written over the entire array. Being an inline and global process, no resource-consuming background processes or additional reads and writes (which are mainly associated with post-processing deduplication) are necessary for the feature's activity, which increases SSD endurance and eliminates performance degradation. As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow on page 14). The fingerprints are also used for uniform distribution of data blocks across the array. This provides inherent load balancing for performance and enhances flash wear-level efficiency, since the data never needs to be rewritten or rebalanced. XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture enables achieving a substantially larger cache size with a small DRAM allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as "boot storms" that are common in VSI environments. XtremIO has excellent data deduplication ratios, especially for virtualized environments. SSD usage is smarter, flash longevity is maximized, the logical storage capacity is multiplied and total cost of ownership is reduced. Figure 12 shows the CPU utilization of our Storage Controllers during VMware production workload. When new blocks are written to the system, the hash calculation is distributed across all Storage Controllers. We can see here the excellent synergy across our X2 cluster, when all our Active-Active Storage Controllers' CPUs share the load and effort, as the CPU utilization between all is virtually equal for the entire workload. XtremIO X2 CPU UtilizationFigure 12.
Data Compression

Inline data compression is the compression of data before it is written to the flash media. XtremIO automatically compresses data after all duplicates are removed, ensuring that compression is performed only on unique data blocks. Compression is performed in real time, not as a post-processing operation, so it does not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.

Data compression complements data deduplication in many cases, saving storage capacity by storing only unique data blocks in the most efficient manner. Because compression is always inline and never performed as a post-processing activity, XtremIO writes the data only once, which increases the overall endurance of the flash array's SSDs. In a VSI environment, deduplication dramatically reduces the capacity required for the virtual servers, while compression further reduces the unique user data. As a result, a single X-Brick can host a larger number of virtual servers, and less physical capacity is required to store the data, increasing the storage array's efficiency and dramatically reducing the $/GB cost of storage, even when compared to hybrid storage systems. The benefits and capacity savings of the deduplication-compression combination are demonstrated in Figure 13.

Figure 13. Data Deduplication and Data Compression Demonstrated

In this example, the twelve data blocks written by the host are first deduplicated to four data blocks, a 3:1 data deduplication ratio. Each of the four remaining blocks is then compressed at a 2:1 ratio, resulting in a total data reduction ratio of 6:1; only this deduplicated, compressed data is written to the flash media.

Thin Provisioning

XtremIO storage is natively thin provisioned, using a small internal block size. All Volumes in the system are thin provisioned, meaning the system consumes capacity only as needed; no storage space is ever pre-allocated before writing. XtremIO's content-aware architecture permits blocks to be stored at any location in the system (metadata is used to refer to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creeping or garbage collection is necessary on XtremIO, Volume fragmentation does not occur in the array, and no defragmentation utilities are needed. This enables consistent performance and data management across the entire life cycle of a Volume, regardless of the system's capacity utilization or the clients' write patterns.
  • 19. 19 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. This characteristic allows manual and frequent automatic reclaiming of unused space directly from VMFS datastores and virtual machines that has the following benefits: • The allocated disks can be used optimally and the actual space reports are more accurate. • More efficient snapshots (called XVCs - XtremIO Virtual Copies). Blocks that are no longer needed are not protected by additional snapshots. Integrated Copy Data Management XtremIO pioneered the concept of integrated Copy Data Management (iCDM) – the ability to consolidate both primary data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency. XtremIO is one of a kind in its capabilities to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supporting on-demand copy operations at scale while maintaining delivery of all performance SLAs in a consistent and predictable way. Consolidation of primary data and its copies in the same array has numerous benefits: • It can make development and testing activities up to 50% faster, creating copies of production code quickly for development and testing purposes, then refreshing the output back into production for the full cycle of code upgrades in the same array. This dramatically reduces complexity and infrastructure needs, as well as development risks, and increases the quality of the product. • Production data can be extracted and pushed to all downstream analytics applications on-demand as a simple in- memory operation. Copies of the data are high performance and can get the same SLA as production copies without compromising production SLAs. XtremIO offers this on-demand as both self-service and automated workflows for both application and infrastructure teams. • Operations such as patches, upgrades and tuning tests can be quickly performed using copies of production data. Diagnosing problems of applications and databases can be done using these copies, and applying the changes back to production can be done by refreshing copies back. The same goes for testing new technologies and combining them in production environments. • iCDM can also be used for data protection purposes, as it enables creating many copies at low point-in-time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection, using different SLAs. XtremIO Virtual Copies XtremIO uses its own implementation of snapshots for all iCDM purposes, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in Volumes at a particular point in time and allowing users to access that data when needed, no matter the state of the source Volume (even deletion). They allow any access type. XVCs can be taken either from a source Volume or from another Virtual Copy. XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, optimized for SSDs, with a unique metadata tree structure that directs I/O to the right timestamp of the data. This allows efficient copy creation that can sustain high performance, while maximizing the media endurance.
  • 20. 20 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. A Metadata Tree Structure Example of XVCsFigure 14. When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. This operation does not have any impact on the system and does not consume any capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copies capacity consumption occurs only when changes are made to any copy of the data. Then, the system updates the metadata of the changed Volume to reflect the new write, and stores its blocks in the system using the standard write flow process. The system supports the creation of Virtual Copies on a single, as well as on a set, of Volumes. All Virtual Copies of the Volumes in the set are cross-consistent and contain the exact same point in time for them all. This can be done manually by selecting a set of Volumes for copying, or by placing Volumes in a Consistency Group and making copies of that Consistency Group. Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. If a block is referenced from some copy of the data, it will not be deleted. Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters the system. With XVCs, XtremIO's iCDM offers the following tools and workflows to provide the consolidation capabilities: • Consistency Groups (CG) – Grouping of Volumes to allow Virtual Copies to be taken on a group of Volumes as a single entity. • Snapshot Sets – A group of Virtual Copies of Volumes taken together using CGs or a group of manually chosen Volumes. • Protection Copies – Immutable read-only copies created for data protection and recovery purposes. • Protection Scheduler – Used for local protection of a Volume or a CG. It can be defined using intervals of seconds/minutes/hours or can be set using a specific time of day or week. It has a retention policy based on the number of copies wanted or the permitted age of the oldest XVC. • Restore from Protection – Restore a production Volume or CG from one of its descendant Snapshot Sets. • Repurposing Copies – Virtual Copies configured with changing access types (read-write / read-only / no-access) for alternating purposes. • Refresh a Repurposing Copy – Refresh a Virtual Copy of a Volume or a CG from the parent object or other related copies with relevant updated data. It does not require Volume provisioning changes for the refresh to take effect, but only host-side logical Volume management operations to discover the changes.
XtremIO Data Protection

XtremIO Data Protection (XDP) provides "self-healing", double-parity data protection with very high efficiency. It requires very little capacity overhead and metadata space, and it does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, in which any free space available in the array can be used to reconstruct a failed drive. The system always reserves sufficient distributed capacity to perform at least a single drive rebuild. In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough space for it, or once one of the failed SSDs is replaced.

The XDP algorithm provides:
• N+2 drive protection
• Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group)
• Writes that are 60% more efficient than RAID 1
• Flash endurance superior to any traditional RAID algorithm, due to the smaller number of writes and the even distribution of data
• Automatic rebuilds that are faster than traditional RAID algorithms

As shown in Figure 15, XDP uses a variation of N+2 row and diagonal parity that protects against two simultaneous SSD failures. An X-Brick DAE may contain up to 72 SSDs, organized in two Data Protection Groups (DPGs). XDP is managed independently at the DPG level; a DPG of 36 SSDs results in a capacity overhead of only 5.5% for its data protection needs.

Figure 15. N+2 Row and Diagonal Parity

Data at Rest Encryption

Data at Rest Encryption (DARE) secures critical data even when the media is removed from the array, for customers who require such security. XtremIO arrays use a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return or replace failed components that contain sensitive data. DARE is a mandatory requirement in several industries, such as health care, banking and government institutions.

At the heart of XtremIO's DARE solution is Self-Encrypting Drive (SED) technology. An SED has dedicated hardware that encrypts and decrypts data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the array. All of XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on both encrypted and non-encrypted clusters, and performance is not impacted when using encryption.
  • 22. 22 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. A unique Data Encryption Key (DEK) is created during the drive manufacturing process and does not leave the drive at any time. The DEK can be erased or changed, rendering its current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data. Data at Rest Encryption in XtremIOFigure 16. Write Boost In the new X2 storage array, the write flow algorithm was improved significantly to improve array performance, countering the rise in compute power and disk speeds and taking into account common applications' I/O patterns and block sizes. As mentioned when discussing the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to a local and remote NVRAMs for protection, and are written to the disk only later, at a time that best optimizes the system's activity. In addition to the shortened procedure from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and influencing latency, especially on bigger I/O blocks. Examining customers' applications and I/O patterns, the algorithm finds that many I/Os from common applications come in small blocks, under 16K pages, creating high loads on the storage array. Figure 17 shows the block size histogram from the entire XtremIO install base. The percentage of blocks smaller than 16KB is highly evident. The new algorithm takes care of this issue by aggregating small writes to bigger blocks in the array before writing them to disk, making them less demanding on the system, which is now more capable of taking care of bigger I/Os faster. The test results for the improved algorithm were amazing: the improvement in latency for several cases is around 400% and allows XtremIO X2 to address application requirements with 0.5msec or lower latency.
  • 23. 23 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO Install Base Block Size HistogramFigure 17. VMware APIs for Array Integration (VAAI) VAAI was first introduced as VMware's improvements to host-based VM cloning. It offloads the workload of cloning a VM to the storage array, making cloning much more efficient. Instead of copying all blocks of a VM from the array and back to it for the creation of a new cloned VM, the application lets the array do it internally. This utilizes the array's features and saving host and network resources that are no longer involved in the actual cloning of data. This procedure of offloading the operation to the storage array is backed by the X-copy (extended copy) command to the array, which is used when cloning large amounts of complex data. XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated storage vMotion, VM provisioning and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy efficiency even further by making the whole operation metadata driven. Due to its inline data reduction features and in- memory metadata, no actual data blocks are copied during an X-copy command and the system only creates new pointers to the existing data. This is all done inside the Storage Controllers' memory. Therefore, the operation saves host and network resources and does not consume storage resources, leaving no impact on the system's performance, as opposed to other implementations of VAAI and the X-copy command.
Figure 18 illustrates the X-copy operation performed against an XtremIO storage array and shows the efficiency of metadata-based cloning.

Figure 18. VAAI X-Copy with XtremIO

The XtremIO features for VAAI support include:
• Zero Blocks / Write Same – Used for zeroing out disk regions; provides accelerated Volume formatting.
• Clone Blocks / Full Copy / X-Copy – Used for copying or migrating data within the same physical array; an almost instantaneous operation on XtremIO due to its metadata-driven implementation.
• Record Based Locking / Atomic Test & Set (ATS) – Used during the creation and locking of files on VMFS Volumes, such as when powering VMs up and down.
• Block Delete / Unmap / Trim – Used for reclamation of unused space using the SCSI unmap feature.
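Whether these primitives are actually in effect for a given XtremIO device can be verified directly from an ESXi host; for example (the naa identifier below is a placeholder for a real XtremIO LUN):

```bash
# Per-device VAAI support as seen by the ESXi host (placeholder device ID);
# the output reports the ATS, Clone (X-Copy), Zero (Write Same) and Delete (Unmap) status.
esxcli storage core device vaai status get -d naa.514f0c5xxxxxxxxx

# Omit -d to list the VAAI status of every attached device.
esxcli storage core device vaai status get
```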
Figure 19 shows the exceptional performance during multiple VMware cloning operations. X2 handles storage bandwidths as high as ~160GB/s with over 220k IOPS (read+write), resulting in quick and efficient production delivery.

Figure 19. Multiple VMware Cloning Operations (X-Copy) from the XtremIO X2 Perspective

Other features of XtremIO X2 (some of which are described in the next sections):
• Even Data Distribution (uniformity)
• High Availability (no single point of failure)
• Non-disruptive Upgrade and Expansion
• RecoverPoint Integration (for replication to local or remote arrays)
• XtremIO Management Server

XtremIO Management Server

The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It is preinstalled with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware virtual machine. The XMS manages the cluster through the management ports on both Storage Controllers of the first X-Brick in the cluster, and uses a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path and can therefore be disconnected from an XtremIO cluster without jeopardizing normal I/O tasks. A failure of the XMS affects only monitoring and configuration activities, such as creating and attaching Volumes. A virtual XMS is naturally less vulnerable to such failures.

The GUI is based on a new Web User Interface (WebUI), which is accessible via any browser and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the useful features of the new WebUI are described in the following sections.

Dashboard

The Dashboard window presents a main overview of the cluster. It has three panels:
• Health – the main overview of the system's health status, alerts, etc.
• Performance (shown in Figure 20) – the main overview of the system's overall performance and top used Volumes and Initiator Groups
• Capacity (shown in Figure 21) – the main overview of the system's physical capacity and data savings
  • 26. 26 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO WebUI – Dashboard – Performance PanelFigure 20. XtremIO WebUI – Dashboard – Capacity PanelFigure 21. The main Navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options pertaining to XtremIO's management actions. The main menus contain the Dashboard, Notifications, Configuration, Reports, Hardware and Inventory.
  • 27. 27 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Notifications In the Notifications menu, we can navigate to the Events window (shown in Figure 22) and the Alerts window, showing major and minor issues related to the cluster's health and operations. XtremIO WebUI – Notifications – Events WindowFigure 22.
  • 28. 28 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Configuration The Configuration window displays the cluster's logical components: Volumes (shown in Figure 23), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. Through this window, we can create and modify these entities, using the action panel on the top right side. XtremIO WebUI – ConfigurationFigure 23.
  • 29. 29 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Reports In the Reports menu, we can navigate to different windows to show graphs and data of different aspects of the system's activities, mainly related to the system's performance and resource utilization. The menu options we can choose to view include Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage or User-defined reports. We can view reports using different resolutions of time and components: selecting specific entities we want to view reports on in the "Select Entity" option (shown in Figure 24) that appears above when in the Reports menus, or selecting predefined and custom days and times to review reports for (shown in Figure 25). XtremIO WebUI – Reports – Selecting Specific Entities to ViewFigure 24. XtremIO WebUI – Reports – Selecting Specific Times to ViewFigure 25.
  • 30. 30 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage capacity information. The Performance window shows extensive performance reports that mainly include Bandwidth, IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the system. The Latency window (shown in Figure 26) shows Latency reports, including latency as a function of block sizes and IOPS metrics. The CPU Utilization window shows CPU utilization of all Storage Controllers in the system. XtremIO WebUI – Reports – Latency WindowFigure 26. The Capacity window (shown in Figure 27) shows capacity statistics and the change in storage capacity over time. The Savings window shows Data Reduction statistics and change over time. The Endurance window shows SSD's endurance status and statistics. The SSD Balance window shows how much the SSDs are balanced with data and the variance between them all. The Usage window shows Bandwidth and IOPS usage, both overall and divided to reads and writes. The User-defined window allows users to define their own reports to view.
  • 31. 31 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO WebUI – Reports – Capacity WindowFigure 27. Hardware In the Hardware menu, we can overview our cluster and X-Bricks with visual illustrations. When viewing the FRONT panel, we can choose and highlight any component of the X-Brick and view information about it in the Information panel on the right. In Figure 28 we can see extended information on Storage Controller 1 in X-Brick 1, but we can view information on more granular specifics such as local disks and Status LEDs. We can further click on the "OPEN DAE" button to see visual illustration of the X-Brick's DAE and its SSDs, and view additional information on each SSD and Row Controller. XtremIO WebUI – Hardware – Front PanelFigure 28. In the BACK panel, we can view an illustration of the back of the X-Brick and see every physical connection to the X-Brick and inside of it, including FC connections, Power, iSCSI, SAS, Management, IPMI and InfiniBand, filtered by the "Show Connections" list at the top right. An example of this view is seen in Figure 29.
  • 32. 32 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO WebUI – Hardware – Back Panel – Show ConnectionsFigure 29. Inventory In the Inventory menu, we can see all components of our environment with information about them, including: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, Infiniband Switches and NVRAMs. As mentioned earlier, other interfaces to monitor and manage an XtremIO cluster through the XMS server are available. The system's Command Line Interface (CLI) provides all the functionality of the GUI, as well as additional functionality. A RESTful API is another pre-installed interface in the system that allows HTTP-based commands to manage clusters. A PowerShell API Module is also an option to use Windows' PowerShell console to administer XtremIO clusters. XtremIO X2 Space Management and Reclamation in vSphere Environments VMFS file systems are managed by the ESXi hosts. Because of this, block storage arrays have no visibility inside a VMFS Volume so when any data is deleted by vSphere the array is unaware of it and it remains allocated on the array. In XtremIO storage array, all LUNs are thin provisioned and that space could be immediately allocated to another device/application or just returned to the pool of available storage. Space consumed by files that have been deleted or moved is referred to as "dead space". Reclaiming the dead space from an XtremIO X2 storage array frequently has the following benefits: • The allocated disks can be used optimally and the actual space reports are more accurate. • More space is available for use of the virtual environment. • More efficient replication when using RecoverPoint since it will not replicate blocks that are no longer needed. The feature that can be used to reclaim space is called Space Reclamation, which uses the SCSI command called unmap. Unmap can be issued to underlying thin-provisioned devices to inform the array that certain blocks are no longer needed by the host and can be "reclaimed". The array can then return those blocks to the pool of free storage. The VMFS 6 datastore can send the space reclamation command automatically. With the VMFS5 datastore, Space reclaim can be done manually via an esxcli command or via the VSI plugin, which will be detailed later in this document. Storage space inside the VMFS datastore can be freed by deleting or migrating a VM, consolidating an XVC and so on. Inside the virtual machine, storage space is freed when files are deleted on a thin virtual disk. These operations leave blocks of unused space on the storage array. However, when the array is not aware that the data was deleted from the blocks, the blocks remain allocated by the array until the datastore releases them. VMFS uses the SCSI unmap command to indicate to the array that the storage blocks contain deleted data, so that the array can deallocate these blocks.
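As a simple illustration of the RESTful API mentioned above, a read-only query against the XMS can return the same capacity and data-reduction counters that the WebUI displays. The sketch below is hedged: the /api/json/v3/... path and the property names are assumptions that should be verified against the XtremIO RESTful API guide for your XMS version, and the XMS address is a placeholder.

```bash
# Hypothetical read-only query of cluster counters via the XMS RESTful API (XMS 6.x).
# Endpoint path and property names are assumptions -- check the RESTful API guide.
XMS=xms.example.local                                    # placeholder XMS address
curl -sk -u admin "https://${XMS}/api/json/v3/types/clusters/1" \
  | python -m json.tool \
  | grep -E '"(name|data-reduction-ratio|logical-space-in-use|ud-ssd-space-in-use)"'
```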
  • 33. 33 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Unmap ProcessFigure 30. Dead space can be reclaimed using one of the following options: • Space Reclamation Requests from VMFS Datastores - Deleting or removing files from a VMFS datastore frees space within the file system. This free space is mapped to a storage device until the file system releases or unmaps it. ESXi supports reclamation of free space, which is also called the unmap operation. • Space Reclamation Requests from Guest Operating Systems - ESXi supports the unmap commands issued directly from a guest operating system to reclaim storage space. The level of support and requirements depend on the type of datastore where your virtual machine resides. VMFS Datastores Reclamation Asynchronous Reclamation of Free Space on VMFS 6 Datastore On VMFS 6 datastores, ESXi supports the automatic asynchronous reclamation of free space. VMFS 6 can run the unmap command to release free storage space in the background on thin-provisioned storage arrays that support unmap operations. Asynchronous unmap processing has several advantages: • Unmap requests are sent at a constant rate, which helps to avoid any instant load on the backing array. • Freed regions are batched and unmapped together. • Unmap processing and truncate I/O paths are disconnected, so I/O performance is not impacted. Space Reclamation Granularity Granularity defines the minimum size of a released space sector that an underlying storage can reclaim. Storage cannot reclaim sectors that are smaller in size than the specified granularity. For VMFS 6, reclamation granularity equals to the block size. When you specify the block size as 1 MB, the granularity is also 1 MB. Storage sectors smaller than 1 MB are not reclaimed. Automatic unmap is an asynchronous task and reclamation will not occur immediately and will typically take 12 to 24 hours to complete. Each ESXi 6.5 host has an unmap "crawler" that will work in tandem to reclaim space on all VMFS 6 Volumes they have access to.
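The automatic reclamation settings described above can be inspected and tuned per VMFS 6 datastore from the ESXi host; for example (the datastore name is a placeholder):

```bash
# Show the automatic unmap configuration (granularity and priority) of a VMFS 6 datastore
esxcli storage vmfs reclaim config get --volume-label=Datastore01

# Change the reclamation priority ("none" disables automatic unmap; "low" is the default)
esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low
```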
Figure 31. Space Reclamation Priority

Manual Reclamation of Free Space on VMFS5 Datastores

VMFS5 and earlier file systems do not unmap free space automatically. We recommend using the esxcli storage vmfs unmap command to reclaim space manually, with the parameter --reclaim-unit=20000, which indicates the number of VMFS blocks to unmap per iteration (a minimal example appears below).

Figure 32. Esxcli Command for Manual Space Reclamation

Using the space reclamation feature in VSI, you can reclaim unused storage on datastores, hosts, clusters, folders and storage folders on XtremIO storage arrays. It also allows you to schedule space reclamation on a daily basis, or to run it once, for a specific datastore or for all datastores under the same datastore cluster.

Figure 33. Setting Space Reclamation Scheduler via VSI Plugin

Figure 34 shows the logical space in use before and after space reclamation.
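A minimal invocation of the manual reclamation described above (the datastore name is a placeholder; run it per datastore, preferably outside peak hours):

```bash
# Manually reclaim dead space on a VMFS5 datastore, 20,000 VMFS blocks per iteration
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=20000
```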
  • 35. 35 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Logical Space in Use Before and After Space ReclamationFigure 34. In-Guest Space Reclamation for Virtual Machines Space Reclamation for VMFS 6 Virtual Machines Inside a virtual machine, storage space is freed when, for example, you delete files on a thin virtual disk. The guest operating system notifies VMFS about freed space by sending the unmap command. The unmap command sent from the guest operating system releases space within the VMFS datastore. The command then proceeds to the array, so that the array can reclaim the freed blocks of space. VMFS 6 generally supports automatic space reclamation requests that are generated from the guest operating systems, and passes these requests to the array. Many guest operating systems can send the unmap command and do not require any additional configuration. Guest operating systems that do not support automatic unmaps might require user intervention Generally, guest operating systems send the unmap commands based on the unmap granularity they advertise. VMFS 6 processes unmap requests from the guest OS only when the space to reclaim equals 1 MB or is a multiple of 1 MB. If the space is less than 1 MB or is not aligned to 1 MB, the unmap requests are not processed. Space Reclamation for VMFS5 Virtual Machines Typically, the unmap command generated from the guest operation system on VMFS5 cannot be passed directly to the array. You must run the esxcli storage vmfs unmap command to trigger unmaps on the array. However, for a limited number of guest operating systems, VMFS5 supports the automatic space reclamation requests. Space Reclamation prerequisites To send the unmap requests from the guest operating system to the array, the virtual machine must meet the following prerequisites: • The virtual disk must be thin-provisioned. • Virtual machine hardware must be of version 11 (ESXi 6.0) or later. • The advanced setting EnableBlockDelete must be set to 1. • The guest operating system must be able to identify the virtual disk as thin. ESXi 6.5 expands support for in-guest unmap to additional guest types; ESXi 6.0 in-guest unmap is supported only for Windows Server 2012 R2 and later. ESXi 6.5 introduces support for Linux operating systems. The underlying reason for this is that ESXi 6.0 and earlier only supported SCSI version 2. Windows uses SCSI-2 unmap and therefore could take advantage of this feature set. Linux uses SCSI version 5 and could not. In ESXi 6.5, VMware enhanced their SCSI support to go up to SCSI-6, which allows Linux-based guests to issue commands that they could not issue before.
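Of the prerequisites listed above, EnableBlockDelete is the only host-level setting; it can be checked and enabled with esxcli as follows:

```bash
# Display the current value of the EnableBlockDelete advanced setting
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete

# Set it to 1 so that guest-OS unmap requests are passed through to the array
esxcli system settings advanced set -o /VMFS3/EnableBlockDelete -i 1
```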
  • 36. 36 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. In-Guest Unmap Alignment Requirements VMware ESXi requires that any unmap request sent down by a guest must be aligned to 1 MB. For a variety of reasons, not all unmap requests will be aligned as such and in ESXi 6.5 and earlier a large percentage fails. In ESXi 6.5 P1, ESXi has been altered to be more tolerant of misaligned unmap requests. See the VMware patch information here: https://kb.vmware.com/kb/2148989 Prior to this, any unmap request that was even partially misaligned would fail entirely leading to no reclamation. In ESXi 6.5 PI, any portion of unmap requests that are aligned will be accepted and passed along to the underlying array. Misaligned portions will be accepted but not passed down. Instead, the affected blocks to which the misaligned unmaps refer will be zeroed out with WRITE SAME. The benefit of this behavior on the XtremIO X2 is that zeroing is identical in behavior to unmap so all of the space is reclaimed regardless of any misalignment. In-Guest Unmap in Windows OS Starting with ESXi 6.0, In-Guest unmap is supported with Windows 2012 R2 and later Windows-based operating systems. For a full report of unmap support with Windows, refer to Microsoft documentation. NTFS supports automatic unmap by default—this means (assuming the underlying storage supports it) Windows will issue unmap to the blocks a file consumed once the file has been deleted or moved. Windows also supports manual unmap, which can be run on-demand or per a schedule. This is performed using the Disk Optimizer tool. Thin virtual disks can be identified in the tool as Volume media types of "thin provisioned drive”. These are the Volumes that support unmap. Manual Space Reclamation using Optimize Drives Utility Inside a Windows Virtual MachineFigure 35. In- Guest Unmap in Linux OS Starting with ESXi 6.5, In-Guest unmap is supported with Linux-based operating systems and most common file systems. To enable this behavior, it is necessary to use Virtual Machine Hardware Version 13 or later. Linux supports both automatic and manual methods of unmap. Linux file systems do not support automatic unmap by default—this behavior needs to be enabled during the mount operation of the file system. This is achieved by mounting the file system with the "discard" option. Mounting Drive Using the Discard OptionFigure 36. When mounted with the discard option, Linux will issue unmap to the blocks a file consumed once the file has been deleted or moved.
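As a concrete Linux example of the behavior just described (the device and mount point are placeholders; the virtual disk must be thin and the VM at hardware version 13 or later for the unmaps to reach the array):

```bash
# Automatic in-guest unmap: mount the file system with the discard option
mount -o discard /dev/sdb1 /mnt/data

# Manual / scheduled alternative (e.g. from cron): trim free space on demand
fstrim -v /mnt/data
```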
With vSphere 6.5, SPC-4 is fully supported, so you can run space reclamation inside the Linux OS either manually from the CLI or via a cron job. To check that the Linux OS does indeed support space reclamation, run the sg_vpd command as shown in Figure 37 and look for the LBPU:1 output. Running the sg_inq command shows whether SPC-4 is enabled at the Linux OS level.

Figure 37. Running sg_vpd and sg_inq Commands to Verify Support for Space Reclamation

Figure 38 shows the I/O pattern during an in-guest unmap process. The unmap commands appear to be sent from ESXi in 100 MB chunks until the space reclamation process completes.

Figure 38. In-Guest Space Reclamation Pattern from the XtremIO Perspective
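The verification described above can be run as follows inside the guest (sg3_utils package; /dev/sdb is a placeholder device):

```bash
# LBPU: 1 in the logical block provisioning VPD page means the device accepts UNMAP
sg_vpd --page=lbpv /dev/sdb | grep -i lbpu

# The standard inquiry shows the SCSI standard level presented to the guest (SPC-4 is required)
sg_inq /dev/sdb | grep -i version
```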
  • 38. 38 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. EMC VSI for VMware vSphere Web Client Integration with XtremIO X2 EMC Solutions Integration Service 7.2 (EMC SIS) provides us with unique storage integration capabilities between VMware vSphere 6.5 and EMC XtremIO X2 (XMS 6.0.0 and above). The EMC VSI (Virtual Storage Integrator) 7.2 plugin for VMware vSphere web client can be registered via EMC SIS. The plugin enables VMware administrators to view, manage and optimize EMC storage for their ESX/ESXi servers. It consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication and access to XtremIO array(s). The VSI plugin allows the users to interact with their XtremIO array directly from the vCenter web client. This provides VMware administrators with the capabilities to monitor, manage and optimize their XtremIO hosted storage from a single GUI. For example, a user can provision VMFS datastores and RDM Volumes, create full clones using XtremIO Virtual Copy technology, view on-array used logical capacity of datastores and RDM Volumes, extend datastore capacity, and do bulk provisioning of datastores and RDM Volumes. Incorporating the VSI plugin into an existing vSphere infrastructure involves deploying a free to use, pre-packaged OVA, and then registering the connection from the VSI Solution Integration Service (SIS) to both the vCenter Server and the XtremIO cluster. Installation requires a minimum of 2.7GB, if thin provisioned, and maximum of 80GB storage capacity, if thick provisioned. VSI Plugin OVF DeploymentFigure 39.
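Deployment of the appliance can also be scripted with VMware's ovftool instead of the vSphere Web Client wizard; the sketch below assumes that approach, and every path, name and address in it is a placeholder:

```bash
# Scripted deployment of the VSI appliance OVA with ovftool (all values are placeholders)
ovftool --acceptAllEulas --diskMode=thin \
  --name=VSI-SIS --datastore=Datastore01 --network="VM Network" \
  ./VSI-7.2.ova \
  'vi://administrator%40vsphere.local@vcenter.example.local/DC1/host/MgmtCluster/'
```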
  • 39. 39 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. After the VSI virtual application is powered on and the SIS becomes available, the vCenter server should be first registered with the VSI plugin. Following this action, the SIS instance can then be registered within the vCenter server via the web client. Registering VSI Solutions Integration Service Within the vCenter Server Web ClientFigure 40. From the vCenter Inventory listing within the web client, we can register XtremIO X2 system with the vCenter Server by specifying the XMS details. Registering XtremIO Storage System Within the vCenter Server Web ClientFigure 41.
  • 40. 40 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Setting Best Practices Host Parameters for XtremIO X2 Storage Array The VSI plugin can be used for modifying ESXi host/cluster storage-related settings, setting multipath management and policies and for invoking space reclamation operations from an ESX server or from a cluster. The VSI plugin is the best way to enforce the following XtremIO-recommended best practices for ESX servers: • Enable VAAI. • Set Queue depth on FC HBA to 256. • Set multi-pathing policy to "round robin" on each of the XtremIO SCSI Disks. • Set I/O path switching parameter to 1. • Set outstanding number of I/O request limit to 256. • Set the "SchedQuantum" parameter to 64. • Set the maximum limit on disk I/O size to 4096. Configuring XtremIO X2 Recommended Settings using the VSI PluginFigure 42. Provisioning VMFS Datastores New VMFS datastores can be created using the VSI plugin, and backed-up by XtremIO Volumes at the click of a button. The VSI plugin interacts with EMC XtremIO to create Volumes of the required size, map them to the appropriate Initiator Groups and create a VMFS datastore on vSphere, ready for use. When VMFS datastores start to run out of free space, you can add more storage space by extending them, using the VSI plugin.
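Where the VSI plugin is not available, the best-practice host settings listed at the beginning of this section can also be applied with esxcli. The sketch below reflects commonly published XtremIO recommendations and should be validated against the XtremIO host configuration guide for your ESXi release; the device identifier is a placeholder, and the FC HBA queue depth is set separately through driver-specific module parameters:

```bash
# Enable the VAAI primitives host-wide (X-Copy, Write Same, ATS)
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 1
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 1

# Claim XtremIO volumes with Round Robin multipathing, switching paths every I/O
esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" \
  -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO

# Outstanding I/O limit per XtremIO device (placeholder device ID)
esxcli storage core device set -d naa.514f0c5xxxxxxxxx -O 256

# Disk scheduler quantum and maximum disk I/O size (KB) recommended for XtremIO
esxcli system settings advanced set -o /Disk/SchedQuantum -i 64
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096
```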
  • 41. 41 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Create a Datastore using the EMC VSI PluginFigure 43. Provisioning RDM Disks RDM disks can be provisioned directly from XtremIO at the virtual machine level. The process creates a LUN on the XtremIO storage arrays, maps it to the ESXi cluster where the virtual machine resides and attaches it as a physical/virtual RDM disk to the Virtual machine. Provisioning RDM DisksFigure 44. Setting Space Reclamation Using the space reclamation feature in VSI, we can reclaim unused storage on datastores, hosts, clusters, folders and storage folders on XtremIO storage arrays. We can schedule space reclamation on a daily basis, or run it once, for a specific datastore or on all datastores under the same datastore cluster.
  • 42. 42 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Setting Space Reclamation Scheduler via VSI PluginFigure 45. Creating Native Clones on XtremIO VMFS Datastores The Native Clone feature uses the VMware Native Clone API to create a clone of a virtual machine in a VMFS datastore. This function is especially useful for cloning a large number of machines, while specifying various options such as containing folder, destination datastore, cluster, naming pattern, customization specification and more. Creating Native ClonesFigure 46. Working with XtremIO X2 XVCs The following actions for XtremIO XVCs (XtremIO Virtual Copies) can be performed directly from the VSI plugin, providing maximum protection for critical virtual machines and datastores, backed up by XtremIO X2 XVC technology: • Creating XVCs of XtremIO datastores • Viewing XtremIO XVCs generated for virtual machine restore • Mounting a datastore from an XVC • Creating a writable or read-only XVC • Creating and managing XVC schedules • Restoring virtual machines and datastores from XtremIO XVCs
  • 43. 43 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Managing XtremIO XVC (Snapshot) SchedulesFigure 47. XtremIO X2 Storage Analytics for VMware vRealize Operations Manager VMware vRealize Operations Manager is a software product that collects performance and capacity data from monitored software and hardware resources. It provides users with real-time information about potential problems in their infrastructure. vRealize Operations Manager presents data and analysis in several ways: • Through alerts that warn of potential or occurring problems. • In configurable dashboards and predefined pages that show commonly needed information. • In predefined reports, EMC Storage Analytics links vRealize Operations Manager with the EMC Adapter. EMC Storage Analytics (ESA) is a management pack for VMware vRealize Operations Manager that enables the collection of analytical data from EMC resources. ESA complies with VMware management pack certification requirements and has received the VMware Ready certification. The XtremIO X2 Adapter is bundled with a connector that enables vRealize Operations Manager to collect performance metrics on an X2 array. The adapter is installed with the vRealize Operations Manager user interface. EMC Storage Analytics uses the power of existing vCenter features to aggregate data from multiple sources and process the data with proprietary analytic algorithms. XtremIO X2 Storage Analytics solution provides a single, end-to-end view of virtualized infrastructures (servers to storage) powered by the VMware vRealize Operations Manager analytics engine. EMC Storage Analytics (ESA) delivers actionable performance analysis and proactively facilitates increased insight into storage resource pools to help detect capacity and performance issues, so they can be corrected before they cause a major impact. ESA provides increased visibility, metrics and a rich collection of storage analytics and metrics for XtremIO X2 for clusters, Data Protection Groups, XVCs, SSD Disks, Storage Controllers, Volumes and X-Bricks. XtremIO X2 Storage Analytics further extend the integration capabilities across EMC and VMware solutions to provide out-of-the-box analytics and visualization across your physical and virtual infrastructure. Storage Analytics provide preconfigured, customizable dashboards so users can optimally manage their storage environment.
  • 44. 44 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. The preconfigured dashboards include: 1. Performance - Provides greater visibility across the VMware and storage domains in terms of end-to-end mapping. Mappings include storage system components, storage system objects and vCenter objects. It enables health scores and alerts from storage system components, such as storage processors and disks, to appear on affected vCenter objects, such as LUNs, datastores and VMs. XtremIO Performance DashboardFigure 48. 2. Overview - Populates heat maps that show administrators the health of their system and reflect which workloads are stressed. XtremIO Overview DashboardFigure 49.
  • 45. 45 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. 3. Metrics - Provides metrics based on “normal” behavior of that application workload (which it learns over a period of time), after which it can analyze and make sense of all the data that has been collected and appropriately point out anomalies in behavior. This dashboard displays resource and metrics for storage systems and graphs of resource metrics. XtremIO Metrics DashboardFigure 50. XtremIO X2 Content Pack for vRealize Log Insight VMware vRealize Log Insight delivers automated log management through log analytics, aggregation and search. An integrated cloud operations management approach provides the operational intelligence and enterprise-wide visibility needed to proactively enable service levels and operational efficiency in dynamic hybrid cloud environments. VMware vRealize Log Insight provides real-time log administration for heterogeneous environments that span across physical, virtual and cloud environments. Log Insight provides: • Universal Log Collection • Powerful Log Analytics • Enterprise-class Scalability • Ease of Use and Deployment • Built-in vSphere Knowledge The Dell EMC XtremIO X2 Content Pack, when integrated into VMware vRealize Log Insight, provides predefined dashboards and user-defined fields specifically for XtremIO arrays to enable administrators to conduct problem analysis and analytics on their array(s). The vRealize Log Insight Content Pack with dashboards, alerts and chart widgets generated from XtremIO logs, visualizes log information generated by XtremIO X2 devices to ensure a clear insight into the performance of the XtremIO X2 flash storage connected to the environment. The XtremIO X2 Content Pack includes 3 predefined dashboards, over 20 widgets, and alerts for understanding the logs and graphically representing the operations, critical events and faults of the XtremIO X2 storage array. The XtremIO X2 Content Pack can be installed directly from the Log Insight Marketplace. Once installed, the Content Pack uses the syslog protocol to send remote syslog data from an XtremIO X2 array to the Log Insight Server. Log Insight IP should be set on the XtremIO console under Administration Notification Syslog Configuration in the list of Targets.
  • 46. 46 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO Content PackFigure 51. XtremIO management server dashboard collects all events sent from XMS over time and allows search and graphical display of all the events of X-Bricks managed by this XMS. XtremIO Management Server DashboardFigure 52.
  • 47. 47 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. XtremIO errors dashboard collects all error and faults sent from the XMS over time and allows search and graphical display of all the errors and faults of X-Bricks managed by this XMS. XtremIO Errors DashboardFigure 53. XtremIO X2 Workflows for VMware vRealize Orchestrator VMware vRealize Orchestrator is an IT process automation tool that allows automated management and operational tasks across both VMware and third-party applications. XtremIO workflows for vRealize Orchestrator facilitate the automation and orchestration of tasks that involve the XtremIO X2 Storage Array. It augments the capabilities of VMware’s vRealize Orchestrator solution by providing access to XtremIO X2 Storage Array-specific management workflows. The XtremIO workflows for VMware vRealize Orchestrator contain both basic and high-level workflows. A basic workflow is a workflow that allows for the management of a discrete piece of XtremIO functionality, such as Consistency Groups, Clusters, Initiator Groups, Protection Schedulers, Snapshot Sets, Tags, Volumes, RecoverPoint and XMS Management. A high-level workflow is a collection of basic workflows put together in such a way as to achieve a higher level of automation, simplicity and efficiency than what is available from the available basic workflows. The high-level workflows in the XtremIO Storage Management and XtremIO VMware Storage Management folders combine both XtremIO and VMware specific functionality into a set of high-level workflows. The workflows in the XtremIO Storage Management folder allow for rapid provisioning of datastores to ESXi hosts and VMDKs/RDMs to VMs. The VM Clone Storage workflow, for instance, allows rapid cloning of datastores associated with a set of source VMs to a set of target VMs accompanied by automatic VMDK reattachment to the set of target VMs. Another example is the Host Expose Storage workflow in the XtremIO VMware Storage Management folder, which allows a user to create Volumes, create any necessary Initiator Groups and map those Volumes to a host, all from one workflow. All the input needed for this workflow is supplied prior to the calling of the first workflow in the chain of basic workflows that are utilized. The XtremIO workflows for VMware vRealize Orchestrator allows the vRealize architect to either rapidly design and deploy high-level workflows from the rich set of supplied basic workflows or utilize the pre-existing XtremIO high-level workflows to automate the provisioning, backup and recovery of XtremIO storage in a VMware vCenter environment.
  • 48. 48 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. VRO and XtremIO X2 Integration ArchitectureFigure 54. XtremIO X2 Workflows for VMware vRealize OrchestratorFigure 55.
  • 49. 49 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. Compute Hosts: Dell PowerEdge Servers For our environment, we set up two homogenous clusters at each site: one cluster with 6 ESXi servers for hosting VSI servers and a second cluster with 2 ESXi servers for virtual platforms, which are used to manage the VSI infrastructure. We used Dell's PowerEdge FC630 as our ESX hosts, as they have the compute power to deal with an environment at such a scale, and are a good fit for virtualization environments. Dell PowerEdge servers work with the Dell OpenManage systems management portfolio that simplifies and automates server lifecycle management, and can be integrated with VMware vSphere with a dedicated plugin. Compute Integration – Dell OpenManage Dell OpenManage is a program providing simplicity and automation of hardware management tasks and monitoring for both Dell and multi-vendor hardware systems. Among its capabilities are: • Rapid deployment of PowerEdge servers, operating systems and agent-free updates • Maintenance of policy-based configuration profiles • Streamlined template-driven network setup and management for Dell Modular Infrastructure • Providing a "geographic view" of Dell-related hardware Dell OpenManage can integrate with VMware vCenter using the OpenManage Integration for VMware vCenter (OMIVV), which provides VMware vCenter with the ability to manage a data center's entire server infrastructure, both physical and virtual. It can assist with monitoring the physical environment, send system alerts to the user, roll out firmware updates to an ESXi cluster, etc. The integration is more profitable when using Dell PowerEdge servers as the ESX hosts of the VMware environment. Figure 56 shows an example of a cluster's hardware information provided by the OpenManage Integration for VMware vCenter. Dell Cluster Information Menu Provided by the Dell OpenManage Plugin for VMwareFigure 56.
  • 50. 50 | VSI protected by DELL EMC RecoverPoint or RP4VMs hosted on DELL EMC XtremIO X2. © 2017 Dell Inc. or its subsidiaries. The OpenManage Integration enables users to schedule firmware updates for clusters from within VMware vCenter web client. In addition, users can schedule the firmware update to run at a future time. This feature helps users to perform the firmware updates at the scheduled maintenance window without having to be present personally to attend the firmware updates. This capability reduces complexity by natively integrating the key management capabilities into the VMware vSphere Client console. It minimizes risk with hardware alarms, streamlined firmware updates and deep visibility into inventory, health and warranty details. Firmware Update Assurances • Sequential execution: To make sure not all the hosts are brought down to perform firmware updates, the firmware update is performed sequentially, one host at a time. • Single failure stoppage: If an update job fails on a server being updated, the existing jobs for that server continues; however, the firmware update task stops and does not update any remaining servers. • One firmware update job for each vCenter: To avoid the possibility of multiple update jobs interacting with a server or cluster, only one firmware update job for each vCenter is allowed. If a firmware update is scheduled or running for a vCenter, a second firmware update job cannot be scheduled or invoked on that vCenter. • Entering Maintenance Mode: If an update requires a reboot, the host is placed into maintenance mode prior to the update being applied. Before a host can enter maintenance mode, VMware requires that you power off or migrate guest virtual machines to another host. This can be performed automatically when DRS is set to fully automated mode. • Exiting Maintenance Mode: Once the updates for a host have completed, the host will be taken out of maintenance mode, if a host was in maintenance mode prior to the updates. Applying Firmware Update Directly from vSphere Web ClientFigure 57.