White Paper: Deploying and Implementing RecoverPoint in a Virtual Machine for Demonstration and Proof-of-Concept Purposes
This white paper explains the best practices for deploying EMC®
RecoverPoint for demonstration purposes as a virtual machine under ESX
Server 4.0.1 or later using the VMware® DirectPath feature.
Table of Contents
Executive summary
Why Virtualize RecoverPoint?
VMware considerations for vRPA/D deployment
Moving vRPA/Ds to different ESX servers
vRPA/D Cluster Deployment Type
VMware DirectPath and PowerPath
Prerequisites for vRPA/D Deployment
Deploying vRPA/D
Preparing the VMware Hypervisor (ESX Server)
Deploying ESX server
Configuring VMware DirectPath devices on ESX server
Deploying vRPA/D Cluster using Deployment Manager
Moving the vRPA/D
Conclusion
References
Executive summary
With the rapid growth of the virtualization world, today’s solutions, which consist of both physical
hardware and software code elements, are expected to also function in the virtual cloud.
The RecoverPoint solution consists of both physical hardware, known as a RecoverPoint Appliance
(RPA), and the application code that runs on it. When implemented as a virtualized instance, it is known
as a virtual RPA with DirectPath, or vRPA/D.
Installing RecoverPoint as a virtual instance requires both specific hardware (which will be discussed
thoroughly in the “Prerequisites for vRPA/D Deployment” chapter) and a current RecoverPoint ISO image.
Deploying vRPA/D is currently intended only for demo or proof of concept (POC) purposes; EMC does not
guarantee that vRPA/D performance characteristics are equivalent to RecoverPoint’s performance.
The purpose of this document is to explain and demonstrate the steps involved in deploying a vRPA/D,
which runs the RecoverPoint software in a VMware virtual machine using a specified QLogic HBA.
This document describes the recommended way to build a vRPA/D based on research and development
activities in EMC Labs. The content in this document provides a simple deployment guide for vRPA/D
and is to be used only for demo and POC purposes.
This white paper is intended for customers, ESN certified partners, and EMC internal staff who are
VMware and RecoverPoint professionals, or other similarly trained audiences.
Note: vRPA/D is to be used only for demo or proof of concept (POC) purposes; it is not intended for
production use:
• EMC does not provide any support for vRPA/D
• EMC does not guarantee that the performance of a vRPA/D has any relationship to the performance of a physical RecoverPoint appliance
• Issues will be fixed according to engineering case evaluation
Why Virtualize RecoverPoint?
Virtualizing RecoverPoint can provide some beneficial new features, which derive from VMware’s virtual
consolidation environment, such as:
• RecoverPoint “Cluster in a box“ – With ESX you can run multiple RecoverPoint instances, so you
can set up two RecoverPoint sites on a single VMware ESX server
• Thin provisioning of both memory & CPU resources – Multiple RecoverPoint instances can share
CPU and memory resources dynamically, without the need to pre-allocate the full capacity of CPU
and memory, by utilizing VMware thin-provisioning technologies (such as memory page
sharing, ballooning, and swapping)
• Simple RPA backup and snapshot – Because a RecoverPoint instance is only a set of VM files, it is
faster to clone it, and you can even utilize VMware hot snapshots, which allow safe point-in-time
protection of your RecoverPoint instance (for example, before changing major
RecoverPoint configurations or upgrading the RecoverPoint code)
• In combination with RecoverPoint’s “Virtual WWN” feature, a vRPA/D can be moved across
multiple VMware ESX servers, as long as you understand the limitations that DirectPath imposes
(such as the requirement for identical HBA adapters and the lack of vMotion support)
VMware considerations for vRPA/D deployment
Moving vRPA/Ds to different ESX servers
Due to VMware DirectPath feature limitations (such as the unique reservation of PCI ports on a specific
ESX server), there can be implications for, or failures of, a vRPA/D that must be understood when
considering VMware-based failover scenarios. Both use cases below will require additional user
configuration to assure correct binding of the new ESX server PCI slot as a DirectPath FC adapter
device. If you have such a configuration and need additional assistance, please send an email to
RecoverPoint-vRPA-DirectPath@emc.com and we will help as time permits. If you are a customer,
please have your Account Representative send this email.
• vMotion as part of the VMware Cluster failover will require manual steps with vSphere as
shown in “Moving the vRPA/D” chapter
• VMware Site Recovery Manager Failover
vRPA/D Cluster Deployment Type
There can be various deployments of RecoverPoint vRPA/D clusters over VMware ESX hosts. Table 1
shows the decision matrix for the available vRPA/D deployments:
Table 1 - vRPA/D Deployment matrix
• “Both Sites in a box” – Both sites reside on a single ESX server. Requires a single ESX server for
both sites. The ESX server acts as a single point of failure for all vRPA/Ds in the cluster.
• “Site per box” – Each site’s vRPA/Ds are managed on their own site ESX server. Requires only
2 ESX servers for the entire cluster. Each ESX server is a single point of failure for a site.
• Distributed vRPA/Ds – vRPA/Ds are spread among multiple ESX hosts to ensure a redundant
deployment for the vRPA/Ds. Requires a minimum of 4 ESX servers, 2 in each site. Provides the
best redundancy at both the site and cluster failure levels, and can use commodity servers.
VMware DirectPath and PowerPath
VMware DirectPath provides the virtual machine with direct and exclusive access to physical Fibre
Channel host bus adapters in the ESX server. These HBAs are separate from the HBAs that the ESX
server uses to access its own Fibre Channel storage. When you use DirectPath, some other VMware
functions are limited, such as:
• vMotion and Storage vMotion
• Fault Tolerance
• Snapshots and VM suspend
• Device Hot Add
Note: vRPA/D cannot be used with PowerPath/VE
Prerequisites for vRPA/D Deployment
The main feature that enables RecoverPoint virtualization comes from VMware technology first
introduced in ESX 4.0.1, named “VMware DirectPath”.
This feature offloads server I/O device communication from the hypervisor, allowing
virtual machines to access a specific physical I/O device (HBA or NIC) using “pass-through”
communication instead of the former VMware virtualized drivers.
The RecoverPoint appliance (Gen 4) hardware specification (a Dell R610-derived 1U server with 8GB
RAM, dual quad-core CPUs, two 146GB internal hard disks, and two 8Gb quad-port QLA2564 FC HBAs)
introduces high physical resource demands (to support both new features and higher storage
performance). Under virtualization these demands may consume fewer resources, assuming performance
utilization is average and multiple vRPA/D instances are balanced correctly against the overall memory
and CPU load.
The following detailed hardware and software components are required to deploy a vRPA/D
configuration:
• Hardware for the ESX Server
o Any hardware on the VMware HCL that supports ESX/ESXi 4.0, 4.1, or 5.0
• VMware DirectPath server architecture:
o Intel VT-d (Xeon 5500 systems and Nehalem processors)
o AMD platforms with I/O Virtualization Technology (AMD IOMMU)
• VMware DirectPath FC HBA:
o QLogic FC HBAs – QLA24xx/25xx
o Only these models are supported; others may not work.
Note: Both ESX and ESXi support a maximum of 8 VMware DirectPath supported HBAs, which
caps the maximum number of RecoverPoint VMs that can be installed per ESX server at 8 if
dual-port HBAs are installed, or 16 if quad-port FC HBAs are used.
• Physical Memory:
The following recommended memory settings can vary according to the total memory load of all
the running vRPA/D instances on the ESX server, aided by VMware’s advanced memory
management capabilities (which require “VMware Tools”).
The following values represent recommended “minimum / optimal” values of physical memory
which will be required for a given amount of deployed RecoverPoint VMs on a single ESX server:
• For 1 VM instance: 4GB / 8GB
• For 2 VM instances: 8GB / 16GB
• For 3 VM instances: 12GB / 16GB
• For 4 VM instances: 16GB / 24GB
Note: For more than 4 RecoverPoint VMs per single ESX server, you must observe the
hardware limitations of the ESX server system according to the manufacturer’s technical
specifications and the maximum memory supported by the running ESX server version.
• SAN Storage:
o vRPA/D only supports EMC storage arrays and SCSI-based LUNs. Note that the VMAX 20K and
40K have FTS, which enables non-EMC storage arrays to be attached to the VMAX. Also note
that VPLEX supports over 35 non-EMC storage array families.
o Choosing EMC VMAX SAN storage, an EMC VPLEX platform, or EMC VNX/CLARiiON SAN
storage allows vRPA/D to support the “Array-based splitter” (aka the “Symmetrix Splitter”,
“VPLEX Splitter”, or “VNX/CLARiiON Splitter”) as well as the “Host-based splitter”.
o Choosing non-EMC SAN storage is not possible.
o The SAN array should have enough provisioned free space to allocate the RecoverPoint
volumes (including two Repository volumes, the paired LUNs, and the Journal volumes,
according to RecoverPoint documentation and best practices)
• FC SAN Switch:
o A RecoverPoint supported FC SAN based switch (with applicable installed license)
Note: If the RecoverPoint splitter technology is the “Fabric Splitter” type, make sure that the
required switch configuration is applied according to the RecoverPoint documentation for
“Fabric Splitter” deployments
• VMware Virtual Server OS (hosting the RecoverPoint VM) can be:
o ESX 4.0.1 / 4.1 / 5.0
o ESXi 4.0.1 / 4.1 / 5.0
• EMC RecoverPoint 3.4 or 3.5
• A license for EMC RecoverPoint 3.4 or 3.5 (see the section “Requesting a RecoverPoint license”)
• Storage array license: If you are using the VNX/CLARiiON array splitter, you will need to install
an enabler on your VNX/CLARiiON array to support it – see the applicable RecoverPoint
documentation
• SAN FC switch license: If you are using a fabric-based splitter, a specific license may need to be
installed in addition to a supported switch firmware version – see the applicable RecoverPoint
documentation
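As a rough sketch of the sizing notes above (the DirectPath HBA cap and the per-VM memory table), the figures quoted in this chapter can be captured in a small planning helper; this is an illustration for demo planning, not an official EMC formula:

```python
# Illustrative sizing helpers based on the figures quoted in this chapter;
# the constants below are assumptions taken from the text, not a VMware API.

MAX_DIRECTPATH_HBAS = 8   # ESX/ESXi cap on pass-through HBAs
PORTS_PER_VRPAD = 2       # FC ports consumed by each vRPA/D

def max_vrpads(ports_per_hba: int) -> int:
    """Maximum RecoverPoint VMs per ESX server for a given HBA port count."""
    return MAX_DIRECTPATH_HBAS * (ports_per_hba // PORTS_PER_VRPAD)

# Minimum / optimal physical memory (GB) for 1-4 vRPA/D instances.
MEMORY_GB = {1: (4, 8), 2: (8, 16), 3: (12, 16), 4: (16, 24)}

def required_memory_gb(instances: int) -> tuple:
    """(minimum, optimal) GB of RAM for a given vRPA/D count on one ESX server."""
    if instances not in MEMORY_GB:
        raise ValueError("for >4 VMs, consult the ESX server hardware limits")
    return MEMORY_GB[instances]

print(max_vrpads(2), max_vrpads(4))   # dual-port -> 8, quad-port -> 16
print(required_memory_gb(3))          # -> (12, 16)
```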
Deploying vRPA/D
Preparing the VMware Hypervisor (ESX Server)
Verifying that Virtualization is enabled in server BIOS
The VMware virtualization hypervisor (VMware ESX) requires that virtualization support be enabled in
the server BIOS.
Figure 1 shows this option as “Enabled”, which conforms to the VMware ESX server installation prerequisite.
Example: On DELL servers, after powering on the server, hit F2 to enter the system BIOS console,
then navigate to the “Processor Settings” section in the main BIOS screen.
Figure 1 - DELL BIOS menu to enable virtualization support by CPU
Deploying ESX server
Proceed with normal installation of your ESX server setup.
Configuring VMware DirectPath devices on ESX server
Upon successful completion of the ESX server installation, the ESX server performs its first full reboot.
At this point, the ESX server is up and running, and ready for you to set up the pass-through option for
the vRPA/D PCI devices (required for use by VMware DirectPath).
Table 2 shows the VMware DirectPath maximum values for both ESX 4.x and ESX 5.x.
Table 2 - VMware DirectPath Maximum values
vRPA/D supports both dual-port and quad-port HBAs. It is recommended that quad-port HBAs be used
since you can:
• Increase the count of available VMware DirectPath HBAs (and RecoverPoint VM counts per single
ESX) on a limited PCI slot server
• Utilize the ESX server for both VMware DirectPath (vRPA/D) and regular ESX-to-SAN connectivity
(by using only 2 of the 4 ports on the HBA for vRPA/D)
Enabling the VMware DirectPath devices
1) Using the VI Client, connect either to the ESX server directly or to the managing vCenter server
2) Select the ESX server in question, go to the “Configuration” tab and, under Advanced
“Settings” on the far right side of the screen, choose “Configure Passthrough”.
3) A full list of the devices available for VMware DirectPath use is then presented in a pop-up
window titled “Mark devices for Passthrough”.
4) Select the HBA ports as appropriate (see Figure 2, which demonstrates HBA enablement for VMware
DirectPath)
Figure 2 - Selecting DirectPath PCI Devices for vRPA/D
5) An ESX reboot is required for this setting to take effect.
Install RecoverPoint as VM
1) Download the current RecoverPoint ISO from Powerlink; if you are a customer, this operation must
be performed by your Account Representative.
2) Select appropriate machine(s) that run ESX
3) Install the physical HBA card(s) into these machines
4) Deploy a “New Virtual Machine” using VMware wizard
a. Give it an appropriate name such as vRPA/D1
5) Select the VM type as “Debian GNU/Linux 5 (64-bit)”
6) Assign the relevant virtual hardware resources to the new VM as described below:
• 8GB RAM (minimum of 4GB)
• 4 vCPUs (minimum of 2 vCPUs)
• 2 x vNIC (WAN & LAN connectivity and management)
• 70GB hard disk (the initial utilized disk space for the OS is 8GB)
Figure 3 - vRPA/D VM hardware resources view
7) Attach the RecoverPoint install image/CD using one of the following options:
a. Mount a local bootable RecoverPoint DVD (mounting the physical DVD/CD Drive on the ESX
Server Hardware) using a DVD image burned from the ISO you downloaded in Step 1.
b. Mount a copied bootable RecoverPoint image from the desktop you are working on (by clicking
the “CD icon” in the virtual console) or from another datastore (if you previously copied it
over), using the ISO image downloaded in Step 1.
8) VMware Tools – since the RecoverPoint code does not support the VMware Tools, you must skip
the VMware Tools installation step.
Note: It is important to provision sufficient virtual resources or else the RecoverPoint deployment may
fail to complete and errors will be triggered.
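The virtual hardware minimums from step 6 can be expressed as a small pre-flight check; the dictionary key names below are assumptions made for this sketch, not a VMware API:

```python
# Hypothetical pre-flight check for the vRPA/D virtual hardware minimums
# listed in step 6; key names are illustrative only.
MINIMUMS = {"ram_gb": 4, "vcpus": 2, "vnics": 2, "disk_gb": 70}

def undersized(vm: dict) -> list:
    """Return the resources that fall below the recommended minimums."""
    return sorted(k for k, v in MINIMUMS.items() if vm.get(k, 0) < v)

recommended = {"ram_gb": 8, "vcpus": 4, "vnics": 2, "disk_gb": 70}
print(undersized(recommended))                 # -> []
print(undersized({"ram_gb": 2, "vcpus": 4,
                  "vnics": 2, "disk_gb": 70})) # -> ['ram_gb']
```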
Binding VMware DirectPath ports for the vRPA/D
Once the vRPA/D has been installed as a VM, power down the VM and “Add” a
new “PCI device” from the list of the available VMware DirectPath device ports.
Figure 4 shows the Virtual Machine Properties of a VM configured to expose two VMware DirectPath
devices.
Figure 4 - Binding available DirectPath HBA into vRPA/D VM
Pre Configuring vRPA/D – RPA Network settings
The following steps provide the minimum connectivity configuration that will later allow deploying a
RecoverPoint cluster using the “RecoverPoint Deployment Manager”:
1) While connected through the VI Client, open a “Console” session (virtual KVM) to the vRPA/D
virtual machine
2) After logging into the RecoverPoint management console (using the “boxmgmt” user), you are
prompted to enter a temporary IP address, subnet, and default gateway – proceed with the
temporary IP network settings (as shown in Figure 5)
Figure 5 - Pre Configure fresh vRPA/D installation
Note: In this environment, a default gateway was not required. RecoverPoint can then be configured
either via the GUI or CLI wizards.
Pre Configuring vRPA/D – FC Port settings
1) Review the current WWNs registered by the RecoverPoint vRPA/D using the RecoverPoint CLI
“Main Menu” by entering the following menu sequence:
“Diagnostics” → “Fibre Channel Diagnostics” → “View Fibre Channel Details”
Note: Although QLogic HBAs have their own WWNs, the RecoverPoint appliance layers its own native
WWNs on top of those.
Figure 7 - vRPA/D Native WWN mapping
2) RecoverPoint WWNs will also appear in the FC switch as KASHYA ports (Figure 8 reflects
example output from a Brocade FC switch)
Figure 8 - vRPA/D FC Ports view in FC Switch
Figure 6 - Review vRPA/D FC Detail menu
3) To review the array controllers, in the “Fibre Channel Diagnostics” menu, select the
“Detect Fibre Channel Targets” option.
Figure 9 displays the WWNs of CX4 ports that have been zoned to the vRPA/D.
Figure 9 - Detecting target WWN via vRPA/D
Zoning the vRPA/D to the Storage Array
A vRPA/D is bound to the splitter environment being used (host-based or array-based).
Example: For the VNX/CLARiiON array-based splitter, the required zoning must include zoning each of
the vRPA/D HBA ports to both of the EMC array controller ports (on a CLARiiON this refers to SPA and SPB).
Note: VMAX 10K support requires RecoverPoint v3.4.1 or later; VPLEX, VMAX 20K, and VMAX 40K
require RecoverPoint v3.5 or later
For VNX/CLARiiON arrays the zoning should include:
• vRPA/D HBA0 ports -> both array controllers’ ports
• vRPA/D HBA1 ports -> both array controllers’ ports
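The VNX/CLARiiON zoning rule above (every vRPA/D HBA port zoned to both array controller ports) amounts to a full cross-product; the port labels below are made up for illustration and stand in for real WWPNs:

```python
from itertools import product

# Made-up port labels standing in for real WWPNs; the rule is simply a
# cross-product of vRPA/D initiator ports and array controller (SP) ports.
vrpad_ports = ["hba0_port0", "hba0_port1", "hba1_port0", "hba1_port1"]
sp_ports = ["SPA_port", "SPB_port"]

zones = list(product(vrpad_ports, sp_ports))
print(len(zones))   # 4 initiator ports x 2 controller ports -> 8 zones
```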
vRPA/D Initiator Registration & Storage Allocation
Once the vRPA/D port initiators are zoned and successfully logged into the storage array, those initiators
need to be manually registered. The example below shows the equivalent registration steps on a
CLARiiON (for VMAX, consult the Symmetrix Technical Notes on EMC Powerlink; for VPLEX Local and
VPLEX Metro, consult the VPLEX Technical Notes on EMC Powerlink).
1) Register the newly discovered initiators as a “New Host” with its own IP address. The initiators
for the vRPA/D need to be registered with an Initiator Type of “RecoverPoint Appliance” and
a “Failover Mode” of “4”. (Figure 10 shows an example of adding vRPA/D initiators
as RecoverPoint appliance initiators.)
Figure 10 - vRPA/D FC port registration in Array management
2) Once the initiators are registered to the new vRPA/D, the vRPA/D can be added to a
Navisphere/Unisphere Storage Group as a host in order to access the required storage/LUNs.
3) The vRPA/D(s) require LUN masking access in the same manner as a physical RPA would;
the bullets below summarize the core requirement for each LUN type (for further details, see
the RecoverPoint Administrator’s Guide available on EMC Powerlink):
a. Journal volumes – must be exposed only to the applicable site vRPA/Ds
b. Repository volume – must be exposed only to the applicable site vRPA/Ds
c. Replicated volume copies – must be exposed to both the applicable site vRPA/Ds and the
applicable hosts
4) All of the masked LUNs can be easily verified using the vRPA/D “Diagnostics” menu in the
RecoverPoint management CLI.
Figure 11 - Verifying LUN masking via vRPA/D "Diagnostics" menu
Note: Figure 11 describes such a verification attempt for two masked LUNs (a 4GB production LUN
and a second 50GB LUN), both of which are exposed correctly to the vRPA/D.
Deploying vRPA/D Cluster
Upon successful configuration of vRPA/D storage and network connectivity, we can proceed to a
full-scale deployment of RecoverPoint using the RecoverPoint Deployment Manager wizard, which
provides the safest and most automated deployment of RecoverPoint appliances.
Deploying vRPA/D Cluster using Deployment Manager
A vRPA/D cluster deployment is handled in the same manner as a regular physical RPA Cluster.
RecoverPoint Deployment Manager is used for RecoverPoint deployment and provides the most
automated and error-free deployment method.
Below is the full procedure for vRPA/D cluster deployment using the RecoverPoint Deployment Manager tool.
1) Launch the RecoverPoint Deployment Manager wizard; you will first be asked to log in (see Figure 12)
Figure 12 - RP Deployment Manager: Authentication screen
Note: The RP Deployment Manager also contains wizards for RPA upgrades and replacements.
2) Select the “RecoverPoint Installer Wizard” to begin configuring the vRPA/D network identity
(IP address, subnet mask, default gateway, management IP addresses, and the RPA cluster
details)
Figure 13 - RP Deployment Manager: Deployment wizard
3) Review the prerequisites for the installation. At this stage, after completing all of the previous
steps for the vRPA/D, all of the prerequisites should be satisfied (see Figure 14).
Figure 14 - RP Deployment Manager: vRPA/D Prerequisites
4) The next screen will prompt for an installation structure file; create a new file or use an existing
saved configuration file.
Note: Figure 15 shows a consolidated view of the settings required when configuring a vRPA/D cluster
(i.e. number of sites, amount of cluster nodes at each site and the type of replication between sites).
Figure 15 - RP Deployment Manager: Environment Settings screen
5) Upon completion of the previous installer screen, you will be required to configure the vRPA/D
network (Management and WAN) details for vRPA/D Site A, including the site’s vRPA/D
instances (Figure 16 shows an example configuration of two vRPA/Ds in Site A)
Figure 16 - RP Deployment Manager: Configuring vRPA/D Site A networks
6) The next wizard screen (Figure 17) requires answering the “Advanced settings” questions that
relate to the splitter type in use and other environment variables specific to the storage array
types in use
Figure 17 - Configuring vRPA/D Sites advanced settings screen
7) Upon completion of the previous installer screen, you will be required to configure the vRPA/D
network (Management and WAN) details for vRPA/D Site B, including the site’s vRPA/D
instances (Figure 18 shows an example configuration of two vRPA/Ds in Site B)
Figure 18 - RP Deployment Manager: Configuring vRPA/D Site B networks
8) Upon completion of the previous step, you will be instructed to approve the overall vRPA/D
configuration and the vRPA/D sites to which it applies. This step locks the required vRPA/D site
configuration and prepares it to be applied on each of the related vRPA/D instances (see Figure
19 for this step’s screen).
Figure 19 - RP Deployment Manager: Applying configuration
Note: If only one of the sites is to be installed at this stage, the wizard provides a checkbox to
confirm whether or not the other site is already installed.
9) The next wizard screen provides the installer confirmation for the previously applied settings
(see Figure 20)
Figure 20 - RP Deployment Manager: result screen of applying vRPA/D Configuration
10) Upon successful confirmation in the previous step, the installer begins the vRPA/D storage
configuration wizard, showing the managed vRPA/D WWNs (see Figure 21)
Figure 21 - RP Deployment Manager: Site A Zoning and LUN Masking configuration
11) The wizard then runs the vRPA/D SAN diagnostics, providing the list of available LUNs to be
used as the vRPA/D cluster repository volume for Site A (equivalent to a traditional cluster’s
quorum disk). You will be required to select the desired LUN to act as the site’s repository
volume (see Figure 22)
Figure 22 - RP Deployment Manager: Site A Repository volume selection
12) Completing the repository volume selection in the previous step displays the storage
configuration summary for Site A (see Figure 23)
Figure 23 - RP Deployment Manager: Site Summary screen
13) The installer wizard proceeds through the exact sequence of the previous storage configuration
details (Site A), this time for the remote/target site (Site B)
14) Upon completion of the storage configuration for Site B, a summary screen appears to
indicate the success of the installer process; it also allows deploying the RecoverPoint
Management Application through a given site (see Figure 24)
Figure 24 - RP Deployment Manager: Success summary of vRPA/D Cluster
Configuring the RecoverPoint Splitters
Note: This procedure assumes that splitters were installed correctly. To configure the RecoverPoint
splitter, perform the following steps:
1) Open the “RecoverPoint Management Application”, right-click the “Splitters” object, and
choose “Add New Splitter”.
2) From the list of the available splitters, choose the applicable splitters that are required to
allow RecoverPoint replication (Figure 25 shows an example of discovered VNX/CLARiiON splitters
for both vRPA/D sites), and click “Next”
Figure 25 - Configuring vRPA/D splitters screen
3) Proceed with the on-screen instructions (for the VNX/CLARiiON-based array splitter you will be
asked to provide the array “login credentials” or to select “Configure login credentials
later” for both sites) and, upon completion of the splitter information, click “Finish” (Figure 26
shows a summary of the successfully added VNX/CLARiiON splitters)
Figure 26 - RecoverPoint validated splitters
Configuring RecoverPoint CGs with vRPA/Ds
Configuring a RecoverPoint Consistency Group (CG) using vRPA/Ds is possible because the
virtualization layer is transparent to the application management.
The consistency group wizard navigates through the required CG elements, such as: the CG name, the
preferred RPA, the policy attributes for each copy, the volumes to be used as the source/replica in the
replication sets, and the relevant journal volumes.
Once the entire consistency group configuration has been completed, a summary screen will be shown
before initiating the new replication (see Figure 27)
Figure 27 - Configured vRPA/D CG summary screen
Upon completing the CG wizard, we can review the replication status for the given CG. Figure
28 shows the initial synchronization completion for a RecoverPoint CLR configuration, where the
“Production Source” copy has “Direct Access”, while both replica copies (“Local Replica” and
“Remote Replica”) show a “No Access” state.
Figure 28 - RecoverPoint CLR replication topology
More in-depth replication analysis is available through RecoverPoint’s Management GUI via the
“Statistics” tab (see Figure 29)
Figure 29 - RecoverPoint statistics panel to indicate replication state
Replacing a vRPA/D with the RPA Replacement Wizard
Replacing a vRPA/D within a clustered RecoverPoint configuration requires the RecoverPoint Deployment
Manager. The procedure below guides you through the steps needed to replace a vRPA/D using the
Deployment Manager:
1) Launch the RecoverPoint Deployment Manager wizard
2) Select the “RPA Replacement Wizard” option, and click “Next” (see Figure 30)
Figure 30 - RP Deployment Manager: choosing vRPA/D replacement option
Note: This procedure will import the vRPA/D into the existing configuration, providing the new vRPA/D
with the same configuration and management details as the previous/failed vRPA/D.
3) Highlight the required failed vRPA/D (which is about to be replaced) as shown in Figure 31.
Note: Notice the checkbox at the bottom of the screen that prompts the user to confirm whether or not
the replacement vRPA/D has been configured with the required RP code and network identity to allow
the replacement to proceed.
4) When the new/replacement vRPA/D is online and configured with the required temporary network
connectivity, check the checkbox at the bottom of the screen to allow the wizard to proceed, and
click “Next”
Figure 31 - RPA Replacement wizard: select failed vRPA/D
5) Confirm the status of the replacement RPA, by checking the bottom screen checkbox (shown in
Figure 32) and click “Next”
Figure 32 - RPA Replacement wizard: Confirm failed vRPA/D
6) The next screen requires approval for cloning (spoofing) the failed vRPA/D’s WWN
configuration onto the new vRPA/D. By spoofing the WWNs, there is no requirement for new
zoning at the SAN level.
Notice: If new WWNs are introduced then they will need to be zoned accordingly!
Figure 33 - RPA Replacement wizard: validating storage configuration
7) The wizard automatically runs through the validation process for the storage and SAN
configurations (before the final “apply changes” phase for the settings on the new vRPA/D).
8) Once all of those changes have been applied, the wizard provides a summary of the steps
completed as part of replacing the faulted vRPA/D and resuming cluster operations with the new
vRPA/D (shown in Figure 34).
Figure 34 - vRPA/D Replacement wizard: Applying configuration screen
There are five options to choose from when considering the RecoverPoint splitter:
• Windows Host Splitter (for RecoverPoint/CL and RecoverPoint/EX with RecoverPoint 3.5 and with
RecoverPoint/SE, RecoverPoint/EX and RecoverPoint/L with RecoverPoint 3.4)
• VMAX-based Splitter
• VPLEX-based Splitter
• VNX/CLARiiON-based Splitter
• Brocade/Cisco Intelligent Fabric Splitter
Choosing a RecoverPoint splitter is based upon many environmental scenarios. In this example,
RecoverPoint is using the array based VNX/CLARiiON Splitter.
The “RecoverPoint Splitter” feature can be enabled directly on the array in the FLARE or VNX Operating
Environment. For a Symmetrix VMAX and VPLEX, the splitter is already enabled. The following
displays a list of all of the software features that are enabled on one of the CX4 arrays being used in this
example:
Figure 35 - RecoverPoint splitter view in CLARiiON management GUI
The Software tab under the “Properties” section of the CX4 array is the only place where the
RecoverPoint splitter can be viewed from the Navisphere perspective. There is nothing else to tune or
configure on the CLARiiON array in relation to RecoverPoint.
As with other Layered Applications, the RecoverPoint Splitter is pre-installed as part of the FLARE code,
but is not visible or available to the user until the RecoverPoint Splitter enabler key is installed. This
enabler key can be installed via the Navisphere Service Taskbar.
When an array-based splitter is used, the maximum size of a volume (LUN) that can be replicated is 32 TB. In environments where an array-based splitter is not being used, the maximum size of a replicated LUN is 2 TB. The VMAX splitter is supported on the VMAX series, the VPLEX splitter is supported on VPLEX Local and VPLEX Metro, and the VNX/CLARiiON splitter is supported on VNX series, CX3, and CX4 arrays. (The VNX/CLARiiON splitter does not support VNXe, AX4-5, or pre-CX3 storage arrays.)
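The splitter limits and support matrix above can be summarized in a short sketch. This is purely illustrative; the function and structure names are hypothetical and not part of any EMC tool or API.

```python
# Illustrative sketch of the splitter limits described above.
# All names here are hypothetical, not part of any EMC API.

TIB = 1024 ** 4  # one tebibyte in bytes

# Maximum replicated LUN size depends on whether an array-based splitter is used.
MAX_LUN_ARRAY_SPLITTER = 32 * TIB   # array-based splitter (VMAX, VPLEX, VNX/CLARiiON)
MAX_LUN_OTHER_SPLITTER = 2 * TIB    # host- or fabric-based splitter

# Arrays supported by each array-based splitter, per the text above.
SPLITTER_SUPPORT = {
    "VMAX": {"VMAX series"},
    "VPLEX": {"VPLEX Local", "VPLEX Metro"},
    "VNX/CLARiiON": {"VNX series", "CX3", "CX4"},  # no VNXe, AX4-5, or pre-CX3
}

def max_replicated_lun_bytes(array_based: bool) -> int:
    """Return the maximum replicable LUN size for the splitter type."""
    return MAX_LUN_ARRAY_SPLITTER if array_based else MAX_LUN_OTHER_SPLITTER

def can_replicate(lun_bytes: int, array_based: bool) -> bool:
    """True if a LUN of the given size fits under the splitter's limit."""
    return lun_bytes <= max_replicated_lun_bytes(array_based)

# A 10 TiB LUN fits only under an array-based splitter.
print(can_replicate(10 * TIB, array_based=True))   # True
print(can_replicate(10 * TIB, array_based=False))  # False
```

For example, a 10 TiB LUN can be replicated with the VNX/CLARiiON splitter but exceeds the 2 TB limit that applies without an array-based splitter.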
When moving or replacing a vRPA/D, it is possible to retain the previous vRPA/D’s WWNs and apply them to the new vRPA/D.
A RecoverPoint appliance generates its own WWNs during installation, based in part on the WWNs of the underlying HBAs. The key to easy vRPA/D mobility is to hard-code the WWNs so that they do not change when the appliance is moved to a new set of HBAs (in the same host or a different one).
Doing this allows a vRPA/D to move to another host with different HBAs without the need for additional zoning or LUN masking. The process is as follows:
Hard Coding the WWNs
1. Connect to the vRPA/D console (via SSH or via the VI Client).
2. Enter the Diagnostics Menu.
3. Enter the Fibre Channel Diagnostic Menu.
4. Select the View Fibre Channel Details option.
5. If using SSH, copy and paste the WWNs out to a text file for later use.
6. Navigate back through the menus and enter the Cluster Operations Menu.
7. Detach the vRPA/D from the cluster.
8. Once detached, go into the Setup menu, select option 1 to Modify, then specify the site of the vRPA/D you want to modify.
9. Select option 3 to set the WWN Name / Port Pair Addresses.
10. Specify the vRPA/D you want to change and the number of HBA ports that the RPA uses.
11. Using the WWN details copied earlier, paste in the WWN and Node WWN details for each HBA port in sequence.
12. Once done, go back up three levels in the menu tree and select option 5 to apply the configuration.
13. This displays a summary of the entire cluster configuration, where you can see the WWNs you just hard-coded for the relevant vRPA/D.
14. Confirm that you want to apply the configuration, then enter the site and box number to apply the details to.
15. Finally, reattach the vRPA/D to the cluster, which will cause the vRPA/D to reboot.
16. Confirm that the cluster resumes normal operation.
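When carrying WWNs between steps 5 and 11 above, a simple sanity check on the copied strings helps catch copy/paste corruption before the configuration is applied. The sketch below is hypothetical (not an EMC utility); it only validates and normalizes WWN strings as they might appear in the saved text file.

```python
# Illustrative helper, not an EMC tool: validate WWNs copied out of the
# Fibre Channel diagnostics menu before pasting them back into the Setup
# menu, to catch copy/paste corruption early. Example values are made up.
import re

# A Fibre Channel WWN is 8 bytes, usually written as 16 hex digits,
# optionally colon-separated into byte pairs.
WWN_RE = re.compile(r"^(?:[0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$|^[0-9a-fA-F]{16}$")

def normalize_wwn(wwn: str) -> str:
    """Validate a WWN string and return it in colon-separated lowercase form."""
    wwn = wwn.strip()
    if not WWN_RE.match(wwn):
        raise ValueError(f"not a valid WWN: {wwn!r}")
    digits = wwn.replace(":", "").lower()
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

# A port/node WWN pair as it might appear in the saved text file.
saved = ["50:01:24:82:00:9a:44:02", "50012482009A4403"]
print([normalize_wwn(w) for w in saved])
```

A malformed entry (wrong length or non-hex characters) raises an error immediately rather than producing a bad cluster configuration in step 12.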
Moving the vRPA/D
There are several ways to relocate a vRPA/D among ESX servers:
• Manual move using vMotion as part of a vSphere cluster (applicable for vSphere 4.01 and later)
• Automated failover using a vSphere cluster as part of an HA/DRS failover policy (valid for vSphere 4.1 and later only)
• Automated failover using SRM (compatible)
Note: It is recommended to configure your vRPA/D with spoofed (hard-coded) WWNs before moving or failing over the vRPA/D to another ESX server. Each ESX server’s attached HBAs have their own unique WWNs, and a WWN change can cause the vRPA/D code to fail.
Manual move using vMotion as part of vSphere Cluster
1. Verify that the new ESX server has HBAs identical to those in the old ESX server (where the vRPA/D is currently hosted); otherwise, the vRPA/D will fail to start on the new ESX server.
2. Move the vRPA/D using a simple drag and drop in vCenter, keeping the storage locations as they are.
3. Re-configure the vRPA/D to assign the correct set of physical HBAs that you want the RPA to use in the new host. A vRPA/D uses VMware DirectPath to get direct access to the required QLogic HBAs, so remove the two DirectPath HBA assignments that were used in the original host and, on the new host, assign access to two HBAs there.
4. Once complete, power on the virtual machine, and validate that the vRPA/D comes up cleanly by
observing the VM state in the vSphere GUI or using the RecoverPoint GUI under the “RPA” tab.
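Step 1 above amounts to checking that the destination host can offer the same HBA models the vRPA/D was passing through on the source host. The sketch below illustrates that check with hypothetical inventory data (plain lists of HBA model strings, not a VMware API).

```python
# Illustrative pre-move check for step 1 above. The HBA model names and
# inventory lists are hypothetical examples, not output of any VMware API.

def hbas_match(src_hbas: list[str], dst_hbas: list[str]) -> bool:
    """True if the destination offers at least the HBA models the source uses."""
    needed: dict[str, int] = {}
    for model in src_hbas:
        needed[model] = needed.get(model, 0) + 1
    available: dict[str, int] = {}
    for model in dst_hbas:
        available[model] = available.get(model, 0) + 1
    # Every required model must be present in at least the required quantity.
    return all(available.get(m, 0) >= n for m, n in needed.items())

# The vRPA/D passes through two QLogic HBAs on the source host.
src = ["QLE2562", "QLE2562"]
print(hbas_match(src, ["QLE2562", "QLE2562", "QLE2460"]))  # True: safe to move
print(hbas_match(src, ["QLE2460", "QLE2460"]))             # False: will not start
```

Running this kind of check before the drag-and-drop move avoids discovering an HBA mismatch only when the vRPA/D fails to power on.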
Note: This process can be done in advance of setting up the vRPA/D cluster, or it can be done afterwards
if you decide to enable this behavior at a later date.
This capability might be useful if:
• You want to perform maintenance on the physical host and want the RecoverPoint cluster to continue running on all vRPA/Ds while this is happening.
• You want to upgrade the hardware that a vRPA/D runs on by moving it to another machine with better processors or faster HBAs, as long as the new hardware still adheres to the support list.
• The customer wants to migrate their RecoverPoint appliances from physical to virtual, in which case they can hard-code the WWNs from the physical RPA into the vRPA/D, allowing for a quick and straightforward migration.
Automated Failover using vSphere Cluster as part of HA/DRS Failover policy
In vSphere 4.1, VMware introduced a new vMotion feature named dvMotion (short for DirectPath vMotion), which can be used to provide automated failover of a vRPA/D using the vMotion engine. The details are complex; if you are interested, please send an email to the vRPA/D team at
Note: This feature relies on the experimental vSphere 4.1 “dvMotion” feature.
Automated Failover using SRM
The VMware Site Recovery Manager (SRM) product enables automated failover of VMware sites and clusters. It is highly recommended to use the compatible SRM functionality with ESX4i and later. vSphere 5 introduced improved vMotion and SRM capabilities; refer to the appropriate VMware documentation for full details.
Comments and getting help
Product and technical support are available as follows:
Product information: For documentation and release notes, or for information about licensing and service, go to the RecoverPoint landing page on Powerlink (RecoverPoint Family) or send an email to
RecoverPoint licensing information
To request a license for your vRPA/D configuration do the following:
Go to Powerlink, and in the top-level menu navigate to Request Support -> Create Service Request.
• Mark it as “this is a: technical problem”
• Enter “N/A” as the customer site ID
• Enter contact name
• Select product as RecoverPoint
• In the Problem Summary enter “License Request for vRPA/D”
• In the Problem Description enter the following information:
This white paper contains the information needed to install and operate RecoverPoint as a virtual machine. If you have issues, comments, or questions about this document, include the relevant page numbers and any other information that will help us locate the content you are addressing. Send comments to:
If you are having difficulty with vRPA/D, ensure you read the following references before sending an email.
• Introduction to EMC RecoverPoint 3.5 New Features and Functions
• EMC RecoverPoint Family Overview
• Configuration Examples and Troubleshooting for VMDirectPath
• Configuring VMDirectPath I/O pass-through devices on an ESX host
• PCI Passthrough with PCIe devices behind a non-ACS switch in vSphere
• VMware Tools Installation Guide For Operating System Specific Packages
• Performance Best Practices for VMware vSphere® 4.0
• Installing VMware Tools in a Linux virtual machine using a Compiler
• Configuration Maximums - ESX 4.1
• Configuration Maximums - ESX 4.0