© Copyright IBM Corporation 2015
Technical University/Symposia materials may not be reproduced in whole or in part without the prior written permission of IBM.
IBM Spectrum Virtualize
HyperSwap Deep Dive
Bill Wiegand
Spectrum Virtualize – Consulting IT Specialist
IBM
Accelerate with IBM Storage
Agenda
• High Availability vs Disaster Recovery
• Overview of HyperSwap Function
• Overview of Demo Lab Setup
• Outline of Steps and Commands to Configure HyperSwap
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
High Availability vs Disaster Recovery
[Diagram: Cluster 1 at Site 1 provides HA via Volume Mirroring; Metro Mirror or Global Mirror replicates across ISL 1 and ISL 2 to Cluster 2 at Site 2 for DR]
Manual intervention required:
1. Stop all running servers
2. Perform failover operations
3. Remove server access in Site 1
4. Grant server access in Site 2
5. Start the servers in Site 2
6. Import Volume Groups
7. Vary on Volume Groups
8. Mount Filesystems
9. Recover applications
Today: SVC Enhanced Stretched Cluster
• Today’s stretched cluster technology splits an SVC’s two-way cache across two sites
• Allows host I/O to continue without loss of access to data if a site is lost
• Enhanced Stretched Cluster in version 7.2 introduced the site concept to the code for policing configurations and optimizing data flow
[Diagram: Node 1 in power domain 1 and Node 2 in power domain 2, each with a local host, switch, and storage; quorum storage sits in power domain 3; each host reads locally while writes are mirrored across both nodes]
HyperSwap
• HyperSwap is the next step in the HA (high availability) solution
• Provides most disaster recovery (DR) benefits of Metro Mirror as well
• Uses intra-cluster synchronous remote copy (Metro Mirror) capabilities along with the existing change volume and access I/O group technologies
• Essentially makes a host’s volumes accessible across two Storwize or SVC I/O groups in a clustered system by making the primary and secondary volumes of the Metro Mirror relationship, running under the covers, look like one volume to the host
High Availability with HyperSwap
• Hosts, SVC nodes, and storage are in one of two failure domains/sites
• Volumes visible as a single object across both sites (I/O groups)
[Diagram: I/O group 0 (Node 1, Node 2) at Site 1 and I/O group 1 (Node 3, Node 4) at Site 2; HostA and HostB each see a single volume while the primary copies (Vol-1p, Vol-2p) and secondary copies (Vol-1s, Vol-2s) are distributed across the two sites]
High Availability with HyperSwap
[Diagram: an IBM Spectrum Virtualize system with local hosts and storage at each of Site 1 and Site 2, plus a clustered Host C spanning both; public fabrics 1A/2A and 1B/2B joined by public ISLs carry host and storage traffic, private fabrics 1 and 2 joined by a private ISL carry node-to-node traffic; quorum storage sits at Site 3]
Hosts’ ports can be
• Zoned to see IBM Spectrum Virtualize system ports on both sites, and will be automatically configured to use the correct paths
• Zoned only locally to simplify configuration, which only loses the ability for a host on one site to continue in the absence of local IBM Spectrum Virtualize system nodes
Two SANs are required for Enhanced Stretched Cluster, and recommended for HyperSwap:
• Private SAN for node-to-node communication
• Public SAN for everything else
See Redbook SG24-8211-00 for more details
Storage systems can be
• IBM SVC for either HyperSwap or Enhanced Stretched Cluster
• IBM Storwize V5000 or V7000 for HyperSwap only
Quorum is provided by a SCSI controller marked with “Extended Quorum support” on the interoperability matrix.
Quorum storage must be in a third site independent of site 1 and site 2, but visible to all nodes.
Storage systems need to be zoned/connected only to the nodes/node canisters in their site (stretched and hyperswap topologies only, excluding quorum storage).
HyperSwap – What is a Failure Domain
• Generally a failure domain will represent a physical location, but it depends on what type of failure you are trying to protect against
• Could all be in one building, on different floors/rooms, or just different power domains in the same data center
• Could be multiple buildings on the same campus
• Could be multiple buildings up to 300 km apart
• The key is the quorum disk
• If you only have two physical sites and the quorum disk has to be in one of them, then some failure scenarios won’t allow the cluster to survive automatically
• The minimum is to have the active quorum disk system on a separate power grid in one of the two failure domains
HyperSwap – Overview
• Stretched Cluster requires splitting the nodes in an I/O group
• Impossible with the Storwize family, since an I/O group is confined to an enclosure
• After a site fails, the write cache is disabled
• Could affect performance
• HyperSwap keeps the nodes in an I/O group together
• Copies data between two I/O groups
• Suitable for the Storwize family of products as well as SVC
• Retains full read/write performance with only one site
HyperSwap – Overview
• SVC Stretched Cluster is not application aware
• If one volume used by an application is unable to keep a site up to date, the other volumes won’t pause at the same point, likely making the site’s data unusable for disaster recovery
• HyperSwap allows grouping of multiple volumes together in a consistency group
• Data will be maintained consistently across the volumes
• Significantly improves the use of HyperSwap for disaster recovery scenarios as well
• There is no remote copy partnership configuration, since this is a single clustered system
• Intra-cluster replication initial sync and resync rates can be configured normally using the ‘chpartnership’ CLI command (a sketch follows below)
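For illustration, a hedged sketch of tuning those rates: because the replication is intra-cluster, the partnership to modify is the local system itself (named ATS_OXFORD3 in the lab examples later in this deck), and the bandwidth and background copy rate values here are arbitrary:
• IBM_Storwize:ATS_OXFORD3:superuser> chpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 ATS_OXFORD3
The background copy rate is the percentage of the link bandwidth made available for initial sync and resync traffic.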
HyperSwap – Overview
• Stretched Cluster discards old data during resynchronization
• If one site is out of date and the system is automatically resynchronizing that copy, that site’s data isn’t available for disaster recovery, giving windows where both sites are online but loss of one site could lose data
• HyperSwap uses Global Mirror with Change Volumes technology to retain the old data during resynchronization
• Allows a site to continually provide disaster recovery protection throughout its lifecycle
• Stretched Cluster did not know which sites hosts were in
• To minimize I/O traffic across sites, more complex zoning and management of preferred nodes for volumes was required
• Can use the HyperSwap function on any Storwize family system supporting multiple I/O groups
• Two Storwize V5000 control enclosures
• Two to four Storwize V7000 Gen1/Gen2 control enclosures
• Four- to eight-node SVC cluster
• Note that HyperSwap is not a supported configuration with Storwize V3700, since it can’t be clustered
HyperSwap – Overview
• Limits and Restrictions
• Max of 1024 HyperSwap volumes per cluster
• Each HyperSwap volume requires four FlashCopy mappings, and the max number of mappings is 4096
• Max capacity is 1 PB per I/O group or 2 PB per cluster
• Much lower limit for Gen1 Storwize V7000, which runs into the limit of remote copy bitmap space
• Can’t replicate HyperSwap volumes to another cluster for DR using remote copy
• Limited FlashCopy Manager support
• Can’t do a reverse FlashCopy to HyperSwap volumes
• Max of 8 paths per HyperSwap volume, same as a regular volume
• AIX LPM not supported today
• No GUI support currently
• Requirements
• Remote copy license
• For Storwize configurations an external virtualization license is required
• Minimum of one enclosure license for the storage system providing the active quorum disk
• Size public/private SANs as we do with ESC today
• Only applicable if using ISLs between sites/I/O groups
• Recommended Use Cases
• Active/Passive site configuration
• Hosts access given volumes from one site only
Example Configuration
[Diagram: an SVC I/O group at each site (IOGroup-0, IOGroup-1) virtualizing EMC, IBM, and HP back-end storage (2TB and 3TB MDisks); a HyperSwap Volume (Vol-1) has its Primary copy in IOGroup-0 and its Secondary copy in IOGroup-1, and is accessed by a Local Host and by Federated Hosts at both sites]
Local Host Connectivity
[Diagram: a Local Host with 2 HBAs has 4 paths through Fab-A and Fab-B to the two SVC nodes of its local I/O group; IOGroup-0 virtualizes a 2TB flash MDisk (EMC) and a 3TB V5000 MDisk (IBM), IOGroup-1 a 2TB flash MDisk (HP) and a 3TB V5000 MDisk (IBM)]
Federated Host Connectivity
[Diagram: a Federated Host with 2 HBAs has 8 paths through Fab-A and Fab-B to all four SVC nodes across both I/O groups, which virtualize the same back-end MDisks as in the previous figure]
Storage Connectivity
[Diagram: the back-end storage controller connects with two ports to each of Fab-A and Fab-B; the four SVC nodes of IOGroup-0 and IOGroup-1 each connect to both fabrics]
HyperSwap – Understanding Quorum Disks
• By default the clustered system selects three quorum disk candidates automatically
• With SVC it is the first three MDisks it discovers from any supported disk controller
• On Storwize it is three internal disk drives, unless external disk is virtualized; then, like SVC, it is the first three MDisks discovered
• When the cluster topology is set to “hyperswap” the quorum disks are dynamically changed for the proper configuration of a HyperSwap enabled clustered system
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name        controller_id controller_name active object_type override
0            online 79                                           no     drive       no
1            online 13                                           no     drive       no
2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
• There is only ever one active quorum disk
• Used solely for tie-break situations when the two sites lose access to each other
• Must be on externally virtualized storage that supports Extended Quorum
• All three are used to store critical cluster configuration data
HyperSwap – Understanding Quorum Disks
• Quorum disk configuration is not exposed in the GUI
• ‘lsquorum’ shows which three MDisks or drives are the quorum candidates and which one is currently the active one
• No need to set override to ‘yes’ as was needed in the past with Enhanced Stretched Cluster
• The active quorum disk must be external and on a storage system that supports “Extended Quorum” as noted on the support matrix
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
• Only certain IBM disk systems support extended quorum
HyperSwap – Lab Setup
[Diagram: one clustered system separated at distance; a Storwize V7000 control enclosure with expansion enclosures forms I/O Group 0 at Site 1 and another forms I/O Group 1 at Site 2; a host sees a single volume served by both]
• A HyperSwap clustered system provides high availability between different sites or within the same data center
• I/O Group assigned to each site
• A copy of the data is at each site
• Host associated with a site
• If you lose access to I/O Group 0 from the host, then the host multipathing will automatically access the data via I/O Group 1
• If you only lose the primary copy of the data, then the HyperSwap function will forward the request to I/O Group 1 to service the I/O
• If you lose I/O Group 0 entirely, then the host multipathing will automatically access the other copy of the data on I/O Group 1
HyperSwap – Configuration
• NAMING THE 3 DIFFERENT SITES:
• IBM_Storwize:ATS_OXFORD3:superuser> lssite
id site_name
1 Site1
2 Site2
3 Site3
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-03 1
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-05 2
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name QUORUM 3
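As a sanity check, listing the sites again should now show the new names (expected output reconstructed for illustration):
• IBM_Storwize:ATS_OXFORD3:superuser> lssite
id site_name
1 GBURG-03
2 GBURG-05
3 QUORUM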
• LIST THE 4 CLUSTER NODES:
• IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
1:node1::500507680200005D:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node1::30-1:30:1:78G00PV
2:node2::500507680200005E:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node2::30-2:30:2:78G00PV
3:node3::500507680205EF71:online:1:io_grp1:yes::300:iqn.1986-03.com.ibm:2145.atsoxford3.node3::50-1:50:1:78REBAX
4:node4::500507680205EF72:online:1:io_grp1:no::300:iqn.1986-03.com.ibm:2145.atsoxford3.node4::50-2:50:2:78REBAX
HyperSwap – Configuration
• ASSIGN NODES TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node1
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node2
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node3
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node4
• ASSIGN HOSTS TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-03 SAN355-04
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-05 SAN3850-1
• ASSIGN QUORUM DISK ON CONTROLLER TO QUORUM SITE:
• IBM_Storwize:ATS_OXFORD3:superuser> chcontroller -site QUORUM DS8K-SJ9A
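Before changing the topology, the assignments can be double-checked; each object should now report the site name set above in the views below (a hedged check; exact columns vary by code level):
• IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister
• IBM_Storwize:ATS_OXFORD3:superuser> lshost
• IBM_Storwize:ATS_OXFORD3:superuser> lscontroller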
HyperSwap – Configuration
• LIST QUORUM LOCATIONS:
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name        controller_id controller_name active object_type override
0            online 79                                           no     drive       no
1            online 13                                           no     drive       no
2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
• DEFINE TOPOLOGY:
• IBM_Storwize:ATS_OXFORD3:superuser> chsystem -topology hyperswap
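The change can be confirmed from ‘lssystem’, which reports the system topology (a hedged sketch; output trimmed to the relevant line, field layout varies by code level):
• IBM_Storwize:ATS_OXFORD3:superuser> lssystem
...
topology hyperswap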
HyperSwap – Configuration
• MAKE VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
Virtual Disk, id [2], successfully created
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• MAKE CHANGE VOLUME VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
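Because the change volume definitions differ only by name and site, they script easily; a minimal sketch, assuming passwordless SSH to the system as superuser (names and pools follow the lab conventions above):
#!/bin/sh
# Create a thin-provisioned change volume per site for volumes 10 and 20 (illustrative only).
for vol in 10 20; do
  ssh superuser@ATS_OXFORD3 "mkvdisk -name GBURG03_CV$vol -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand"
  ssh superuser@ATS_OXFORD3 "mkvdisk -name GBURG05_CV$vol -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand"
done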
HyperSwap – Configuration
• ADD ACCESS TO THE MAIN SITE VDISKS TO THE OTHER SITE (IOGRP1):
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL20
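A hedged check that each volume is now presented through both I/O groups (‘lsvdiskaccess’ lists a volume’s access I/O groups; output shape illustrative):
• IBM_Storwize:ATS_OXFORD3:superuser> lsvdiskaccess GBURG03_VOL10
IO_group_id IO_group_name
0           io_grp0
1           io_grp1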
• DEFINE CONSISTENCY GROUP:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcconsistgrp -name GBURG_CONGRP
• DEFINE THE TWO REMOTE COPY RELATIONSHIPS:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL10 -aux GBURG05_AUX10 -cluster ATS_OXFORD3 -activeactive -name VOL10REL -consistgrp GBURG_CONGRP
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL20 -aux GBURG05_AUX20 -cluster ATS_OXFORD3 -activeactive -name VOL20REL -consistgrp GBURG_CONGRP
HyperSwap – Configuration
• ADD THE CHANGE VOLUMES TO EACH VDISK DEFINED:
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV20 VOL20REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV20 VOL20REL
• At this point the replication between the master and aux volumes starts automatically
• The remote copy relationship state will be “inconsistent copying” until the primary and secondary volumes are in sync, then the state changes to “consistent synchronized”
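The copy progress can be watched from the CLI; a hedged sketch with output trimmed to the relevant fields (‘progress’ is a percentage and the value shown is illustrative):
• IBM_Storwize:ATS_OXFORD3:superuser> lsrcrelationship VOL10REL
...
state inconsistent_copying
progress 42
...
• IBM_Storwize:ATS_OXFORD3:superuser> lsrcconsistgrp GBURG_CONGRP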
• MAP HYPERSWAP VOLUMES TO HOST:
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL20
** Note that we map only the primary/master volume to the host, not the secondary/auxiliary volume of the Metro Mirror relationship created earlier
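From the host’s side, the mapped volume should then surface as one multipath device with paths to both I/O groups; a hedged example for a Linux host using device-mapper multipath (assumes the sg3_utils and multipath tools are installed; device names vary):
# rescan-scsi-bus.sh     <- rescan for the newly mapped LUNs
# multipath -ll          <- expect one device per HyperSwap volume with up to 8 paths
The path selection policy favors the preferred (local) nodes, so I/O normally stays within the host’s site.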
Demonstration
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume (conversion steps sketched after this list)
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
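Converting an existing volume (Demo Scenario 3) follows the same pattern as the configuration steps earlier; a hedged outline using hypothetical names (EXISTING_VOL in I/O group 0, a new EXISTING_AUX sized to match it, plus two change volumes):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name EXISTING_AUX -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 EXISTING_VOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master EXISTING_VOL -aux EXISTING_AUX -cluster ATS_OXFORD3 -activeactive -name EXISTINGREL
• Create and attach the change volumes with mkvdisk -rsize 1% -autoexpand and chrcrelationship -masterchange/-auxchange as shown earlier
Since -sync is omitted from mkrcrelationship, the existing data is copied to the new aux volume before the relationship reaches “consistent synchronized”.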
Miscellaneous
• Recommended to use 8 FC ports per node canister so some ports can be dedicated strictly to the synchronous mirroring between the I/O groups (see the sketch after this list)
• Link to the HyperSwap whitepaper in Techdocs
• https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
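One way to dedicate ports to node-to-node traffic is FC port masking; this is a hedged sketch only, since the binary mask (rightmost bit = port 1) must match how your ports are actually cabled:
• IBM_Storwize:ATS_OXFORD3:superuser> chsystem -localfcportmask 0000000011000000
This illustrative mask would restrict local node-to-node communication to ports 7 and 8 on each node; verify the mask against your port layout before applying it.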