© Copyright IBM Corporation 2015
Technical University/Symposia materials may not be reproduced in whole or in part without the prior written permission of IBM.
IBM Spectrum Virtualize
HyperSwap Deep Dive
Bill Wiegand
Spectrum Virtualize – Consulting IT Specialist
IBM
Accelerate with IBM Storage
Agenda
• High Availability vs Disaster Recovery
• Overview of HyperSwap Function
• Overview of Demo Lab Setup
• Outline of Steps and Commands to Configure HyperSwap
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
High Availability vs Disaster Recovery
[Diagram: two clusters of storage engines, Cluster 1 at Site 1 (HA) and Cluster 2 at Site 2 (DR), connected by ISL 1 and ISL 2; data copied by Volume Mirroring, Metro Mirror, or Global Mirror]
Manual intervention required:
1. Stop all running servers
2. Perform failover operations
3. Remove server access in Site 1
4. Grant server access in Site 2
5. Start the servers in Site 2
6. Import Volume Groups
7. Vary on Volume Groups
8. Mount Filesystems
9. Recover applications
Today: SVC Enhanced Stretched Cluster
• Today’s stretched cluster technology splits an SVC’s two-way cache across two sites
• Allows host I/O to continue without loss of access to data if a site is lost
• Enhanced Stretched Cluster in version 7.2 introduced the site concept to the code for policing configurations and optimizing data flow
[Diagram: Node 1 in power domain 1 and Node 2 in power domain 2, each with a local host, switch, and storage; quorum storage in power domain 3; reads are served locally while writes are mirrored across both sites]
HyperSwap
• HyperSwap is the next step in the HA (High Availability) solution
• Provides most disaster recovery (DR) benefits of Metro Mirror as well
• Uses intra-cluster synchronous remote copy (Metro Mirror) capabilities along with existing change volume and access I/O group technologies
• Essentially makes a host’s volumes accessible across two Storwize or SVC I/O groups in a clustered system by making the primary and secondary volumes of the underlying Metro Mirror relationship look like one volume to the host
High Availability with HyperSwap
• Hosts, SVC nodes, and storage are in one of two failure domains/sites
• Volumes visible as a single object across both sites (I/O groups)
[Diagram: HostA and HostB at Site 1 and Site 2; I/O group 0 (Node 1, Node 2) holds Vol-1p and Vol-2p, I/O group 1 (Node 3, Node 4) holds Vol-1s and Vol-2s; each volume is presented to the hosts as a single object]
High Availability with HyperSwap
[Diagram: Host A, Host B, and clustered Host C across Sites 1 and 2; public fabrics 1A/1B and 2A/2B joined by public ISLs; private fabrics 1 and 2 joined by a private ISL; an IBM Spectrum Virtualize system and storage at each site; quorum at Site 3]
Hosts’ ports can be:
• Zoned to see IBM Spectrum Virtualize system ports on both sites, and will be automatically configured to use correct paths
• Zoned only locally to simplify configuration, which only loses the ability for a host on one site to continue in the absence of local IBM Spectrum Virtualize system nodes
Two SANs are required for Enhanced Stretched Cluster, and recommended for HyperSwap:
• Private SAN for node-to-node communication
• Public SAN for everything else
See Redbook SG24-8211-00 for more details.
Storage systems can be:
• IBM SVC for either HyperSwap or Enhanced Stretched Cluster
• IBM Storwize V5000 or V7000 for HyperSwap only
Quorum is provided by a SCSI controller marked with “Extended Quorum support” on the interoperability matrix.
Quorum storage must be in a third site independent of site 1 and site 2, but visible to all nodes.
Storage systems need to be zoned/connected only to the nodes/node canisters in their site (stretched and hyperswap topologies only, excluding quorum storage).
HyperSwap – What is a Failure Domain
• Generally a failure domain will represent a physical location, but it depends on what type of failure you are trying to protect against
• Could all be in one building on different floors/rooms, or just different power domains in the same data center
• Could be multiple buildings on the same campus
• Could be multiple buildings up to 300 km apart
• The key is the quorum disk
• If you only have two physical sites and the quorum disk must be in one of them, then some failure scenarios won’t allow the cluster to survive automatically
• The minimum is to have the active quorum disk system on a separate power grid in one of the two failure domains
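As a toy illustration of why quorum placement matters, the tie-break after a split reduces to which site can still reach the active quorum disk. This sketch is purely illustrative and not the product’s actual algorithm:

```shell
# Toy model of the tie-break: after the inter-site link fails, the site that can
# still reach the active quorum disk continues serving I/O; the other stops.
tie_break() {
  # $1 = site 1 reaches quorum (y/n), $2 = site 2 reaches quorum (y/n)
  if [ "$1" = y ] && [ "$2" = n ]; then echo "site 1 continues"
  elif [ "$1" = n ] && [ "$2" = y ]; then echo "site 2 continues"
  elif [ "$1" = y ] && [ "$2" = y ]; then echo "first site to win the race continues"
  else echo "cluster halts"; fi
}
tie_break y n   # quorum reachable only from site 1: site 1 survives
tie_break n n   # quorum lost along with a site: no automatic survival
```

This is why placing the quorum in one of the two main failure domains leaves scenarios (quorum lost together with its site) where the cluster cannot survive automatically.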
HyperSwap – Overview
• Stretched Cluster requires splitting the nodes in an I/O group
• Impossible with the Storwize family since an I/O group is confined to an enclosure
• After a site fails, the write cache is disabled
• Could affect performance
• HyperSwap keeps the nodes in an I/O group together
• Copies data between two I/O groups
• Suitable for the Storwize family of products as well as SVC
• Retains full read/write performance with only one site
HyperSwap – Overview
• SVC Stretched Cluster is not application aware
• If one volume used by an application is unable to keep a site up-to-date, the other volumes won’t pause at the same point, likely making the site’s data unusable for disaster recovery
• HyperSwap allows grouping of multiple volumes together in a consistency group
• Data will be maintained consistently across the volumes
• Significantly improves the use of HyperSwap for disaster recovery scenarios as well
• There is no remote copy partnership to configure since this is a single clustered system
• Intra-cluster replication initial sync and resync rates can be configured normally using the ‘chpartnership’ CLI command
HyperSwap – Overview
• Stretched Cluster discards old data during resynchronization
• If one site is out-of-date and the system is automatically resynchronizing that copy, that site’s data isn’t available for disaster recovery, giving windows where both sites are online but loss of one site could lose data
• HyperSwap uses Global Mirror with Change Volumes technology to retain the old data during resynchronization
• Allows a site to continually provide disaster recovery protection throughout its lifecycle
• Stretched Cluster did not know which site hosts were in
• To minimize I/O traffic across sites, more complex zoning and management of preferred nodes for volumes was required
• The HyperSwap function can be used on any Storwize family system supporting multiple I/O groups
• Two Storwize V5000 control enclosures
• Two to four Storwize V7000 Gen1/Gen2 control enclosures
• Four- to eight-node SVC cluster
• Note that HyperSwap is not a supported configuration with Storwize V3700 since it can’t be clustered
HyperSwap – Overview
• Limits and Restrictions
• Max of 1024 HyperSwap volumes per cluster
• Each HyperSwap volume requires four FlashCopy mappings, and the max is 4096 mappings
• Max capacity is 1 PB per I/O group or 2 PB per cluster
• Much lower limit for Gen1 Storwize V7000
• Limited by remote copy bitmap space
• Can’t replicate HyperSwap volumes to another cluster for DR using remote copy
• Limited FlashCopy Manager support
• Can’t do a reverse FlashCopy to HyperSwap volumes
• Max of 8 paths per HyperSwap volume, same as a regular volume
• AIX LPM not supported today
• No GUI support currently
• Requirements
• Remote copy license
• For Storwize configurations an external virtualization license is required
• Minimum one enclosure license for the storage system providing the active quorum disk
• Size public/private SANs as with Enhanced Stretched Cluster today
• Only applicable if using ISLs between sites/I/O groups
• Recommended Use Cases
• Active/Passive site configuration
• Hosts access given volumes from one site only
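The 1024-volume ceiling follows directly from the FlashCopy mapping limit stated above, as a quick check shows:

```shell
# Each HyperSwap volume consumes 4 FlashCopy mappings (for its change volumes),
# and the cluster allows 4096 mappings in total.
max_mappings=4096
mappings_per_hs_vol=4
hs_vol_limit=$(( max_mappings / mappings_per_hs_vol ))
echo "$hs_vol_limit"   # prints 1024
```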
Example Configuration
[Diagram: HyperSwap Volume Vol-1 with its Primary in IOGroup-0 and its Secondary in IOGroup-1; a local host and federated hosts attach to SVC nodes at both sites, which virtualize EMC, HP, and IBM back-end storage (2 TB and 3 TB MDisks)]
Local Host Connectivity
[Diagram: a local host with 2 HBAs and 4 paths over fabrics A and B to the SVC nodes of its local I/O group; back-end 2 TB flash MDisks (EMC, HP) and 3 TB V5000 MDisks (IBM) at each site]
Federated Host Connectivity
[Diagram: a federated host with 2 HBAs and 8 paths over fabrics A and B to the SVC nodes of both I/O groups; the same back-end MDisks at each site]
Storage Connectivity
[Diagram: a storage controller with 2 connections per fabric to the SVC nodes of IOGroup-0 and IOGroup-1 over fabrics A and B]
HyperSwap – Understanding Quorum Disks
• By default the clustered system selects three quorum disk candidates automatically
• With SVC it is the first three MDisks it discovers from any supported disk controller
• On Storwize it is three internal disk drives, unless external disk is virtualized, in which case, like SVC, it is the first three MDisks discovered
• When the cluster topology is set to “hyperswap” the quorum disks are dynamically changed to the proper configuration for a HyperSwap enabled clustered system
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 79 no drive no
1 online 13 no drive no
2 online 0 DS8K_mdisk0 1 DS8K-SJ9A yes mdisk no
• There is only ever one active quorum disk
• Used solely for tie-break situations when the two sites lose access to each other
• Must be on externally virtualized storage that supports Extended Quorum
• All three are used to store critical cluster configuration data
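A quick way to confirm which disk is the active quorum is to parse the `lsquorum` output for the row with active=yes. Here the sample output from this deck is embedded for illustration; on a live system, pipe `lsquorum` directly:

```shell
# Sample lsquorum output from the deck, embedded for illustration.
lsquorum_output='quorum_index status id name controller_id controller_name active object_type override
0 online 79 no drive no
1 online 13 no drive no
2 online 0 DS8K_mdisk0 1 DS8K-SJ9A yes mdisk no'

# The active quorum row carries "yes mdisk"; field 4 is the MDisk name.
active=$(printf '%s\n' "$lsquorum_output" | awk '/ yes mdisk /{print $4}')
echo "$active"   # DS8K_mdisk0
```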
HyperSwap – Understanding Quorum Disks
• Quorum disk configuration is not exposed in the GUI
• ‘lsquorum’ shows which three MDisks or drives are the quorum candidates and which one is currently the active one
• No need to set override to ‘yes’ as was needed in the past with Enhanced Stretched Cluster
• The active quorum disk must be external and on a storage system that supports “Extended Quorum” as noted on the support matrix
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
• Only certain IBM disk systems support extended quorum
HyperSwap – Lab Setup
[Diagram: one Storwize V7000 clustered system separated at distance; I/O Group 0 (control enclosure plus expansion enclosures) at Site 1 and I/O Group 1 at Site 2; a host accesses a single volume with a copy of the data at each site]
• A HyperSwap clustered system provides high availability between different sites or within the same data center
• An I/O Group is assigned to each site
• A copy of the data is at each site
• A host is associated with a site
• If you lose access to I/O Group 0 from the host, then the host multi-pathing will automatically access the data via I/O Group 1
• If you only lose the primary copy of the data, then the HyperSwap function will forward requests to I/O Group 1 to service the I/O
• If you lose I/O Group 0 entirely, then the host multi-pathing will automatically access the other copy of the data on I/O Group 1
HyperSwap – Configuration
• NAMING THE 3 DIFFERENT SITES:
• IBM_Storwize:ATS_OXFORD3:superuser> lssite
id site_name
1 Site1
2 Site2
3 Site3
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-03 1
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-05 2
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name QUORUM 3
• LIST THE 4 CLUSTER NODES:
• IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister -delim :
id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
1:node1::500507680200005D:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node1::30-1:30:1:78G00PV
2:node2::500507680200005E:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node2::30-2:30:2:78G00PV
3:node3::500507680205EF71:online:1:io_grp1:yes::300:iqn.1986-03.com.ibm:2145.atsoxford3.node3::50-1:50:1:78REBAX
4:node4::500507680205EF72:online:1:io_grp1:no::300:iqn.1986-03.com.ibm:2145.atsoxford3.node4::50-2:50:2:78REBAX
HyperSwap – Configuration
• ASSIGN NODES TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node1
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node2
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node3
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node4
• ASSIGN HOSTS TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-03 SAN355-04
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-05 SAN3850-1
• ASSIGN QUORUM DISK ON CONTROLLER TO QUORUM SITE:
• IBM_Storwize:ATS_OXFORD3:superuser> chcontroller -site QUORUM DS8K-SJ9A
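A quick sanity check after the site assignments is to count node canisters per site. The pairs below are stubbed for illustration; on the live system you would parse `lsnodecanister -delim :` and read the site column, assuming your release exposes it:

```shell
# Stubbed node:site pairs standing in for parsed lsnodecanister output.
lsnodecanister_sites() {
  printf 'node1:GBURG-03\nnode2:GBURG-03\nnode3:GBURG-05\nnode4:GBURG-05\n'
}
# Count canisters per site; a HyperSwap config needs nodes at both sites.
site_counts=$(lsnodecanister_sites | awk -F: '{c[$2]++} END {for (s in c) print s, c[s]}' | sort)
echo "$site_counts"
```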
HyperSwap – Configuration
• LIST QUORUM LOCATIONS:
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
quorum_index status id name controller_id controller_name active object_type override
0 online 79 no drive no
1 online 13 no drive no
2 online 0 DS8K_mdisk0 1 DS8K-SJ9A yes mdisk no
• DEFINE TOPOLOGY:
• IBM_Storwize:ATS_OXFORD3:superuser> chsystem -topology hyperswap
HyperSwap – Configuration
• MAKE VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• Virtual Disk, id [2], successfully created
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• MAKE CHANGE VOLUME VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
HyperSwap – Configuration
• ADD ACCESS TO THE MAIN SITE VDISKS TO THE OTHER SITE (IOGRP1):
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL20
• DEFINE CONSISTENCY GROUP:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcconsistgrp -name GBURG_CONGRP
• DEFINE THE TWO REMOTE COPY RELATIONSHIPS:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL10 -aux GBURG05_AUX10 -cluster ATS_OXFORD3 -activeactive -name VOL10REL -consistgrp GBURG_CONGRP
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL20 -aux GBURG05_AUX20 -cluster ATS_OXFORD3 -activeactive -name VOL20REL -consistgrp GBURG_CONGRP
HyperSwap – Configuration
• ADDING THE CHANGE VOLUMES TO EACH VDISK DEFINED:
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV20 VOL20REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV20 VOL20REL
• At this point the replication between the master and aux volumes starts automatically
• The remote copy relationship state will be “inconsistent copying” until the primary and secondary volumes are in sync; then the state changes to “consistent synchronized”
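Before cutting over any workload you typically wait for the initial sync to finish. The sketch below polls for the “consistent synchronized” state; `lsrcrelationship_state` is a stub for illustration, while on the real system you would parse the state field of `lsrcrelationship VOL10REL`:

```shell
# Stub standing in for the CLI query; replace with real lsrcrelationship parsing.
lsrcrelationship_state() { echo consistent_synchronized; }

# Poll until the active-active relationship finishes its initial sync.
until [ "$(lsrcrelationship_state)" = consistent_synchronized ]; do
  sleep 30   # still inconsistent_copying; check again in 30 seconds
done
echo "VOL10REL initial sync complete"
```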
• MAP HYPERSWAP VOLUMES TO HOST:
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL20
** Note that we map only the primary/master volume to the host, not the secondary/auxiliary volume of the Metro Mirror relationship created earlier
Demonstration
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
Miscellaneous
• It is recommended to use 8 FC ports per node canister so some ports can be dedicated strictly to the synchronous mirroring between the I/O groups
• Link to the HyperSwap whitepaper on Techdocs
• https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
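Dedicating ports to node-to-node traffic is done with a local FC port mask. The sketch below only builds and prints a candidate command; the exact mask width and semantics vary by release, so verify against the CLI reference for your code level before use:

```shell
# Hedged sketch: allow only ports 3 and 4 of each node to carry local
# node-to-node (mirroring) traffic. Mask bits are read right to left,
# with port 1 as the rightmost bit, so ports 3 and 4 map to "1100".
mask=1100
echo "chsystem -localfcportmask $mask"
```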
 
Presentazione VMware @ VMUGIT UserCon 2015
Presentazione VMware @ VMUGIT UserCon 2015Presentazione VMware @ VMUGIT UserCon 2015
Presentazione VMware @ VMUGIT UserCon 2015
 
Txlf2012
Txlf2012Txlf2012
Txlf2012
 
HA and DR for Cloud Workloads
HA and DR for Cloud WorkloadsHA and DR for Cloud Workloads
HA and DR for Cloud Workloads
 

More from xKinAnx

Engage for success ibm spectrum accelerate 2
Engage for success   ibm spectrum accelerate 2Engage for success   ibm spectrum accelerate 2
Engage for success ibm spectrum accelerate 2
xKinAnx
 
Software defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloudSoftware defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloud
xKinAnx
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101
xKinAnx
 
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
xKinAnx
 
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
xKinAnx
 
Presentation disaster recovery in virtualization and cloud
Presentation   disaster recovery in virtualization and cloudPresentation   disaster recovery in virtualization and cloud
Presentation disaster recovery in virtualization and cloud
xKinAnx
 
Presentation disaster recovery for oracle fusion middleware with the zfs st...
Presentation   disaster recovery for oracle fusion middleware with the zfs st...Presentation   disaster recovery for oracle fusion middleware with the zfs st...
Presentation disaster recovery for oracle fusion middleware with the zfs st...
xKinAnx
 
Presentation differentiated virtualization for enterprise clouds, large and...
Presentation   differentiated virtualization for enterprise clouds, large and...Presentation   differentiated virtualization for enterprise clouds, large and...
Presentation differentiated virtualization for enterprise clouds, large and...
xKinAnx
 
Presentation desktops for the cloud the view rollout
Presentation   desktops for the cloud the view rolloutPresentation   desktops for the cloud the view rollout
Presentation desktops for the cloud the view rollout
xKinAnx
 
Presentation design - key concepts and approaches for designing your deskto...
Presentation   design - key concepts and approaches for designing your deskto...Presentation   design - key concepts and approaches for designing your deskto...
Presentation design - key concepts and approaches for designing your deskto...
xKinAnx
 
Presentation desarrollos cloud con oracle virtualization
Presentation   desarrollos cloud con oracle virtualizationPresentation   desarrollos cloud con oracle virtualization
Presentation desarrollos cloud con oracle virtualization
xKinAnx
 

More from xKinAnx (20)

Engage for success ibm spectrum accelerate 2
Engage for success   ibm spectrum accelerate 2Engage for success   ibm spectrum accelerate 2
Engage for success ibm spectrum accelerate 2
 
Software defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloudSoftware defined storage provisioning using ibm smart cloud
Software defined storage provisioning using ibm smart cloud
 
Ibm spectrum virtualize 101
Ibm spectrum virtualize 101 Ibm spectrum virtualize 101
Ibm spectrum virtualize 101
 
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions04 empalis -ibm_spectrum_protect_-_strategy_and_directions
04 empalis -ibm_spectrum_protect_-_strategy_and_directions
 
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
Ibm spectrum scale fundamentals workshop for americas part 1 components archi...
 
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
Ibm spectrum scale fundamentals workshop for americas part 2 IBM Spectrum Sca...
 
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
Ibm spectrum scale fundamentals workshop for americas part 3 Information Life...
 
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
Ibm spectrum scale fundamentals workshop for americas part 4 Replication, Str...
 
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
Ibm spectrum scale fundamentals workshop for americas part 4 spectrum scale_r...
 
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
Ibm spectrum scale fundamentals workshop for americas part 5 spectrum scale_c...
 
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 6 spectrumscale el...
 
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
Ibm spectrum scale fundamentals workshop for americas part 7 spectrumscale el...
 
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
Ibm spectrum scale fundamentals workshop for americas part 8 spectrumscale ba...
 
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
Ibm spectrum scale fundamentals workshop for americas part 5 ess gnr-usecases...
 
Presentation disaster recovery in virtualization and cloud
Presentation   disaster recovery in virtualization and cloudPresentation   disaster recovery in virtualization and cloud
Presentation disaster recovery in virtualization and cloud
 
Presentation disaster recovery for oracle fusion middleware with the zfs st...
Presentation   disaster recovery for oracle fusion middleware with the zfs st...Presentation   disaster recovery for oracle fusion middleware with the zfs st...
Presentation disaster recovery for oracle fusion middleware with the zfs st...
 
Presentation differentiated virtualization for enterprise clouds, large and...
Presentation   differentiated virtualization for enterprise clouds, large and...Presentation   differentiated virtualization for enterprise clouds, large and...
Presentation differentiated virtualization for enterprise clouds, large and...
 
Presentation desktops for the cloud the view rollout
Presentation   desktops for the cloud the view rolloutPresentation   desktops for the cloud the view rollout
Presentation desktops for the cloud the view rollout
 
Presentation design - key concepts and approaches for designing your deskto...
Presentation   design - key concepts and approaches for designing your deskto...Presentation   design - key concepts and approaches for designing your deskto...
Presentation design - key concepts and approaches for designing your deskto...
 
Presentation desarrollos cloud con oracle virtualization
Presentation   desarrollos cloud con oracle virtualizationPresentation   desarrollos cloud con oracle virtualization
Presentation desarrollos cloud con oracle virtualization
 

Recently uploaded

FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
RTTS
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
Elena Simperl
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
ControlCase
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
Alison B. Lowndes
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
Ana-Maria Mihalceanu
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 

Accelerate with ibm storage ibm spectrum virtualize hyper swap deep dive

  • 5. © Copyright IBM Corporation 2015 HyperSwap
     • HyperSwap is the next step in the HA (high availability) solution
     • Provides most of the disaster recovery (DR) benefits of Metro Mirror as well
     • Uses intra-cluster synchronous remote copy (Metro Mirror) capability along with the existing change volume and access I/O group technologies
     • Essentially makes a host’s volumes accessible across two Storwize or SVC I/O groups in a clustered system: the primary and secondary volumes of the Metro Mirror relationship running under the covers look like one volume to the host
     4
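The agenda’s “commands to configure HyperSwap” begin with telling the cluster which site every object belongs to before switching the topology. A minimal sketch of that sequence on the Spectrum Virtualize CLI follows; object names such as node1, controller0, and HostA are placeholders for illustration:

```
# Name the three sites (1 and 2 are the failure domains, 3 holds the quorum)
chsite -name site1 1
chsite -name site2 2
chsite -name site3 3

# Assign every node, back-end controller, and host to a site
chnode -site site1 node1          # use chnodecanister on Storwize systems
chcontroller -site site3 controller0
chhost -site site1 HostA

# Switch the cluster topology once all site assignments are in place
chsystem -topology hyperswap
```

Setting the topology last lets the system police the configuration: it will reject the change if objects are missing site assignments.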
  • 6. © Copyright IBM Corporation 2015 High Availability with HyperSwap
     • Hosts, SVC nodes, and storage are in one of two failure domains/sites
     • Volumes are visible as a single object across both sites (I/O groups)
     (Diagram: I/O group 0 with Node 1 and Node 2 at site 1; I/O group 1 with Node 3 and Node 4 at site 2; Host A and Host B access Vol-1 and Vol-2, each with a primary and a secondary copy)
     5
  • 7. © Copyright IBM Corporation 2015 High Availability with HyperSwap
     (Diagram: Host A, Host B, and clustered Host C at sites 1 and 2; public fabrics 1A/1B and 2A/2B joined by public ISLs; private fabrics 1 and 2 joined by a private ISL; an IBM Spectrum Virtualize system and storage at each site; quorum at site 3)
     • Hosts’ ports can be:
       • Zoned to see IBM Spectrum Virtualize system ports on both sites, and they will automatically be configured to use the correct paths
       • Zoned only locally to simplify configuration, which only loses the ability for a host on one site to continue in the absence of local IBM Spectrum Virtualize system nodes
     • Two SANs are required for Enhanced Stretched Cluster and recommended for HyperSwap:
       • Private SAN for node-to-node communication
       • Public SAN for everything else
       • See Redbook SG24-8211-00 for more details
     • Storage systems can be:
       • IBM SVC for either HyperSwap or Enhanced Stretched Cluster
       • IBM Storwize V5000 or V7000 for HyperSwap only
     • Quorum is provided by a SCSI controller marked with “Extended Quorum support” on the interoperability matrix; quorum storage must be in a third site independent of sites 1 and 2 but visible to all nodes
     • Storage systems need to be zoned/connected only to the nodes/node canisters in their own site (stretched and hyperswap topologies only, excluding quorum storage)
     6
  • 8. © Copyright IBM Corporation 2015 HyperSwap – What is a Failure Domain
     • Generally a failure domain represents a physical location, but it depends on what type of failure you are trying to protect against
       • Could all be in one building on different floors/rooms, or just different power domains in the same data center
       • Could be multiple buildings on the same campus
       • Could be multiple buildings up to 300 km apart
     • The key is the quorum disk
       • If you have only two physical sites and the quorum disk must be in one of them, some failure scenarios won’t allow the cluster to survive automatically
       • At minimum, put the system providing the active quorum disk on a separate power grid in one of the two failure domains
     7
  • 9. © Copyright IBM Corporation 2015 HyperSwap – Overview
     • Stretched Cluster requires splitting the nodes of an I/O group across sites
       • Impossible with the Storwize family, since an I/O group is confined to an enclosure
       • After a site fails, the write cache is disabled, which can affect performance
     • HyperSwap keeps the nodes of an I/O group together
       • Copies data between two I/O groups
       • Suitable for the Storwize family of products as well as SVC
       • Retains full read/write performance with only one site
     8
  • 10. © Copyright IBM Corporation 2015 HyperSwap – Overview
     • SVC Stretched Cluster is not application aware
       • If one volume used by an application is unable to keep a site up to date, the other volumes won’t pause at the same point, likely making the site’s data unusable for disaster recovery
     • HyperSwap allows grouping of multiple volumes together in a consistency group
       • Data is maintained consistently across the volumes
       • Significantly improves the use of HyperSwap for disaster recovery scenarios as well
     • There is no remote copy partnership configuration, since this is a single clustered system
       • Intra-cluster replication initial sync and resync rates can be configured normally using the ‘chpartnership’ CLI command
     9
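Because replication runs inside one clustered system, the sync rates mentioned above are tuned on the local system’s own partnership entry rather than on a remote partnership. A hedged sketch; the bandwidth and copy-rate values are illustrative, and the trailing name is the local system’s:

```
# The local system appears in lspartnership as its own entry
lspartnership

# Set the link bandwidth (Mbps) and the percentage of it usable
# for background (initial sync and resync) copy
chpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 <local_system_name>
```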
  • 11. © Copyright IBM Corporation 2015 HyperSwap – Overview
     • Stretched Cluster discards old data during resynchronization
       • If one site is out of date and the system is automatically resynchronizing that copy, that site’s data isn’t available for disaster recovery, giving windows where both sites are online but the loss of one site could lose data
     • HyperSwap uses Global Mirror with Change Volumes technology to retain the old data during resynchronization
       • Allows a site to continually provide disaster recovery protection throughout its lifecycle
     • Stretched Cluster did not know which sites hosts were in
       • To minimize I/O traffic across sites, more complex zoning and management of preferred nodes for volumes was required
     • The HyperSwap function can be used on any Storwize family system supporting multiple I/O groups
       • Two Storwize V5000 control enclosures
       • Two to four Storwize V7000 Gen1/Gen2 control enclosures
       • Four- to eight-node SVC cluster
       • Note that HyperSwap is not a supported configuration with the Storwize V3700, since it can’t be clustered
     10
  • 12. © Copyright IBM Corporation 2015 HyperSwap – Overview
     • Limits and restrictions
       • Maximum of 1024 HyperSwap volumes per cluster
         • Each HyperSwap volume requires four FlashCopy mappings, and the maximum number of mappings is 4096
       • Maximum capacity is 1 PB per I/O group or 2 PB per cluster
         • Much lower limit for Gen1 Storwize V7000, which runs into the limit on remote copy bitmap space
       • Can’t replicate HyperSwap volumes to another cluster for DR using remote copy
       • Limited FlashCopy Manager support; can’t do a reverse FlashCopy to HyperSwap volumes
       • Maximum of 8 paths per HyperSwap volume, the same as a regular volume
       • AIX LPM is not supported today
       • No GUI support currently
     • Requirements
       • Remote copy license; for Storwize configurations an external virtualization license is required
       • Minimum of one enclosure license for the storage system providing the active quorum disk
       • Size public/private SANs as we do with ESC today (only applicable if using ISLs between sites/I/O groups)
     • Recommended use case
       • Active/passive site configuration, with hosts accessing given volumes from one site only
     11
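Demo Scenario 3 in the agenda, turning an existing volume into a HyperSwap volume, can be sketched with the 7.5-level CLI. This is a hedged outline, not the deck’s exact commands: names, pools, and sizes are placeholders, and the master volume vol1 is assumed to already exist in a site-1 pool. The two thin-provisioned change volumes are what account for the four FlashCopy mappings per HyperSwap volume noted above:

```
# Create the auxiliary copy at site 2 and a thin change volume at each site
mkvdisk -mdiskgrp Site2Pool -size 100 -unit gb -iogrp 1 -name vol1_aux
mkvdisk -mdiskgrp Site1Pool -size 100 -unit gb -iogrp 0 -rsize 0% -autoexpand -name vol1_master_cv
mkvdisk -mdiskgrp Site2Pool -size 100 -unit gb -iogrp 1 -rsize 0% -autoexpand -name vol1_aux_cv

# Tie the copies together with an active-active (HyperSwap) relationship
mkrcrelationship -master vol1 -aux vol1_aux -cluster <local_system> -activeactive -name vol1_rel
chrcrelationship -masterchange vol1_master_cv vol1_rel
chrcrelationship -auxchange vol1_aux_cv vol1_rel

# Let the host reach the volume through both I/O groups
addvdiskaccess -iogrp 0:1 vol1
```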
© Copyright IBM Corporation 2015
Example Configuration
[Diagram: two SVC I/O groups (IOGroup-0 and IOGroup-1), one per site, presenting a HyperSwap Volume with primary and secondary copies; back-end capacity from EMC, HP and IBM controllers (2TB and 3TB MDisks); accessed by a local host and federated hosts]
12
© Copyright IBM Corporation 2015
Local Host Connectivity
[Diagram: local host with 2 HBAs and 4 paths through fabrics Fab-A and Fab-B to the SVC nodes of its local I/O group; back-end storage at each site is a 2TB Flash MDisk and a 3TB V5000 MDisk]
13
© Copyright IBM Corporation 2015
Federated Host Connectivity
[Diagram: federated host with 2 HBAs and 8 paths through fabrics Fab-A and Fab-B to the SVC nodes of both I/O groups; same back-end MDisks as the previous slide]
14
© Copyright IBM Corporation 2015
Storage Connectivity
[Diagram: storage controller connected through fabrics Fab-A and Fab-B to the SVC nodes of IOGroup-0 and IOGroup-1, two links per connection]
15
© Copyright IBM Corporation 2015
HyperSwap – Understanding Quorum Disks
• By default the clustered system selects three quorum disk candidates automatically
• With SVC it is the first three MDisks it discovers from any supported disk controller
• On Storwize it is three internal disk drives unless we have external disk virtualized; then, like SVC, it is the first three MDisks discovered
• When the cluster topology is set to “hyperswap” the quorum disks are dynamically changed to the proper configuration for a HyperSwap enabled clustered system
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
  quorum_index status id name        controller_id controller_name active object_type override
  0            online 79                                           no     drive       no
  1            online 13                                           no     drive       no
  2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
• There is only ever one active quorum disk
• Used solely for tie-break situations when the two sites lose access to each other
• Must be on externally virtualized storage that supports Extended Quorum
• All three are used to store critical cluster configuration data
16
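As a small illustration of reading the `lsquorum` output above, the sketch below picks out the active quorum disk from a saved copy of that output with `awk`. This is a hypothetical helper, not part of the product CLI; the column positions are assumptions taken from the sample listing on this slide.

```shell
# Hypothetical sketch: find the active quorum disk in saved `lsquorum`
# output. The sample text below is the listing shown on the slide.
lsquorum_output='quorum_index status id name        controller_id controller_name active object_type override
0            online 79                                           no     drive       no
1            online 13                                           no     drive       no
2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no'

# For an mdisk row the "active" flag is the third field from the right
# and the MDisk name is field 4; drive rows have no name, so they never
# match the "yes" test on that position.
active=$(printf '%s\n' "$lsquorum_output" | awk 'NR > 1 && $(NF-2) == "yes" { print $4 }')
echo "Active quorum disk: $active"
```

In this sample the script prints `DS8K_mdisk0`, matching the `yes ... mdisk` row of the listing.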
© Copyright IBM Corporation 2015
HyperSwap – Understanding Quorum Disks
• Quorum disk configuration is not exposed in the GUI
• ‘lsquorum’ shows which three MDisks or drives are the quorum candidates and which one is currently the active one
• No need to set override to ‘yes’ as was needed in the past with Enhanced Stretched Cluster
• Active quorum disk must be external and on a storage system that supports “Extended Quorum” as noted on the support matrix
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003741
• http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003658
• Only certain IBM disk systems support extended quorum
17
© Copyright IBM Corporation 2015
HyperSwap – Lab Setup
[Diagram: Storwize V7000 clustered system separated at distance; I/O Group 0 (control enclosure plus expansion enclosures) at Site 1 and I/O Group 1 (control enclosure plus expansion enclosures) at Site 2, with a host accessing a volume]
• A HyperSwap clustered system provides high availability between different sites or within the same data center
• I/O Group assigned to each site
• A copy of the data is at each site
• Host associated with a site
• If you lose access to I/O Group 0 from the host then the host multi-pathing will automatically access the data via I/O Group 1
• If you only lose the primary copy of the data then the HyperSwap function will forward requests to I/O Group 1 to service I/O
• If you lose I/O Group 0 entirely then the host multi-pathing will automatically access the other copy of the data on I/O Group 1
18
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• NAMING THE 3 DIFFERENT SITES:
• IBM_Storwize:ATS_OXFORD3:superuser> lssite
  id site_name
  1  Site1
  2  Site2
  3  Site3
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-03 1
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name GBURG-05 2
• IBM_Storwize:ATS_OXFORD3:superuser> chsite -name QUORUM 3
• LIST THE 4 CLUSTER NODES:
• IBM_Storwize:ATS_OXFORD3:superuser> lsnodecanister -delim :
  id:name:UPS_serial_number:WWNN:status:IO_group_id:IO_group_name:config_node:UPS_unique_id:hardware:iscsi_name:iscsi_alias:panel_name:enclosure_id:canister_id:enclosure_serial_number
  1:node1::500507680200005D:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node1::30-1:30:1:78G00PV
  2:node2::500507680200005E:online:0:io_grp0:no::100:iqn.1986-03.com.ibm:2145.atsoxford3.node2::30-2:30:2:78G00PV
  3:node3::500507680205EF71:online:1:io_grp1:yes::300:iqn.1986-03.com.ibm:2145.atsoxford3.node3::50-1:50:1:78REBAX
  4:node4::500507680205EF72:online:1:io_grp1:no::300:iqn.1986-03.com.ibm:2145.atsoxford3.node4::50-2:50:2:78REBAX
19
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• ASSIGN NODES TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node1
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-03 node2
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node3
• IBM_Storwize:ATS_OXFORD3:superuser> chnodecanister -site GBURG-05 node4
• ASSIGN HOSTS TO SITES (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-03 SAN355-04
• IBM_Storwize:ATS_OXFORD3:superuser> chhost -site GBURG-05 SAN3850-1
• ASSIGN QUORUM DISK ON CONTROLLER TO QUORUM SITE:
• IBM_Storwize:ATS_OXFORD3:superuser> chcontroller -site QUORUM DS8K-SJ9A
20
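The node-to-site assignments above can be sketched as a small loop rather than four hand-typed commands. This is a hypothetical illustration using the lab's site and node names; `echo` stands in for running each command on the cluster CLI.

```shell
# Hypothetical sketch: nodes 1 and 2 belong to site GBURG-03, nodes 3
# and 4 to GBURG-05. Each "site node" pair is word-split with `set --`.
assigns=$(for pair in "GBURG-03 node1" "GBURG-03 node2" "GBURG-05 node3" "GBURG-05 node4"; do
  set -- $pair
  echo "chnodecanister -site $1 $2"
done)
printf '%s\n' "$assigns"
```

The same pattern applies to the `chhost -site` assignments, which follow the identical site mapping for each host.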
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• LIST QUORUM LOCATIONS:
• IBM_Storwize:ATS_OXFORD3:superuser> lsquorum
  quorum_index status id name        controller_id controller_name active object_type override
  0            online 79                                           no     drive       no
  1            online 13                                           no     drive       no
  2            online 0  DS8K_mdisk0 1             DS8K-SJ9A       yes    mdisk       no
• DEFINE TOPOLOGY:
• IBM_Storwize:ATS_OXFORD3:superuser> chsystem -topology hyperswap
21
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• MAKE VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_VOL20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
  Virtual Disk, id [2], successfully created
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_AUX20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL
• MAKE CHANGE VOLUME VDISKS (SITE 1 MAIN, SITE 2 AUX):
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV10 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG03_CV20 -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV10 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdisk -name GBURG05_CV20 -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand
22
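Each HyperSwap volume needs the same four vdisks: a data volume at each site and a thin-provisioned change volume at each site. A hypothetical sketch of generating those commands from a list of volume numbers, using the lab's naming and pool conventions, might look like:

```shell
# Hypothetical sketch: for each volume number, emit the four mkvdisk
# commands from the slide (site 1 data, site 2 aux, plus a thin change
# volume per site). Adapt names and pools to your own configuration.
cmds=$(for n in 10 20; do
  echo "mkvdisk -name GBURG03_VOL${n} -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL"
  echo "mkvdisk -name GBURG05_AUX${n} -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL"
  echo "mkvdisk -name GBURG03_CV${n} -size 10 -unit gb -iogrp 0 -mdiskgrp GBURG-03_POOL -rsize 1% -autoexpand"
  echo "mkvdisk -name GBURG05_CV${n} -size 10 -unit gb -iogrp 1 -mdiskgrp GBURG-05_POOL -rsize 1% -autoexpand"
done)
printf '%s\n' "$cmds"
```

Generating the commands this way keeps the -iogrp/-mdiskgrp pairing consistent across sites, which is easy to get wrong when typing eight commands by hand.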
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• ADD ACCESS TO THE MAIN SITE VDISKS FROM THE OTHER SITE (IOGRP1):
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> addvdiskaccess -iogrp 1 GBURG03_VOL20
• DEFINE CONSISTENCY GROUP:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcconsistgrp -name GBURG_CONGRP
• DEFINE THE TWO REMOTE COPY RELATIONSHIPS:
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL10 -aux GBURG05_AUX10 -cluster ATS_OXFORD3 -activeactive -name VOL10REL -consistgrp GBURG_CONGRP
• IBM_Storwize:ATS_OXFORD3:superuser> mkrcrelationship -master GBURG03_VOL20 -aux GBURG05_AUX20 -cluster ATS_OXFORD3 -activeactive -name VOL20REL -consistgrp GBURG_CONGRP
23
© Copyright IBM Corporation 2015
HyperSwap – Configuration
• ADD THE CHANGE VOLUMES TO EACH RELATIONSHIP DEFINED:
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -masterchange GBURG03_CV20 VOL20REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV10 VOL10REL
• IBM_Storwize:ATS_OXFORD3:superuser> chrcrelationship -auxchange GBURG05_CV20 VOL20REL
• At this point the replication between master and aux volumes starts automatically
• The remote copy relationship state will be “inconsistent copying” until the primary and secondary volumes are in sync; then the state changes to “consistent synchronized”
• MAP HYPERSWAP VOLUMES TO HOST:
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL10
• IBM_Storwize:ATS_OXFORD3:superuser> mkvdiskhostmap -host SAN355-04 GBURG03_VOL20
** Note that we map only the primary/master volume to the host, not the secondary/auxiliary volume of the active-active relationship created earlier
24
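The per-volume configuration steps on the last few slides can be collected into one sequence. The sketch below is a hypothetical helper that prints the full command list for one volume pair (extra access I/O group, active-active relationship, change volumes, host mapping); all names come from the lab setup, and the output is meant to be reviewed and then run on the cluster CLI.

```shell
# Hypothetical helper: print the command sequence that turns one
# master/aux volume pair (already created) into a HyperSwap volume.
make_hyperswap_cmds() {
  vol=$1   # volume number, e.g. 10
  cat <<EOF
addvdiskaccess -iogrp 1 GBURG03_VOL${vol}
mkrcrelationship -master GBURG03_VOL${vol} -aux GBURG05_AUX${vol} -cluster ATS_OXFORD3 -activeactive -name VOL${vol}REL -consistgrp GBURG_CONGRP
chrcrelationship -masterchange GBURG03_CV${vol} VOL${vol}REL
chrcrelationship -auxchange GBURG05_CV${vol} VOL${vol}REL
mkvdiskhostmap -host SAN355-04 GBURG03_VOL${vol}
EOF
}

seq10=$(make_hyperswap_cmds 10)
printf '%s\n' "$seq10"
```

Note the ordering: the relationship and its change volumes are configured before the host mapping, matching the sequence of the slides, and only the master volume is mapped to the host.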
© Copyright IBM Corporation 2015
HyperSwap – Configuration
25
© Copyright IBM Corporation 2015
Demonstration
• Show Host View of Its Storage
• Demo Scenario 1
• Fail paths from host at site 1 to its primary storage controller at site 1
• Demo Scenario 2
• Fail externally virtualized MDisk used as active quorum disk
• Fail paths to externally virtualized storage system providing active quorum disk
• Demo Scenario 3
• Configure existing Volume as HyperSwap Volume
• Demo Scenario 4
• Fail entire storage controller at site 2 for newly configured HyperSwap Volume
26
© Copyright IBM Corporation 2015
Miscellaneous
• Recommended to use 8 FC ports per node canister so we can dedicate some ports strictly to the synchronous mirroring between the I/O groups
• Link to HyperSwap whitepaper in Techdocs
• https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
27