ATS Masters - Storage
© 2012 IBM Corporation
Stretched SVC Cluster
Agenda
 A little history
 Options and issues
 Requirements and restrictions
Terminology
 SVC Split I/O Group = SVC Stretched Cluster = SVC Split Cluster
– Two independent SVC nodes in two independent sites + one independent site for the quorum
– Acts just like a single I/O Group with distributed high availability
 Distributed I/O Groups – NOT an HA configuration and not recommended; if one site fails:
– A manual volume move is required
– Some data is still in the cache of the offline I/O Group
[Diagram: one I/O Group stretched across Site 1 and Site 2, versus I/O Group 1 and I/O Group 2 each confined to a single site]
 Storwize V7000 Split I/O Group is not an option:
– A single enclosure includes both nodes
– Physical distribution across two sites is not possible
Early Days - separate racks in machine room
[Diagram: servers at the top; SVC + UPS pairs in separate racks; Fabric A and Fabric B switches]
Protected from physical problems: plumbing leak, fire – “a chance to tackle the guy with a chainsaw”
Where’s the quorum disk?
Lots of cross-cabling
Fabric presence on both sides of machine room
[Diagram: servers and SVC + UPS pairs on both sides of the machine room; Fabric A and Fabric B each have switches on both sides]
Cross-cabling only needed for fabrics and SVC nodes
Requirement for zero-hop between nodes in an I/O Group
Needed to “zone away” ISL connections within an I/O Group
Each fabric has two sets of zones for the two sets of ports.
Quorum concern remains
Server cluster also stretched across machine room
[Diagram: a server cluster stretched across the machine room; SVC + UPS pairs and Fabric A/B switches on both sides]
Cross-cabling only needed for fabrics and SVC nodes
Requirement for zero-hop between nodes in an I/O Group
Needed to “zone away” ISL connections within an I/O Group
Each fabric has two sets of zones for the two sets of ports.
Quorum concern remains
Can do cluster failover
Where’s the storage?
SVC V4.3 Vdisk (Volume) Mirroring
[Diagram: stretched server cluster; SVC + UPS pairs and Fabric A/B switches on both sides]
Cross-cabling only needed for fabrics and SVC nodes
Requirement for zero-hop between nodes in an I/O Group
Needed to “zone away” ISL connections within an I/O Group
Each fabric has two sets of zones for the two sets of ports.
Quorum concern remains
SVC Volume has a copy on each side of the machine room
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
SVC V5.1 – put LW SFPs in nodes for 10km distance
[Diagram: stretched server cluster; SVC + UPS pairs and Fabric A/B switches at both ends]
LW SFPs with single-mode fibre allow up to 10 km
Requirement for zero-hop between nodes in an I/O Group
Needed to “zone away” ISL connections within an I/O Group
Each fabric has two sets of zones for the two sets of ports.
Quorum concern remains
SVC Volume has a copy at each site
Where’s the quorum?
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
SVC V5.1 – stretched cluster with 3rd site – 1
[Diagram: stretched server cluster across two sites; SVC + UPS and Fabric A/B at each site; LW SFPs with single-mode fibre allow up to 10 km]
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
Ok, but?
SVC V5.1 – stretched cluster with 3rd site – 2
[Diagram: as above, with the quorum storage at a third site connected to both main sites]
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
SVC V5.1 – stretched cluster with 3rd site – 3
[Diagram: as above, showing all the extra links the third-site quorum requires]
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
LOTS OF CROSS CABLING!
SVC V6.3-option 1: Same as V5 but farther using DWDM
[Diagram: stretched server cluster; SVC + UPS and Fabric A/B at each site, linked by DWDM]
DWDM allows up to 40 km; speed drops after 10 km
Can do cluster failover, and mirroring allows a single Volume to be visible on both sides
Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
SVC V6.3-option 1 (cont)
[Diagram: server cluster, SVC + UPS, and Fabric A/B switches at each site, with ISLs between the sites]
User chooses the number of ISLs on the SAN
Still no hops between nodes in an I/O Group
These connections can be on DWDM too
Two ports per SVC node attached to local switches
Two ports per SVC node attached to remote switches via DWDM
Hosts and storage attached to local switches; need enough ISLs
3rd-site quorum (not shown) attached to both fabrics
Active or passive DWDM over shared single-mode fibre(s)
Fibre Channel distance vs. supported speed:
– 0–10 km: up to 8 Gbps
– 11–20 km: up to 4 Gbps
– 21–40 km: up to 2 Gbps
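The distance steps above are simple to encode. A minimal sketch in Python (assuming the thresholds are exactly the 10/20/40 km steps listed above; the function name is illustrative):

    def max_link_speed_gbps(distance_km: float) -> float:
        """Maximum supported FC link speed without dedicated
        inter-node ISLs, per the V6.3 DWDM support statement above."""
        if distance_km <= 10:
            return 8.0
        if distance_km <= 20:
            return 4.0
        if distance_km <= 40:
            return 2.0
        raise ValueError("beyond 40 km, the ISL-based configuration is required")

    print(max_link_speed_gbps(15))  # 4.0 (Gbps)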
SVC V6.3 option 2: Dedicated ISLs for nodes (can use DWDM)
[Diagram: server cluster and SVC + UPS at each site; public Fabrics A and B plus dedicated private Fabrics C and D; at least 1 ISL per private fabric between the sites, trunked if more than one]
User chooses the number of ISLs on the public SAN
Only half of all SVC ports are used for host I/O
Two ports per SVC node attached to the public fabrics
Two ports per SVC node attached to the dedicated fabrics
Hosts and storage attached to the public fabrics
3rd-site quorum (not shown) attached to the public fabrics
Distance now up to 300 km; applications may require less
Configuration Using Brocade Virtual Fabrics
[Diagram: at each site, one physical switch is partitioned into a public and a private logical switch, giving public Fabrics A/B and private Fabrics A/B across the two sites]
Physical switches are partitioned into two logical switches, forming two virtual fabrics
Note: ISLs/trunks for the private fabrics are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
Configuration Using CISCO VSANs
[Diagram: at each site, the switches carry a public and a private VSAN, giving public VSANs A/B and private VSANs A/B across the two sites]
Switches/fabrics are partitioned using VSANs
Note: ISLs/trunks for the private VSANs are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
1 ISL per I/O Group, configured as a trunk
User chooses the number of ISLs on the public SAN
Two ports per SVC node attached to public VSANs
Two ports per SVC node attached to private VSANs
Hosts and storage attached to public VSANs
3rd-site quorum (not shown) attached to public VSANs
Split I/O Group – Distance
 The new Split I/O Group configurations support distances of up to 300 km (the same recommendation as for Metro Mirror)
 However, for a typical Split I/O Group deployment only 1/2 to 1/3 of this distance is recommended, because there will be 2 or 3 times as much latency depending on the distance-extension technology used
 The following charts explain why
Metro/Global Mirror
 Technically, SVC supports distances of up to 8000 km: SVC will tolerate a round-trip delay of up to 80 ms between nodes
 The same code is used for all inter-node communication: Global Mirror, Metro Mirror, cache mirroring, clustering
– SVC’s proprietary SCSI protocol needs only 1 round trip
 In practice, applications are not designed to tolerate a write I/O latency of 80 ms
– Hence Metro Mirror is deployed for shorter distances (up to 300 km) and Global Mirror is used for longer distances
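As a quick check of how those two numbers relate, a minimal sketch (assuming ~200,000 km/s for light in fibre, the figure used on the latency chart later in this deck):

    SPEED_IN_FIBRE_KM_PER_MS = 200  # ~200,000 km/s

    def round_trip_ms(distance_km: float) -> float:
        # One round trip covers twice the one-way distance
        return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

    print(round_trip_ms(8000))  # 80.0 ms -> SVC's inter-node tolerance
    print(round_trip_ms(300))   #  3.0 ms -> the Metro Mirror comfort zone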
Metro Mirror: application latency = 1 long-distance round trip
Data center 1 (Server Cluster 1, SVC Cluster 1) – Data center 2 (Server Cluster 2, SVC Cluster 2)
1) Write request from host
2) Xfer ready to host
3) Data transfer from host
4) Metro Mirror data transfer to remote site – 1 long-distance round trip
5) Acknowledgment
6) Write completed to host
7a) Write request from SVC; 8a) Xfer ready to SVC; 9a) Data transfer from SVC; 10a) Write completed to SVC
7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC
Steps 1–6 affect application latency; steps 7–10 should not affect the application
Split I/O Group – Preferred Node local, Write uses 1 round trip
Data center 1 (Server Cluster 1, Node 1) – Data center 2 (Server Cluster 2, Node 2): one SVC Split I/O Group
1) Write request from host
2) Xfer ready to host
3) Data transfer from host
4) Cache mirror data transfer to remote site – 1 round trip
5) Acknowledgment
6) Write completed to host
7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC – 2 round trips, but the SVC write cache hides this latency from the host
Steps 1–6 affect application latency; steps 7–10 should not affect the application
Split I/O Group – Preferred node remote, Write = 3 round trips
Data center 1 (Server Cluster 1, Node 1) – Data center 2 (Server Cluster 2, Node 2): one SVC Split I/O Group
1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host – 2 round trips, host to the remote preferred node
4) Cache mirror data transfer back to the other site – 1 round trip
5) Acknowledgment
6) Write completed to host
7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC – 2 round trips, but the SVC write cache hides this latency from the host
Steps 1–6 affect application latency; steps 7–10 should not affect the application
Help with some round trips
 Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one round trip’s worth of latency for SCSI Write commands
 These devices are already supported for use with SVC
 No benefit or impact for inter-node communication
 Does benefit host to remote SVC I/Os
 Does benefit SVC to remote storage controller I/Os
Split I/O Group – Preferred Node Remote with help, 2 round trips
Data center 1 (Server Cluster 1, Node 1) – Data center 2 (Server Cluster 2, Node 2), with distance extenders between the sites
1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host
4) Write + data transfer to remote site – 1 round trip
5) Write request to SVC; 6) Xfer ready from SVC; 7) Data transfer to SVC
8) Cache mirror data transfer to remote site – 1 round trip
9) Acknowledgment
10) Write completed from SVC
11) Write completion to remote site
12) Write completed to host
13) Write request from SVC; 14) Xfer ready to SVC; 15) Data transfer from SVC
16) Write + data transfer to remote site – 1 round trip, hidden from the host
17) Write request to storage; 18) Xfer ready from storage; 19) Data transfer to storage; 20) Write completed from storage
21) Write completion to remote site
22) Write completed to SVC
Steps 1 to 12 affect application latency; steps 13 to 22 should not affect the application
Long Distance Impact
 Additional latency because of long distance
 Speed of light in glass: ~200,000 km/s
– 1 km distance = 2 km round trip
 Additional round-trip time per distance:
– 1 km = 0.01 ms
– 10 km = 0.10 ms
– 25 km = 0.25 ms
– 100 km = 1.00 ms
– 300 km = 3.00 ms
 SCSI protocol:
– Read: 1 round trip = 0.01 ms/km
• Initiator requests data and target provides data
– Write: 2 round trips = 0.02 ms/km
• Initiator announces the amount of data, target acknowledges
• Initiator sends data, target acknowledges
– SVC’s proprietary SCSI protocol for node-to-node traffic needs only 1 round trip
 Fibre Channel frame:
– User data per FC frame (Fibre Channel payload): up to 2048 bytes = 2 KB
• Even very small user data (< 2 KB) requires a complete frame
• Large user data is split across multiple frames
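A minimal sketch of the arithmetic above (round-trip counts per the SCSI rules just listed; 0.01 ms per km per round trip):

    ROUND_TRIP_MS_PER_KM = 0.01   # 2 km of fibre per km of distance at ~200,000 km/s

    ROUND_TRIPS = {
        "read": 1,              # request + data
        "scsi_write": 2,        # announce/ack, then data/ack
        "svc_node_to_node": 1,  # SVC's proprietary SCSI protocol
    }

    def added_latency_ms(distance_km: float, op: str) -> float:
        # Distance-induced latency for a single I/O operation
        return ROUND_TRIPS[op] * ROUND_TRIP_MS_PER_KM * distance_km

    print(added_latency_ms(100, "scsi_write"))  # 2.0 ms at 100 km
    print(added_latency_ms(300, "read"))        # 3.0 ms at 300 km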
SVC Split I/O Group – Quorum Disk
 SVC creates three quorum disk candidates on the first three managed MDisks
 One quorum disk is active
 SVC 5.1 and later:
– SVC can manage quorum disks very flexibly, but a Split I/O Group configuration requires a well-defined setup
– -> Disable the dynamic quorum feature using the “override” flag for V6.2 and later; see the sketch below
• svctask chquorum -mdisk <mdisk_id or name> -override yes
• This flag is currently not configurable in the GUI
 “Split brain” situation:
– SVC uses the quorum disk to decide which SVC node(s) should survive
 No access to the active quorum disk:
– In a standard situation (no split brain), SVC selects one of the other quorum candidates as the active quorum
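As a sketch of the setup these bullets describe, using the chquorum command from the bullet above plus lsquorum to verify (the quorum indices and MDisk IDs below are illustrative, not from this deck):

    svcinfo lsquorum                             # list the three quorum candidates; one is marked active
    svctask chquorum -override yes -mdisk 5 0    # pin quorum index 0 to MDisk 5, dynamic quorum off
    svctask chquorum -override yes -mdisk 12 1
    svctask chquorum -override yes -mdisk 20 2

In a Split I/O Group setup, the intent is that the active quorum candidate sits at Site 3 and the other two candidates sit at Sites 1 and 2.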
SVC Split I/O Group – Quorum Disk (cont.)
 Quorum disk requirements: only certain storage is supported
– Must be placed in a third, independent site
– Storage box must be Fibre Channel connected
– ISLs with one hop to the quorum storage system are supported
 Supported infrastructure:
– WDM equipment similar to Metro Mirror
– Link requirements similar to Metro Mirror
• Max round-trip delay time is 80 ms, 40 ms in each direction
– FCIP to the quorum disk can be used with the following requirements:
• Max round-trip delay time is 80 ms, 40 ms in each direction
• If fabrics are not merged, routers are required
• Independent long-distance equipment from each site to Site 3 is required
 iSCSI storage not supported
 Requirement for active/passive storage devices (like DS3/4/5K):
– Each quorum disk storage controller must be connected to both sites
3rd-site quorum supports Extended Quorum
Split I/O Group without ISLs between SVC nodes (Classic Split I/O Group)
Distance vs. maximum link speed:
– 0 km to 10 km: 8 Gbps
– over 10 km to 20 km: 4 Gbps
– over 20 km to 40 km: 2 Gbps
 SVC 6.3:
– Similar to the support statement in SVC 6.2
– Additional: support for active WDM devices
– Quorum disk requirements similar to the Remote Copy (MM/GM) requirements:
• Max. 80 ms round-trip delay time, 40 ms in each direction
• FCIP connectivity supported for the quorum disk
• No support for iSCSI storage systems
 SVC 6.2 and earlier:
– Two ports on each SVC node needed to be connected to the “remote” switch
– No ISLs between SVC nodes
– Third site required for the quorum disk
– ISLs with max. 1 hop can be used for storage traffic and quorum disk attachment
 SVC 6.2 (late) update:
– Distance extension to max. 40 km with passive WDM devices
• Up to 20 km at 4 Gb/s or up to 40 km at 2 Gb/s
• LongWave SFPs required for long distances
• LongWave SFPs must be supported by the switch and WDM vendor
Split I/O Group without ISLs between SVC nodes
[Diagram: SVC node 1 at Site 1 and SVC node 2 at Site 2, each with a local server, storage, and two switches; the active quorum sits at Site 3, connected to both fabrics]
Split I/O Group without ISLs between SVC nodes
[Diagram: as above, with dedicated switches at Site 3 and server ISLs between the sites; the active quorum is at Site 3]
Split I/O Group without ISLs between SVC nodes
[Diagram: as above; the active quorum at Site 3 is a DS4700 whose controllers A and B are connected to both sites]
SAN and Buffer-to-Buffer Credits
 Buffer-to-Buffer (B2B) credits
– Are used as a flow-control method by Fibre Channel and represent the number of frames a port can store
• Enough credits provide the best performance
 Light must cover the distance twice:
– Submit data from Node 1 to Node 2
– Submit the acknowledgment from Node 2 back to Node 1
 The B2B calculation depends on link speed and distance
– The number of frames in flight increases in proportion to the link speed
Split I/O Group without ISLs: long-distance configuration
 SVC Buffer-to-Buffer credits
– 2145-CF8 / CG8 nodes have 41 B2B credits
• Enough for 10 km at 8 Gb/s with a 2 KB payload
– All earlier models:
• Use 1/2/4 Gb/s Fibre Channel adapters
• Have 8 B2B credits, which is enough for 4 km at 4 Gb/s
 Recommendation 1:
– Use CF8 / CG8 nodes for more than 4 km distance for best performance
 Recommendation 2:
– SAN switches do not auto-negotiate B2B credits, and 8 B2B credits is the default setting, so change the B2B credits in the switch to 41 as well
Link speed | FC frame length | Required B2B credits for 10 km distance | Max distance with 8 B2B credits
1 Gb/s | 1 km | 5 | 16 km
2 Gb/s | 0.5 km | 10 | 8 km
4 Gb/s | 0.25 km | 20 | 4 km
8 Gb/s | 0.125 km | 40 | 2 km
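The whole table reduces to one rule: each B2B credit covers two frame-lengths of one-way distance. A minimal sketch reproducing the table above (frame lengths per the table; the function names are illustrative):

    import math

    FRAME_KM = {1: 1.0, 2: 0.5, 4: 0.25, 8: 0.125}  # full FC frame length on the fibre

    def credits_needed(distance_km: float, gbps: int) -> int:
        # One credit per two frame-lengths of one-way distance
        return math.ceil(distance_km / (2 * FRAME_KM[gbps]))

    def max_distance_km(credits: int, gbps: int) -> float:
        return credits * 2 * FRAME_KM[gbps]

    print(credits_needed(10, 8))   # 40 -> why the 41 credits on CF8/CG8 cover 10 km at 8 Gb/s
    print(max_distance_km(8, 4))   # 4.0 km with the default 8 switch credits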
Split I/O Group with ISLs between SVC nodes
[Diagram: private SANs 1/2 and public SANs 1/2 at Sites 1 and 2, connected site-to-site by ISLs over WDM; servers attach to the public SANs; quorum candidate storage at Sites 1 and 2, active quorum (controllers A and B) at Site 3]
Long distance with ISLs between SVC nodes
 Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one round trip’s worth of latency for SCSI Write commands
– These devices are already supported for use with SVC
– No benefit or impact for inter-node communication
– Does benefit host to remote SVC I/Os
– Does benefit SVC to remote storage controller I/Os
 Consequences:
– Metro Mirror is deployed for shorter distances (up to 300 km)
– Global Mirror is used for longer distances
– The supported Split I/O Group distance will depend on application latency restrictions
• 100 km for live data mobility (150 km with distance extenders)
• 300 km for fail-over / recovery scenarios
• SVC supports up to 80 ms latency, far greater than most application workloads would tolerate
Split I/O Group Configuration: Examples
[Diagram: the same Split I/O Group with ISLs configuration as shown above]
Example 1)
Configuration with live data mobility:
 VMware ESX with VMotion or AIX with Live Partition Mobility
 Distance between sites: 12 km
-> SVC 6.3: configurations with or without ISLs are supported
-> SVC 6.2: only the configuration without ISLs is supported
Example 2)
Configuration with live data mobility:
 VMware ESX with VMotion or AIX with Live Partition Mobility
 Distance between sites: 70 km
-> Only the SVC 6.3 Split I/O Group with ISLs is supported
Example 3)
Configuration without live data mobility:
 VMware ESX with SRM, AIX HACMP, or MS Cluster
 Distance between sites: 180 km
-> Only the SVC 6.3 Split I/O Group with ISLs is supported, or a Metro Mirror configuration
-> Because of the long distance: only in an active/passive configuration
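The three examples follow one decision rule. A minimal sketch (thresholds taken from the examples and the distance slides above; the function and its return strings are illustrative):

    def supported_configs(distance_km: float, live_mobility: bool,
                          extenders: bool = False) -> list:
        """Which stretched-cluster options fit, per the SVC 6.2/6.3 rules above."""
        configs = []
        if distance_km <= 40:
            configs.append("without ISLs (classic): SVC 6.2 passive / 6.3 active WDM")
        if live_mobility:
            if distance_km <= (150 if extenders else 100):
                configs.append("with ISLs (SVC 6.3): live data mobility")
        elif distance_km <= 300:
            configs.append("with ISLs (SVC 6.3) for fail-over/recovery, or Metro Mirror")
        return configs

    print(supported_configs(12, live_mobility=True))    # Example 1: both options
    print(supported_configs(70, live_mobility=True))    # Example 2: with ISLs only
    print(supported_configs(180, live_mobility=False))  # Example 3: with ISLs or MM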
Split I/O Group - Disaster Recovery
 Split I/O Groups provide distributed HA functionality
 Use of Metro Mirror / Global Mirror is recommended for disaster protection
 Both major Split I/O Group sites must be connected to the MM / GM infrastructure
 Without ISLs between SVC nodes:
– All SVC ports can be used for MM / GM connectivity
 With ISLs between SVC nodes:
– Only MM / GM connectivity to the public SAN network is supported
– Only 2 FC ports per SVC node are available for MM or GM, and these are also used for host-to-SVC and SVC-to-disk I/O
• Thus capability is currently limited
• Congestion on GM ports would affect host I/O, but not node-to-node traffic (heartbeats, etc.)
• Might need more than one cluster to handle all traffic
– More expensive; more ports and paths to deal with
Summary
 SVC Split I/O Group:
– Is a very powerful solution for automatic and fast handling of storage failures
– Transparent to servers
– A perfect fit in a virtualized environment (like VMware VMotion, AIX Live Partition Mobility)
– Transparent to all OS-based clusters
– Distances up to 300 km (SVC 6.3) are supported
 Two possible scenarios:
– Without ISLs between SVC nodes (classic SVC Split I/O Group):
• Up to 40 km distance, with support for active (SVC 6.3) and passive (SVC 6.2) WDM
– With ISLs between SVC nodes:
• Up to 100 km distance for live data mobility (150 km with distance extenders)
• Up to 300 km for fail-over / recovery scenarios
 The long-distance performance impact can be reduced by:
– Load distribution across both sites
– Appropriate SAN Buffer-to-Buffer credits
 
Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?Elizabeth Buie - Older adults: Are we really designing for our future selves?
Elizabeth Buie - Older adults: Are we really designing for our future selves?
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Removing Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software FuzzingRemoving Uninteresting Bytes in Software Fuzzing
Removing Uninteresting Bytes in Software Fuzzing
 
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5UiPath Test Automation using UiPath Test Suite series, part 5
UiPath Test Automation using UiPath Test Suite series, part 5
 
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024GraphSummit Singapore | The Art of the  Possible with Graph - Q2 2024
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024
 

Masters stretched svc-cluster-2012-04-13 v2

  • 9. SVC V5.1 – stretched cluster with 3rd site (1), continued:
– Each quorum disk storage controller must be connected to both sites
– Ok, but?
  • 10. SVC V5.1 – stretched cluster with 3rd site (2)
[Diagram: server clusters, SVC nodes + UPS, and fabrics A/B stretched across both sites; LW SFPs with SM fibre allow up to 10 km]
 Can do cluster failover, and mirroring allows a single volume to be visible on both sides
 Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
  • 11. SVC V5.1 – stretched cluster with 3rd site (3)
[Diagram: as slide 10]
 Can do cluster failover, and mirroring allows a single volume to be visible on both sides
 Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
 LOTS OF CROSS CABLING!
  • 12. SVC V6.3, option 1: same as V5 but farther using DWDM
[Diagram: as slide 10, with DWDM links between the sites]
 DWDM allows up to 40 km; speed drops after 10 km
 Can do cluster failover, and mirroring allows a single volume to be visible on both sides
 Active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
  • 13. SVC V6.3, option 1 (cont.)
 Two ports per SVC node attached to local switches; two ports per SVC node attached to remote switches via DWDM
 Hosts and storage attached to local switches; enough ISLs are needed
 3rd-site quorum (not shown) attached to both fabrics
 User chooses the number of ISLs on the SAN; these connections can be on DWDM too
 Still no hops between nodes in an I/O Group
 Active or passive DWDM over shared single-mode fibre(s)
 Supported Fibre Channel distance and speed:
– 0–10 km: up to 8 Gbps
– 11–20 km: up to 4 Gbps
– 21–40 km: up to 2 Gbps
  • 14. SVC V6.3, option 2: dedicated ISLs for nodes (can use DWDM)
 Public fabrics A/B and private fabrics C/D at each site
 At least 1 ISL between the private fabrics; trunk if more than 1
 User chooses the number of ISLs on the public SAN
 Two ports per SVC node attached to the public fabrics; two ports per SVC node attached to the dedicated private fabrics
 Only half of all SVC ports are used for host I/O
 Hosts and storage attached to the public fabrics
 3rd-site quorum (not shown) attached to the public fabrics
 Distance now up to 300 km; applications may require less
  • 15. Configuration using Brocade Virtual Fabrics
 Physical switches are partitioned into two logical switches, giving two virtual fabrics (public and private) per physical fabric
 ISLs/trunks for the private fabrics are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
  • 16. Configuration using Cisco VSANs
 Switches/fabrics are partitioned using VSANs (public and private)
 1 ISL per I/O group, configured as a trunk; ISLs/trunks for the private VSANs are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
 User chooses the number of ISLs on the public SAN
 Two ports per SVC node attached to the public VSANs; two ports per SVC node attached to the private VSANs
 Hosts and storage attached to the public VSANs
 3rd-site quorum (not shown) attached to the public VSANs
  • 17. Split I/O Group – Distance
 The new Split I/O Group configurations support distances of up to 300 km (the same recommendation as for Metro Mirror)
 For a typical split I/O group deployment, however, only 1/2 to 1/3 of that distance is recommended, because the write path sees 2 to 3 times as much latency depending on which distance-extension technology is used
 The following charts explain why
  • 18. Metro/Global Mirror
 Technically, SVC supports distances up to 8,000 km
– SVC tolerates a round-trip delay of up to 80 ms between nodes
 The same code is used for all inter-node communication
– Global Mirror, Metro Mirror, Cache Mirroring, Clustering
– SVC's proprietary SCSI protocol has only 1 round trip
 In practice, applications are not designed to tolerate a write I/O latency of 80 ms
 Hence Metro Mirror is deployed for shorter distances (up to 300 km) and Global Mirror is used for longer distances
  • 19. Metro Mirror: application latency = 1 long-distance round trip
 Steps 1–6 affect application latency; steps 7–10 should not affect the application
 Data center 1 (Server Cluster 1, SVC Cluster 1):
– 1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host; 6) Write completed to host
– Destage: 7a) Write request from SVC; 8a) Xfer ready to SVC; 9a) Data transfer from SVC; 10a) Write completed to SVC
 Between the sites: 4) Metro Mirror data transfer to the remote site; 5) Acknowledgment (1 round trip)
 Data center 2 (Server Cluster 2, SVC Cluster 2):
– Destage: 7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC
  • 20. Split I/O Group – preferred node local: write uses 1 round trip
 Steps 1–6 affect application latency; steps 7–10 should not affect the application
 Data center 1 (Node 1): 1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host; 6) Write completed to host
 Between the sites: 4) Cache mirror data transfer to the remote site; 5) Acknowledgment (1 round trip)
 Data center 2 (Node 2), destage: 7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC (2 round trips, but the SVC write cache hides this latency from the host)
  • 21. Split I/O Group – preferred node remote: write = 3 round trips
 Steps 1–6 affect application latency; steps 7–10 should not affect the application
 The host in data center 1 writes to its preferred node (Node 2) in data center 2: 1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host; 6) Write completed to host (2 round trips)
 4) Cache mirror data transfer back across the sites to Node 1; 5) Acknowledgment (1 round trip)
 Destage: 7b) Write request from SVC; 8b) Xfer ready to SVC; 9b) Data transfer from SVC; 10b) Write completed to SVC (2 round trips, but the SVC write cache hides this latency from the host)
  • 22. Help with some round trips
 Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one round trip's worth of latency for SCSI write commands
 These devices are already supported for use with SVC
 No benefit or impact for inter-node communication
 Does benefit host to remote SVC I/Os
 Does benefit SVC to remote storage controller I/Os
  • 23. Split I/O Group – preferred node remote with help: 2 round trips
 Steps 1–12 affect application latency; steps 13–22 should not affect the application
 1) Write request from host; 2) Xfer ready to host; 3) Data transfer from host
 4) Write + data transfer to the remote site via the distance extenders (1 round trip); 5) Write request to SVC; 6) Xfer ready from SVC; 7) Data transfer to SVC
 8) Cache mirror data transfer to the remote site; 9) Acknowledgment (1 round trip); 10) Write completed from SVC; 11) Write completion to the remote site; 12) Write completed to host
 Destage (1 round trip, hidden from the host): 13) Write request from SVC; 14) Xfer ready to SVC; 15) Data transfer from SVC; 16) Write + data transfer to the remote site; 17) Write request to storage; 18) Xfer ready from storage; 19) Data transfer to storage; 20) Write completed from storage; 21) Write completion to the remote site; 22) Write completed to SVC
  • 24. Long Distance Impact
 Additional latency because of long distance
 Speed of light in glass: ~200,000 km/sec, so 1 km of distance means 2 km of round trip
 Additional round-trip time due to distance:
– 1 km = 0.01 ms
– 10 km = 0.10 ms
– 25 km = 0.25 ms
– 100 km = 1.00 ms
– 300 km = 3.00 ms
 SCSI protocol:
– Read: 1 I/O operation = 0.01 ms/km (initiator requests data, target provides data)
– Write: 2 I/O operations = 0.02 ms/km (initiator announces the amount of data and the target acknowledges; then the initiator sends the data and the target acknowledges)
– SVC's proprietary SCSI protocol for node-to-node traffic has only 1 round trip
 Fibre Channel frames:
– User data per FC frame (Fibre Channel payload): up to 2048 bytes = 2 KB
– Even very small user data (< 2 KB) requires a complete frame
– Large user data is split across multiple frames
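To make the arithmetic on this slide concrete, here is a minimal sketch (in Python, not part of the original deck) that combines the 0.01 ms/km round-trip figure with the round-trip counts from slides 19–23; all function and variable names are illustrative.

# Sketch only: latency figures from the slides; names are illustrative.
RT_MS_PER_KM = 0.01  # one round trip per km of distance (light in glass ~200,000 km/s)

# Round trips sitting in the application's write path, per slides 19-23.
ROUND_TRIPS = {
    "metro_mirror": 1,           # slide 19: MM adds 1 long-distance round trip
    "preferred_node_local": 1,   # slide 20: cache mirror only
    "preferred_node_remote": 3,  # slide 21: 2 host-to-node + 1 cache mirror
    "remote_with_extender": 2,   # slide 23: extender removes 1 round trip
}

def write_latency_ms(distance_km, scenario):
    # Extra application write latency added by the inter-site distance.
    return ROUND_TRIPS[scenario] * distance_km * RT_MS_PER_KM

for km in (10, 100, 300):
    print(km, "km:",
          write_latency_ms(km, "preferred_node_local"), "ms (preferred node local) /",
          write_latency_ms(km, "preferred_node_remote"), "ms (preferred node remote)")
# e.g. 300 km with a remote preferred node: 3 * 300 * 0.01 = 9.0 ms added per write

This is also why the deck recommends only 1/2 to 1/3 of the 300 km Metro Mirror distance for a split I/O group: the same kilometre is paid for 2 or 3 round trips instead of 1.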
  • 25. SVC Split I/O Group – Quorum Disk
 SVC creates three quorum disk candidates on the first three managed MDisks; one quorum disk is active
 SVC 5.1 and later:
– SVC can manage quorum disks very flexibly, but a Split I/O Group configuration requires a well-defined setup
– Disable the dynamic quorum feature using the "override" flag (V6.2 and later):
• svctask chquorum -MDisk <mdisk_id or name> -override yes
• This flag is currently not configurable in the GUI
 "Split brain" situation: SVC uses the quorum disk to decide which SVC node(s) should survive
 No access to the active quorum disk: in a standard situation (no split brain), SVC selects one of the other quorum candidates as the active quorum
  • 26. SVC Split I/O Group – Quorum Disk (cont.)
 Quorum disk requirements:
– Only certain storage systems are supported
– Must be placed at a third, independent site
– The storage system must be Fibre Channel connected
– ISLs with one hop to the quorum storage system are supported
 Supported infrastructure:
– WDM equipment similar to Metro Mirror
– Link requirements similar to Metro Mirror: max round-trip delay of 80 ms (40 ms in each direction)
– FCIP to the quorum disk can be used, with the following requirements:
• Max round-trip delay of 80 ms (40 ms in each direction)
• If the fabrics are not merged, routers are required
• Independent long-distance equipment from each site to Site 3 is required
 iSCSI storage is not supported
 Requirement for active/passive storage devices (like DS3/4/5K): each quorum disk storage controller must be connected to both sites
  • 27. 3rd-site quorum supports Extended Quorum
  • 28. Split I/O Group without ISLs between SVC nodes (classic Split I/O Group)
 SVC 6.2 and earlier:
– Two ports on each SVC node had to be connected to the "remote" switch
– No ISLs between SVC nodes
– Third site required for the quorum disk
– ISLs with max. 1 hop can be used for storage traffic and quorum disk attachment
 SVC 6.2 (late) update: distance extension to max. 40 km with passive WDM devices
– Up to 20 km at 4 Gb/s or up to 40 km at 2 Gb/s
– LongWave SFPs required for long distances; they must be supported by the switch and WDM vendor
 SVC 6.3: similar to the SVC 6.2 support statement, with additions:
– Support for active WDM devices
– Quorum disk requirements similar to the Remote Copy (MM/GM) requirements: max. 80 ms round-trip delay (40 ms each direction), FCIP connectivity supported for the quorum disk, no support for iSCSI storage systems
 Distance and maximum link speed without ISLs:
– 0 km up to 10 km: 8 Gbps
– over 10 km up to 20 km: 4 Gbps
– over 20 km up to 40 km: 2 Gbps
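As a quick illustration (mine, not from the deck), the no-ISL distance and speed limits above can be expressed as a simple lookup; the function name is illustrative.

# Sketch only: distance/speed limits for the no-ISL configuration, per slide 28.
def max_link_speed_gbps(distance_km):
    if distance_km <= 10:
        return 8
    if distance_km <= 20:
        return 4
    if distance_km <= 40:
        return 2
    # Beyond 40 km the classic no-ISL configuration is not supported;
    # a configuration with ISLs between the nodes (slides 14, 34) is needed.
    raise ValueError("distance exceeds the 40 km limit for the no-ISL setup")

print(max_link_speed_gbps(15))  # -> 4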
  • 29. Split I/O Group without ISLs between SVC nodes
[Diagram: SVC node 1 with Server 1, Switches 1/2, and storage at Site 1; SVC node 2 with Server 2, Switches 3/4, and storage at Site 2; active quorum at Site 3]
  • 30. Split I/O Group without ISLs between SVC nodes
[Diagram: Switches 1–6 across Sites 1–3; SVC nodes 1 and 2 with Servers 1 and 2 plus Storage 2 and Storage 3 at Sites 1 and 2; server ISLs between the sites; active quorum at Site 3]
  • 31. Split I/O Group without ISLs between SVC nodes
[Diagram: as slide 30, with the active quorum on a DS4700 at Site 3, whose controllers A and B are each connected to both sites]
  • 32. SAN and Buffer-to-Buffer Credits
 Buffer-to-buffer (B2B) credits:
– Are a flow-control mechanism in Fibre Channel and represent the number of frames a port can store
– Provide the best performance when sized correctly
 Light must cover the distance twice:
– Data is submitted from Node 1 to Node 2
– The acknowledgment travels from Node 2 back to Node 1
 The B2B credit calculation depends on link speed and distance
– The number of frames in flight increases in proportion to the link speed
  • 33. Split I/O Group without ISLs: long-distance configuration
 SVC buffer-to-buffer credits:
– 2145-CF8 / CG8 nodes have 41 B2B credits, enough for 10 km at 8 Gb/s with a 2 KB payload
– All earlier models use 1/2/4 Gb/s Fibre Channel adapters and have 8 B2B credits, enough for 4 km at 4 Gb/s
 Recommendation 1: use CF8/CG8 nodes for distances over 4 km for best performance
 Recommendation 2: SAN switches do not auto-negotiate B2B credits, and 8 B2B credits is the default setting, so raise the B2B credits on the switch to 41 as well

Link speed | FC frame length | B2B credits required for 10 km | Max distance with 8 B2B credits
1 Gb/s     | 1 km            | 5                              | 16 km
2 Gb/s     | 0.5 km          | 10                             | 8 km
4 Gb/s     | 0.25 km         | 20                             | 4 km
8 Gb/s     | 0.125 km        | 40                             | 2 km
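The table's credit arithmetic can be checked with a small sketch (mine, not IBM's). The per-speed "frame length" values come straight from the table; the relation below (each credit covering two frame lengths of fibre) is my reading of how the table's columns fit together, and the names are illustrative.

# Sketch only: buffer-to-buffer credit arithmetic matching the slide's table.
FRAME_LEN_KM = {1: 1.0, 2: 0.5, 4: 0.25, 8: 0.125}  # per link speed (Gb/s)

def credits_needed(distance_km, link_gbps):
    # Per the table, each credit covers two frame lengths of fibre,
    # e.g. 10 km at 8 Gb/s needs 10 / (2 * 0.125) = 40 credits.
    return distance_km / (2 * FRAME_LEN_KM[link_gbps])

def max_distance_km(credits, link_gbps):
    return credits * 2 * FRAME_LEN_KM[link_gbps]

print(credits_needed(10, 8))   # -> 40.0, so the CF8/CG8's 41 credits suffice
print(max_distance_km(8, 4))   # -> 4.0 km with the 8-credit switch default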
  • 34. Split I/O Group with ISLs between SVC nodes
[Diagram: public and private SANs 1/2 at Sites 1 and 2, linked through WDM; SVC-01 and SVC-02 with Servers 1–4; quorum candidate storage (controllers A/B) at Sites 1 and 2; active quorum at Site 3]
  • 35. Long distance with ISLs between SVC nodes
 Some switches and distance extenders use extra buffers and proprietary protocols to eliminate one round trip's worth of latency for SCSI write commands
– These devices are already supported for use with SVC
– No benefit or impact for inter-node communication
– Does benefit host to remote SVC I/Os
– Does benefit SVC to remote storage controller I/Os
 Consequences:
– Metro Mirror is deployed for shorter distances (up to 300 km)
– Global Mirror is used for longer distances
– The supported Split I/O Group distance depends on the application's latency restrictions:
• 100 km for live data mobility (150 km with distance extenders)
• 300 km for fail-over / recovery scenarios
• SVC supports up to 80 ms of latency, far more than most application workloads would tolerate
  • 36. Split I/O Group configuration: examples
[Diagram: as slide 34]
 Example 1) Configuration with live data mobility:
– VMware ESX with VMotion or AIX with Live Partition Mobility
– Distance between sites: 12 km
– SVC 6.3: configurations with or without ISLs are supported; SVC 6.2: only the configuration without ISLs is supported
 Example 2) Configuration with live data mobility:
– VMware ESX with VMotion or AIX with Live Partition Mobility
– Distance between sites: 70 km
– Only the SVC 6.3 Split I/O Group with ISLs is supported
 Example 3) Configuration without live data mobility:
– VMware ESX with SRM, AIX HACMP, or MS Cluster
– Distance between sites: 180 km
– Only the SVC 6.3 Split I/O Group with ISLs, or a Metro Mirror configuration, is supported
– Because of the long distance: only in an active/passive configuration
  • 37. Split I/O Group – Disaster Recovery
 Split I/O groups provide distributed HA functionality
 Metro Mirror / Global Mirror is recommended for disaster protection
 Both major Split I/O Group sites must be connected to the MM/GM infrastructure
 Without ISLs between SVC nodes: all SVC ports can be used for MM/GM connectivity
 With ISLs between SVC nodes:
– Only MM/GM connectivity to the public SAN network is supported
– Only 2 FC ports per SVC node are available for MM/GM, and those ports are also used for host-to-SVC and SVC-to-disk I/O
• Thus limited capability currently
• Congestion on GM ports would affect host I/O, but not node-to-node traffic (heartbeats, etc.)
• Might need more than one cluster to handle all traffic
– More expensive; more ports and paths to deal with
  • 38. Summary
 SVC Split I/O Group:
– Is a very powerful solution for automatic and fast handling of storage failures
– Is transparent to servers
– Is a perfect fit in a virtualized environment (like VMware VMotion, AIX Live Partition Mobility)
– Is transparent to all OS-based clusters
– Supports distances of up to 300 km (SVC 6.3)
 Two possible scenarios:
– Without ISLs between SVC nodes (classic SVC Split I/O Group): up to 40 km, with support for active (SVC 6.3) and passive (SVC 6.2) WDM
– With ISLs between SVC nodes: up to 100 km for live data mobility (150 km with distance extenders), and up to 300 km for fail-over / recovery scenarios
 The long-distance performance impact can be reduced by:
– Load distribution across both sites
– Appropriate SAN buffer-to-buffer credits