© Copyright IBM Corporation 2018.
IBM Z and DS8880 IO Infrastructure
Modernization
Brian Sherman
IBM Distinguished Engineer
bsherman@ca.ibm.com
© Copyright IBM Corporation 2018.
Broadest Storage and Software Defined Portfolio in the Industry
[Portfolio diagram] Scale-out file, block and object storage, virtualized block, backup and archive, copy data management, and monitoring, control and management software. Products shown: IBM Cloud Object Storage System, Elastic Storage Server, XIV Gen3, FlashSystem A9000, FlashSystem A9000R, FlashSystem V9000, FlashSystem 900, SAN Volume Controller, Storwize V5000, V5030F, V7000 and V7000F, TS7700 Family, TS2900 Autoloader, LTO8 tape drives and tape libraries, and the DS8884, DS8884F, DS8886, DS8886F and DS8888F high-end systems, spanning high-performance computing, new-gen workloads, high-performance analytics, virtualization, VM data availability, private cloud, hybrid cloud, backup and archive, and disaster recovery use cases.
2
© Copyright IBM Corporation 2018.
IBM Systems Flash Storage Offerings Portfolio
• FlashSystem 900 – Application acceleration; IBM FlashCore™ Technology optimized
• FlashSystem A9000 / A9000R – Full-time data reduction; workloads: cloud, VDI, VMware; A9000R for large deployments and cloud service providers
• FlashSystem V9000 – Virtualizing the data center; full-time data reduction; workloads: mixed and cloud
• SAN Volume Controller (SVC) – Enterprise-class heterogeneous data services and selectable data reduction; simplified management and flexible consumption model
• Storwize V5030F (entry / mid-range) and Storwize V7000F (mid-range) – Enhanced data storage functions, economics and flexibility with sophisticated virtualization; virtualized, enterprise-class, flash-optimized, modular storage
• DS8884F (business class), DS8886F (enterprise class), DS8888F (analytic class with superior performance, targeting database acceleration and Spectrum Storage booster) – Business critical, deepest integration with IBM Z, POWER, AIX and IBM i; superior performance, highest availability, three-site/four-site replication and industry-leading reliability; attaches to IBM Power Systems, IBM Z, or heterogeneous flash storage environments
3
© Copyright IBM Corporation 2018.
DS8880 Unique Technology Advantages Provides Value
Infrastructure Matters for Business Critical Environments - Don’t settle for less than optimal
• IBM Servers and DS8880 Integration
• IBM Z, Power i and p
• Available years ahead of competitors
• OLTP and Batch Performance
• High Performance FICON (zHPF), zHyperWrite, zHyperLink and Db2 integration
• Cache - efficiency, optimization algorithms and Db2 exploitation
• Easy Tier advancements and Db2 reorg integration
• QoS - IO Priority Manager (IOPM), Workload Manager (WLM)
• Hybrid-Flash Array (HFA) and All-Flash Array (AFA) options
• Proven Availability
• Built on POWER8 technology, fully non-disruptive operations
• Designed for highest levels of availability and data access protection
• State-of-the-art Remote Copy
• Lowest latency with Metro Mirror, zHyperWrite
• Best RPO and lowest bandwidth requirements with Global Mirror
• Superior automated failover/failback with GDPS / Copy Services Manager (CSM)
• Ease of Use
• Common GUI across the IBM platform
• Simplified creation, assignment and management of volumes
• Total Cost of Ownership
• Hybrid Cloud integration
• Bandwidth and infrastructure savings through GM and zHPF
• Thin Provisioning with z/OS integration
Business Critical Storage for the World’s Most Demanding Clients 4
© Copyright IBM Corporation 2018.
Designing, developing,
and testing together is key
to unlocking true value
Synergy is much more than just interoperability:
DS8880 and IBM Z – Designed, developed and tested together
• IBM invented the IBM Z I/O architecture
• IBM Z, SAN and DS8880 are jointly developed
• IBM is best positioned for earliest delivery of new server support
• Shared technology between server team and storage team
• SAN is the key to 16Gbps, latency, and availability
• No other disk system delivers 24/7 availability and optimized performance for IBM Z
• Compatible ≠ identical – other vendors support new IBM Z features late or not at all
5
© Copyright IBM Corporation 2018.
IBM z14 and DS8880 – Continuing to Integrate by Design
• IBM zHyperLink
• Delivers less than 20µs response times
• All DS8880 models support zHyperLink technology
• Superior performance with FICON Express 16S+ and up to 9.4x more Flash capacity
• Automated tiering to the Cloud
• DFSMS policy control for DFSMShsm tiering to the cloud
• Amazon S3 support for Transparent Cloud Tiering (TCT)
• Cascading FlashCopy
• Allows target volume/dataset in one mapping to be the source volume/dataset in another mapping creating a cascade of
copied data
IBM DS8880 is the result of years of research and
collaboration between the IBM storage and IBM Z
teams, working together to transform businesses
with trust as a growth engine for the digital
economy
6
© Copyright IBM Corporation 2018.
Clear leadership position
90% greater revenue than next
closest competitor
Global market acceptance
#1 with 55% market share
19 of the top 20 world largest banks use
DS8000 for core banking data
Having the right infrastructure is essential:
IBM DS8000 is ranked #1 storage for the IBM Z
[Chart: market share 2Q 2017 – EMC, HP, Hitachi, IBM]
Source: Calculations based on data from IDC Worldwide Quarterly Disk Storage Systems Tracker, 2017Q2 (worldwide vendor revenue for external storage attached to z/OS hosts)
7
© Copyright IBM Corporation 2018.
DS8000 is the right infrastructure for Business Critical environments
•DS8000 is #1 storage for the IBM Z*
•19 of the top 20 world banks use DS8000 for core
banking
•First to integrate High Performance Flash into Tier 1
Storage
•Greater than 6-nines availability
•3 seconds RPO; automated site recovery well under
5 minutes
•First to deliver true four-way replication
19 of 20 Top Banks
*Source: Calculations based on data from IDC Worldwide Quarterly Disk Storage Systems Tracker, 2016Q3 (Worldwide vendor revenue for external storage attached to z/OS hosts)
9
© Copyright IBM Corporation 2018.
DS8880 Family
• IBM POWER8 based processors
• DS8884 Hybrid-Flash Array Model 984 and Model 84E Expansion Unit
• DS8884 All-Flash Array Model 984
• DS8886 Hybrid / All-Flash Array Model 985 and Model 85E Expansion Unit (single phase power)
• DS8886 Hybrid / All-Flash Array Model 986 and Model 86E Expansion Unit (three phase power)
• DS8888 All-Flash Array Model 988 and Model 88F Expansion Unit
• Scalable system memory and scalable processor cores in the controllers
• Standard 19” rack
• I/O bay interconnect utilizes PCIe Gen3
• Integrated Hardware Management Console (HMC)
• Simple licensing structure
• Base functions license
• Copy Services (CS) license
• z-synergy Services (zsS) License
10
© Copyright IBM Corporation 2018.
DS8880/F – 8th Generation DS8000
Replication and Microcode Compatibility
2004 – POWER5 – DS8100 / DS8300
2006 – POWER5+ – DS8300 Turbo
2009 – POWER6 – DS8700
2010 – POWER6+ – DS8800
2012 – POWER7 – DS8870
2013 – POWER7+ – DS8870
2015 / 2016 – POWER8 – DS8880 (DS8884/DS8886/DS8888), HPFE Gen1
2017 – POWER8 – DS8880/F, HFA / AFA, HPFE Gen2
11
© Copyright IBM Corporation 2018.
DS8000 Enterprise Storage Evolution
           DS8300    DS8700    DS8800    DS8870     DS8880
Disk       FC        FC        SAS       SAS        SAS
Power      Bulk      Bulk      Bulk      DC-UPS     DC-UPS
CEC        p5/p5+    p6        p6+       p7/p7+     p8
IO Bay     RIO-G     PCIe1     PCIe1     PCIe2      PCIe3
Adapters   4Gb/2Gb   4Gb/2Gb   8Gb/8Gb   16Gb/8Gb   16Gb/8Gb
Frame      33”       33”       33”       33”        19”
12
© Copyright IBM Corporation 2018.
DS8880 ‘Three Layer Shared Everything’ Architecture
• Layer 1: Up to 32 distributed PowerPC / ASIC Host Adapters (HA)
• Manage the 16Gbps Fibre Channel host I/O protocol to servers and perform data
replication to remote DS8000s
• Checks FICON CRC from host, wraps data with internal check bytes. Checks
internal check bytes on reads and generates CRC
• Layer 2: Centralized POWER 8 Servers
• Two symmetric multiprocessing processor (SMP) complexes manage two
monolithic data caches, and advanced functions such as replication and Easy Tier
• Write data is mirrored by the Host Adapters into the write cache of one server and the
Nonvolatile Store of the other server
• Layer 3: Up to 16 distributed PowerPC / ASIC RAID Adapters (DA); up
to 8 dedicated Flash enclosures each with a pair of Flash optimized
RAID controllers
• DA’s manage the 8Gbps FC interfaces to internal HDD/SSD storage devices
• Flash Enclosures leverage PCIe Gen3 for performance and latency of Flash cards
• Checks internal check bytes and stores on disk
13
Up to 1TB cache per POWER8 server
© Copyright IBM Corporation 2018.
AFAs reach a new high: 28% of the external array market. Hybrids +0.5% points while all-HDD down -7.4% points
Source: IDC Storage Tracker 3Q17, revenue based on US$
[Chart: 3Q17 worldwide storage array type mix, 4Q15–3Q17 – All Flash Array (AFA), Hybrid Flash Array (HFA), All Hard Disk Drive (HDD)]
14
© Copyright IBM Corporation 2018.
Flash technology can be used in many forms …
IBM Systems Flash Storage Offerings
• All-Flash Array (AFA): all-Custom Flash Hardware (CFH) or all-SSD
• Hybrid-Flash Array (HFA): mixed (HDD/SSD/CFH)
CFH defines an architecture that uses optimized flash modules to
provide better performance and lower latency than SSDs. Examples of
CFH are:
• High-Performance Flash Enclosure Gen2
• FlashSystem MicroLatency Module
All-flash arrays are storage solutions that only use flash media
(CFH or SSDs) designed to deliver maximum performance for
application and workload where speed is critical.
Hybrid-flash arrays are storage solutions that support a mix of
HDDs, SSDs and CFH designed to provide a balance between
performance, capacity and cost for a variety of workloads
DS8880 now offers an All-flash Family enabled with High-
Performance Flash Enclosures Gen2 designed to deliver superior
performance, more flash capacity and uncompromised availability
DS8880 also offers Hybrid-flash solutions with CFH, SSD and
HDD configurations designed to satisfy a wide range of business
needs from superior performance to cost efficient requirements
Source: IDC's Worldwide Flash in the Datacenter Taxonomy, 2016
15
© Copyright IBM Corporation 2018.
Why Flash on IBM Z?
• Very good overall z/OS average response times can hide many specific applications
which can gain significant performance benefits from the reduced latency of Flash
• Larger IBM Z memory sizes and newer Analytics and Cognitive workloads are
resulting in more cache unfriendly IO patterns which will benefit more from Flash
• Predictable performance is also about handling peak workloads and recovering
from abnormal conditions. Flash can provide an ability to burst significantly beyond
normal average workloads
• For clients with a focus on cost, hybrid systems with Flash and 10K Enterprise
drives deliver higher performance, greater density and lower cost than 15K Enterprise
drives
• Flash requires less energy and floor space
16
z/OS
© Copyright IBM Corporation 2018.
DS8880 Family of Hybrid-Flash Arrays (HFA)
DS8884 – Business Class: affordable hybrid-flash block storage solution for midrange enterprises
DS8886 – Enterprise Class: faster hybrid-flash block storage for large enterprises designed to support a wide variety of application workloads

                     DS8884                         DS8886
Model                984 (Single Phase)             985 (Single Phase), 986 (Three Phase)
Max Cache            256GB                          2TB
Max FC/FICON ports   64                             128
Media                768 HDD/SSD, 96 Flash cards    1536 HDD/SSD, 192 Flash cards
Max raw capacity     2.6 PB                         5.2 PB
17
© Copyright IBM Corporation 2018.
Hybrid-Flash Array - DS8884 Model 984/84E
• 12 cores
• Up to 256GB of system memory
• Maximum of 64 8/16Gb FCP/FICON ports
• Maximum 768 HDD/SSD drives
• Maximum 96 Flash cards
• 19”, 40U rack
Hybrid-Flash Array - DS8886 Model 985/85E or 986/86E
• Up to 48 cores
• Up to 2TB of system memory
• Maximum of 128 8/16Gb FCP/FICON ports
• Maximum 1536 HDD/SSD drives
• Maximum 192 Flash cards
• 19”, 40U - 46U rack
18
DS8880 Hybrid-Flash Array Family – Built on POWER8
© Copyright IBM Corporation 2018.
DS8884 / DS8886 Hybrid-Flash Array (HFA) Platforms
• DS8884 HFA
• Model 984 (Single Phase)
• Expansion racks are 84E
• Maximum of 3 racks (base + 2 expansion)
• 19” 40U rack
• Based on POWER8 S822
• 6-core processors at 3.891 GHz
• Up to 64 host adapter ports
• Up to 256 GB processor memory
• Up to 768 drives
• Up to two Flash enclosures – 96 Flash cards
• 1 Flash enclosure in base rack with 1 additional in first expansion rack
• 400/800/1600/3200/3800GB Flash card option
• Option for 1 or 2 HMCs installed in base frame
• Single phase power
• DS8886 HFA
• Model 985 (Single phase) / 986 (Three phase)
• Expansion racks are 85E / 86E
• Maximum of 5 racks (base + 4 expansion)
• 19” 46U rack
• 40U with a 6U top hat that is installed as part of the install when required
• Based on POWER8 S824
• Options for 8 / 16 / 24 core processors at 3.525 or 3.891 GHz
• Up to 128 host adapter ports
• Up to 2 TB processor memory
• Up to 1536 drives
• Up to 4 Flash enclosures – 192 Flash cards
• 2 Flash enclosures in base rack with 2 additional in first expansion rack
• 400/800/1600/3200/3800GB Flash card option
• Option for 1 or 2 HMCs installed in base frame
• Model 985 – Single phase power
• Model 986 - Three phase power
19
© Copyright IBM Corporation 2018.
DS8880 Hybrid-Flash Array Configuration Summary

Processors per CEC | Max System Memory (GB) | Expansion Frame | Max HA ports | Max flash raw capacity1 (TB) | Max DDM/SSD raw capacity2 (TB) | Total raw capacity (TB)
DS8884 Hybrid-flash3
6-core  |   64 | 0      |  32 | 153.6 |  576 |  729.6
6-core  |  128 | 0 to 2 |  64 | 307.2 | 2304 | 2611.2
6-core  |  256 | 0 to 2 |  64 | 307.2 | 2304 | 2611.2
DS8886 Hybrid-flash3
8-core  |  256 | 0      |  64 | 307.2 |  432 |  739.2
16-core |  512 | 0 to 4 | 128 | 614.4 | 4608 | 5222.4
24-core | 2048 | 0 to 4 | 128 | 614.4 | 4608 | 5222.4
1 Considering 3.2 TB per Flash card
2 Considering 6 TB per HDD and the maximum number of LFF HDDs per storage system
3 Can be also offered as an All-flash configuration with all High-Performance Flash Enclosures Gen2
23
© Copyright IBM Corporation 2018.
DS8884 / DS8886 HFA Media Options – All Encryption Capable
• Flash – 2.5” in High Performance Flash
• 400/800/1600/3200GB Flash cards
• Flash – 2.5” in High Capacity Flash
• 3800GB Flash cards
• SSD – 2.5” Small Form Factor
• Latest generation with higher sequential bandwidth
• 200/400/800/1600GB SSD
• 2.5” Enterprise Class 15K RPM
• Drive selection traditionally used for OLTP
• 300/600GB HDD
• 2.5” Enterprise Class 10K RPM
• Large capacity, much faster than Nearline
• 600GB, 1.2/1.8TB HDD
• 3.5” Nearline – 7200RPM Native SAS
• Extremely high density, direct SAS interface
• 4/6TB HDD
Performance
24
© Copyright IBM Corporation 2018.
DS8880 Family of All-Flash Arrays (AFA)
DS8884 – Business Class: entry level business class storage solution with All-Flash performance delivered within a flexible and space-saving package
DS8886 – Enterprise Class: enterprise class with ideal combination of performance, capacity and cost to support a wide variety of workloads and applications
DS8888 – Analytics Class: analytic class storage with superior performance and capacity designed for the most demanding business workload requirements

                                   DS8884                        DS8886                       DS8888
Processor complex (CEC)            2 x IBM Power Systems S822    2 x IBM Power Systems S824   2 x IBM Power Systems E850C
Frames (min / max)                 1 / 1                         1 / 2                        1 / 3
POWER8 cores per CEC (min / max)   6 / 6                         8 / 24                       24 / 48
System memory (min / max)          64 GB / 256 GB                256 GB / 2048 GB             1024 GB / 2048 GB
Ports (min / max)                  8 / 64                        8 / 128                      8 / 128
Flash cards (min / max)            16 / 192                      16 / 384                     16 / 768
Capacity (min1 / max2)             6.4 TB / 729.6 TB             6.4 TB / 1.459 PB            6.4 TB / 2.918 PB
Max IOPs                           550,000                       1,800,000                    3,000,000
Minimum response time              120 µsec                      120 µsec                     120 µsec
1 Utilizing 400GB flash cards
2 Utilizing 3.8TB flash cards
http://www.crn.com/slide-shows/storage/300096451/the-10-coolest-flash-storage-and-ssd-products-of-2017.htm/pgno/0/4?itc=refresh
25
© Copyright IBM Corporation 2018.
All-Flash Array - DS8884 Model 984
• 12 cores
• Up to 256GB of system memory
• Maximum of 32 8/16Gb FCP/FICON ports
• Maximum 192 Flash cards
• 19”, 40U rack
All-Flash Array - DS8886 Model 985/85E or 986/86E
• Up to 48 cores
• Up to 2TB of system memory
• Maximum of 128 8/16Gb FCP/FICON ports
• Maximum 384 Flash cards
• 19”, 46U rack
All-Flash Array - DS8888 Model 988/88E
• Up to 96 cores
• Up to 2TB of system memory
• Maximum of 128 8/16Gb FCP/FICON ports
• Maximum 768 Flash cards
• 19”, 46U rack
26
DS8880 All-Flash Array Family – Built on POWER8
© Copyright IBM Corporation 2018.
DS8884 / DS8886 All-Flash Array (AFA) Platforms
• DS8884 AFA
• Model 984 (Single Phase)
• Base rack
• 19” 40U rack
• Based on POWER8 S822
• 6-core processors at 3.891 GHz
• Up to 32 host adapter ports
• Up to 256 GB processor memory
• Four Flash enclosures – 192 Flash cards
• 4 Flash enclosures in base rack
• 400/800/1600/3200/3800GB Flash card option
• Up to 729.6TB (raw)
• Option for 1 or 2 HMCs installed in base frame
• Single phase power
• DS8886 AFA
• Model 985 (Single phase) / 986 (Three phase)
• Expansion racks are 85E / 86E
• Maximum of 2 racks (base + 1 expansion)
• 19” 46U rack
• 40U with a 6U top hat that is installed as part of the install when required
• Based on POWER8 S824
• Options for 8 / 16 / 24 core processors at 3.525 or 3.891 GHz
• Up to 128 host adapter ports
• Up to 2 TB processor memory
• Up to 8 Flash enclosures – 384 Flash cards
• 4 Flash enclosures in base rack with 4 additional in first expansion rack
• 400/800/1600/3200/3800GB Flash card option
• Up to 1.459PB (raw)
• Option for 1 or 2 HMCs installed in base frame
• Model 985 – Single phase power
• Model 986 - Three phase power
27
© Copyright IBM Corporation 2018.
All Flash DS8880 Configurations
[Rack layout diagrams: DS8884F base rack with HMCs and HPFE Gen2 enclosures 1–4; DS8886F base and expansion racks (with top hats) holding HPFE Gen2 enclosures 1–8; DS8888F base and two expansion racks holding HPFE Gen2 enclosures 1–16]
DS8884F / DS8886F / DS8888F
• DS8884F
• 192 Flash Drives
• 64 FICON/FCP ports
• 256GB cache memory
• DS8886F
• 384 Flash Drives
• 128 FICON/FCP ports
• 2TB cache memory
• DS8888F
• 768 Flash Drives
• 128 FICON/FCP ports
• 2TB cache memory
28
© Copyright IBM Corporation 2018.
DS8886 AFA Three Phase physical layout: capacity options (R8.2.x vs. R8.3+)
32
© Copyright IBM Corporation 2018.
DS8888 All-Flash Array (AFA) Platform
• DS8888 AFA
• Model 988 (Three Phase)
• Expansion rack 88E
• Maximum of 3 racks (base + 2 expansion)
• 19” 46U rack
• Based on POWER8 Alpine 4S4U E850C
• Options for 24 / 48 core processors at 3.6 GHz
• DDR4 Memory
• Up to 384 threads per system with SMT4
• Up to 128 host adapter ports
• Up to 2 TB processor memory
• Up to 16 Flash enclosures – 768 Flash cards
• 4 Flash enclosures in base rack with 6 additional in each of the two expansion racks
• 400/800/1600/3200/3800GB Flash card option
• Up to 2.918PB (raw)
• Option for 1 or 2 HMCs installed in base frame
• Three phase power
36
© Copyright IBM Corporation 2018.
DS8880 All-Flash Array (AFA) Capacity Summary

            R8.2.1 (3.2TB Flash)    R8.3 (3.8TB Flash)
DS8884F     153.6 TB                729.6 TB
DS8886F     614.4 TB                1459.2 TB
DS8888F     1128.8 TB               2918.4 TB
Manage business data growth with
up to 3.8x more flash capacity in the
same physical space for storage
consolidation and data volume
demanding workloads
37
© Copyright IBM Corporation 2018.
DS8880 AFA Media Options – All Encryption Capable
• Flash – 2.5” in High Performance Flash
• 400/800/1600/3200GB Flash cards
• Flash – 2.5” in High Capacity Flash
• 3800GB Flash cards
• Data is always encrypted on write to Flash and then decrypted on read
• Data stored on Flash is encrypted
• Customer data in flight is not encrypted
• Media does the encryption at full data rate
• No impact to response times
• Uses AES 256 bit encryption
• Supports cryptographic erasure of data via change of encryption keys
• Requires authentication with key server before access to data is granted
• Key management options
• IBM Security Key Lifecycle Manager (SKLM)
• z/OS can also use IBM Security Key Lifecycle Manager (ISKLM)
• KMIP compliant key manager such as Safenet KeySecure
• Key exchange with key server is via 256 bit encryption
38
© Copyright IBM Corporation 2018.
DS8880 High Performance Flash Enclosure (HPFE) Gen2
• Performance optimized High Performance Flash Enclosure
• Each HPFE Gen2 enclosure
• Is 2U, installed in pairs for 4U of rack space
• Concurrently installable
• Contains up to 24 SFF (2.5”) Flash cards, for a maximum of 48 Flash cards in 4U
• Flash cards installed in 16 drive increments – 8 per enclosure
• Flash card capacity options
• 400GB, 800GB, 1.6TB , 3.2TB and 3.8TB
• Intermix of 3 different flash card capacities is allowed
• Size options are: 400GB, 800GB, 1.6TB and 3.2TB
• RAID6 default for all DS8880 media capacities
• RAID5 option available for 400/800GB Flash cards
• New Adapter card to support HPFE Gen2
• Installed in pairs
• Each adapter pair supports an enclosure pair
• PCIe Gen3 connection to IO bay as today’s HPFE
39
© Copyright IBM Corporation 2018.
Number of HPFE Gen2 allowed per DS8880 system
DS8884
Installed HPFE Gen1    HPFE Gen2 that can be installed
4                      0
3                      1
2                      2
1                      2
0                      2

DS8886
Installed HPFE Gen1    HPFE Gen2 that can be installed
8                      0
7                      1
6                      2
5                      3
4                      4
3                      4
2                      4
1                      4
0                      4

DS8888
Installed HPFE Gen1 (A-Rack) | HPFE Gen2 that can be installed (A-Rack) | Installed HPFE Gen1 (B-Rack) | HPFE Gen2 that can be installed (B-Rack)
8 | 0   | 8 | 0
7 | 0   | 7 | 1
6 | 1   | 6 | 2
5 | 1   | 5 | 2
4 | 1   | 4 | 3
3 | 1   | 3 | 3
2 | 2   | 2 | 4
1 | 2   | 1 | 4
0 | N/A | 0 | 4
For already existing 980/981/982 models, the number of HPFE
Gen2 that can be installed in the field is based on number of
HPFE Gen1 already installed as shown in these tables:
42
© Copyright IBM Corporation 2018.
Drive media is rapidly increasing in capacity to 10TB and more. The greater density provides real cost advantages
but requires changes in the types of RAID protection used. The DS8880 now defaults to RAID6 for all drive types,
and an RPQ is required for RAID5 on drives >1TB
[Diagram: RAID5 array – data strips 1–6, parity P, spare S]
Traditionally RAID5 has been used over RAID6 because:
• Performs better than RAID6 for random writes
• Provides more usable capacity
Performance concerns are significantly reduced with Flash and Hybrid
systems given very high Flash random write performance
RAID5
However, as the drive capacity increases, RAID5 exposes enterprises to increased risks, since higher
capacity drives are more vulnerable to issues during array rebuild
• Data will be lost, if a second drive fails while the first failed drive is being rebuilt
• Media errors experienced on a drive during rebuild result in a portion of the data being non-recoverable
[Diagram: RAID6 array – data strips 1–5, parities P and Q, spare S]
RAID6
RAID6 for mission critical protection
44
© Copyright IBM Corporation 2018.
HPFE Gen 2 – RAID 6 Configuration
• Two spares shared across the arrays
• All Flash cards in the enclosure pair will be same capacity
• All arrays will be same RAID protection scheme (RAID-6 in this example)
• No intermix of RAID type within an enclosure pair
• No deferred maintenance – every Flash card failure will call home
[Diagram: HPFE Gen 2 enclosures A and B, populated in three install groups]
Install Group 1 – 16 drives (8+8): two 5+P+Q arrays, two spares
Install Group 2 – 16 drives (8+8): two 6+P+Q arrays, no spares*
Install Group 3 – 16 drives (8+8): two 6+P+Q arrays, no spares*
*Spares are shared across all arrays
Fully populated enclosure pair: two 5+P+Q arrays, four 6+P+Q arrays, two shared spares
45
© Copyright IBM Corporation 2018.
3.8TB High Capacity Flash – Random Read / Write
• Random Read
• Equivalent random read performance to the
existing HPFE Gen2 flash drives
• Random Write
• Lower write performance than the existing
High Performance HPFE Gen2 flash drives
46
© Copyright IBM Corporation 2018.
3.8TB High Capacity Flash – Sequential Read / Write
• Sequential
• Equivalent sequential read performance, but lower sequential write performance than the existing HPFE
Gen2 flash drives
47
© Copyright IBM Corporation 2018.
Brocade IBM Z product timeline
48
FICON Introductions
• 08/2002 2 Gbps FICON
• 05/2002 FICON / FCP Intermix
• 11/2001 FICON Inband Mgmt
• 04/2001 64 Port Director
• 10/2002 140 Port Director
• 05/2005 256 Port Director
• 09/2006 4 Gbps FICON
ESCON Introductions
• 10/1994 9032 ESCON Directors
• 08/1999 FICON Bridge
Bus/Tag, ESCON, FICON and IP Extension
• 1986 CTC Extension/B&T
• 1991 High Speed Printer Extension
• 1993 Tape Storage Extension
• 1993 T3/ATM WAN Support
• 1995 Disk Mirroring Support
• 1998 IBM XRC Support
• 1999 Remote Virtual Tape
• 2001 FCIP Remote Mirroring
• 2003 FICON Emulation for Disk
• 2005 FICON Emulation for Tape
• 2015 IP Extension
[Timeline 1987–2016: 9032, ED-5000, M6064, M6140, 24000, FC9000, i10K, 48000, DCX, DCX-4S, DCX 8510, X6 directors; extension platforms: Channelink, USD, 82xx Edge, USDX, 7500 & FR4-18i, 7800 & FX8-24, 7840, SX6]
DCX Introductions
• 02/2008 DCX Backbone
• 02/2008 768 Port Platform
• 02/2008 Integrated WAN
• 03/2008 8 Gbps FICON
• 05/2008 Acceleration for FICON Tape
• 11/2009 New FCIP Platforms
• 12/2011 DCX 8510
• 01/2012 16 Gbps FICON
• 05/2016 X6 Directors
• 10/2016 32 Gbps FICON
© Copyright IBM Corporation 2018.
Current Brocade / IBM Z Portfolio
49
16 Gbps FC Fabric: 6510 switch; DCX-8510-4 and DCX-8510-8 directors with FC16-32 and FC16-48 blades
32/128 Gbps FC Fabric: G620 switch; X6-4 and X6-8 directors with FC32-48 blade
Extension switches: 7800, 7840
Extension blades: Gen 5 – FX8-24, Gen 6 – SX6
© Copyright IBM Corporation 2018.
IBM DS8880 and IBM Z: Integration by Design
Capabilities grouped by Performance, Availability, and Management / Growth:
• zHPF Enhancements (now includes all z/OS Db2 I/O, BxAM/QSAM), IMS R15 WADS
• Db2 Castout Accelerator
• Extended Distance FICON
• Caching Algorithms – AMP, ARC, WOW, 4K Cache Blocking
• Cognitive Tiering - Easy Tier Application , Heat Map Transfer and Db2 integration with Reorgs
• Metro Mirror Bypass Extent Checking
• z/OS GM Multiple Reader support and WLM integration
• Flash + DFSMS + zHPF + HyperPAV/SuperPAV + Db2
• zWLM + DS8000 I/O Priority Manager
• zHyperWrite + DS8000 Metro Mirror
• zHyperLink
• FICON Dynamic Routing
• Forward Error Correction (FEC) code
• HyperPAV/SuperPAV
• GDPS and Copy Services Manager (CSM) Automation
• GDPS Active / Standby/Query/Active
• HyperSwap technology improvements
• Remote Pair FlashCopy and Incremental FlashCopy Enhancements
• zCDP for Db2, zCDP for IMS – Eliminating Backup windows
• Cognitive Tiering - Easy Tier Heat map transfer
• Hybrid Cloud – Transparent Cloud Tiering (TCT)
• zOS Health Checker
• Quick Init for CKD Volumes
• Dynamic Volume Expansion
• Extent Space Efficient (ESE) for all volume types
• z/OS Distributed Data Backup
• z/OS Discovery and Automatic Configuration (zDAC)
• Alternate Subchannel exploitation
• Disk Encryption
• Automation with CSM, GDPS
50
IBM z14 Hardware
z/OS (IOS, etc.), z/VM,
Linux for z Systems
Media Manager, SDM
DFSMS Device Support
DFSMS hsm, dss
Db2, IMS, CICS
GDPS
DS8880
© Copyright IBM Corporation 2018.
IBM Z / DS8880 Integration Capabilities – Performance
• Lowest latency performance for OLTP and Batch
• zHPF
• All Db2 IO is able to exploit zHPF
• IMS R15 WADS exploits zHPF and zHyperWrite
• DS8880 supports format write capability; multi-domain IO; QSAM, BSAM, BPAM; EXCP, EXCPVR; DFSORT, Db2
Dynamic or sequential prefetch, disorganized index scans and List Prefetch Optimizer
• HPF extended distance support provides 50% IO performance improvement for remote mirrors
• Cache segment size and algorithms
• 4K is optimized for OLTP environments
• Three unique cache management algorithms from IBM Research to optimize random, sequential and destage for
OLTP and Batch optimization
• IMS WADS guaranteed to be in cache
• Workload Manager Integration (WLM) and IO Priority Manager (IOPM)
• WLM policies honored by DS8880
• IBM zHyperLink and zHyperWrite™
• Low latency Db2 read/write and Parallel Db2 Log writes
• Easy Tier
• Application driven tier management whereby application informs Easy Tier of appropriate tier (e.g. Db2 Reorg)
• Db2 Castout Accelerator
• Metro Mirror
• Pre-deposit write provides lowest latency with single trip exchange
• FICON Dynamic Routing reduces costs with improved and persistent performance when sharing ISL traffic
52
IBM Z Hardware
z/OS (IOS, etc.), z/VM, Linux for
z Systems
DFSMSdfp: Device Services,
Media Manager, SDM
DFSMShsm, DFSMSdss
Db2, IMS, CICS, GDPS
DS8880
© Copyright IBM Corporation 2018.
zHPF Evolution
Version 1 → Version 2 → Version 3 → Version 4
• Single domain, single track I/O
• Reads, update writes
• Media Manager exploitation
• z/OS 1.8 and above
• Multi-track but <= 64K
• Multi-track any size
• Extended distance I
• Format writes
• Multi-domain I/O
• QSAM/BSAM/BPAM
exploitation
• z/OS R1.11 and above
• EXCPVR
• EXCP Support
• ISV Exploitation
• Extended Distance II
• SDM, DFSORT, z/TPF
53
© Copyright IBM Corporation 2018.
zHPF and Db2 – Working Together
• Db2 functions are improved by zHPF
• Db2 database reorganizations
• Db2 incremental copy
• Db2 LOAD and REBUILD
• Db2 queries
• Db2 RUNSTATS table sampling
• Index scans
• Index-to-data access
• Log applies
• New extent allocation during inserts
• Reads from a non-partition index
• Reads of large fragmented objects
• Recover and restore functions
• Sequential reads
• Table scans
• Write to shadow objects
54
z/OS
DFSMS
DB2
© Copyright IBM Corporation 2018.
• Reduced batch window for I/O intensive batch
• DS8000 I/O commands optimize QSAM, BPAM, and BSAM access methods for exploiting zHPF
• Up to 30% improved I/O service times
• Complete conversion of Db2 I/O to zHPF maximizes resource utilization and performance
• Up to 52% more Format write throughput (4K pages)
• Up to 100% more Pre-formatting throughput
• Up to 19% more Sequential pre-fetch throughput
• Up to 23% more dynamic pre-fetch throughput (40% with Flash/SSD)
• Up to 111% more Disorganized index scans yield throughput (more with 8K pages)
• Db2 10 with zHPF is up to 11x faster than Db2 V9 without zHPF
• Up to 30% reduction in Synchronous I/O cache hit response time
• Improvements in cache handling decrease response times
• 3x to 4x improvement in Skip sequential index-to-data access cache miss processing
• Up to 50% reduction in the number of I/O operations for query and utility functions
• DS8000 algorithm optimizes Db2 List-Prefetch I/O
55
z/OS and DS8000 zHPF Performance Advantages
zHPF Performance Exclusive - Significant Throughput gains in many areas
Reduced transaction response time
Reduced batch window
Better customer experience
55
z/OS
DFSMS
DB2
© Copyright IBM Corporation 2018.
DFSORT zHPF Exploitation in z/OS2.2
• DFSORT zHPF Exploitation
• DFSORT normally uses EXCP for processing of basic and large format sequential input and
output data sets (SORTIN, SORTOUT, OUTFIL)
• DFSORT already uses BSAM for extended format sequential input and output data sets
(SORTIN, SORTOUT and OUTFIL). BSAM already supports zHPF
• New enhancement: Update DFSORT to prefer BSAM for SORTIN/SORTOUT/OUTFIL when
zHPF is available
• DFSORT will automatically take advantage of zHPF if it is available on your system; no user actions are
necessary.
• Why it Matters: Taking advantage of the higher start rates and bandwidth available
with zHPF is expected to provide significant performance benefits on systems where
zHPF is available
56
z/OS
© Copyright IBM Corporation 2018.
Utilizing zHPF functionality
• Clients can enable/disable specific zHPF features
• Requires APAR OA40239
• MODIFY DEVMAN command communicates with the device manager address
• For zHPF, following options are available
• HPF:4 - zHPF BiDi for List Prefetch Optimizer
• HPF:5 - zHPF for QSAM/BSAM
• HPF:6 - zHPF List Prefetch Optimizer / Db2 Cast Out Accelerator
• HPF:8 - zHPF Format Writes for Accelerating Db2 Table Space Provisioning
• Example 1 - Disable zHPF Db2 Cast Out Accelerator
• F DEVMAN,DISABLE(HPF:6)
• F DEVMAN,REPORT
• **** DEVMAN ****************************************************
• * HPF FEATURES DISABLED: 6
57
z/OS
© Copyright IBM Corporation 2018.
DS8000 Advanced Caching Algorithms
Classical (simple cache algorithms):
• LRU (Least Recently Used) / LRW (Least Recently Written)
Cache innovations in DS8000:
• 2004 – ARC / S-ARC dynamically partitions the read cache
between random and sequential portions
• 2007 – AMP manages the sequential read cache and decides
what, when, and how much to prefetch
• 2009 – IWC (or WOW: Wise Ordering for Writes) manages the write cache and decides what order and rate
to destage
• 2011 – ALP enables prefetch of a list of non-sequential tracks providing improved performance for Db2
workloads
59
© Copyright IBM Corporation 2018.
DS8880 Cache efficiency delivers higher Cache Hit Ratios
VMAX requires 2n GB cache to support n GB of “usable” cache
[Diagram: storing two 4K blocks (blk1, blk2) in cache]
• DS8880 (4KB slots): two 4K cache segments allocated – 8K stored, 0K unused
• G1000 (16KB slots): two 16K cache slots allocated – 8K stored, 24K unused
• VMAX (64KB slots): two 64K cache slots allocated – 8K stored, 120K unused
60
© Copyright IBM Corporation 2018.
Continued innovation to reduce IBM Z I/O Response Times
Response time components: IOSQ Time, Pending Time, Disconnect Time, Connect Time
Integrated DS8000 functions and features addressing these components (not all functions listed): Parallel Access Volumes, HyperPAV, SuperPAV, Multiple Allegiance, MIDAWs, Adaptive Multi-Stream Pre-Fetching (AMP), Intelligent Write Caching (IWC), Sequential Adaptive Replacement Cache (SARC), High Performance FICON for IBM Z (zHPF), zHPF List Prefetch Optimizer, FICON Express 16 Gb channel, 4 KB cache slot size, zHyperWrite, Easy Tier integration with Db2, Db2 Castout Accelerator
61
© Copyright IBM Corporation 2018.
I/O Latency Improvement Technologies for z/OS
[Chart: I/O latency improvement technologies culminating in zHyperLink; not drawn to scale]
62
© Copyright IBM Corporation 2018.
QoS - I/O Priority Manager and Work Load Manager
• Application A and B initiate an I/O operation to the same DS8880 rank (may be different logical
volumes)
• zWLM sets the I/O importance value according to the application priority as defined by system
administrator
• If resources are constrained within the DS8880 (very high utilization on the disk rank), I/O Priority
Manager will handle the highest priority I/O request first and may throttle low priority I/Os to
guarantee a certain service level
63
DS8880
© Copyright IBM Corporation 2018.
zOS Global Mirror (XRC) / DS8880 Integration -
Workload Manager Based Write Pacing
• Software Defined Storage enhancement to allow IBM Z Workload Manager
(WLM) to control XRC Write Pacing
Client benefits
• Reduces the administrative overhead of hand-managing XRC write pacing
• Reduces the need to define XRC write pacing on a per volume level allowing greater flexibility in
configurations
• Prevents low priority work from interfering with the Recovery Point Objective
of critical applications
• Enables consolidation of workloads onto larger capacity volumes
64
© Copyright IBM Corporation 2018.
SAP/Db2 Transactional Latency on z/OS
• How do we make transactions run faster on IBM Z and z/OS?
A banking workload running on z/OS:
Db2 Server time: 5%
Lock/Latch + Page Latch: 2-4%
Sync I/O: 60-65%
Dispatcher Latency: 20-25%
TCP/IP: 4-6%
This is the write
to the Db2 Log
Lowering the Db2 Log Write Latency will accelerate
transaction execution and reduce lock hold times
1. Faster CPU
2. Software scaling, reducing contention, faster I/O
3. Faster I/O technologies such as zHPF, 16 Gbs, zHyperWrite, zHPF ED II, etc…
4. Run at lower utilizations, address Dispatcher Queueing Delays
5. RoCE Express with SMC-R
65
© Copyright IBM Corporation 2018.
HyperSwap / Db2 / DS8880 Integration – zHyperWrite
• Db2 performs dual, parallel Log writes with DS8880 Metro Mirror
• Avoids latency overhead of storage based synchronous mirroring
• Improved Log throughput
• Reduced Db2 log write response time up to 43 percent
• Primary / Secondary HyperSwap enabled
• Db2 informs DFSMS to perform a dual log write and not use DS8880 Metro
Mirroring if a full duplex Metro Mirror relationship exists
• Fully integrated with GDPS and CSM
Client benefits
• Reduction in Db2 Log latency with parallel Log writes
• HyperSwap remains enabled
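As a rough sketch of how this is typically switched on (statement and parameter names below follow common z/OS and Db2 documentation and should be verified for your release levels):
• IECIOSxx parmlib member: HYPERWRITE=YES
• SETIOS HYPERWRITE=YES – enable dynamically without an IPL
• Db2 subsystem parameter REMOTE_COPY_SW_ACCEL=ENABLE – lets Db2 log writes use zHyperWrite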
66
© Copyright IBM Corporation 2018.
HyperSwap / Db2 / DS8880 Integration – zHyperWrite + 16Gb FICON
• Db2 Log write latency improved by up to 58%* with the
combination of zHyperWrite and FICON Express16S
Client benefits
• Gain better end user visible transactional response time
• Provide additional headroom for growth within the same
hardware footprint
• Defer when additional Db2 data sharing members are
needed for more throughput
• Avoid re-engineering applications to reduce log write rates
• Improve resilience over workload spikes
[Chart: Client Financial Transaction Test – Db2 log write time (PEND + CONN) across zEC12 FEx8S zHPF Write 8Gb HBA, z13 FEx8S zHPF Write 8Gb HBA, z13 FEx16S zHPF Write 8Gb HBA, and z13 FEx16S zHPF Write 16Gb HBA; step reductions of 23%, 14% and 15%, 43% overall]
* With {zHyperWrite, z13, 16 Gbps DS8870 HBA and FICON Express16S} vs {zEC12, 8 Gbps DS8870 HBA and FICON Express8S}
67
© Copyright IBM Corporation 2018.
zHyperWrite - Client Results
68
Geo          State       Result  Comments
US           Production  66%     Large healthcare provider. I/O service time for Db2 log write was reduced up to 66% based on RMF data. Client reported that they are “extremely impressed by the benefits”.
Brazil       Production  50%     Large financial institution in Brazil, zBLC member.
US (East)    PoC         28%     Large financial institution on the east coast, zBLC member.
US (West)    Production  43%     Large financial institution on the west coast, zBLC member. Measurement was a 43% reduction in Db2 commit times, 8 Gbps channels.
US (Central) Production  28%     Large agricultural provider. I/O service time for Db2 log write was reduced 25-28%.
China        PoC         36%     Job elapsed times with Db2 reduced by 36%. zHPF was active, 8 Gbps channels.
UK           Production  40%     Large financial institution in the UK, zBLC and GDPS member. Measurement was a minimum 40% reduction in Db2 commit times, 8 Gbps channels.
… Many other clients have done PoCs and are now in production
© Copyright IBM Corporation 2018.
IMS Release 15 Enhancements for WADS Performance
https://developer.ibm.com/storage/2017/10/26/ds8880-enables-ims-release-15-reduce-wads-io-service-time-50/
69
© Copyright IBM Corporation 2018.
SAP/Db2 Transactional Latency on z/OS
Latency breakdown for a simple transaction:
                          Current    Projected with zHyperLink
Db2 Server CPU time       5%         5%
Lock/Latch + Page Latch   2-4%       1-2%
I/O service time          60-65%     5-7%
Dispatcher (CPU) Latency  20-25%     5-10%
Network (TCP/IP)          4-6%       4-6%
zHyperLink savings        -          80%
• How do we make transactions run faster on IBM Z and z/OS?
71
© Copyright IBM Corporation 2018.
IBM zHyperLink delivers NVMe-oF like latencies for the Mainframe!
• New storage technologies like Flash storage are driven by
market requirements of low latency
• Low latency helps organizations to improve customer satisfaction,
generate revenue and address new business opportunities
• Low latency drove the high adoption rate of I/O technologies including
zHyperWrite, FICON Express16S+, SuperPAV, and zHPF
• IBM zHyperLink™ is the result of an IBM research project
created to provide extreme low latency links between the IBM Z
and the DS8880
• Operating system and middleware (e.g. Db2) are changed to keep the task
running (synchronously waiting) while the I/O completes
• zHyperWrite™ based replication solution allows zHyperLink™
replicated writes to complete in the same time as simplex
72
IBM Z IBM
DS8880
Point to point
interconnection between
the IBM Z Central
Electronics Complexes
(CECs) and the DS8880
I/O Bays
Less than 20 µsec response time!
© Copyright IBM Corporation 2018.
New business requirements demand fast and consistent application response times
• New storage technologies like Flash storage are driven by market
requirements of low latency
• Low latency helps organizations to improve customer satisfaction, generate revenue
and address new business opportunities
• Low latency drove the high adoption rate of I/O technologies including zHyperWrite,
FICON Express16S+, SuperPAV, and zHPF
• IBM zHyperLink™ is the result of an IBM research project created to
provide extreme low latency links between the IBM Z and the DS8880
• Operating system and middleware (e.g. Db2) are changed to keep the task running
(synchronously waiting) while the I/O completes
• zHyperWrite™ based replication solution allows zHyperLink™ replicated
writes to complete in the same time as simplex
73
[Diagram: Coupling Facility Global Buffer Pool access via SENDMSG over IB or PCIe (~8 µsec); traditional FICON/zHPF I/O through the SAN; zHyperLink™ point-to-point links at >50,000 IOPs/sec and <20 µsec]
© Copyright IBM Corporation 2018.
Components of zHyperLink
• DS8880 - Designed for Extreme Low Latency Access to Data and Continuous
Availability
• New zHyperLink is an order of magnitude faster for simple read and write of data
• zHyperWrite protocols built into zHyperLink protocols for acceleration of database logging with
continuous availability
• Investment protection for clients that already purchased the DS8880
• The new zHyperLink links complement, not replace, FICON channels
• z14 – Designed from the Casters Up for High Availability, Low Latency I/O Processing
• New I/O paradigm transparent to client applications for extreme low latency I/O processing
• End-to-end data integrity policed by IBM Z CPU cores in cooperation with DS8880 storage system
• z/OS, Db2 - New approach to I/O Processing
• New I/O paradigm for the CPU synchronous execution of I/O operations to SAN attached storage.
Allows reduction of I/O interrupts, context switching, L1/L2 cache disruption and reduced lock hold
times typical in transaction processing work loads
• Statement of Direction (SOD) to support VSAM and IMS
74
z/OS
IBM z14
Hardware
Db2
zHyperLink
ExpressSAN
© Copyright IBM Corporation 2018.
zHyperLink™ provides real value to your business
[Charts: Application I/O response time – 10x reduction vs. zHPF; Db2 transaction elapsed time – 5x reduction vs. zHPF]
Response time reduction compared to zHPF
• zHyperLink™ is FAST enough that the CPU can just wait for
the data
• No Un-dispatch of the running task
• No CPU Queueing Delays to resume it
• No host CPU cache disruption
• Very small I/O service time
• Extreme data access acceleration for Online Transaction
Processing on IBM Z environment
• Reduction of the batch processing windows by providing
faster Db2™ index splits. Index split performance is the
main bottleneck for high volume INSERTs
• Transparent performance improvement without re-engineering
existing applications
• More resilient I/O infrastructure with predictable and
repeatable service level agreements
75
© Copyright IBM Corporation 2018.
1. I/O driver requests synchronous execution
2. Synchronous I/O completes normally
3. Synchronous I/O unsuccessful
4. Heritage I/O path
5. Heritage I/O completion
Synchronous I/O Software Flow
76
© Copyright IBM Corporation 2018.
Continuous Availability - IBM zHyperLink+ zHyperWrite
[Diagram: IBM z14 with zHyperLink adapters connected by point-to-point links (<150 m) to the Metro Mirror primary and secondary storage subsystems, with HyperSwap enabled]
• zHyperLink™ are point-to point-connections
with a maximum distance of 150m
• For acceleration of Db2 Log Writes with Metro
Mirror, both the primary and the secondary
storage need to be no more than 150 meters from
the IBM Z
• When the Metro Mirror secondary subsystem is
further than 150 meters, exploitation is limited to
the read use case
• Local HyperSwap™ and long distance
asynchronous replication provide the best
combination of performance, high availability and
disaster recovery
• zHyperWrite™ based replication solution
allows zHyperLink™ replicated writes to
complete in the same time as non-replicated
data
160,000 IOOPs, 8 GByte/s; 16 zHyperLink ports supported on each storage subsystem
77
© Copyright IBM Corporation 2018.
The DS8880 I/O bay supports up to six external
interfaces using a CXP connector type.
[Diagram: I/O bay enclosures in the base and expansion racks, each with FICON/FCP host adapters, RAID adapters to the HPFEs, zHyperLink ports, and connections to the DS8880 internal PCIe fabric]
DS8880 zHyperLink™ Ports
Investment Protection – DS8880 hardware shipping 4Q2016 (models 984, 985,
986 and 988), older DS8880’s will be field upgradeable at December 2017 GA
78
© Copyright IBM Corporation 2018.
Protect your current DS8880 investment
• DS8880 provides investment protection by allowing
customers to enhance their existing 980/981/982 (R8.0
and R8.1) systems with zHyperLink technology
• Each I/O bay has two zHyperLink PCIe connections and
a single power output that provides the 12V for
the Micro-bay
• Intermix of the older I/O bay hardware and the new I/O
bay hardware is allowed
Reduce the response time up to 10x in your
existing 980/981/982 (R8.0 and R8.1) systems
[Diagram: previous I/O bay cards (HPFE Gen1, FICON/FCP and RAID adapters on the DS8880 internal PCIe fabric) alongside the field-upgradeable card with zHyperLink support, HPFE Gen2 and zHyperLink ports]
79
© Copyright IBM Corporation 2018.
Continuous Availability – Synchronous zHyperWrite
[Diagram: IBM z14 with zHyperLink adapters connected to both the Metro Mirror primary and secondary storage subsystems]
z/OS performs synchronous dual writes across storage subsystems in
parallel to maintain HyperSwap capability
80
© Copyright IBM Corporation 2018.
Performance (Latency and Bandwidth)
[Diagram: IBM z14 with multiple zHyperLink adapters and links to the Metro Mirror primary and secondary storage subsystems]
z/OS software performs synchronous writes in parallel across two or more links for striping
large write operations
81
© Copyright IBM Corporation 2018.
Local Primary/Remote Secondary
[Diagram: IBM z14 with zHyperLink (<150 m) to the local Metro Mirror primary subsystem and FICON/zHPF through the SAN to a Metro Mirror secondary up to 100 km away; synchronous reads over zHyperLink, zHPF enhanced write protocol and zHyperWrite for writes, PPRC between the subsystems]
Local Primary uses synchronous I/O for reads, zHPF with enhanced write protocols and zHyperWrite for writes at
distance
82
© Copyright IBM Corporation 2018.
I/O Performance Chart – Evolution to IBM zHyperLink with DS8886
[Chart: evolution of I/O performance to IBM zHyperLink with the DS8886, across successive channel generations –
Average latency (µsec): 184.5, 155, 148, 132, 20
Number of IOOPs per channel (4K block size): 62K, 95K, 106K, 315K
IBM DS8886 IOOPs: 2.2M, 2.4M, 3.2M, 3.8M, 5.3M
Single channel BW (GB/s): 0.75, 1.6, 2.5, 3.2, 8.0]
83
© Copyright IBM Corporation 2018.
zHyperLink Infrastructure at a Glance
• Z14 zHyperLink Express Adapter
• Two ports per adapter
• Maximum of 16 adapters (32 ports)
• Function ID Type = HYL
• Up to 127 Virtual Functions (VFs) per PCHID
• Point to point connection using PCIe Gen3
• Maximum distance: 150 meters
• DS8880 zHyperLink Adapter
• Two ports per adapter
• Maximum adapters
• Up to 8 adapters (16 ports) on DS8888
• Up to 6 adapters (12 ports) on DS8886
• Point to point connection using PCIe Gen3
DS8880 internal
PCIe Fabric zHyperLink ports
HPFE Gen2
84
z/OS 2.1,
2.2, 2.3
IBM z14
Hardware
Db2 V11 or
V12
zHyperLink
ExpressSAN
DS8880
R8.3
© Copyright IBM Corporation 2018.
IBM DS8000 Restrictions – December 8, 2017 GA
• Physical Configuration Limits
• Initially only DS8886 model supported
• 16 Cores
• 256GB and 512GB Cache Sizes only
• Maximum of 4 zHyperLinks per DS8886, one per I/O Bay
• 4 Links, one per I/O Bay – plug order will specify that port 0 must be used
• Links plug into A-Frame only
• These restrictions will be enforced through the ordering process
• z/OS will restrict zHyperLink requests to 4K Control Interval Sizes or smaller
• Firmware Restriction
• DS8000 I/O Priority Manager cannot be used with zHyperLinks active
85
z/OS 2.1,
2.2, 2.3
IBM z14
Hardware
Db2 V12
zHyperLink
ExpressSAN
DS8880
R8.3.x
© Copyright IBM Corporation 2018.
IBM z14 Restrictions – December 8, 2017 GA
• Physical Configuration Limits
• Maximum of 8 zHyperLinks per z14 (4 zHyperLink Express Adapters)
• Recommended maximum 4 PFIDs per zHyperLink per LPAR
• Maximum 64 PFIDs per link
Note: 1 PFID can achieve ~50k IOPs/s for 4K Reads
4 PFIDs on a single link can achieve ~175K IOPs/s
86
z/OS 2.1,
2.2, 2.3
IBM z14
Hardware
Db2 V12
zHyperLink
ExpressSAN
DS8880
R8.3.x
© Copyright IBM Corporation 2018.
Fix Category: IBM.Function.zHyperLink
Exploitation for zHyperLink Express:
FMID APAR PTF Comments
======= ======= ======= ============================
HBB7790 OA50653 BCP (IOS)
HDZ2210 OA53199 DFSMS (Media Mgr, Dev. Support)
OA50681 DFSMS (Media Mgr, Dev. Support)
OA53287 DFSMS (Catalog)
OA53110 DFSMS (CMM)
OA52329 DFSMS (LISTDATA)
HRM7790 OA52452 RMF
Exploitation support for other products:
FMID APAR PTF Comments
======= ======= ======= ============================
HDBCC10 PI82575 DB2 12 support-zHyperLink Exp.
DB2 11 TBD
HDZ2210 OA52876 VSAM RLS zHyperlink Exp.
OA52941 VSAM zHyperlink Exp.
OA52790 SMS zHyperlink Exp.
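The fix category above can be used with SMP/E to identify any of these PTFs that are not yet installed; a minimal sketch (the target zone name is illustrative):
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(TGTZOS) FIXCAT(IBM.Function.zHyperLink).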
Software Deliveries
87
z/OS 2.1,
2.2, 2.3
IBM z14
Hardware
Db2 V12
zHyperLink
ExpressSAN
DS8880
R8.3.x
© Copyright IBM Corporation 2018.
Preliminary Results – zHyperLink Performance
z/OS Dispatcher
Latencies can
exceed 725 usec
with high CPU
utilization
Disclaimer: This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual link latency that any user will experience may vary. z/OS
dispatch latencies are work load dependent. Dispatch latencies of 725 microseconds have been observed under the following conditions: The IBM measurement from Db2 Brokerage Online
Transaction Workload results on z13 with 12 CPs and an I/O Rate of 53,458 per second to one DS8870, 79% CPU utilization, average IOS service time from RMF is 4.875 milliseconds, DB2 (CL3)
average blocking I/O wait time is 5.6 milliseconds (this includes database I/O (predominantly read) and log write I/O).
4K Read at 150
meters
88
© Copyright IBM Corporation 2018.
Early Adopter Program
• Joint effort between z and DS8880 development teams
• If your customer is interested in beginning to exploit zHyperLinks, nominate them for the
EAP
• Contacts:
• Addie M Richards/Tucson/IBM addie@us.ibm.com
• Katharine Kulchock/Poughkeepsie/IBM kathyk@us.ibm.com
89
z/OS 2.1,
2.2, 2.3
IBM z14
Hardware
Db2 V12
zHyperLink
ExpressSAN
DS8880
R8.3.x
• Z Batch Network Analyzer (BNA) tool supports zHyperLink to estimate benefits
• Generate customer reports with text and graphs to show zHyperLink benefit
• Top Data Set candidate list for zHyperLink
• Able to filter the data by time
• Provide support to aggregate zBNA LPAR results into CPC level views
• Requires APAR OA52133
• Only ECKD supported
• Fixed Block/SCSI to be considered for future release
• FICON and zHPF paths required in addition to zHyperLink Express
• zHyperLink Express is a two-port card residing in the PCIe z14 I/O drawer
• Up to 16 cards with up to 32 zHyperLink Express ports are supported in a z14
• Shared by multiple LPARs and each port can support up to 127 Virtual Functions (VFs)
• Maximum of 254 VFs per adapter
• Native LPAR supported
• z/VM and KVM guest support to be considered for a future release
Planning for zHyperLink
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5132
90
• Function ID Type = HYL
• PCHID keyword
• Db2 v11 and v12 with z/OS 2.1+
• zHyperLink connector on DS8880 I/O Bay
• DS8880 firmware R8.3 above
• zHyperLink uses optical cable with MTP connector
• Maximum supported cable length is 150m
Planning for zHyperLink
FUNCTION PCHID=100,PORT=2,FID=1000,VF=16,TYPE=HYL,PART=((LP1),(…))
91
z/OS
IBM z14
Hardware
Db2
zHyperLink
ExpressSAN
© Copyright IBM Corporation 2018.
HCD – Defining a zHyperLink
┌──────────────────────────── Add PCIe Function ────────────────────────────┐
│ CBDPPF10 │
│ │
│ Specify or revise the following values. │
│ │
│ Processor ID . . . . : S35 │
│ │
│ Function ID . . . . . . 300_ │
│ Type . . . . . . . . . ZHYPERLINK + │
│ │
│ Channel ID . . . . . . . . . . . 1C0 + │
│ Port . . . . . . . . . . . . . . 1 + │
│ Virtual Function ID . . . . . . 1__ + │
│ Number of virtual functions . . 1 │
│ UID . . . . . . . . . . . . . . ____ │
│ │
│ Description . . . . . . . . . . ________________________________ │
│ │
│ F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap │
│ F12=Cancel │
└───────────────────────────────────────────────────────────────────────────┘
92
Db2 for z/OS Enablement
Acceptable values: ENABLE, DISABLE,
DATABASE, or LOG
Default:
• ENABLE
• TBD after performance measurements are
done
• Data sharing scope:
• Member scope. It is recommended that all
members use the same setting
• Online changeable: Yes
ENABLE
• Db2 requests the zHyperLink protocol for all eligible I/O
requests
DISABLE
• Db2 does not use the zHyperLink for any I/O requests
DATABASE
• Db2 requests the zHyperLink protocol for only data base
synchronous read I/Os
LOG
• Db2 requests the zHyperLink protocol for only log write
I/Os
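As a sketch of where this value is set (parameter and command names per Db2 12 documentation for the zHyperLink enablement APAR; verify for your level), the subsystem parameter is ZHYPERLINK in the DSN6SPRM macro of the DSNTIJUZ job, and it can be changed online:
DSN6SPRM ... ZHYPERLINK=DATABASE ...
-SET SYSPARM RELOAD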
93
© Copyright IBM Corporation 2018.
Enabling zHyperLink on DS8886 - DSGUI
94
© Copyright IBM Corporation 2018.
Enabling zHyperLink on DS8886 - DSGUI
95
© Copyright IBM Corporation 2018.
DSCLI zHyperLink Commands
96
chzhyperlink
Description: Modify zHyperLink switch
Syntax:
chzhyperlink [-read enable | disable] [-write enable | disable] storage_image_ID |
Example:
dscli > chzhyperlink –read enable IBM.2107-75FA120
Aug 11 02:23:49 PST 2004 IBM DS CLI Version: 5.0.0.0 DS: IBM.2107-75FA120
CMUC00519I chzhyperlink: zHyperLink read is successfully modified.
© Copyright IBM Corporation 2018.
DSCLI zHyperLink Commands
97
lszhyperlink
Description:
Display the status of zHyperLink switch for a given Storage Image
Syntax:
lszhyperlink [ -s | -l ] [ storage_image_ID […] | -]
Example:
dscli > lszhyperlink
Date/Time: July 21, 2017 1:18:19 PM MST IBM DSCLI Version: 7.8.30.364 DS: -
ID Read Write
===============================
IBM.2107-75FBH11 enable disable
© Copyright IBM Corporation 2018.
DSCLI zHyperLink Commands
98
lszhyperlinkport
Description:
Display a list of zHyperLink ports for the given storage image
Syntax:
lszhyperlinkport [-s | -l] [-dev storage_image_ID] [port_ID […] | -]
Example:
dscli> lszhyperlinkport
Date/Time: July 12, 2017 9:54:02 AM CST IBM DSCLI Version: 0.0.0.0 DS: -
ID State loc Speed Width
=============================================================
HL0028 Connected U1500.1B3.RJBAY03-P1-C7-T3 GEN3 8
HL0029 Connected U1500.1B3.RJBAY03-P1-C7-T4 GEN3 8
HL0038 Disconnected U1500.1B4.RJBAY04-P1-C7-T3 GEN3 8
HL0039 Disconnected U1500.1B4.RJBAY04-P1-C7-T4 GEN3 8
© Copyright IBM Corporation 2018.
DSCLI zHyperLink Commands
99
showzhyperlinkport
Description:
Displays detailed properties of an individual zHyperLink port
Syntax:
showzhyperlinkport [-dev storage_image_ID] [-metrics] “ port_ID” | -
Example:
dscli> showzhyperlinkport –metrics HL0068
Date/Time: July 12, 2017 9:59:05 AM CST IBM DSCLI Version: 0.0.0.0 DS: -
ID HL0068
Date Fri Jun 23 11:26:15 PDT 2017
TxLayerErr 2
DataLayerErr 3
PhyLayerErr 4
================================
Lane RxPower (dBm) TxPower (dBm)
================================
0 0.4 0.5884
1 0.1845 -0.2909
2 -0.41 -0.0682
3 0.114 -0.4272
• A standard FICON channel (CHPID type FC) is required for exploiting the zHyperLink
Express feature
• A customer-supplied 24x MTP-MTP cable is required for each port of the zHyperLink
Express feature. The cable is a single 24-fiber cable with Multi-fiber Termination Push-on
(MTP) connectors.
• Internally, the single cable houses 12 fibers for transmit and 12 fibers for receive (Ports
are 8x, similar to ICA SR)
• Two fiber type options are available with specifications supporting different distances for
the zHyperLink Express:
• 150m: OM4 50/125 micrometer multimode fiber optic cable with a fiber bandwidth @wavelength: 4.7 GHz-km @ 850 nm.
• 40m: OM3 50/125 micrometer multimode fiber optic cable with a fiber bandwidth @wavelength: 2.0 GHz-km @ 850 nm.
zHyperLink Connectivity
100
© Copyright IBM Corporation 2018.
IBM z14 I/O and zHyperLink
101
© Copyright IBM Corporation 2018.
SuperPAV / DS8880 Integration
• Building upon IBM’s success with PAVs and HyperPAV, SuperPAV provides cross
control unit aliases
• Previously aliases must be from within the logical control unit (LCU)
• 3390 devices + aliases ≤ 256 could be a limiting factor
• LCUs with many EAVs could potentially require additional aliases
• LCUs with many logical devices and few aliases required reconfiguration if they required additional aliases
• SuperPAVs, an IBM DS8880 exclusive, extends aliases beyond the LCU barrier
• SuperPAVs can cross control unit boundaries and enable aliases to be shared among multiple LCUs provided
that:
• The 3390 devices and the aliases are assigned to the same DS8000 server (even/odd LCU)
• The devices share a common path group on the z/OS system
• Even numbered control units with the exact same paths (CHPIDs [and destination addresses]) are considered peer
control units and may share aliases
• Odd numbered control units with the exact same paths (CHPIDs [and destination addresses]) are considered peer
control units and may share aliases
• There is still a requirement to have a least one base device per LCU so it is not possible to define a LCU with
nothing but aliases.
• Using SuperPAVs will provide benefits to clients especially with a large number of systems
(LPARs) or many LCUs sharing a path group
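Enablement sketch (z/OS side; a hedged illustration – option names are those introduced with the z/OS SuperPAV support, and the device number is illustrative):
IECIOSxx: HYPERPAV=XPAV        (request SuperPAV/extended alias mode at IPL)
SETIOS HYPERPAV=XPAV           (switch dynamically from the console)
D IOS,HYPERPAV                 (verify the current HyperPAV/SuperPAV mode)
D M=DEV(9000)                  (display base device status, including alias use)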
102
z/OS
© Copyright IBM Corporation 2018.
Db2 Castout Accelerator / DS8880 Integration
• In Db2, the process of writing pages from the group buffer pool to disk is referred to as "castout"
• Db2 uses a defined process to move buffer pool pages from the group buffer pool through private buffer pools to disk
• During castout, Db2 writes long chains of writes that typically contain multiple locate record domains
• Each I/O in the chain is synchronized individually
• Eliminating this per-domain synchronization reduces overhead for chains of scattered writes
• The individual synchronization is not required for Db2 usage – Db2 only requires that the updates are written in order
• What changed?
• Media Manager has been enhanced to signal to the DS8000 that there is a single logical locate record domain – even though there are multiple embedded locate records
• The data hardening requirement for the entire I/O chain is as if it were a single locate record domain
• This change applies only to zHPF I/O
• Significant benefit also when using Metro Mirror in this environment
• Prototype code results showed a 33% reduction in response time for a typical Db2 castout write chain when replicating with Metro Mirror, and 43% when Metro Mirror is not in use
• Requires z/OS V1.13 or above with APARs OA49684 and OA49685
• DS8880 R8.1+
104
https://developer.ibm.com/storage/2017/04/04/Db2-cast-accelerator/
[Diagram: z/OS with Media Manager and Db2 driving the DS8880]
Performance - Db2 Castout Accelerator (CA)
[Chart: significant improvement in disconnect time]
106
© Copyright IBM Corporation 2018.
[Diagram: application copy pool FlashCopied to a copy pool backup storage group – multiple disk copies onsite, dump to tape offsite]
• Up to 5 copies and 85 versions for each copy pool
• Automatic expiration, managed by Management Class
Integrated Db2 / DFSMShsm solution to manage point-in-time copies
• Solution based on FlashCopy backups combined with Db2 logging
• Db2 BACKUP SYSTEM provides non-disruptive backup and recovery to any point in time for Db2 databases and subsystems
• Db2 maintains cross-volume data consistency – no quiesce of the database is required
• Recovery at all levels from either disk or tape: entire copy pool, individual volumes, and individual data sets
zCDP for Db2 - Joint solution between DFSMS and Db2
107
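A minimal sketch of driving these backups (a hedged illustration; the copy pool name follows the DSN$location$DB convention shown on the next chart, and JCL details are omitted):
Db2 utility statement:   BACKUP SYSTEM FULL
DFSMShsm commands:       FRBACKUP COPYPOOL(DSN$DSNDB0G$DB) EXECUTE
                         LIST COPYPOOL(DSN$DSNDB0G$DB)
BACKUP SYSTEM invokes DFSMShsm fast replication (FlashCopy) for the copy pools associated with the Db2 data and log storage groups, so the backup is taken without quiescing the subsystem.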
© Copyright IBM Corporation 2018.
Db2 RESTORE SYSTEM
[Diagram: copy pool DSN$DSNDB0G$DB (storage group DB2DATA) backed up by fast replication to copy pool backup storage group DB2BKUP, version n]
1. Identify the recovery point
2. Recover the appropriate PIT copy (may be from disk or tape – disk provides a short RTO, while tape will be a longer RTO)
3. Apply log records up to the recovery point
108
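A minimal recovery sketch for a prior point in time (hedged; the log point and copy pool name are illustrative):
1. DSNJU003 change log inventory:    CRESTART CREATE,SYSPITR=log-point    (establish the recovery point)
2. Restart Db2 and run the utility:  RESTORE SYSTEM                       (recovers the PIT copy, then applies log up to the recovery point)
3. DFSMShsm can also recover volumes directly if needed:  FRRECOV COPYPOOL(DSN$DSNDB0G$DB)
Recovery from the disk copies gives the shorter RTO; copies that were dumped to tape take longer.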
© Copyright IBM Corporation 2018.
16Gb Host Adapter – FCP and FICON
• 16Gb connectivity reduces latency and provides faster single stream and per port
throughput
• 8GFC and 4GFC compatibility (no FC-AL connections)
• Quad-core PowerPC processor upgrade
• Dramatic (2-3x) full-adapter IOPS improvement compared to existing 8Gb adapters (for both CKD and distributed FCP)
• Lights-on Fastload avoids path disturbance during code loads
• Forward Error Correction (FEC) for the utmost reliability
• Additional functional improvements for IBM Z environments combined with z13/z14 host channels
• zHPF extended distance performance feature (zHPF Extended Distance II)
109
© Copyright IBM Corporation 2018.
zHPF and 16Gb FICON reduces end-to-end latency
• Latency of the storage media is not the only
aspect to consider for performance
• zHPF significantly reduces read and write
response times compared to FICON
• With 16Gb SAN connectivity the benefits of
zHPF are even greater
110
z13 with 16Gb HBA provides up to 21% lower latency than the zEC12 with 8Gb HBA
Single channel, 4K block, 1 device – response time (msec):
              z13 FEx16S 16G HBA   zEC12 FEx8S 8G HBA
zHPF Read           0.122                0.155
zHPF Write          0.143                0.180
FICON Read          0.185                0.209
FICON Write         0.215                0.214
© Copyright IBM Corporation 2018.
FICON Express16S+
• For FICON, zHPF, and FCP
• CHPID types: FC and FCP
• Both ports must be same CHPID type
• 2 PCHIDs / CHPIDs
• Auto-negotiates to 4, 8, or 16 Gbps
• 2 Gbps connectivity not supported
• FICON Express8S will be available
for 2Gbps (carry forward only)
• Increased performance compared to
FICON Express16S
• Small form factor pluggable (SFP) optics
• Concurrent repair/replace action for each SFP
• 10KM LX - 9 micron single mode fiber
• Unrepeated distance - 10 kilometers (6.2 miles)
• SX - 50 or 62.5 micron multimode fiber
• Distance variable with link data rate and fiber type
• 2 channels of LX or SX (no mix)
FC #0427 – 10KM LX, FC #0428 – SX
[Diagram: LX/LX or SX/SX connections; SX over OM2/OM3 multimode fiber]
111
© Copyright IBM Corporation 2018.
[Charts: I/O driver benchmark results for FICON vs. zHPF on FICON Express8 (z10, z196), FICON Express8S (zEC12, zBC12, z196, z114), FICON Express16S (z13, z14) and FICON Express16S+ (z14).
IO/s per channel (4K block size, channel 100% utilized): 20,000 / 52,000 / 20,000 / 23,000 / 23,000 / 92,000 / 98,000 / 300,000 – z14 FICON Express16S+ with zHPF reaches 300,000 IO/s (labeled as a 306% increase).
MB/s per channel (full-duplex, large sequential read/write mix): 620 / 770 / 620 / 620 / 620 / 1,600 / 3,000 / 3,200 – z14 FICON Express16S+ with zHPF reaches 3,200 MB/s (labeled as a 6% increase over FICON Express16S).]
*This performance data was measured in a controlled environment running
an I/O driver program under z/OS. The actual throughput or performance that
any user will experience will vary depending upon considerations such as
the amount of multiprogramming in the user's job stream, the I/O
configuration, the storage configuration, and the workload processed.
zHPF and z14 FICON Express 16S+ Performance
112
© Copyright IBM Corporation 2018.
z/OS Transactional Performance for DS8880
[Chart: response time (ms) vs. I/O rate (KIO/s), 0 to ~3,000, for DS8870 p7+ 16-core 1536 HDD, DS8870 p7+ 16-core 8 HPFE (240 flash cards), DS8884 p8 6-core 4 HPFE (120 flash cards), DS8886 p8 24-core 8 HPFE (240 flash cards) and DS8888 p8 48-core 16 HPFE (480 flash cards)]
114
© Copyright IBM Corporation 2018.
DS8000 Family - z/OS OLTP Performance
[Chart: response time (ms) vs. I/O rate for DS8870 p7+ 16-core 8 HPFE (240 flash cards), DS8884 p8 6-core 4 HPFE (120 flash cards) and DS8886 p8 24-core 8 HPFE (240 flash cards)]
1.5x faster; ~200us response time with HPFE for this workload – a 10% reduction compared to DS8870
115
© Copyright IBM Corporation 2018.
DS8000 Sequential Read – Max Bandwidth
116
© Copyright IBM Corporation 2018.
DS8000 Sequential Write – Max Bandwidth
117
© Copyright IBM Corporation 2018.
Optimized for enterprise-scale data from multiple platforms and devices
• FICON Express16S links reduce latency for workloads such as Db2 and can
reduce batch elapsed job times
• Reduce up to 58% of Db2 write operations with IBM zHyperWrite and
16Gb links – technology for DS8000 and z/OS for Metro Mirror environment
• First system to use a standards based approach for enabling Forward
Error Correction for a complete end to end solution
• zHPF Extended Distance II provides multi-site configurations with up to 50%
I/O service time improvement when writing data remotely which can benefit
HyperSwap
• FICON Dynamic Routing uses Brocade EBR or Cisco OxID routing across cascaded FICON directors
• Clients with multi-site configurations can expect I/O service time improvement
when writing data remotely which can benefit GDPS or CSM HyperSwap
• Extend z/OS workload management policies into FICON fabric to manage
the network congestion
• New Easy Tier API removes requirement from application/administrator to
manage hardware resources
Continued innovation - z13 / DS8000 Intelligent and Resilient IO
Unparalleled Resilience and Performance for IBM Z
118
http://www.redbooks.ibm.com/abstracts/redp5134.html?Open
Interface Verification - SFP Health through Read Diagnostics Parameter
• New z13 Channel Subsystem function
• A T11 committee standard
• Read Diagnostic Parameters (RDP)
• Created to enhance path evaluation and improve fault isolation
• Periodic polling from the channel to the end points for the logical paths
established
• Automatically differentiate between errors caused by dirty links and
those errors caused by failing optical components
• Provides the optical characteristics for the ends of the link:
• Enriches the view of Fabric components
• z/OS Commands can display optical signal strength and other
metrics without having to manually insert light meters
123
© Copyright IBM Corporation 2018.
R8.1 - Read Diagnostic Parameters (RDP) Enhancements
• Enhancements have been made in the standard to provide additional information in the Read
Diagnostic Parameters (RDP) response
• Buffer-to-buffer credit
• Round trip latency for a measure of link length
• A configured speed indicator to indicate that a port is configured for a specific link speed
• Forward Error Correction (FEC) status
• Alarm and warning levels that can be used to determine when power levels are out of specification without any prior
knowledge of link speeds and types and the expected levels for these
• SFP vendor identification including the name, part number and serial numbers
• APAR OA49089 provides additional support to exploit this function
• Enhancements to D M=DEV command processing and to z/OS Health Checker utility
124
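Example (sketch): displaying RDP data from the z/OS console, assuming the LINKINFO support delivered with the APAR above (device and channel numbers are illustrative):
D M=DEV(D000,(27)),LINKINFO=REFRESH
The IEE174I response includes transmit/receive optical power and error counters for both ends of the link, so a degraded SFP or dirty connector can be identified without inserting a light meter.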
© Copyright IBM Corporation 2018.
IBM Z / DS8880 Integration Capabilities – Availability
• Availability
• Designed for greater than 99.9999% - extreme availability
• Hardware Service Console Redundancy
• Built on high performance/redundant POWER8 technology
• Fully non-disruptive operations
• Fully redundant hardware components
• HyperSwap
• Hardware and software initiated triggers
• Data integrity after a swap
• Consistent time stamps for coordinated recovery of Sysplex and DS8000
• Comprehensive automation management with GDPS or Copy Services Manager (CSM)
• Preserve data reliability with additional redundancy on the information transmitted via 16Gb adapters with Forward Error Correction
126
[Stack diagram: IBM Z hardware – z/OS (IOS, etc.), z/VM, Linux on z Systems – DFSMSdfp (Device Services, Media Manager, SDM) – DFSMShsm, DFSMSdss – Db2, IMS, CICS – GDPS – DS8880]
© Copyright IBM Corporation 2018.
HyperSwap / DS8880 Integration –
Continuous Availability - Multi-Target Mirroring
• Multiple Site Disaster Recovery / High Availability Solution
• Mirrors data from a single primary site to two secondary sites
• Builds upon and extends current Metro Mirror, Global Mirror and Metro
Global Mirror configurations
• Increased capability and flexibility in Disaster Recovery solutions
• Synchronous replication
• Asynchronous replication
• Combination of both Synchronous and Asynchronous
• Provides for an Incremental Resynchronization between the two secondary
sites
• Improved management for a cascaded Metro/Global Mirror configuration
127
[Diagram: H1 mirrored to both H2 and H3]
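A minimal DSCLI sketch of a multi-target configuration (storage image IDs and volume IDs are illustrative; paths are assumed to already exist to both secondaries):
dscli> mkpprc -dev IBM.2107-75H1XXX -remotedev IBM.2107-75H2XXX -type mmir 0100:0100
dscli> mkpprc -dev IBM.2107-75H1XXX -remotedev IBM.2107-75H3XXX -type gcp 0100:0100
dscli> lspprc -dev IBM.2107-75H1XXX 0100
The same H1 volume is the source of a synchronous (Metro Mirror) relationship to H2 and an asynchronous (Global Copy) relationship to H3.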
© Copyright IBM Corporation 2018.
IBM Z / DS8880 Integration Capabilities – Copy Services
• Advanced Copy Services
• Two, three and four site solutions
• Cascaded and multi-target configurations
• Remote site data currency
• Global Mirror achieves an RPO of under 3 seconds, and RTO in approximately 90 minutes
• Most efficient use of link bandwidth
• Fully utilize pre-deposit write to provide lowest protocol overhead for synchronous mirroring
• Bypass extent utilized in a synchronous mirroring environment to lower latency for
applications like Db2 and JES
• Integration of Easy Tier Heat Map Transfer with GDPS / CSM
• Easy to use replication automation with GDPS / CSM
• Significantly reduces personnel requirements for disaster recovery
• Remote Pair FlashCopy leverages inband communications
• Does not require data transfer across mirroring links
• HyperSwap stays enabled
• UCB constraint relief by utilizing all four Multiple Subchannel Sets for secondary volumes, PAVs, aliases and GM FlashCopies
128
© Copyright IBM Corporation 2018.
Business continuity and resiliency protects the reputation of financial firms
129
Statistics from the Ponemon Institute Cost of Data Breach Study 2017; sponsored by IBM.
Visit: http://www-03.ibm.com/security/data-breach
USD 141 – average cost per record compromised
2% increase – average size of a data breach increased to 24,089 records
USD 3.62 million – average total cost per data breach
© Copyright IBM Corporation 2018.
The largest component of the total cost of a data breach is lost business
130
Components of the $3.62 million cost per data breach:
• Detection and escalation – $0.99 million (forensics, root-cause determination, organizing the incident response team, identifying victims)
• Notification – $0.19 million (disclosure of the data breach to victims and regulators)
• Ex-post response – $0.93 million (help desk, inbound communications, special investigations, remediation, legal expenditures, product discounts, identity protection services, regulatory interventions)
• Lost business cost – $1.51 million (abnormal turnover of customers, increased customer acquisition cost, reputation losses, diminished goodwill)
Currencies converted to US dollars
© Copyright IBM Corporation 2018.
What you can do to help reduce the cost of a data breach
Amount by which the cost per record was lowered (currencies converted to US dollars; savings are higher than 2016, * no comparative data for some factors):
• Incident response team – $19.30
• Extensive use of encryption – $16.10
• Employee training – $12.50
• Business Continuity Management involvement – $10.90
• Participation in threat sharing – $8.00
• Use of security analytics – $6.80
• Use of DLP – $6.20
• Data classification – $5.70
• Insurance protection – $5.40
• CISO appointed – $5.20
• Board-level involvement – $5.10
• CPO appointed – $2.90
$262,570 savings per average breach
131
© Copyright IBM Corporation 2018.
Download your copy of the Report:
ibm.biz/PonemonBCM
Visit www.ponemon.org
to learn more about Ponemon
Institute research programs
Ponemon Institute 2017 Cost of a Data Breach Reports
For country-level 2017 Cost of Data Breach
reports, go to:
ibm.com/security/data-breach
132
© Copyright IBM Corporation 2018.
DS8880 Copy Services solutions for your Business Resiliency requirements
133
• FlashCopy – point-in-time copy within the same storage system
• Metro Mirror – synchronous mirroring from primary Site A to metro-distance Site B
• Global Mirror – asynchronous mirroring from primary Site A to out-of-region Site B
• Metro / Global Mirror – three- and four-site cascaded and multi-target synchronous and asynchronous mirroring (primary Site A, metro Site B, out-of-region Site C)
DS8000 Copy Services fully integrated with GDPS and CSM to provide simplified CA and DR operations
© Copyright IBM Corporation 2018.
• The cascading FlashCopy® function allows a target volume/dataset in one mapping to be the source
volume/dataset in another mapping and so on, creating what is called a cascade of copied data
• Cascading FlashCopy® provides the flexibility to obtain point in time copies of data from different places
within the cascade without removing all other copies
Cascading FlashCopy
134
[Diagram: Source → Target/Source → Target 2/Source → Target 3/Source, plus a Target/Source recovery volume]
With cascading FlashCopy®:
• Any target can become a source, and any source can become a target
• Up to 12 relationships are supported
• Any target can be restored to the recovery volume to validate data
• If the source is corrupted, any target can be restored back to the source volume
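Sketch (volume IDs are illustrative; cascading requires a DS8880 LIC level that supports it, and exact options may vary by level):
dscli> mkflash -dev IBM.2107-75XXXXX 0100:0200     (A → B)
dscli> mkflash -dev IBM.2107-75XXXXX 0200:0300     (B → C: the target of the first relationship is the source of the second)
dscli> lsflash -dev IBM.2107-75XXXXX 0100-0300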
© Copyright IBM Corporation 2018.
Cascading FlashCopy
[Diagrams: production volumes with incremental backups – system-level backup taken while a data set FlashCopy is active on production volumes; recovery from an incremental backup without withdrawing the other copies]
135
© Copyright IBM Corporation 2018.
Cascading FlashCopy Use Cases
• Restore a Full Volume FlashCopy while maintaining other
FlashCopies
• Dataset FlashCopy combined with Full Volume FlashCopy
• Including Remote Pair FlashCopy with Metro Mirror
• Recover Global Mirror environment while maintaining a DR test copy
• Improve DEFRAG with FlashCopy
• Improved dataset FlashCopy flexibility
• Perform another FlashCopy immediately from a FlashCopy target
Volume or Dataset
FlashCopy
Volume or Dataset
FlashCopy
A B C
136
© Copyright IBM Corporation 2018.
Using IBM FlashCopy Point-in-Time Copies on DS8000 for Logical Corruption Protection (LCP)
137
[Diagram: production copy H1 with protection copies F2a, F2b and F2c and recovery copy R2; production systems use H1, recovery systems use R2]
Direct FlashCopy from the Production Copy to the
Recovery Copy for DR or general application testing
Cascaded FlashCopy from one of the
Protection Copies to the Recovery Copy
to enable Surgical or Forensic Recovery
Cascaded FlashCopy back to the Production Copy
from either one of the Protection Copies or the
Recovery Copy for Catastrophic Recovery
Periodic FlashCopy from the
Production Copy to the Protection
Copies
© Copyright IBM Corporation 2018.
IBM Z / GDPS Solution - Proposed Logical Corruption Protection (LCP) Topology
[Diagrams: Metro Mirror between RS1 (production sysplex) and RS2, with FlashCopy protection copies FC1/FC2/FC3 and a recovery copy RC1 taken from RS2 for the recovery sysplex; minimal variants show a single FC1 protection copy, or a recovery copy RC1 only]
Minimal Configuration with a single logical
protection FC1 copy and no Recovery
copy. Can also be used for resync golden
copy
Minimal Configuration with a Recovery
Copy only to enable isolated Disaster
Recovery testing scenarios
FCn devices provide one or more thin
provisioned logical protection copies.
Recovery devices enable IPL of systems
for forensic analysis or other purposes
Logical protection copies can
be defined in any or all sites
(data centers) as desired. This
example shows the LCP copies
in normal secondary site.
138
© Copyright IBM Corporation 2018.
Logical Corruption Protection (LCP) with TS7760 Virtual Tape
• Proactive Functions
• Copy Export – Dual physical tape data copies, one can be isolated. True “air gap”
solution; no access to exported volumes from z/OS or Web
• Physical Tape – Single physical tape data copy not directly accessible from IBM Z
hosts. Partial “air gap” solution; manipulation of DFSMS, tape management system
and TS7760 settings required to delete virtual tape volumes
• Delete Expired – Delay (from 1 to 32,767 hours) the actual deletion of data (in disk
cache or physical) for any logical volume moved to scratch status. Transparent
protection from accidental or malicious volume deletion
• Logical Write Once Read Many (LWORM) – TS7760 enforced preservation of data
stored on private logical volumes. Immutability (i.e. no change once created) assured
• Reactive Function
• FlashCopy with Write Protect – “Freeze” the contents of production TS7760 systems
during an emergency situation (such as with an active cyber intruder). Read activity
can continue
139
© Copyright IBM Corporation 2018.
DS8880 Remote Mirroring options
• Metro Mirror (MM) – Synchronous Mirroring
• Synchronous mirroring with consistency at remote site
• RPO of 0
• Global Copy (part of MM and GM) – Asynchronous Mirroring
• Asynchronous mirroring without consistency at remote site
• Consistency manually created by user
• RPO determined by how often user is willing to create consistent data at the remote
• Global Mirror (GM) – Asynchronous Mirroring
• Asynchronous mirroring with consistency at the remote site
• RPO between 3-5 seconds
• Metro/Global Mirror – Synchronous / Asynchronous Mirroring
• Three site mirroring solution using Metro Mirror between site 1 and site 2 and Global Mirror between site 2 and site 3
• Consistency maintained at sites 2 and 3
• RPO at site 2 near 0
• RPO at site 3 near 0 if site 1 is lost
• RPO at site 3 between 3-5 seconds if site 2 is lost
• z/OS Global Mirror (XRC)
• Asynchronous mirroring with consistency at the remote site
• RPO between 3-5 seconds
• Timestamp based
• Managed by System Data Mover (SDM)
• Data moved by System Data Mover (SDM) address space(s) running on z/OS
• Supports heterogeneous disk subsystems
• Supports z/OS, z/VM and Linux for z Systems data
140
© Copyright IBM Corporation 2018.
Remote Mirroring Configurations
• Within a single subsystem
• Fibre Channel 'loopback'
• Typically used only for testing
• 2 subsystems in the same location
• Protection against hardware subsystem failure
• Hardware migration
• High Availability
• 2 sites in a metro region
• Protection against local datacenter disaster
• Migration to new or additional data center
• 2 sites at global distances
• Protection against regional disaster
• Migration to a new data center
• 3 or 4 sites
• Metro Mirror for high availability
• Global Mirror for disaster recovery
141
© Copyright IBM Corporation 2018.
Metro Mirror Overview
•2-site, 2-volume hardware replication
• Continuous synchronous replication with consistency
• Metro distances
• 303 km standard support
• Additional distance via RPQ
• Minimal RPO
• Designed for 0 data loss
• Application response time impacted by copy latency
• 1 ms per 100 km round trip
• Secondary access requires suspension of replication
• IBM Z, distributed systems and IBM i volume replication in one
or multiple consistency groups
142
Metro Mirror
Metro Distances
Local Site Remote Site
Metro Mirror
Local Site Remote Site
© Copyright IBM Corporation 2018.
DS8880 Metro Mirror normal operation
143
• Synchronous mirroring with data consistency
• Can provide an RPO of 0
• Application response time affected by remote mirroring distance
• Leverage pre-deposit write to provide single round trip communication
• Metro Distance (up to 303 KM without RPQ)
1. Write to local
2. Primary sends Write IO to the
Secondary (cache to cache
transfer)
3. Secondary responds to the
Primary Write completed
4. Primary acknowledges Write
complete to application
[Diagram: application server writes (1, 4) to the local DS8880 primary (P), which mirrors (2, 3) to the remote DS8880 secondary (S) over Metro Mirror]
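A minimal DSCLI sketch for establishing a Metro Mirror pair (storage image IDs, WWNN, LSS and port IDs, and volume IDs are illustrative):
dscli> mkpprcpath -dev IBM.2107-75LOCAL -remotedev IBM.2107-75REMOT -remotewwnn 500507630XXXXXXX -srclss 01 -tgtlss 01 I0010:I0110
dscli> mkpprc -dev IBM.2107-75LOCAL -remotedev IBM.2107-75REMOT -type mmir 0100:0100
dscli> lspprc -dev IBM.2107-75LOCAL 0100
Once the pair reaches Full Duplex state, each host write completes only after it is secured in both the local and remote cache, as in steps 1-4 above.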
© Copyright IBM Corporation 2018.
Global Mirror Overview
•2-site, 3-volume hardware replication
•Near continuous asynchronous replication with consistency
• Global Copy + FlashCopy + built-in automation to create consistency
• Minimal application impact
• Unlimited global distances
• Efficient use of network bandwidth
• No additional cache required
•Low Recovery Point Objective (RPO)
• Designed to be as low as 2-5 seconds
• Depends on bandwidth, distance, user specification
• Secondary access requires suspension of replication
• IBM Z, distributed systems and IBM i volume replication in same
or different consistency groups
144
Global Mirror
Global Distances
Local Site Remote Site
Flash
Copy
Global Copy
Global Mirror
© Copyright IBM Corporation 2018.
DS8880 Global Mirror normal operation
145
1. Write to local
2. Write complete to application
3. Autonomically or on a user-specified interval,
consistency group formed on local
4. CG sent to remote via Global Copy (drain)
• If writes come in to local, IDs of tracks with changes are
recorded
5. After all consistent data for CG is received at
remote, FlashCopy with 2-phase commit
6. Consistency complete to local
7. Tracks with changes (after CG) are copied to
remote via Global Copy, and FlashCopy Copy-
on-Write preserves consistent image
[Diagram: application server (1, 2) writes to the local DS8880; Global Copy drains the consistency group (4) to the remote DS8880, where FlashCopy (5) preserves the consistent image; changes arriving after the CG are copied later (7)]
Global Mirror
• Asynchronous mirroring with data consistency
• RPO of 3-5 seconds realistic
• Minimizes application impact
• Uses bandwidth efficiently
• RPO/currency depends on workload, bandwidth and requirements
• Global Distance
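A minimal DSCLI sketch of a Global Mirror setup (IDs are illustrative; paths are assumed to exist and the FlashCopy journal volumes are assumed to be defined at the remote site):
dscli> mkpprc -dev IBM.2107-75LOCAL -remotedev IBM.2107-75REMOT -type gcp 0100:0100      (Global Copy pair)
dscli> mkflash -dev IBM.2107-75REMOT -record -persist -nocp 0100:0200                    (journal FlashCopy at the remote)
dscli> mksession -dev IBM.2107-75LOCAL -lss 01 -volume 0100 01                           (add the volume to GM session 01)
dscli> mkgmir -dev IBM.2107-75LOCAL -lss 01 -session 01                                  (start consistency group formation)
dscli> showgmir -dev IBM.2107-75LOCAL 01                                                 (check CG interval and achieved RPO)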
© Copyright IBM Corporation 2018.
Metro/Global Mirror Cascaded Configurations
146
• Metro Mirror within a single location plus Global
Mirror long distance
• Local high availability plus regional disaster protection
• 2-site
Metro Mirror
Metro Distances
Metro Mirror
Metro Distances
Global Mirror
Global Distances
Global Mirror
Global Distances
• Metro Mirror within a metro region plus Global
Mirror long distance
• Local high availability or local disaster protection plus
regional disaster protection
• 3-site
Local Site Remote Site
Local Site Intermediate
Site
Remote Site
© Copyright IBM Corporation 2018.
Metro/Global Mirror Cascaded and Multi Target PPRC
147
• Metro Global Mirror Cascaded
• Local HyperSwap capability
• Asynchronous replication – Out of region disaster recovery capability
• Metro Global Mirror Multi Target PPRC
• Local HyperSwap capability
• Asynchronous replication – Out of region disaster recovery capability
• 2 MM
• 2 GC
• 1 MM / 1 GC
• 1 MM / 1 GM
• 1 GC / 1 GM
• Software support
• GDPS / CSM support MM and MM, MM and GM
Global Mirror
Global Distance
Intermediate Site Remote Site
Metro Mirror
Metro Distance
Local Site
MM
GM
© Copyright IBM Corporation 2018.
Metro/Global Mirror Overview
• 3-site, volume-based hardware replication
• 4-volume design (Global Mirror FlashCopy target may be Space Efficient)
• Synchronous (Metro Mirror) + Asynchronous (Global Mirror)
• Continuous + near-continuous replication
• Cascaded or multi-target
• Metro Distance + Global Distance
• RPO as low as 0 at intermediate or remote for local failure
• RPO as low as 3-5 seconds at remote for failure of both local and intermediate sites
• Application response time impacted only by distance between local and intermediate
• Intermediate site may be co-located at local site
• Fast resynchronization of sites after failures and recoveries
• Single consistency group may include open systems, IBM Z and IBM i volumes
148
Global Mirror
Global Distance
Intermediate Site Remote Site
Metro Mirror
Metro Distance
Local Site
Local Site Intermediate
Site
Remote Site
© Copyright IBM Corporation 2018.
Metro/Global Mirror Normal Operation
149
Application Server
Local DS8000 Intermediate DS8000 Remote DS8000
1. Write to local DS8000
2. Copy to intermediate DS8000 (Metro Mirror)
3. Copy complete to local from intermediate
4. Write complete from local to application
On user-specified interval or autonomically (asynchronously)
5. Global Mirror consistency group formed on intermediate, sent to remote, and
committed on FlashCopies
6. GM consistency complete from remote to intermediate
7. GM consistency complete from intermediate to local (allows for incremental resynch
from local to remote)
© Copyright IBM Corporation 2018.
4-site topology with Metro Global Mirror
150
Metro
Mirror
Global Copy in secondary site
converted to Metro Mirror in
case of disaster or planned site
switch
Global
Copy
Region A Region B
Site2
Site1
Site2
Site1
Incremental Resynchronisation
in case of HyperSwap or
secondary site failure
© Copyright IBM Corporation 2018.
Performance Enhancement - Bypass Extent Serialization
• Certain applications, such as JES and (starting with Db2 V7) Db2, use Bypass Extent Serialization to avoid extent conflicts
• However, Bypass Extent Serialization was not honored when using Metro Mirror
• Starting with DS8870 R7.2 LIC, the DS8870/DS8880 honors Bypass Extent Serialization with Metro Mirror
• Especially beneficial with Db2 data sharing, because the extent range for each castout I/O is unlimited
• Described in Db2 11 z/OS Performance Topics, chapter 6.8,
http://www.redbooks.ibm.com/abstracts/sg248222.html?Open
• http://blog.intellimagic.com/eliminating-data-set-contention/
151
[Chart: 4 KB full-track update write response time components (queue time, device busy delay, pend, connect, disconnect):
• Extent conflict with Bypass Extent Check set – 3,448 IO/s
• Extent conflict with Bypass Extent Check not set – 1,449 IO/s
• No extent conflict – 3,382 IO/s]
Performance based on measurements and projections using IBM benchmarks in a controlled environment.
© Copyright IBM Corporation 2018.
Disaster Recovery / Easy Tier Integration
• Primary site:
• Optimize the storage allocation according to the customer workload (normal Easy Tier process at least once
every 24 hours develops migration plan)
• Save the learning data
• Transfer the learning data from the Primary site to the Secondary site
• Secondary site:
• Without learning, it can only optimize the storage allocation according to the replication workload
• With learning, Easy Tier can merge the checkpoint learning data from the primary site
• Follows the primary storage data placement to optimize for the customer workload
• Client benefits
• Performance optimized DR sites in the event of a disaster
152
HMT software
GDPS
CSM
© Copyright IBM Corporation 2018.
Easy Tier Heat Map Transfer – GDPS configurations
• GDPS 3.12+ provided HeatMap transfer support for
GDPS/XRC and GDPS/MzGM configurations
• Easy Tier HeatMap can be transferred to either the XRC secondary or
FlashCopy target devices
• GDPS/GM and GDPS/MGM 3/4-site supported for
transferring the HeatMap to FlashCopy target devices
• GDPS HeatMap Transfer supported for all GDPS
configurations
153
[Diagram: GDPS with HMT software on z/OS transferring the Easy Tier heat map through the HMCs of H1, H2, H3 and H4 along the replication topology]
© Copyright IBM Corporation 2018.
GDPS for IBM Z High Availability and Disaster Recovery
• GDPS provides a complete solution for high availability and
disaster recovery in IBM Z environments
• Replication management, system management, automated
workflows and deep integration with z/OS and parallel sysplex
• DS8000 provides significant benefits for GDPS users with
close cooperation between development teams
• Over 800 GDPS installations worldwide with high
penetration in financial services and some of the
largest IBM Z environments
• 112 3-site GDPS installations and 11 4-site GDPS
installations
• Over 90% of GDPS installations are currently using
IBM disk subsystems
154
© Copyright IBM Corporation 2018.
Three/four-site GDPS installations by product type:
  GDPS/MzGM 3-site*     49
  GDPS/MGM 3-site**     71
  GDPS/MzGM 4-site***    4
  GDPS/MGM 4-site****   11

GDPS solution by industry sector:
  Communications   48    5.7%
  Distribution     47    5.2%
  Finance         637   73.8%
  Industrial       37    4.5%
  Public           77    8.7%
  Internal IBM     11    1.4%
  SMB               6    0.7%
  Total           863  100.0%

GDPS solution by geography:
  AG      264   31.2%
  AP      116   13.0%
  EMEA    462   55.8%
  Total   863  100.0%

GDPS installations by product type:
  RCMF/PPRC & RCMF/XRC   77    8.2%
  GDPS/PPRC HM           89   10.8%
  GDPS/PPRC             437   50.8%
  GDPS/MTMM               9    0.5%
  GDPS/XRC              118   14.0%
  GDPS/GM               139   15.2%
  GDPS/A-A                4    0.4%
  Total                 863  100.0%

* GDPS/MzGM 3-site consists of GDPS/PPRC HM or GDPS/PPRC and GDPS/XRC. 36-49 have PPRC in the same site.
** GDPS/MGM 3-site consists of GDPS/PPRC or GDPS/MTMM and GDPS/GM. 30-71 have PPRC in the same site.
*** GDPS/MzGM 4-site consists of GDPS/PPRC, GDPS/XRC, and GDPS/PPRC. 1-4 have PPRC in the same site.
**** GDPS/MGM 4-site consists of GDPS/PPRC or GDPS/MTMM, GDPS/GM, and GDPS/PPRC or GDPS/MTMM. 5-9 have PPRC in the same site.
155
GDPS Demographics (thru 5/17)
© Copyright IBM Corporation 2018.
There are many IBM GDPS service products to help meet various business requirements
GDPS/PPRC HM1 – near-continuous availability of data within a data center
• Single data center; applications can remain active
• Near-continuous access to data in the event of a storage subsystem outage
• RPO equals 0 and RTO equals 0
GDPS/PPRC – near-continuous availability (CA) and disaster recovery (DR) within a metropolitan region
• Two data centers; systems can remain active
• Multisite workloads can withstand site and storage failures
• DR RPO equals 0 and RTO less than 1 hour, or CA RPO equals 0 and RTO of minutes
GDPS/MTMM2 – near-continuous availability (CA) and disaster recovery (DR) within a metropolitan region
• Two/three data centers (2 server sites, 3 disk locations); systems can remain active
• Multi-site workloads can withstand site and/or storage failures
• DR RPO equals 0 and RTO less than 1 hour, or CA RPO equals 0 and RTO of minutes
1 Peer-to-peer remote copy (PPRC)  2 Multi-Target Metro Mirror
RPO – recovery point objective; RTO – recovery time objective
156
© Copyright IBM Corporation 2018.
There are many IBM GDPS service products to help meet various business requirements
(continued)
GDPS/GM1 and GDPS/XRC2 – disaster recovery at extended distance
• Two data centers
• More rapid systems disaster recovery with "seconds" of data loss
• Disaster recovery for out-of-region interruptions
• RPO of seconds and RTO less than 1 hour
GDPS/MGM3 and GDPS/MzGM4 (3- or 4-site configuration) – near-continuous availability (CA) regionally and disaster recovery at extended distances
• Three or four data centers
• High availability for site disasters; disaster recovery (DR) for regional disasters
• DR RPO equals 0 and RTO less than 1 hour, or CA RPO equals 0 and RTO of minutes, plus RPO of seconds and RTO less than 1 hour
1 Global Mirror (GM)  2 Extended Remote Copy (XRC)  3 Metro Global Mirror (MGM)  4 Metro z/OS Global Mirror (MzGM)
RPO – recovery point objective; RTO – recovery time objective
157
© Copyright IBM Corporation 2018.
There are many IBM GDPS service products to help meet various business requirements
(continued)
GDPS Virtual Appliance (VA) – near-continuous availability and disaster recovery within metropolitan regions
• Two data centers
• z/VM and Linux on IBM z Systems can remain active
• Near-continuous access to data in the event of a storage subsystem outage
• RPO equals 0 and RTO is less than 1 hour
GDPS/Active-Active – near-continuous availability, disaster recovery and cross-site workload balancing at extended distances
• Two data centers; all sites active
• Disaster recovery for out-of-region interruptions
• RPO of seconds and RTO of seconds
RPO – recovery point objective; RTO – recovery time objective
158
© Copyright IBM Corporation 2018.
Global Continuous Availability and Disaster Recovery Offering for IBM Z – over 18
years and still going strong
159
Technology
System Automation for z/OS
NetView for z/OS
SA Multi-Platform
SA Application Manager
Multi-site Workload Lifeline
Manage and Automate
• Central Point of Control
• IBM Z and Distributed Servers
• xDR for z/VM and Linux on z Systems
• Replication Infrastructure
• Real-time Monitoring and Alert
Management
• Automated Recovery
• HyperSwap for Continuous Availability
• Planned & Unplanned Outages
• Configuration Infrastructure Mgmt
• Single site, 2-site, 3-site, 4-site
• Automated Provisioning
• IBM Z CBU / OOCoD
First GDPS installation 1998, now more than 860 in 49 countries
Disk & Tape replication: Metro Mirror, z/OS Global Mirror, Global Mirror (DS8000/TS7700)
Software replication: IBM InfoSphere Data Replication (IIDR) for DB2, IIDR for IMS, IIDR for VSAM
Replication solutions:
• GDPS/PPRC HM – PPRC HyperSwap Manager
• GDPS/PPRC – PPRC (Metro Mirror)
• GDPS/XRC – XRC (z/OS Global Mirror)
• GDPS/GM – Global Mirror
• GDPS/A-A – Active-Active
• GDPS/MGM – Metro Global Mirror, 3-site and 4-site
• GDPS/MzGM – Metro z Global Mirror, 3-site and 4-site
• GDPS/MTMM – Multi-target Metro Mirror
• GDPS Appliance – PPRC (Metro Mirror)
[Diagram: 4-site topology A/B/C/D with z/OS, xDR and DCM]
© Copyright IBM Corporation 2018.
IBM Copy Services Manager (CSM)
• Volume level Copy Service Management
• Manages Data Consistency across a set of volumes with logical dependencies
• Supports multiple devices (ESS, DS6000, DS8000, XIV, A9000, SVC, Storwize, FlashSystem)
• Coordinates Copy Service Functionalities
• FlashCopy
• Metro Mirror
• Global Mirror
• Metro Global Mirror
• Multi Target PPRC (MM and GC)
• Ease of Use
• Single common point of control
• Web browser based GUI and CLI
• Persistent Store Data Base
• Source / Target volume matching
• SNMP Alerts
• Wizard based configuration
• Business Continuity
• Site Awareness
• High Availability Configuration – active and standby management server
• No Single point of Failure
• Disaster Recovery Testing
• Disaster Recovery Management
160
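A minimal sketch of checking sessions from the CSM command-line interface (csmcli is installed with CSM; the session name is illustrative):
csmcli> lssess                 (list sessions and their state)
csmcli> showsess MM_PROD       (details for one session, including copy progress and recoverability)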
© Copyright IBM Corporation 2018.
CSM 6.1.1 new features and enhancements at a glance
• DS8000 enhancements
• HyperSwap and Hardened Freeze Enablement for DS8000 Multi-Target Metro Mirror - Global
Mirror session types
• Multi-Target Metro Mirror Global Mirror (MM-GM)
• Multi-Target Metro Mirror - Global Mirror with Practice (MM-GM w/ Practice)
• Support for target box not having the Multi-target feature for DS8000 RPQ
• Support for Multi Target Migration scenario to replace pre DS8870 secondary
• Common CSM improvements
• New Standalone PID (5725-Z54) for distributed platform installations
• available for ordering via Passport Advantage (PPA)
• Small footprint offering for replication only customers (No need for Spectrum Control)
• Modernized GUI Look and Feel
• Setup of LDAP configuration through the CSM GUI
• Support for RACF keyring certificate configuration (optionally replaces GUI certificate)
161
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 

Recently uploaded (20)

Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
JMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and GrafanaJMeter webinar - integration with InfluxDB and Grafana
JMeter webinar - integration with InfluxDB and Grafana
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 

IBM DS8880 and IBM Z - Integrated by Design

  • 5. © Copyright IBM Corporation 2018. Designing, developing, and testing together is key to unlocking true value Synergy is much more than just interoperability: DS8880 and IBM Z – Designed, developed and tested together • IBM invented the IBM Z I/O architecture • IBM Z, SAN and DS8880 are jointly developed • IBM is best positioned for earliest delivery of new server support • Shared technology between server team and storage team • SAN is the key to 16Gbps, latency, and availability • No other disk system delivers 24/7 availability and optimized performance for IBM Z • Compatible ≠ identical – other vendors support new IBM Z features late or never at all 5
  • 6. © Copyright IBM Corporation 2018. IBM z14 and DS8880 – Continuing to Integrate by Design • IBM zHyperLink • Delivers less than 20μs response times • All DS8880 models support zHyperLink technology • Superior performance with FICON Express 16S+ and up to 9.4x more Flash capacity • Automated tiering to the Cloud • DFSMS policy control for DFSMShsm tiering to the cloud • Amazon S3 support for Transparent Cloud Tiering (TCT) • Cascading FlashCopy • Allows the target volume/dataset in one mapping to be the source volume/dataset in another mapping, creating a cascade of copied data. IBM DS8880 is the result of years of research and collaboration between the IBM storage and IBM Z teams, working together to transform businesses with trust as a growth engine for the digital economy 6
  • 7. © Copyright IBM Corporation 2018. Clear leadership position 90% greater revenue than next closest competitor Global market acceptance #1 with 55% market share 19 of the top 20 world largest banks use DS8000 for core banking data Having the right infrastructure is essential: IBM DS8000 is ranked #1 storage for the IBM Z Market share 2Q 2017 0% 25% 50% EMC HP Hitachi IBM Source: Calculations based on data from IDC Worldwide Quarterly Disk Storage Systems Tracker, 2017Q2(Worldwide vendor revenue for external storage attached to z/OS hosts) 7
  • 8. © Copyright IBM Corporation 2018. DS8000 is the right infrastructure for Business Critical environments •DS8000 is #1 storage for the IBM Z* •19 of the top 20 world banks use DS8000 for core banking •First to integrate High Performance Flash into Tier 1 Storage •Greater than 6-nines availability •3 seconds RPO; automated site recovery well under 5 minutes •First to deliver true four-way replication 19 of 20 Top Banks *Source: Calculations based on data from IDC Worldwide Quarterly Disk Storage Systems Tracker, 2016Q3 (Worldwide vendor revenue for external storage attached to z/OS hosts) 9
  • 9. © Copyright IBM Corporation 2018. DS8880 Family • IBM POWER8 based processors • DS8884 Hybrid-Flash Array Model 984 and Model 84E Expansion Unit • DS8884 All-Flash Array Model 984 • DS8886 Hybrid / All-Flash Array Model 985 and Model 85E Expansion Unit (single phase power) • DS8886 Hybrid / All-Flash Array Model 986 and Model 86E Expansion Unit (three phase power) • DS8888 All-Flash Array Model 988 and Model 88E Expansion Unit • Scalable system memory and scalable processor cores in the controllers • Standard 19” rack • I/O bay interconnect utilizes PCIe Gen3 • Integrated Hardware Management Console (HMC) • Simple licensing structure • Base functions license • Copy Services (CS) license • z-synergy Services (zsS) License 10
  • 10. © Copyright IBM Corporation 2018. DS8880/F – 8th Generation DS8000 Replication and Microcode Compatibility 2004 POWER5 DS8100 DS8300 2012 POWER7 DS8870 2013 POWER7+ 2015 / 2016 POWER8 DS8870 DS8880 DS8884/DS8886/DS8888 HPFE Gen1 2017 POWER8 DS8880/F HFA / AFA HPFE Gen2 2010 POWER6+ DS8800 2009 POWER6 DS8700 2006 POWER5+ DS8300 Turbo 11
  • 11. © Copyright IBM Corporation 2018. DS8000 Enterprise Storage Evolution
        DS8300: p5/p5+ CEC, FC disk, bulk power, RIO-G I/O bay, 4Gb/2Gb adapters, 33” frame
        DS8700: p6 CEC, FC disk, bulk power, PCIE1 I/O bay, 4Gb/2Gb adapters, 33” frame
        DS8800: p6+ CEC, SAS disk, bulk power, PCIE1 I/O bay, 8Gb/8Gb adapters, 33” frame
        DS8870: p7/p7+ CEC, SAS disk, DC-UPS power, PCIE2 I/O bay, 16Gb/8Gb adapters, 33” frame
        DS8880: p8 CEC, SAS disk, DC-UPS power, PCIE3 I/O bay, 16Gb/8Gb adapters, 19” frame 12
  • 12. © Copyright IBM Corporation 2018. DS8880 ‘Three Layer Shared Everything’ Architecture • Layer 1: Up to 32 distributed PowerPC / ASIC Host Adapters (HA) • Manage the 16Gbps Fibre Channel host I/O protocol to servers and perform data replication to remote DS8000s • Checks FICON CRC from the host, wraps data with internal check bytes. Checks internal check bytes on reads and generates CRC • Layer 2: Centralized POWER8 Servers • Two symmetric multiprocessing (SMP) processor complexes manage two monolithic data caches and advanced functions such as replication and Easy Tier • Write data is mirrored by the Host Adapters into one server’s write cache and the other server’s Nonvolatile Store (NVS) • Layer 3: Up to 16 distributed PowerPC / ASIC RAID Adapters (DA); up to 8 dedicated Flash enclosures, each with a pair of Flash-optimized RAID controllers • DAs manage the 8Gbps FC interfaces to internal HDD/SSD storage devices • Flash Enclosures leverage PCIe Gen3 for the performance and latency of Flash cards • Checks internal check bytes and stores on disk 13 (up to 1TB cache per server)
  • 13. © Copyright IBM Corporation 2018. AFAs reach a new high: 28% of the external array market. Hybrid arrays are up 0.5 points while all-HDD arrays are down 7.4 points. Source: IDC Storage Tracker 3Q17, revenue based on US$. [Chart: WW storage array type mix by quarter, 4Q15–3Q17, split across All Flash Array (AFA), Hybrid Flash Array (HFA) and All Hard Disk Drive (HDD)] 14
  • 14. © Copyright IBM Corporation 2018. Flash technology can be used in many forms … IBM Systems Flash Storage Offerings All-Flash Array (AFA) Mixed (HDD/SSD/CFH) All-Custom Flash Hardware (CFH) All-SSD Hybrid-Flash Array (HFA) CFH defines an architecture that uses optimized flash modules to provide better performance and lower latency than SSDs. Examples of CFH are: • High-Performance Flash Enclosure Gen2 • FlashSystem MicroLatency Module All-flash arrays are storage solutions that only use flash media (CFH or SSDs) designed to deliver maximum performance for application and workload where speed is critical. Hybrid-flash arrays are storage solutions that support a mix of HDDs, SSDs and CFH designed to provide a balance between performance, capacity and cost for a variety of workloads DS8880 now offers an All-flash Family enabled with High- Performance Flash Enclosures Gen2 designed to deliver superior performance, more flash capacity and uncompromised availability DS8880 also offers Hybrid-flash solutions with CFH, SSD and HDD configurations designed to satisfy a wide range of business needs from superior performance to cost efficient requirements Source: IDC's Worldwide Flash in the Datacenter Taxonomy, 2016 15
  • 15. © Copyright IBM Corporation 2018. Why Flash on IBM Z? • Very good overall z/OS average response times can hide many specific applications which can gain significant performance benefits from the reduced latency of Flash • Larger IBM Z memory sizes and newer Analytics and Cognitive workloads are resulting in more cache unfriendly IO patterns which will benefit more from Flash • Predictable performance is also about handling peak workloads and recovering from abnormal conditions. Flash can provide an ability to burst significantly beyond normal average workloads • For clients with a focus on cost, Hybrid Systems with Flash and 10K Enterprise drives are higher performance, greater density and lower cost than 15K Enterprise drives • Flash requires lower energy and less floor space consumption 16 z/OS
  • 16. © Copyright IBM Corporation 2018. DS8880 Family of Hybrid-FlashArrays (HFA) DS8884 DS8886 Affordable hybrid-flash block storage solution for midrange enterprises Faster hybrid-flash block storage for large enterprises designed to support a wide variety of application workloads Model 984 (Single Phase) 985 (Single Phase) 986 (Three Phase) Max Cache 256GB 2TB Max FC/FICON ports 64 128 Media 768 HDD/SSD 96 Flash cards 1536 HDD/SSD 192 Flash cards Max raw capacity 2.6 PB 5.2 PB 17 Business Class Enterprise Class
  • 17. © Copyright IBM Corporation 2018. Hybrid-Flash Array - DS8884 Model 984/84E • 12 cores • Up to 256GB of system memory • Maximum of 64 8/16GB FCP/FICON ports • Maximum 768 HDD/SSD drives • Maximum 96 Flash cards • 19”, 40U rack Hybrid-Flash Array -DS8886 Model 985/85E or 986/86E • Up to 48 cores • Up to 2TB of system memory • Maximum of 128 8/16GB FCP/FICON ports • Maximum1536 HDD/SSD drives • Maximum 192 Flash cards • 19”, 40U - 46U rack 18 DS8880 Hybrid-Flash Array Family – Built on POWER8
  • 18. © Copyright IBM Corporation 2018. DS8884 / DS8886 Hybrid-Flash Array (HFA) Platforms • DS8884 HFA • Model 984 (Single Phase) • Expansion racks are 84E • Maximum of 3 racks (base + 2 expansion) • 19” 40U rack • Based on POWER8 S822 • 6 core processors at 3.891 Ghz • Up to 64 host adapter ports • Up to 256 GB processor memory • Up to 768 drives • Up to two Flash enclosures – 96 Flash cards • 1 Flash enclosure in base rack with 1 additional in first expansion rack • 400/800/1600/3200/3800GB Flash card option • Option for 1 or 2 HMCs installed in base frame • Single phase power • DS8886 HFA • Model 985 (Single phase) / 986 (Three phase) • Expansion racks are 85E / 86E • Maximum of 5 racks (base + 4 expansion) • 19” 46U rack • 40U with a 6U top hat that is installed as part of the install when required • Based on POWER8 S824 • Options for 8 / 16 / 24 core processors at 3.525 or 3.891 Ghz • Up to 128 host adapter ports • Up to 2 TB processor memory • Up to 1536 drives • Up to 4 Flash enclosures – 192 Flash cards • 2 Flash enclosures in base rack with 2 additional in first expansion rack • 400/800/1600/3200/3800GB Flash card option • Option for 1 or 2 HMCs installed in base frame • Model 985 – Single phase power • Model 986 - Three phase power 19
  • 19. © Copyright IBM Corporation 2018. DS8880 Hybrid-FlashArray Configuration Summary Processors per CEC Max System Memory Expansion Frame Max HA ports Max flash raw capacity1 (TB) Max DDM/SSD raw capacity2 (TB) Total raw capacity (TB) DS8884 Hybrid-flash3 6-core 64 0 32 153.6 576 729.6 6-core 128 0 to 2 64 307.2 2304 2611.2 6-core 256 0 to 2 64 307.2 2304 2611.2 DS8886 Hybrid-flash3 8-core 256 0 64 307.2 432 739.2 16-core 512 0 to 4 128 614.4 4608 5222.4 24-core 2048 0 to 4 128 614.4 4608 5222.4 1 Considering 3.2 TB per Flash card 2 Considering 6 TB per HDD and the maximum number of LFF HDDs per storage system 3 Can be also offered as an All-flash configuration with all High-Performance Flash Enclosures Gen2 23
  • 20. © Copyright IBM Corporation 2018. DS8884 / DS8886 HFA Media Options – All Encryption Capable • Flash – 2.5” in High Performance Flash • 400/800/1600/3200GB Flash cards • Flash – 2.5” in High Capacity Flash • 3800GB Flash cards • SSD – 2.5” Small Form Factor • Latest generation with higher sequential bandwidth • 200/400/800/1600GB SSD • 2.5” Enterprise Class 15K RPM • Drive selection traditionally used for OLTP • 300/600GB HDD • 2.5” Enterprise Class 10K RPM • Large capacity, much faster than Nearline • 600GB, 1.2/1.8TB HDD • 3.5” Nearline – 7200RPM Native SAS • Extremely high density, direct SAS interface • 4/6TB HDD Performance 24
  • 21. © Copyright IBM Corporation 2018. Entry level business class storage solution with All-Flash performance delivered within a flexible and space- saving package Enterprise class with ideal combination of performance, capacity and cost to support a wide variety of workloads and applications Analytic class storage with superior performance and capacity designed for the most demanding business workload requirements Processor complex (CEC) 2 x IBM Power Systems S822 2 x IBM Power Systems S824 2 x IBM Power Systems E850C Frames (min / max) 1 / 1 1 / 2 1 / 3 POWER 8 cores per CEC (min / max) 6 / 6 8 / 24 24 / 48 System memory (min / max) 64 GB / 256 GB 256 GB / 2048 GB 1024 GB / 2048 GB Ports (min / max) 8 / 64 8 / 128 8 / 128 Flash cards (min /max) 16 / 192 16 / 384 16 / 768 Capacity (min1 / max2 ) 6.4TB / 729.6TB 6.4 TB / 1.459 PB 6.4 TB / 2.918 PB Max IOPs 550,000 1,800,000 3,000,000 Minimum response time 120µsec 120µsec 120µsec 1 Utilizing 400GB flash cards 2 Utilizing 3.8TB flash cards Business Class Enterprise Class Analytics Class DS8884 DS8886 DS8888 http://www.crn.com/slide-shows/storage/300096451/the-10-coolest-flash-storage-and-ssd-products-of-2017.htm/pgno/0/4?itc=refresh DS8880 Family ofAll-FlashArrays (AFA) 25
  • 22. © Copyright IBM Corporation 2018. All-Flash Array - DS8884 Model 984 • 12 cores • Up to 256GB of system memory • Maximum of 32 8/16GB FCP/FICON ports • Maximum 192 Flash cards • 19”, 40U rack All-Flash Array - DS8886 Model 985/85E or 986/86E • Up to 48 cores • Up to 2TB of system memory • Maximum of 128 8/16GB FCP/FICON ports • Maximum 384 Flash cards • 19”, 46U rack All-Flash Array - DS8888 Model 988/88E • Up to 96 cores • Up to 2TB of system memory • Maximum of 128 8/16GB FCP/FICON ports • Maximum 768 Flash cards • 19”, 46U rack 26 DS8880 All-Flash Array Family – Built on POWER8
  • 23. © Copyright IBM Corporation 2018. DS8884 / DS8886 All-Flash Array (AFA) Platforms • DS8884 AFA • Model 984 (Single Phase) • Base rack • 19” 40U rack • Based on POWER8 S822 • 6 core processors at 3.891 Ghz • Up to 32 host adapter ports • Up to 256 GB processor memory • Four Flash enclosures – 192 Flash cards • 4 Flash enclosures in base rack • 400/800/1600/3200/3800GB Flash card option • Up to 729.6TB (raw) • Option for 1 or 2 HMCs installed in base frame • Single phase power • DS8886 AFA • Model 985 (Single phase) / 986 (Three phase) • Expansion racks are 85E / 86E • Maximum of 2 racks (base + 1 expansion) • 19” 46U rack • 40U with a 6U top hat that is installed as part of the install when required • Based on POWER8 S824 • Options for 8 / 16 / 24 core processors at 3.525 or 3.891 Ghz • Up to 128 host adapter ports • Up to 2 TB processor memory • Up to 8 Flash enclosures – 384 Flash cards • 4 Flash enclosures in base rack with 4 additional in first expansion rack • 400/800/1600/3200/3800GB Flash card option • Up to 1.459PB (raw) • Option for 1 or 2 HMCs installed in base frame • Model 985 – Single phase power • Model 986 - Three phase power 27
  • 24. © Copyright IBM Corporation 2018. All Flash DS8880 Configurations [Rack elevation diagrams for DS8884F, DS8886F and DS8888F showing HMC and HPFE Gen2 enclosure placement] • DS8884F • 192 Flash drives • 64 FICON/FCP ports • 256GB cache memory • DS8886F • 384 Flash drives • 128 FICON/FCP ports • 2TB cache memory • DS8888F • 768 Flash drives • 128 FICON/FCP ports • 2TB cache memory 28
  • 25. © Copyright IBM Corporation 2018. DS8886 AFA Three Phase Physical Layout: Capacity options [Diagram comparing R8.2.x and R8.3+ configurations] 32
  • 26. © Copyright IBM Corporation 2018. DS8888 All-Flash Array (AFA) Platform • DS8888 AFA • Model 988 (Three Phase) • Expansion rack 88E • Maximum of 3 racks (base + 2 expansion) • 19” 46U rack • Based on POWER8 Alpine 4S4U E850C • Options for 24 / 48 core processors at 3.6 Ghz • DDR4 Memory • Up to 384 threads per system with SMT4 • Up to 128 host adapter ports • Up to 2 TB processor memory • Up to 16 Flash enclosures – 768 Flash cards • 4 Flash enclosures in base rack with 6 additional in first two expansion racks • 400/800/1600/3200/3800GB Flash card option • Up to 2.918PB (raw) • Option for 1 or 2 HMCs installed in base frame • Three phase power 36
  • 27. © Copyright IBM Corporation 2018. DS8880 All-Flash Array (AFA) Capacity Summary
        Maximum raw capacity with R8.2.1 (3.2TB flash) vs. R8.3 (3.8TB flash):
        DS8884F: 153.6 TB vs. 729.6 TB
        DS8886F: 614.4 TB vs. 1459.2 TB
        DS8888F: 1228.8 TB vs. 2918.4 TB
        Manage business data growth with up to 3.8x more flash capacity in the same physical space for storage consolidation and data-volume-demanding workloads 37
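As a quick arithmetic check, each figure in the R8.3 column is simply the model's maximum flash-card count multiplied by the 3.8 TB card capacity:

$$192 \times 3.8\ \mathrm{TB} = 729.6\ \mathrm{TB},\qquad 384 \times 3.8\ \mathrm{TB} = 1459.2\ \mathrm{TB},\qquad 768 \times 3.8\ \mathrm{TB} = 2918.4\ \mathrm{TB}$$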
  • 28. © Copyright IBM Corporation 2018. DS8880 AFA Media Options – All Encryption Capable • Flash – 2.5” in High Performance Flash • 400/800/1600/3200GB Flash cards • Flash – 2.5” in High Capacity Flash • 3800GB Flash cards • Data is always encrypted on write to Flash and then decrypted on read • Data stored on Flash is encrypted • Customer data in flight is not encrypted • Media does the encryption at full data rate • No impact to response times • Uses AES 256 bit encryption • Supports cryptographic erasure data • Change of encryption keys • Requires authentication with key server before access to data is granted • Key management options • IBM Security Key Lifecycle Manager (SKLM) • z/OS can also use IBM Security Key Lifecycle Manager (ISKLM) • KMIP compliant key manager such as Safenet KeySecure • Key exchange with key server is via 256 bit encryption 38
  • 29. © Copyright IBM Corporation 2018. DS8880 High Performance Flash Enclosure (HPFE) Gen2 • Performance optimized High Performance Flash Enclosure • Each HPFE Gen2 enclosure • Is 2U, installed in pairs for 4U of rack space • Concurrently installable • Contains up to 24 SFF (2.5”) Flash cards, for a maximum of 48 Flash cards in 4U • Flash cards installed in 16 drive increments – 8 per enclosure • Flash card capacity options • 400GB, 800GB, 1.6TB , 3.2TB and 3.8TB • Intermix of 3 different flash card capacities is allowed • Size options are: 400GB, 800GB, 1.6TB and 3.2TB • RAID6 default for all DS8880 media capacities • RAID5 option available for 400/800GB Flash cards • New Adapter card to support HPFE Gen2 • Installed in pairs • Each adapter pair supports an enclosure pair • PCIe Gen3 connection to IO bay as today’s HPFE 39
  • 30. © Copyright IBM Corporation 2018. Number of HPFE Gen2 allowed per DS8880 system. For already existing 980/981/982 models, the number of HPFE Gen2 that can be installed in the field is based on the number of HPFE Gen1 already installed, as shown in these tables:
        DS8884 (installed HPFE Gen1 → HPFE Gen2 that can be installed): 4 → 0; 3 → 1; 2 → 2; 1 → 2; 0 → 2
        DS8886 (installed HPFE Gen1 → HPFE Gen2 that can be installed): 8 → 0; 7 → 1; 6 → 2; 5 → 3; 4 → 4; 3 → 4; 2 → 4; 1 → 4; 0 → 4
        DS8888 (installed Gen1 A-rack, Gen2 allowed A-rack, installed Gen1 B-rack, Gen2 allowed B-rack): (8, 0, 8, 0); (7, 0, 7, 1); (6, 1, 6, 2); (5, 1, 5, 2); (4, 1, 4, 3); (3, 1, 3, 3); (2, 2, 2, 4); (1, 2, 1, 4); (0, N/A, 0, 4) 42
  • 31. © Copyright IBM Corporation 2018. Drive media is rapidly increasing in capacity to 10TB and more. The greater density provides real cost advantages but requires changes in the types of RAID protection used. The DS8880 now defaults to RAID6 for all drive types, and an RPQ is required for RAID5 on drives >1TB. [Diagrams: a RAID5 stripe (data, parity and spare) and a RAID6 stripe (data, P and Q parity, and spare)] Traditionally RAID5 has been used over RAID6 because it: • Performs better than RAID6 for random writes • Provides more usable capacity. These performance concerns are significantly reduced with Flash and Hybrid systems given very high Flash random write performance. However, as drive capacity increases, RAID5 exposes enterprises to increased risk, since higher capacity drives are more vulnerable to issues during array rebuild: • Data will be lost if a second drive fails while the first failed drive is being rebuilt • Media errors experienced on a drive during rebuild result in a portion of the data being non-recoverable. RAID6 for mission critical protection 44
  • 32. © Copyright IBM Corporation 2018. HPFE Gen 2 – RAID 6 Configuration • Two spares shared across the arrays • All Flash cards in the enclosure pair will be the same capacity • All arrays will use the same RAID protection scheme (RAID-6 in this example) • No intermix of RAID types within an enclosure pair • No deferred maintenance – every Flash card failure will call home. [Diagram: HPFE Gen 2 Enclosures A and B] Install Group 1: 16 drives (8+8), two 5+P+Q arrays, two spares; Install Group 2: 16 drives (8+8), two 6+P+Q arrays, no spares*; Install Group 3: 16 drives (8+8), two 6+P+Q arrays, no spares* (*spares are shared across all arrays). Fully populated enclosure pair: two 5+P+Q arrays, four 6+P+Q arrays, two shared spares 45
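As a rough illustration (assuming 3.2 TB flash cards and ignoring formatting and metadata overhead), a fully populated 48-card enclosure pair in this layout yields

$$(2 \times 5 + 4 \times 6) \times 3.2\ \mathrm{TB} = 34 \times 3.2\ \mathrm{TB} = 108.8\ \mathrm{TB}$$

of usable space out of $48 \times 3.2\ \mathrm{TB} = 153.6\ \mathrm{TB}$ raw, so roughly 29% of the raw capacity goes to the P/Q parity and the two shared spares.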
  • 33. © Copyright IBM Corporation 2018. 3.8TB High Capacity Flash – Random Read / Write • Random Read • Equivalent random read performance to the existing HPFE Gen2 flash drives • Random Write • Lower write performance than the existing High Performance HPFE Gen2 flash drives 46
  • 34. © Copyright IBM Corporation 2018. 3.8TB High Capacity Flash – Sequential Read / Write • Sequential • Equivalent sequential read performance, but lower sequential write performance than the existing HPFE Gen2 flash drives 47
  • 35. © Copyright IBM Corporation 2018. Brocade IBM Z product timeline 48 FICON Introductions • 08/2002 2 Gbps FICON • 05/2002 FICON / FCP Intermix • 11/2001 FICON Inband Mgmt • 04/2001 64 Port Director • 10/2002 140 Port Director • 05/2005 256 Port Director • 09/2006 4 Gbps FICON ESCON Introductions • 10/1994 9032 ESCON Directors • 08/1999 FICON Bridge Bus/Tag, ESCON, FICON and IP Extension • 1986 CTC Extension/B&T • 1991 High Speed Printer Extension • 1993 Tape Storage Extension • 1993 T3/ATM WAN Support • 1995 Disk Mirroring Support • 1998 IBM XRC Support • 1999 Remote Virtual Tape • 2001 FCIP Remote Mirroring • 2003 FICON Emulation for Disk • 2005 FICON Emulation for Tape • 2015 IP Extension [Timeline graphic, 1987–2016: directors 9032, ED-5000, M6064, M6140, FC9000, 24000, 48000, i10K, DCX, DCX-4S, DCX 8510, X6; extension platforms Channelink, USD, USDX, Edge 82xx, 7500 & FR4-18i, 7800 & FX8-24, 7840, SX6] DCX Introductions • 02/2008 DCX Backbone • 02/2008 768 Port Platform • 02/2008 Integrated WAN • 03/2008 8 Gbps FICON • 05/2008 Acceleration for FICON Tape • 11/2009 New FCIP Platforms • 12/2011 DCX 8510 • 01/2012 16 Gbps FICON • 05/2016 X6 Directors • 10/2016 32 Gbps FICON
  • 36. © Copyright IBM Corporation 2018. Current Brocade / IBM Z Portfolio 49 • 16 Gbps FC fabric: DCX-8510-4, DCX-8510-8 (FC16-32 and FC16-48 blades), 6510 • 32/128 Gbps FC fabric: X6-4, X6-8 (FC32-48 blade), G620 • Extension switches: 7800, 7840 • Extension blades: Gen 5 - FX8-24, Gen 6 - SX6
  • 37. © Copyright IBM Corporation 2018. Performance Availability Management / Growth IBM DS8880 and IBM Z: Integration by Design • zHPF Enhancements (now includes all z/OS Db2 I/O, BxAM/QSAM), IMS R15 WADS • Db2 Castout Accelerator • Extended Distance FICON • Caching Algorithms – AMP, ARC, WOW, 4K Cache Blocking • Cognitive Tiering - Easy Tier Application , Heat Map Transfer and Db2 integration with Reorgs • Metro Mirror Bypass Extent Checking • z/OS GM Multiple Reader support and WLM integration • Flash + DFSMS + zHPF + HyperPAV/SuperPAV + Db2 • zWLM + DS8000 I/O Priority Manager • zHyperWrite + DS8000 Metro Mirror • zHyperLink • FICON Dynamic Routing • Forward Error Correction (FEC) code • HyperPAV/SuperPAV • GDPS and Copy Services Manager (CSM) Automation • GDPS Active / Standby/Query/Active • HyperSwap technology improvements • Remote Pair FlashCopy and Incremental FlashCopy Enhancements • zCDP for Db2, zCDP for IMS – Eliminating Backup windows • Cognitive Tiering - Easy Tier Heat map transfer • Hybrid Cloud – Transparent Cloud Tiering (TCT) • zOS Health Checker • Quick Init for CKD Volumes • Dynamic Volume Expansion • Extent Space Efficient (ESE) for all volume types • z/OS Distributed Data Backup • z/OS Discovery and Automatic Configuration (zDAC) • Alternate Subchannel exploitation • Disk Encryption • Automation with CSM, GDPS 50 IBM z14 Hardware z/OS (IOS, etc.), z/VM, Linux for z Systems Media Manager, SDM DFSMS Device Support DFSMS hsm, dss Db2, IMS, CICS GDPS DS8880
  • 38. © Copyright IBM Corporation 2018. IBM Z / DS8880 Integration Capabilities – Performance • Lowest latency performance for OLTP and Batch • zHPF • All Db2 IO is able to exploit zHPF • IMS R15 WADS exploits zHPF and zHyperWrite • DS8880 supports format write capability; multi-domain IO; QSAM, BSAM, BPAM; EXCP, EXCPVR; DFSORT, Db2 Dynamic or sequential prefetch, disorganized index scans and List Prefetch Optimizer • HPF extended distance support provides 50% IO performance improvement for remote mirrors • Cache segment size and algorithms • 4K is optimized for OLTP environments • Three unique cache management algorithms from IBM Research to optimize random, sequential and destage for OLTP and Batch optimization • IMS WADS guaranteed to be in cache • Workload Manager Integration (WLM) and IO Priority Manager (IOPM) • WLM policies honored by DS8880 • IBM zHyperLink and zHyperWrite™ • Low latency Db2 read/write and Parallel Db2 Log writes • Easy Tier • Application driven tier management whereby application informs Easy Tier of appropriate tier (e.g. Db2 Reorg) • Db2 Castout Accelerator • Metro Mirror • Pre-deposit write provides lowest latency with single trip exchange • FICON Dynamic Routing reduces costs with improved and persistent performance when sharing ISL traffic 52 IBM Z Hardware z/OS (IOS, etc.), z/VM, Linux for z Systems DFSMSdfp: Device Services, Media Manager, SDM DFSMShsm, DFSMSdss Db2, IMS, CICSGDPS DS8880
  • 39. © Copyright IBM Corporation 2018. zHPF Evolution Version 1 Version 4Version 2 Version 3 • Single domain, single track I/O • Reads, update writes • Media Manager exploitation • z/OS 1.8 and above • Multi-track but <= 64K • Multi-track any size • Extended distance I • Format writes • Multi-domain I/O • QSAM/BSAM/BPAM exploitation • z/OS R1.11 and above • EXCPVR • EXCP Support • ISV Exploitation • Extended Distance II • SDM, DFSORT, z/TPF 53
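As a practical aside, zHPF exploitation is controlled system-wide on z/OS; a minimal sketch of checking and enabling it, assuming the current IECIOSxx and SETIOS syntax:

    IECIOSxx PARMLIB member (enable zHPF at IPL):
      ZHPF=YES
    Console command to enable zHPF dynamically:
      SETIOS ZHPF=YES
    Console command to display the current zHPF setting:
      D IOS,ZHPF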
  • 40. © Copyright IBM Corporation 2018. zHPF and Db2 – Working Together • Db2 functions are improved by zHPF • Db2 database reorganizations • Db2 incremental copy • Db2 LOAD and REBUILD • Db2 queries • Db2 RUNSTATS table sampling • Index scans • Index-to-data access • Log applies • New extent allocation during inserts • Reads from a non-partition index • Reads of large fragmented objects • Recover and restore functions • Sequential reads • Table scans • Write to shadow objects 54 z/OS DFSMS DB2
  • 41. © Copyright IBM Corporation 2018. z/OS and DS8000 zHPF Performance Advantages • Reduced batch window for I/O intensive batch • DS8000 I/O commands optimize QSAM, BPAM, and BSAM access methods for exploiting zHPF • Up to 30% improved I/O service times • Complete conversion of Db2 I/O to zHPF maximizes resource utilization and performance • Up to 52% more format write throughput (4K pages) • Up to 100% more pre-formatting throughput • Up to 19% more sequential pre-fetch throughput • Up to 23% more dynamic pre-fetch throughput (40% with Flash/SSD) • Up to 111% more throughput for disorganized index scans (more with 8K pages) • Db2 10 with zHPF is up to 11x faster than Db2 V9 without zHPF • Up to 30% reduction in synchronous I/O cache hit response time • Improvements in cache handling decrease response times • 3x to 4x improvement in skip sequential index-to-data access cache miss processing • Up to 50% reduction in the number of I/O operations for query and utility functions • DS8000 algorithm optimizes Db2 List-Prefetch I/O. zHPF Performance Exclusive - Significant throughput gains in many areas. Reduced transaction response time. Reduced batch window. Better customer experience. 55 z/OS DFSMS DB2
  • 42. © Copyright IBM Corporation 2018. DFSORT zHPF Exploitation in z/OS2.2 • DFSORT zHPF Exploitation • DFSORT normally uses EXCP for processing of basic and large format sequential input and output data sets (SORTIN, SORTOUT, OUTFIL) • DFSORT already uses BSAM for extended format sequential input and output data sets (SORTIN, SORTOUT and OUTFIL). BSAM already supports zHPF • New enhancement: Update DFSORT to prefer BSAM for SORTIN/SORTOUT/OUTFIL when zHPF is available • DFSORT will automatically take advantage of zHPF if it is available on your system; no user actions are necessary. • Why it Matters: Taking advantage of the higher start rates and bandwidth available with zHPF is expected to provide significant performance benefits on systems where zHPF is available 56 z/OS
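To make the “no user actions” point concrete, a minimal DFSORT job of the usual form (the data set names below are placeholders) picks up zHPF automatically on systems where it is available, because DFSORT now prefers BSAM for SORTIN/SORTOUT/OUTFIL:

    //SORTZHPF JOB (ACCT),'DFSORT ZHPF',CLASS=A,MSGCLASS=X
    //* SORTIN/SORTOUT data set names are placeholders for illustration
    //SORT     EXEC PGM=SORT
    //SYSOUT   DD SYSOUT=*
    //SORTIN   DD DISP=SHR,DSN=MY.INPUT.DATA
    //SORTOUT  DD DSN=MY.SORTED.DATA,DISP=(NEW,CATLG),
    //            UNIT=SYSDA,SPACE=(CYL,(100,50),RLSE)
    //SYSIN    DD *
      SORT FIELDS=(1,10,CH,A)
    /*

Nothing in the JCL refers to zHPF; the preference for BSAM on zHPF-capable systems is internal to DFSORT.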
  • 43. © Copyright IBM Corporation 2018. Utilizing zHPF functionality • Clients can enable/disable specific zHPF features • Requires APAR OA40239 • The MODIFY DEVMAN command communicates with the device manager address space • For zHPF, the following options are available • HPF:4 - zHPF BiDi for List Prefetch Optimizer • HPF:5 - zHPF for QSAM/BSAM • HPF:6 - zHPF List Prefetch Optimizer / Db2 Castout Accelerator • HPF:8 - zHPF Format Writes for Accelerating Db2 Table Space Provisioning • Example 1 - Disable zHPF Db2 Castout Accelerator • F DEVMAN,DISABLE(HPF:6) • F DEVMAN,REPORT • **** DEVMAN **************************************************** • * HPF FEATURES DISABLED: 6 57 z/OS
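Building on Example 1 above, a short console sequence for toggling a zHPF feature; the ENABLE form is shown on the assumption that DEVMAN accepts it symmetrically to DISABLE:

    Disable the zHPF Db2 Castout Accelerator feature:
      F DEVMAN,DISABLE(HPF:6)
    Show which zHPF features are currently disabled:
      F DEVMAN,REPORT
    Re-enable the feature once testing is complete:
      F DEVMAN,ENABLE(HPF:6)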
  • 44. © Copyright IBM Corporation 2018. DS8000 Advanced Caching Algorithms Classical (simple cache algorithms): • LRU (Least Recently Used) / LRW (Least Recently Written) Cache innovations in DS8000: • 2004 – ARC / S-ARC dynamically partitions the read cache between random and sequential portions • 2007 – AMP manages the sequential read cache and decides what, when, and how much to prefetch • 2009 – IWC (or WOW: Wise Ordering for Writes) manages the write cache and decides what order and rate to destage • 2011 – ALP enables prefetch of a list of non-sequential tracks providing improved performance for Db2 workloads 59
  • 45. © Copyright IBM Corporation 2018. DS8880 Cache Efficiency Delivers Higher Cache Hit Ratios. VMAX requires 2n GB of cache to support n GB of “usable” cache. [Diagram: caching two 4K records] • DS8880 (4KB slots): two slots allocated, 8K stored, 0K unused • G1000 (16KB slots): two slots allocated, 8K stored, 24K unused • VMAX (64KB slots): two slots allocated, 8K stored, 120K unused 60
  • 46. © Copyright IBM Corporation 2018. Continued innovation to reduce IBM Z I/O Response Times IOSQ Time Pending Time Disconnect Time Connect Time Parallel Access Volumes Multiple Allegiance Adaptive Multi-Stream Pre- Fetching (AMP) MIDAWs HyperPAV Intelligent Write Caching (IWC) High Performance FICON for IBM z (zHPF) SuperPAV Sequential Adaptive Replacement Cache (SARC) FICON Express 16 Gb channel zHPF List Prefetch Optimizer 4 KB cache slot size zHyperWrite Easy Tier integration with Db2 Db2 Castout Accelerator Integrated DS8000 functions and features to address response time components (not all functions listed) 61
  • 47. © Copyright IBM Corporation 2018. I/O Latency Improvement Technologies for z/OS [Chart (not drawn to scale) comparing I/O latency improvement technologies, ending with zHyperLink] 62
  • 48. © Copyright IBM Corporation 2018. QoS - I/O Priority Manager and Work Load Manager • Application A and B initiate an I/O operation to the same DS8880 rank (may be different logical volumes) • zWLM sets the I/O importance value according to the application priority as defined by system administrator • If resources are constrained within the DS8880 (very high utilization on the disk rank), I/O Priority Manager will handle the highest priority I/O request first and may throttle low priority I/Os to guarantee a certain service level 63 DS8880
  • 49. © Copyright IBM Corporation 2018. z/OS Global Mirror (XRC) / DS8880 Integration - Workload Manager Based Write Pacing • Software Defined Storage enhancement to allow IBM Z Workload Manager (WLM) to control XRC Write Pacing Client benefits • Reduces the administrative overhead of manually managing XRC write pacing • Reduces the need to define XRC write pacing at a per-volume level, allowing greater flexibility in configurations • Prevents low priority work from interfering with the Recovery Point Objective of critical applications • Enables consolidation of workloads onto larger capacity volumes 64 [Diagram: SDM, WLM, primary and secondary volumes]
  • 50. © Copyright IBM Corporation 2018. SAP/Db2 Transactional Latency on z/OS • How do we make transactions run faster on IBM Z and z/OS? A banking workload running on z/OS: Db2 Server time: 5% Lock/Latch + Page Latch: 2-4% Sync I/O: 60-65% Dispatcher Latency: 20-25% TCP/IP: 4-6% This is the write to the Db2 Log Lowering the Db2 Log Write Latency will accelerate transaction execution and reduce lock hold times 1. Faster CPU 2. Software scaling, reducing contention, faster I/O 3. Faster I/O technologies such as zHPF, 16 Gbs, zHyperWrite, zHPF ED II, etc… 4. Run at lower utilizations, address Dispatcher Queueing Delays 5. RoCE Express with SMC-R 65
  • 51. © Copyright IBM Corporation 2018. HyperSwap / Db2 / DS8880 Integration – zHyperWrite • Db2 performs dual, parallel Log writes with DS8880 Metro Mirror • Avoids latency overhead of storage based synchronous mirroring • Improved Log throughput • Reduced Db2 log write response time up to 43 percent • Primary / Secondary HyperSwap enabled • Db2 informs DFSMS to perform a dual log write and not use DS8880 Metro Mirroring if a full duplex Metro Mirror relationship exists • Fully integrated with GDPS and CSM Client benefits • Reduction in Db2 Log latency with parallel Log writes • HyperSwap remains enabled 66
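For orientation, a minimal enablement sketch; the z/OS statement and command names and the Db2 subsystem parameter below reflect commonly documented syntax and should be verified for the release in use:

    IECIOSxx PARMLIB member (enable zHyperWrite at IPL):
      HYPERWRITE=YES
    Console command to enable zHyperWrite dynamically:
      SETIOS HYPERWRITE=YES
    Console command to display the current setting:
      D IOS,HYPERWRITE
    Db2 subsystem parameter that lets the log writer use zHyperWrite
    when the active log is in a HyperSwap-enabled Metro Mirror relationship:
      REMOTE_COPY_SW_ACCEL=ENABLE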
  • 52. © Copyright IBM Corporation 2018. HyperSwap / Db2 / DS8880 Integration – zHyperWrite + 16Gb FICON • Db2 Log write latency improved by up to 58%* with the combination of zHyperWrite and FICON Express16S Client benefits • Gain better end-user-visible transactional response time • Provide additional headroom for growth within the same hardware footprint • Defer when additional Db2 data sharing members are needed for more throughput • Avoid re-engineering applications to reduce log write rates • Improve resilience over workload spikes. * With {zHyperWrite, z13, 16 Gbps DS8870 HBA and FICON Express16S} vs {zEC12, 8 Gbps DS8870 HBA and FICON Express8S} [Chart: client financial transaction test showing zHPF write response time (PEND + CONN) for zEC12 FEx8S with 8Gb HBA, z13 FEx8S with 8Gb HBA, z13 FEx16S with 8Gb HBA and z13 FEx16S with 16Gb HBA; step improvements of -23%, -14% and -15%, -43% overall] 67
  • 53. © Copyright IBM Corporation 2018. zHyperWrite - Client Results 68
        US (Production, 66%): Large healthcare provider. I/O service time for DB2 log write was reduced up to 66% based on RMF data. Client reported that they are “extremely impressed by the benefits”.
        Brazil (Production, 50%): Large financial institution in Brazil, zBLC member.
        US East (PoC, 28%): Large financial institution on the east coast, zBLC member.
        US West (Production, 43%): Large financial institution on the west coast, zBLC member. Measurement was a 43% reduction in DB2 commit times, 8 Gbps channels.
        US Central (Production, 28%): Large agricultural provider. I/O service time for DB2 log write was reduced 25-28%.
        China (PoC, 36%): Job elapsed times with DB2 reduced by 36%. zHPF was active, 8 Gbps channels.
        UK (Production, 40%): Large financial institution in the UK, zBLC and GDPS member. Measurement was a minimum 40% reduction in DB2 commit times, 8 Gbps channels.
        … Many other clients have done a PoC and are now in production
  • 54. © Copyright IBM Corporation 2018. IMS Release 15 Enhancements for WADS Performance https://developer.ibm.com/storage/2017/10/26/ds8880-enables-ims-release-15-reduce-wads-io-service-time-50/ 69
  • 55. © Copyright IBM Corporation 2018. SAP/Db2 Transactional Latency on z/OS • How do we make transactions run faster on IBM Z and z/OS? Latency breakdown for a simple transaction (current vs. projected with zHyperLink):
        Db2 Server CPU time: 5% vs. 5%
        Lock/Latch + Page Latch: 2-4% vs. 1-2%
        I/O service time: 60-65% vs. 5-7%
        Dispatcher (CPU) Latency: 20-25% vs. 5-10%
        Network (TCP/IP): 4-6% vs. 4-6%
        zHyperLink savings: 80% 71
  • 56. © Copyright IBM Corporation 2018. IBM zHyperLink delivers NVMe-oF like latencies for the Mainframe! • New storage technologies like Flash storage are driven by market requirements of low latency • Low latency helps organizations to improve customer satisfaction, generate revenue and address new business opportunities • Low latency drove the high adoption rate of I/O technologies including zHyperWrite, FICON Express16S+, SuperPAV, and zHPF • IBM zHyperLink™ is the result of an IBM research project created to provide extreme low latency links between the IBM Z and the DS8880 • The operating system and middleware (e.g. Db2) are changed so that work keeps running (synchronous execution) while an I/O completes • The zHyperWrite™ based replication solution allows zHyperLink™ replicated writes to complete in the same time as simplex writes 72 IBM Z IBM DS8880 Point-to-point interconnection between the IBM Z Central Electronics Complexes (CECs) and the DS8880 I/O Bays. Less than 20μsec response time!
  • 57. © Copyright IBM Corporation 2018. New business requirements demand fast and consistent application response times • New storage technologies like Flash storage are driven by market requirements of low latency • Low latency helps organizations to improve customer satisfaction, generate revenue and address new business opportunities • Low latency drove the high adoption rate of I/O technologies including zHyperWrite, FICON Express16S+, SuperPAV, and zHPF • IBM zHyperLink™ is the result of an IBM research project created to provide extreme low latency links between the IBM Z and the DS8880 • The operating system and middleware (e.g. Db2) are changed so that work keeps running (synchronous execution) while an I/O completes • The zHyperWrite™ based replication solution allows zHyperLink™ replicated writes to complete in the same time as simplex writes 73 [Diagram: CF global buffer pool access over IB or PCIe at ~8 μsec via SENDMSG, compared with FICON/zHPF over the SAN and zHyperLink™ at >50,000 IO ops/sec and <20 μsec]
  • 58. © Copyright IBM Corporation 2018. Components of zHyperLink • DS8880 - Designed for Extreme Low Latency Access to Data and Continuous Availability • New zHyperLink is an order of magnitude faster for simple read and write of data • zHyperWrite protocols built into zHyperLink protocols for acceleration of database logging with continuous availability • Investment protection for clients that already purchased the DS8880 • New zHyperLink links complement, not replace, FICON channels • Standard FICON channel (CHPID type FC) is required for exploiting the zHyperLink Express feature • z14 – Designed from the Casters Up for High Availability, Low Latency I/O Processing • New I/O paradigm transparent to client applications for extreme low latency I/O processing • End-to-end data integrity policed by IBM Z CPU cores in cooperation with the DS8880 storage system • z/OS, Db2 - New approach to I/O Processing • New I/O paradigm for the CPU synchronous execution of I/O operations to SAN attached storage. Allows reduction of I/O interrupts, context switching, L1/L2 cache disruption and reduced lock hold times typical in transaction processing workloads • Statement of Direction (SOD) to support VSAM and IMS. 74 z/OS IBM z14 Hardware Db2 zHyperLink Express SAN
  • 59. © Copyright IBM Corporation 2018. zHyperLink™ provides real value to your business [Charts: response time reduction compared to zHPF, a 10x reduction in application I/O response time and a 5x reduction in Db2 transaction elapsed time] • zHyperLink™ is FAST enough that the CPU can just wait for the data • No un-dispatch of the running task • No CPU queueing delays to resume it • No host CPU cache disruption • Very small I/O service time • Extreme data access acceleration for Online Transaction Processing in the IBM Z environment • Reduction of batch processing windows by providing faster Db2™ index splits. Index split performance is the main bottleneck for high volume INSERTs • Transparent performance improvement without re-engineering existing applications • More resilient I/O infrastructure with predictable and repeatable service level agreements 75
  • 60. © Copyright IBM Corporation 2018. 1. I/O driver requests synchronous execution 2. Synchronous I/O completes normally 3. Synchronous I/O unsuccessful 4. Heritage I/O path 5. Heritage I/O completion Synchronous I/O Software Flow 76
  • 61. © Copyright IBM Corporation 2018. Continuous Availability - IBM zHyperLink + zHyperWrite [Diagram: IBM z14 with zHyperLink adapters connected point-to-point (< 150m) to both the Metro Mirror primary and secondary storage subsystems, with HyperSwap; 160,000 IO ops, 8 GByte/s, 16 zHyperLink ports supported on each storage subsystem] • zHyperLink™ links are point-to-point connections with a maximum distance of 150m • For acceleration of Db2 Log Writes with Metro Mirror, both the primary and the secondary storage need to be no more than 150 meters from the IBM Z • When the Metro Mirror secondary subsystem is further than 150 meters, exploitation is limited to the read use case • Local HyperSwap™ and long distance asynchronous replication provide the best combination of performance, high availability and disaster recovery • The zHyperWrite™ based replication solution allows zHyperLink™ replicated writes to complete in the same time as non-replicated data 77
  • 62. © Copyright IBM Corporation 2018. DS8880 zHyperLink™ Ports. The DS8880 I/O bay supports up to six external interfaces using a CXP connector type. [Diagram: base and expansion rack I/O bay enclosures with FICON/FCP host adapters, RAID adapters, HPFEs and zHyperLink ports on the DS8880 internal PCIe fabric] Investment Protection – DS8880 hardware shipping 4Q2016 (models 984, 985, 986 and 988); older DS8880s will be field upgradeable at the December 2017 GA 78
  • 63. © Copyright IBM Corporation 2018. Protect your current DS8880 investment • DS8880 provides investment protection by allowing customers to enhance their existing 980/981/982 (R8.0 and R8.1) systems with zHyperLink technology • Each I/O bay has two zHyperLink PCIe connections and a single power output that is used to provide the 12V for the Micro-bay • Intermix of the older I/O bay hardware and the new I/O bay hardware is allowed. Reduce the response time up to 10x in your existing 980/981/982 (R8.0 and R8.1) systems. [Diagram: previous I/O bay cards vs. the field-upgradeable card with zHyperLink support on the DS8880 internal PCIe fabric, with HPFE Gen1/Gen2, RAID adapters and FICON/FCP adapters] 79
  • 64. © Copyright IBM Corporation 2018. Continuous Availability – Synchronous zHyperWrite IBM z14 Metro Mirror Primary Storage Subsystem Optics zHyperLink Adapter z/OS performs synchronous dual writes across storage subsystems in parallel to maintain HyperSwap capability Node 1 Node 2 Optics Optics zHyperLink Adapter Node 1 Node 2 Optics Metro Mirror Secondary Storage Subsystem 80
  • 65. © Copyright IBM Corporation 2018. Performance (Latency and Bandwidth) IBM z14 Metro Mirror Primary Storage Subsystem Optics z/OS software performs synchronous writes in parallel across two or more links for striping large write operations Node 1 Node 2 Optics Optics Node 1 Node 2 Metro Mirror Secondary Storage Subsystem Optics OpticsOptics Optics Optics zHyperLink Adapter zHyperLink Adapter zHyperLink Adapter zHyperLink Adapter 81
  • 66. © Copyright IBM Corporation 2018. Local Primary/Remote Secondary IBM z14 Metro Mirror Primary Storage Subsystem Optics Local Primary uses synchronous I/O for reads, zHPF with enhanced write protocols and zHyperWrite for writes at distance Node 1 Node 2 Optics F C Optics Node 1 Node 2 Metro Mirror Secondary Storage Subsystem Optics OpticsOptics F C Optics Optics zHyperLink Adapter zHyperLink Adapter FICON FICON zHPF Enhanced Write Protocol SAN 100 KM < 150m zHPF Enhanced Write Protocol zHyperWrite Synchronous Reads PPRC 82
  • 67. © Copyright IBM Corporation 2018. I/O Performance Chart – Evolution to IBM zHyperLink with DS8886 [Chart across successive channel generations: average latency falls from 184.5 to 20 μsec, IO ops per channel (4K block size) range from 62K to 315K, DS8886 total IO ops grow from 2.2M to 5.3M, and single channel bandwidth grows from 0.75 to 8.0 GB/s, with zHyperLink at the high end] 83
  • 68. © Copyright IBM Corporation 2018. zHyperLink Infrastructure at a Glance • z14 zHyperLink Express Adapter • Two ports per adapter • Maximum of 16 adapters (32 ports) • Function ID Type = HYL • Up to 127 Virtual Functions (VFs) per PCHID • Point-to-point connection using PCIe Gen3 • Maximum distance: 150 meters • DS8880 zHyperLink Adapter • Two ports per adapter • Maximum adapters • Up to 8 adapters (16 ports) on DS8888 • Up to 6 adapters (12 ports) on DS8886 • Point-to-point connection using PCIe Gen3 [Figure: DS8880 internal PCIe fabric with HPFE Gen2 and zHyperLink ports] 84 z/OS 2.1, 2.2, 2.3 IBM z14 Hardware Db2 V11 or V12 zHyperLink Express SAN DS8880 R8.3
  • 69. © Copyright IBM Corporation 2018. IBM DS8000 Restrictions – December 8, 2017 GA • Physical Configuration Limits • Initially only DS8886 model supported • 16 Cores • 256GB and 512GB Cache Sizes only • Maximum of 4 zHyperLinks per DS8886, one per I/O Bay • 4 Links, one per I/O Bay – plug order will specify that port 0 must be used • Links plug into A-Frame only • These restrictions will be enforced through the ordering process • z/OS will restrict zHyperLink requests to 4K Control Interval Sizes or smaller • Firmware Restriction • DS8000 I/O Priority Manager cannot be used with zHyperLinks active 85 z/OS 2.1, 2.2, 2.3 IBM z14 Hardware Db2 V12 zHyperLink ExpressSAN DS8880 R8.3.x
  • 70. © Copyright IBM Corporation 2018. IBM z14 Restrictions – December 8, 2017 GA • Physical Configuration Limits • Maximum of 8 zHyperLinks per z14 (4 zHyperLink Express Adapters) • Recommended maximum of 4 PFIDs per zHyperLink per LPAR • Maximum of 64 PFIDs per link Note: 1 PFID can achieve ~50K IOPs for 4K reads; 4 PFIDs on a single link can achieve ~175K IOPs 86 z/OS 2.1, 2.2, 2.3 IBM z14 Hardware Db2 V12 zHyperLink Express SAN DS8880 R8.3.x
  • 71. © Copyright IBM Corporation 2018. Software Deliveries
Fix Category: IBM.Function.zHyperLink
Exploitation for zHyperLink Express:
FMID    APAR    PTF  Comments
======= ======= ==== ============================
HBB7790 OA50653      BCP (IOS)
HDZ2210 OA53199      DFSMS (Media Mgr, Dev. Support)
        OA50681      DFSMS (Media Mgr, Dev. Support)
        OA53287      DFSMS (Catalog)
        OA53110      DFSMS (CMM)
        OA52329      DFSMS (LISTDATA)
HRM7790 OA52452      RMF
Exploitation support for other products:
FMID    APAR    PTF  Comments
======= ======= ==== ============================
HDBCC10 PI82575      DB2 12 support - zHyperLink Exp. (DB2 11 TBD)
HDZ2210 OA52876      VSAM RLS zHyperLink Exp.
        OA52941      VSAM zHyperLink Exp.
        OA52790      SMS zHyperLink Exp.
87 z/OS 2.1, 2.2, 2.3 IBM z14 Hardware Db2 V12 zHyperLink Express SAN DS8880 R8.3.x
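Once the exploitation PTFs above are installed, zHyperLink I/O is switched on at the z/OS level through IOS controls. The following is a hedged sketch only – the ZHYPERLINK keyword and operand names are as introduced with this support, but verify the exact syntax for your z/OS level in the MVS System Commands and Initialization and Tuning references:
    SETIOS ZHYPERLINK,OPER=ALL    (allow synchronous I/O for both reads and writes; READ, WRITE or NONE are the other options)
    D IOS,ZHYPERLINK              (display the current zHyperLink setting and whether links are available)
The same OPER setting can be made persistent via the ZHYPERLINK statement in the IECIOSxx parmlib member, and individual data sets are typically made eligible through SMS storage class attributes.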
  • 72. © Copyright IBM Corporation 2018. Preliminary Results – zHyperLink Performance [Chart: 4K read latency at 150 meters] • z/OS dispatcher latencies can exceed 725 usec with high CPU utilization • Disclaimer: This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual link latency that any user will experience may vary. z/OS dispatch latencies are workload dependent. Dispatch latencies of 725 microseconds have been observed under the following conditions: an IBM measurement of a Db2 Brokerage Online Transaction Workload on z13 with 12 CPs and an I/O rate of 53,458 per second to one DS8870, 79% CPU utilization, average IOS service time from RMF of 4.875 milliseconds, and Db2 (CL3) average blocking I/O wait time of 5.6 milliseconds (this includes database I/O (predominantly read) and log write I/O). 88
  • 73. © Copyright IBM Corporation 2018. Early Adopter Program • Joint effort between the IBM Z and DS8880 development teams • If your customer is interested in beginning to exploit zHyperLink, nominate them for the EAP • Contacts: • Addie M Richards/Tucson/IBM addie@us.ibm.com • Katharine Kulchock/Poughkeepsie/IBM kathyk@us.ibm.com 89 z/OS 2.1, 2.2, 2.3 IBM z14 Hardware Db2 V12 zHyperLink Express SAN DS8880 R8.3.x
  • 74. • Z Batch Network Analyzer (BNA) tool supports zHyperLink to estimate benefits • Generate customer reports with text and graphs to show zHyperLink benefit • Top Data Set candidate list for zHyperLink • Able to filter the data by time • Provide support to aggregate zBNA LPAR results into CPC level views • Requires APAR OA52133 • Only ECKD supported • Fixed Block/SCSI to be considered for future release • FICON and zHPF paths required in addition to zHyperLink Express • zHyperLink Express is a two-port card residing in the PCIe z14 I/O drawer • Up to 16 cards with up to 32 zHyperLink Express ports are supported in a z14 • Shared by multiple LPARs and each port can support up to 127 Virtual Functions (VFs) • Maximum of 254 VFs per adapter • Native LPAR supported • z/VM and KVM guest support to be considered for a future release Planning for zHyperLink http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5132 90
  • 75. Planning for zHyperLink • Function ID Type = HYL • PCHID keyword • Db2 V11 and V12 with z/OS 2.1+ • zHyperLink connector on the DS8880 I/O bay • DS8880 firmware R8.3 or above • zHyperLink uses an optical cable with an MTP connector • Maximum supported cable length is 150m FUNCTION PCHID=100,PORT=2,FID=1000,VF=16,TYPE=HYL,PART=((LP1),(…)) 91 z/OS IBM z14 Hardware Db2 zHyperLink Express SAN
  • 76. © Copyright IBM Corporation 2018. HCD – Defining a zHyperLink (Add PCIe Function panel, CBDPPF10)
Specify or revise the following values.
Processor ID . . . . : S35
Function ID . . . . . . 300_
Type . . . . . . . . . ZHYPERLINK +
Channel ID . . . . . . . . . . . 1C0 +
Port . . . . . . . . . . . . . . 1 +
Virtual Function ID . . . . . . 1__ +
Number of virtual functions . . 1
UID . . . . . . . . . . . . . . ____
Description . . . . . . . . . . ________________________________
F1=Help F2=Split F3=Exit F4=Prompt F5=Reset F9=Swap F12=Cancel
92
  • 77. Db2 for z/OS Enablement • Acceptable values: ENABLE, DISABLE, DATABASE, or LOG • Default: ENABLE (TBD after performance measurements are done) • Data sharing scope: member scope; it is recommended that all members use the same setting • Online changeable: Yes • ENABLE – Db2 requests the zHyperLink protocol for all eligible I/O requests • DISABLE – Db2 does not use zHyperLink for any I/O requests • DATABASE – Db2 requests the zHyperLink protocol only for database synchronous read I/Os • LOG – Db2 requests the zHyperLink protocol only for log write I/Os 93
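As a hedged illustration of where this setting lives (not shown on the slide): the value is a Db2 subsystem parameter, normally assembled through the standard DSNTIJUZ installation job and changeable online. The macro placement below is an assumption based on the values listed above – confirm it against the Db2 installation documentation for your level:
    DSN6SPRM ...,ZHYPERLINK=ENABLE,...    (subsystem parameter set to ENABLE, DISABLE, DATABASE or LOG)
    -SET SYSPARM LOAD(DSNZPARM)           (Db2 command to pick up the reassembled parameter module online)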
  • 78. © Copyright IBM Corporation 2018. Enabling zHyperLink on DS8886 - DSGUI 94
  • 79. © Copyright IBM Corporation 2018. Enabling zHyperLink on DS8886 - DSGUI 95
  • 80. © Copyright IBM Corporation 2018. DSCLI zHyperLink Commands 96
chzhyperlink
Description: Modify the zHyperLink switch
Syntax: chzhyperlink [-read enable | disable] [-write enable | disable] storage_image_ID | -
Example:
dscli> chzhyperlink -read enable IBM.2107-75FA120
Aug 11 02:23:49 PST 2004 IBM DS CLI Version: 5.0.0.0 DS: IBM.2107-75FA120
CMUC00519I chzhyperlink: zHyperLink read is successfully modified.
  • 81. © Copyright IBM Corporation 2018. DSCLI zHyperLink Commands 97
lszhyperlink
Description: Display the status of the zHyperLink switch for a given storage image
Syntax: lszhyperlink [ -s | -l ] [ storage_image_ID […] | - ]
Example:
dscli> lszhyperlink
Date/Time: July 21, 2017 1:18:19 PM MST IBM DSCLI Version: 7.8.30.364 DS: -
ID               Read   Write
===============================
IBM.2107-75FBH11 enable disable
  • 82. © Copyright IBM Corporation 2018. DSCLI zHyperLink Commands 98
lszhyperlinkport
Description: Display a list of zHyperLink ports for the given storage image
Syntax: lszhyperlinkport [-s | -l] [-dev storage_image_ID] [port_ID […] | -]
Example:
dscli> lszhyperlinkport
Date/Time: July 12, 2017 9:54:02 AM CST IBM DSCLI Version: 0.0.0.0 DS: -
ID     State        loc                        Speed Width
=============================================================
HL0028 Connected    U1500.1B3.RJBAY03-P1-C7-T3 GEN3  8
HL0029 Connected    U1500.1B3.RJBAY03-P1-C7-T4 GEN3  8
HL0038 Disconnected U1500.1B4.RJBAY04-P1-C7-T3 GEN3  8
HL0039 Disconnected U1500.1B4.RJBAY04-P1-C7-T4 GEN3  8
  • 83. © Copyright IBM Corporation 2018. DSCLI zHyperLink Commands 99
showzhyperlinkport
Description: Display detailed properties of an individual zHyperLink port
Syntax: showzhyperlinkport [-dev storage_image_ID] [-metrics] port_ID | -
Example:
dscli> showzhyperlinkport -metrics HL0068
Date/Time: July 12, 2017 9:59:05 AM CST IBM DSCLI Version: 0.0.0.0 DS: -
ID           HL0068
Date         Fri Jun 23 11:26:15 PDT 2017
TxLayerErr   2
DataLayerErr 3
PhyLayerErr  4
================================
Lane RxPower (dBm) TxPower (dBm)
================================
0    0.4           0.5884
1    0.1845        -0.2909
2    -0.41         -0.0682
3    0.114         -0.4272
  • 84. • A standard FICON channel (CHPID type FC) is required for exploiting the zHyperLink Express feature • A customer-supplied 24x MTP-MTP cable is required for each port of the zHyperLink Express feature. The cable is a single 24-fiber cable with Multi-fiber Termination Push-on (MTP) connectors. • Internally, the single cable houses 12 fibers for transmit and 12 fibers for receive (Ports are 8x, similar to ICA SR) • Two fiber type options are available with specifications supporting different distances for the zHyperLink Express: • 150m: OM4 50/125 micrometer multimode fiber optic cable with a fiber bandwidth @wavelength: 4.7 GHz-km @ 850 nm. • 40m: OM3 50/125 micrometer multimode fiber optic cable with a fiber bandwidth @wavelength: 2.0 GHz-km @ 850 nm. zHyperLink Connectivity 100
  • 85. © Copyright IBM Corporation 2018. IBM z14 I/O and zHyperLink 101
  • 86. © Copyright IBM Corporation 2018. SuperPAV / DS8880 Integration • Building upon IBM's success with PAVs and HyperPAV, SuperPAV provides cross-control-unit aliases • Previously, aliases had to come from within the logical control unit (LCU) • 3390 devices + aliases ≤ 256 could be a limiting factor • LCUs with many EAVs could potentially require additional aliases • LCUs with many logical devices and few aliases required reconfiguration if they needed additional aliases • SuperPAV, an IBM DS8880 exclusive, extends aliases beyond the LCU boundary • SuperPAV aliases can cross control unit boundaries and be shared among multiple LCUs provided that: • The 3390 devices and the aliases are assigned to the same DS8000 server (even/odd LCU) • The devices share a common path group on the z/OS system • Even-numbered control units with the exact same paths (CHPIDs [and destination addresses]) are considered peer control units and may share aliases • Odd-numbered control units with the exact same paths (CHPIDs [and destination addresses]) are considered peer control units and may share aliases • There is still a requirement to have at least one base device per LCU, so it is not possible to define an LCU with nothing but aliases • Using SuperPAV provides the greatest benefit to clients with a large number of systems (LPARs) or many LCUs sharing a path group 102 z/OS
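A hedged sketch of how SuperPAV is typically switched on from the z/OS side (not part of the slide; the keyword values are as I recall them from the SuperPAV support and should be verified against the current IOS documentation):
    SETIOS HYPERPAV=XPAV    (enable SuperPAV alias sharing across peer LCUs; YES keeps classic HyperPAV behaviour)
    D IOS,HYPERPAV          (display the HyperPAV/SuperPAV mode currently in effect)
The setting can be made persistent with HYPERPAV=XPAV in the IECIOSxx parmlib member; the DS8880 must be at a release level that supports SuperPAV.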
  • 87. © Copyright IBM Corporation 2018. Db2 Castout Accelerator / DS8880 Integration • In Db2, the process of writing pages from the group buffer pool to disk is referred to as "castout" • Db2 uses a defined process to move buffer pool pages from the group buffer pool to private buffer pools to disk • When this process occurs, Db2 writes long chains of writes which typically contain multiple locate record domains • Each I/O in the chain is synchronized individually • This individual synchronization is not required for Db2 – Db2 only requires that the updates are written in order • What changed? • Media Manager has been enhanced to signal to the DS8000 that there is a single logical locate record domain – even though there are multiple embedded locate records • The data hardening requirement for the entire I/O chain is as if this were a single locate record domain • Reduces overhead for chains of scattered writes • This change applies only to zHPF I/O • Significant benefit also when using Metro Mirror in this environment • Prototype code results showed a 33% reduction in response time for a typical Db2 castout write chain when replicating with Metro Mirror, and 43% when Metro Mirror is not in use • Requires z/OS V1.13 or above with APARs OA49684 and OA49685 • DS8880 R8.1+ https://developer.ibm.com/storage/2017/04/04/Db2-cast-accelerator/ 104 z/OS Media Manager DB2
  • 88. Performance - Db2 Castout Accelerator (CA) Significant improvement in Disconnect time 106
  • 89. © Copyright IBM Corporation 2018. zCDP for Db2 - Joint solution between DFSMS and Db2 • Integrated Db2 / DFSMShsm solution to manage point-in-time copies • Solution based on FlashCopy backups combined with Db2 logging • Db2 BACKUP SYSTEM provides non-disruptive backup and recovery to any point in time for Db2 databases and subsystems • Db2 maintains cross-volume data consistency; no quiesce of the DB is required • Up to 5 copies and 85 versions for each copy pool • Automatic expiration, managed by Management Class • Recovery at all levels from either disk or tape – entire copy pool, individual volumes and individual data sets [Figure: application storage group copied via FlashCopy to a copy pool backup storage group with multiple disk copies, then dumped to tape onsite and offsite] 107
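To make the moving parts concrete, here is a hedged sketch of the backup side. BACKUP SYSTEM is a Db2 utility and FRBACKUP is a DFSMShsm command; the copy pool name follows the DSN$locn-name$DB convention shown on the next slide, but the exact job setup should be verified for your environment:
    BACKUP SYSTEM FULL                               (Db2 utility statement; copies both the database and log copy pools via DFSMShsm fast replication)
    HSEND FRBACKUP COPYPOOL(DSN$DSNDB0G$DB) EXECUTE  (TSO/DFSMShsm command to drive a FlashCopy backup version of the database copy pool directly)
In normal operation the Db2 utility drives DFSMShsm under the covers, so the FRBACKUP command is shown only to illustrate what happens at the copy pool level.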
  • 90. © Copyright IBM Corporation 2018. Db2 RESTORE SYSTEM [Figure: storage group DB2DATA with copy pool DSN$DSNDB0G$DB, backed up by fast replication to copy pool backup storage group DB2BKUP, version n] 1. Identify the recovery point 2. Recover the appropriate PIT copy (may be from disk or tape; disk provides a short RTO while tape will be a longer RTO) 3. Apply log records up to the recovery point 108
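A hedged sketch of the restore flow these three steps describe – DSNJU003 and RESTORE SYSTEM are the standard Db2 mechanisms, but treat the statements below as illustrative rather than a complete procedure:
    CRESTART CREATE,SYSPITR=log-truncation-point   (DSNJU003 change log inventory statement that sets the recovery point)
    RESTORE SYSTEM                                  (Db2 utility; restores volumes from the copy pool backup and applies log to the SYSPITR point)
    RESTORE SYSTEM LOGONLY                          (variant used when the volumes were already restored outside Db2, e.g. with DFSMShsm FRRECOV)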
  • 91. © Copyright IBM Corporation 2018. 16Gb Host Adapter – FCP and FICON • 16Gb connectivity reduces latency and provides faster single stream and per port throughput • 8GFC, 4GFC compatibility (no FC-AL Connections) • Quad core Power PC processor upgrade • Dramatic (2-3x) full adapter IOPS improvements compared to existing 8Gb adapters (for both CKD and distributed FCP) • Lights on Fastload avoids path disturbance during code loads • Forward Error Correction (FEC) for the utmost reliability • Additional functional improvements for IBM Z environments combined with z13/z14 host channels • zHPF extended distance performance feature • (zHPF Extended Distance II) 109
  • 92. © Copyright IBM Corporation 2018. zHPF and 16Gb FICON reduces end-to-end latency • Latency of the storage media is not the only aspect to consider for performance • zHPF significantly reduces read and write response times compared to FICON • With 16Gb SAN connectivity the benefits of zHPF are even greater • z13 with 16Gb HBA provides up to 21% lower latency than the zEC12 with 8Gb HBA
Response time (msec), single channel, 4K, 1 device:
              z13 FEx16S 16G HBA  zEC12 FEx8S 8G HBA
zHPF Read     0.122               0.155
zHPF Write    0.143               0.180
FICON Read    0.185               0.209
FICON Write   0.215               0.214
110
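For completeness, a hedged reminder of how zHPF itself is controlled on z/OS (not part of the slide; these are the long-standing IOS controls, but check them for your z/OS level):
    SETIOS ZHPF=YES    (enable High Performance FICON system-wide; ZHPF=NO disables it)
    D IOS,ZHPF         (display whether zHPF is currently enabled)
The setting can also be made persistent with ZHPF=YES in the IECIOSxx parmlib member; the FICON channels, directors and DS8880 host adapters must all support zHPF for it to be used on a path.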
  • 93. © Copyright IBM Corporation 2018. FICON Express16S+ • For FICON, zHPF, and FCP • CHPID types: FC and FCP • Both ports must be same CHPID type • 2 PCHIDs / CHPIDs • Auto-negotiates to 4, 8, or 16 Gbps • 2 Gbps connectivity not supported • FICON Express8S will be available for 2Gbps (carry forward only) • Increased performance compared to FICON Express16S • Small form factor pluggable (SFP) optics • Concurrent repair/replace action for each SFP • 10KM LX - 9 micron single mode fiber • Unrepeated distance - 10 kilometers (6.2 miles) • SX - 50 or 62.5 micron multimode fiber • Distance variable with link data rate and fiber type • 2 channels of LX or SX (no mix) FC #0427 – 10KM LX, FC #0428 – SX LX/LX SX/SXOR or OM3 OM2 111
  • 94. © Copyright IBM Corporation 2018. zHPF and z14 FICON Express 16S+ Performance [Charts: I/O driver benchmark results by channel generation – I/Os per second (4K block size, channel 100% utilized) and MegaBytes per second (full duplex, large sequential read/write mix) for FICON Express8 through FICON Express16S+ on z10 through z14, with and without zHPF; with zHPF on z14, FICON Express16S+ reaches 300,000 I/Os per second and 3200 MB/s, with callouts of a 6% and a 306% increase over earlier FICON Express generations] *This performance data was measured in a controlled environment running an I/O driver program under z/OS. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. 112
  • 95. © Copyright IBM Corporation 2018. z/OS Transactional Performance for DS8880 [Chart: response time (ms) versus I/O rate (KIO/s) for DS8870 p7+ 16 core 1536 HDD, DS8870 p7+ 16 core 8 HPFE (240 flash cards), DS8884 p8 6 core 4 HPFE (120 flash cards), DS8886 p8 24 core 8 HPFE (240 flash cards) and DS8888 p8 48 core 16 HPFE (480 flash cards)] 114
  • 96. © Copyright IBM Corporation 2018. DS8000 Family - z/OS OLTP Performance [Chart: response time (ms) versus I/O rate for DS8870 p7+ 16 core 8 HPFE (240 flash cards), DS8884 p8 6 core 4 HPFE (120 flash cards) and DS8886 p8 24 core 8 HPFE (240 flash cards); callouts: 1.5X faster, 200us response time with HPFE for this workload, 10% reduction compared to DS8870] 115
  • 97. © Copyright IBM Corporation 2018. DS8000 Sequential Read – Max Bandwidth 116
  • 98. © Copyright IBM Corporation 2018. DS8000 Sequential Write – Max Bandwidth 117
  • 99. © Copyright IBM Corporation 2018. Optimized for enterprise-scale data from multiple platforms and devices • FICON Express16S links reduce latency for workloads such as Db2 and can reduce batch elapsed job times • Reduce up to 58% of Db2 write operations with IBM zHyperWrite and 16Gb links – technology for DS8000 and z/OS for Metro Mirror environment • First system to use a standards based approach for enabling Forward Error Correction for a complete end to end solution • zHPF Extended Distance II provides multi-site configurations with up to 50% I/O service time improvement when writing data remotely which can benefit HyperSwap • FICON Dynamic Routing uses Brocade EBR or CISCO OxID routing across cascaded FICON directors • Clients with multi-site configurations can expect I/O service time improvement when writing data remotely which can benefit GDPS or CSM HyperSwap • Extend z/OS workload management policies into FICON fabric to manage the network congestion • New Easy Tier API removes requirement from application/administrator to manage hardware resources Continued innovation - z13 / DS8000 Intelligent and Resilient IO Unparalleled Resilience and Performance for IBM Z 118 http://www.redbooks.ibm.com/abstracts/redp5134.html?Open
  • 100. Interface Verification - SFP Health through Read Diagnostics Parameter • New z13 Channel Subsystem function • A T11 committee standard • Read Diagnostic Parameters (RDP) • Created to enhance path evaluation and improve fault isolation • Periodic polling from the channel to the end points for the logical paths established • Automatically differentiate between errors caused by dirty links and those errors caused by failing optical components • Provides the optical characteristics for the ends of the link: • Enriches the view of Fabric components • z/OS Commands can display optical signal strength and other metrics without having to manually insert light meters 123
  • 101. © Copyright IBM Corporation 2018. R8.1 - Read Diagnostic Parameters (RDP) Enhancements • Enhancements have been made in the standard to provide additional information in the Read Diagnostic Parameters (RDP) response • Buffer-to-buffer credit • Round trip latency for a measure of link length • A configured speed indicator to indicate that a port is configured for a specific link speed • Forward Error Correction (FEC) status • Alarm and warning levels that can be used to determine when power levels are out of specification without any prior knowledge of link speeds and types and the expected levels for these • SFP vendor identification including the name, part number and serial numbers • APAR OA49089 provides additional support to exploit this function • Enhancements to D M=DEV command processing and to z/OS Health Checker utility 124
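As a hedged illustration of the operator side of this (not on the slide): the RDP data surfaces through enhancements to the existing DISPLAY MATRIX command once APAR OA49089 is applied; exactly which operands expose the optical data varies by level, so the example below is only indicative, and the device and CHPID numbers are made up:
    D M=DEV(0A10,(2C))    (display device 0A10 on CHPID 2C; with the RDP support the response can include link diagnostic and optical signal information for the path)
This lets operators check transmit/receive power against the SFP alarm and warning thresholds without attaching a light meter, as noted above.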
  • 102. © Copyright IBM Corporation 2018. IBM Z / DS8880 Integration Capabilities – Availability • Availability • Designed for greater than 99.9999% - extreme availability • Hardware Service Console Redundancy • Built on high performance/redundant POWER8 technology • Fully non-disruptive operations • Fully redundant hardware components • HyperSwap • Hardware and software initiated triggers • Data integrity after a swap • Consistent time stamps for coordinated recovery of Sysplex and DS8000 • Comprehensive automation management with GDPS or Copy Services Manager (CSM) • Preserve data reliability with additional redundancy on the information transmitted via 16Gb adapters with Forward Error Correction 126 IBM Z Hardware z/OS (IOS, etc.), z/VM, Linux for z Systems DFSMSdfp: Device Services, Media Manager, SDM DFSMShsm, DFSMSdss DB2, IMS, CICS GDPS DS8880
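For CSM-managed (Basic) HyperSwap environments, a hedged sketch of the z/OS operator interface – GDPS environments drive HyperSwap through GDPS scripts and panels instead, and the exact command operands should be confirmed for your release:
    D HS,STATUS    (display whether HyperSwap is enabled, and the reason if it is disabled)
    SETHS SWAP     (initiate a planned HyperSwap to the Metro Mirror secondary devices)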
  • 103. © Copyright IBM Corporation 2018. HyperSwap / DS8880 Integration – Continuous Availability - Multi-Target Mirroring • Multiple Site Disaster Recovery / High Availability Solution • Mirrors data from a single primary site to two secondary sites • Builds upon and extends current Metro Mirror, Global Mirror and Metro Global Mirror configurations • Increased capability and flexibility in Disaster Recovery solutions • Synchronous replication • Asynchronous replication • Combination of both Synchronous and Asynchronous • Provides for an Incremental Resynchronization between the two secondary sites • Improved management for a cascaded Metro/Global Mirror configuration 127 Mirror H2 H3 H1
  • 104. © Copyright IBM Corporation 2018. IBM Z / DS8880 Integration Capabilities – Copy Services • Advanced Copy Services • Two, three and four site solutions • Cascaded and multi-target configurations • Remote site data currency • Global Mirror achieves an RPO of under 3 seconds, and RTO in approximately 90 minutes • Most efficient use of link bandwidth • Fully utilize pre-deposit write to provide lowest protocol overhead for synchronous mirroring • Bypass extent utilized in a synchronous mirroring environment to lower latency for applications like Db2 and JES • Integration of Easy Tier Heat Map Transfer with GDPS / CSM • Easy to use replication automation with GDPS / CSM • Significantly reduces personnel requirements for disaster recovery • Remote Pair FlashCopy leverages inband communications • Does not require data transfer across mirroring links • HyperSwap stays enabled • UCB constraint relief by utilizing all four Multiple Subchannel Sets for Secondary volumes, PAV’s, Aliases and GM FlashCopies 128 IBM Z Hardware z/OS (IOS, etc.), z/VM, Linux for z Systems DFSMSdfp: Device Services, Media Manager, SDM DFSMShsm, DFSMSdss DB2, IMS, CICSGDPS DS8880
  • 105. © Copyright IBM Corporation 2018. Business continuity and resiliency protects the reputation of financial firms 129 Statistics from the Ponemon Institute Cost of Data Breach Study 2017; sponsored by IBM. Visit: http://www-03.ibm.com/security/data-breach • USD 141 – average cost per record compromised • 2% increase – average size of a data breach increased to 24,089 records • USD 3.62 million – average total cost per data breach
  • 106. © Copyright IBM Corporation 2018. The largest component of the total cost of a data breach is lost business 130 Components of the $3.62 million cost per data breach (currencies converted to US dollars): • Detection and escalation – $0.99 million (forensics, root cause determination, organizing the incident response team, identifying victims) • Notification – $0.19 million (disclosure of the data breach to victims and regulators) • Ex-post response – $0.93 million (help desk, inbound communications, special investigations, remediation, legal expenditures, product discounts, identity protection service, regulatory interventions) • Lost business cost – $1.51 million (abnormal turnover of customers, increased customer acquisition cost, reputation losses, diminished goodwill)
  • 107. © Copyright IBM Corporation 2018. What you can do to help reduce the cost of a data breach – amount by which the cost per record was lowered (currencies converted to US dollars; savings are higher than 2016, * no comparative data): Incident response team $19.30 • Extensive use of encryption $16.10 • Employee training $12.50 • Business Continuity Management involvement $10.90 • Participation in threat sharing $8.00 • Use of security analytics $6.80 • Use of DLP $6.20 • Data classification $5.70 • Insurance protection $5.40 • CISO appointed $5.20 • Board-level involvement $5.10 • CPO appointed $2.90 • $262,570 savings per average breach 131
  • 108. © Copyright IBM Corporation 2018. Download your copy of the Report: ibm.biz/PonemonBCM Visit www.ponemon.org to learn more about Ponemon Institute research programs Ponemon Institute 2017 Cost of a Data Breach Reports For country-level 2017 Cost of Data Breach reports, go to: ibm.com/security/data-breach 132
  • 109. © Copyright IBM Corporation 2018. DS8880 Copy Services solutions for your Business Resiliency requirements 133 • FlashCopy – point-in-time copy within the same storage system • Metro Mirror – synchronous mirroring from primary Site A to metro-distance Site B • Global Mirror – asynchronous mirroring from primary Site A to out-of-region Site B • Metro / Global Mirror – three- and four-site cascaded and multi-target synchronous and asynchronous mirroring across primary Site A, metro-distance Site B and out-of-region Site C • DS8000 Copy Services fully integrated with GDPS and CSM to provide simplified CA and DR operations
  • 110. © Copyright IBM Corporation 2018. Cascading FlashCopy 134 • The cascading FlashCopy® function allows a target volume/dataset in one mapping to be the source volume/dataset in another mapping and so on, creating what is called a cascade of copied data • Cascading FlashCopy® provides the flexibility to obtain point-in-time copies of data from different places within the cascade without removing all other copies • With cascading FlashCopy®: • Any target can become a source • Any source can become a target • Up to 12 relationships are supported • Any target can be restored to the recovery volume to validate data • If the source is corrupted, any target can be restored back to the source volume [Figure: cascade of Source → Target/Source → Target 2/Source → Target 3/Source, plus a Target/Source recovery volume]
  • 111. © Copyright IBM Corporation 2018. Cascading FlashCopy 135 [Figure: two production/incremental-backup cascades – a system-level backup taken while an active data set FlashCopy exists on the production volumes, and recovery from an incremental backup without withdrawing the other copies]
  • 112. © Copyright IBM Corporation 2018. Cascading FlashCopy Use Cases • Restore a Full Volume FlashCopy while maintaining other FlashCopies • Dataset FlashCopy combined with Full Volume FlashCopy • Including Remote Pair FlashCopy with Metro Mirror • Recover Global Mirror environment while maintaining a DR test copy • Improve DEFRAG with FlashCopy • Improved dataset FlashCopy flexibility • Perform another FlashCopy immediately from a FlashCopy target Volume or Dataset FlashCopy Volume or Dataset FlashCopy A B C 136
  • 113. © Copyright IBM Corporation 2018. Using IBM FlashCopy Point-in-Time Copies on DS8000 for Logical Corruption Protection (LCP) 137 [Figure: production copy H1 with protection copies F2a, F2b, F2c and recovery copy R2, shared between production and recovery systems] • Periodic FlashCopy from the Production Copy to the Protection Copies • Direct FlashCopy from the Production Copy to the Recovery Copy for DR or general application testing • Cascaded FlashCopy from one of the Protection Copies to the Recovery Copy to enable Surgical or Forensic Recovery • Cascaded FlashCopy back to the Production Copy from either one of the Protection Copies or the Recovery Copy for Catastrophic Recovery
  • 114. © Copyright IBM Corporation 2018. IBM Z / GDPS Solution - Proposed Logical Corruption Protection (LCP) Topology 138 [Figure: production sysplex with RS1 Metro Mirrored to RS2, RS2 FlashCopy protection copies FC1/FC2/FC3 and recovery copy RC1 used by a recovery sysplex; two minimal variants are also shown] • FCn devices provide one or more thin-provisioned logical protection copies • Recovery devices enable IPL of systems for forensic analysis or other purposes • Logical protection copies can be defined in any or all sites (data centers) as desired; this example shows the LCP copies in the normal secondary site • Minimal configuration with a single logical protection copy (FC1) and no recovery copy; can also be used for a resync golden copy • Minimal configuration with a recovery copy only, to enable isolated disaster recovery testing scenarios
  • 115. © Copyright IBM Corporation 2018. Logical Corruption Protection (LCP) with TS7760 Virtual Tape • Proactive Functions • Copy Export – Dual physical tape data copies, one can be isolated. True “air gap” solution; no access to exported volumes from z/OS or Web • Physical Tape – Single physical tape data copy not directly accessible from IBM Z hosts. Partial “air gap” solution; manipulation of DFSMS, tape management system and TS7760 settings required to delete virtual tape volumes • Delete Expired – Delay (from 1 to 32,767 hours) the actual deletion of data (in disk cache or physical) for any logical volume moved to scratch status. Transparent protection from accidental or malicious volume deletion • Logical Write Once Read Many (LWORM) – TS7760 enforced preservation of data stored on private logical volumes. Immutability (i.e. no change once created) assured • Reactive Function • FlashCopy with Write Protect – “Freeze” the contents of production TS7760 systems during an emergency situation (such as with an active cyber intruder). Read activity can continue 139 139
  • 116. © Copyright IBM Corporation 2018. DS8880 Remote Mirroring options • Metro Mirror (MM) – Synchronous Mirroring • Synchronous mirroring with consistency at remote site • RPO of 0 • Global Copy (part of MM and GM) – Asynchronous Mirroring • Asynchronous mirroring without consistency at remote site • Consistency manually created by user • RPO determined by how often user is willing to create consistent data at the remote • Global Mirror (GM) – Asynchronous Mirroring • Asynchronous mirroring with consistency at the remote site • RPO between 3-5 seconds • Metro/Global Mirror – Synchronous / Asynchronous Mirroring • Three site mirroring solution using Metro Mirror between site 1 and site 2 and Global Mirror between site 2 and site 3 • Consistency maintained at sites 2 and 3 • RPO at site 2 near 0 • RPO at site 3 near 0 if site 1 is lost • RPO at site 3 between 3-5 seconds if site 2 is lost • z/OS Global Mirror (XRC) • Asynchronous mirroring with consistency at the remote site • RPO between 3-5 seconds • Timestamp based • Managed by System Data Mover (SDM) • Data moved by System Data Mover (SDM) address space(s) running on z/OS • Supports heterogeneous disk subsystems • Supports z/OS, z/VM and Linux for z Systems data 140
  • 117. © Copyright IBM Corporation 2018. Remote Mirroring Configurations • Within a single subsystem • Fibrechannel ‘loopback’ • Typically used only for testing • 2 subsystems in the same location • Protection against hardware subsystem failure • Hardware migration • High Availability • 2 sites in a metro region • Protection against local datacenter disaster • Migration to new or additional data center • 2 sites at global distances • Protection against regional disaster • Migration to a new data center • 3 or 4 sites • Metro Mirror for high availability • Global Mirror for disaster recovery 141
  • 118. © Copyright IBM Corporation 2018. Metro Mirror Overview •2-site, 2-volume hardware replication • Continuous synchronous replication with consistency • Metro distances • 303 km standard support • Additional distance via RPQ • Minimal RPO • Designed for 0 data loss • Application response time impacted by copy latency • 1 ms per 100 km round trip • Secondary access requires suspension of replication • IBM Z, distributed systems and IBM i volume replication in one or multiple consistency groups 142 Metro Mirror Metro Distances Local Site Remote Site Metro Mirror Local Site Remote Site
  • 119. © Copyright IBM Corporation 2018. DS8880 Metro Mirror normal operation 143 • Synchronous mirroring with data consistency • Can provide an RPO of 0 • Application response time affected by remote mirroring distance • Leverages pre-deposit write to provide single round trip communication • Metro distance (up to 303 KM without RPQ) Steps: 1. Write to local 2. Primary sends the write I/O to the secondary (cache-to-cache transfer) 3. Secondary responds to the primary that the write completed 4. Primary acknowledges write complete to the application [Figure: application server writing to the local DS8880 primary (P), Metro Mirror to the remote DS8880 secondary (S)]
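A hedged DSCLI sketch of setting up a Metro Mirror pair like the one in this picture. mkpprcpath, mkpprc and lspprc are standard DS8000 DSCLI commands, but every identifier below (storage image IDs, WWNN, ports, LSS and volume ranges) is invented for illustration:
    dscli> mkpprcpath -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB -remotewwnn 5005076303FFD123 -srclss 10 -tgtlss 10 I0143:I0010   (define the PPRC path between the two boxes)
    dscli> mkpprc -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB -type mmir 1000-100F:1000-100F                                       (establish synchronous Metro Mirror pairs for the volume range)
    dscli> lspprc -dev IBM.2107-75AAAAA 1000-100F                                                                                        (verify that the pairs reach Full Duplex state)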
  • 120. © Copyright IBM Corporation 2018. Global Mirror Overview •2-site, 3-volume hardware replication •Near continuous asynchronous replication with consistency • Global Copy + FlashCopy + built-in automation to create consistency • Minimal application impact • Unlimited global distances • Efficient use of network bandwidth • No additional cache required •Low Recovery Point Objective (RPO) • Designed to be as low as 2-5 seconds • Depends on bandwidth, distance, user specification • Secondary access requires suspension of replication • IBM Z, distributed systems and IBM i volume replication in same or different consistency groups 144 Global Mirror Global Distances Local Site Remote Site Flash Copy Global Copy Global Mirror
  • 121. © Copyright IBM Corporation 2018. DS8880 Global Mirror normal operation 145 • Asynchronous mirroring with data consistency • RPO of 3-5 seconds is realistic • Minimizes application impact • Uses bandwidth efficiently • RPO/currency depends on workload, bandwidth and requirements • Global distance Steps: 1. Write to local 2. Write complete to application 3. Autonomically or on a user-specified interval, consistency group (CG) formed on local 4. CG sent to remote via Global Copy (drain); if writes come in to local, IDs of tracks with changes are recorded 5. After all consistent data for the CG is received at remote, FlashCopy with 2-phase commit 6. Consistency complete to local 7. Tracks with changes (after the CG) are copied to remote via Global Copy, and FlashCopy Copy-on-Write preserves the consistent image [Figure: application server writing to the local DS8880, Global Copy to the remote DS8880, FlashCopy at the remote]
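And the corresponding hedged DSCLI sketch for Global Mirror: the Global Copy pairs, journal FlashCopies, session and master are created with the commands below, which exist in the DS8000 DSCLI, though the options shown and all IDs are illustrative and should be checked against the command reference:
    dscli> mkpprc -dev IBM.2107-75AAAAA -remotedev IBM.2107-75BBBBB -type gcp 1000-100F:1000-100F   (Global Copy pairs from local to remote)
    dscli> mkflash -dev IBM.2107-75BBBBB -record -nocp -tgtinhibit 1000-100F:1100-110F              (journal FlashCopies at the remote: change recording, no background copy)
    dscli> mksession -dev IBM.2107-75AAAAA -lss 10 01                                               (define Global Mirror session 01 on the local LSS)
    dscli> chsession -dev IBM.2107-75AAAAA -lss 10 -action add -volume 1000-100F 01                 (add the local volumes to the session)
    dscli> mkgmir -dev IBM.2107-75AAAAA -lss 10 -session 01                                         (start the Global Mirror master, which begins forming consistency groups)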
  • 122. © Copyright IBM Corporation 2018. Metro/Global Mirror Cascaded Configurations 146 • Metro Mirror within a single location plus Global Mirror long distance • Local high availability plus regional disaster protection • 2-site Metro Mirror Metro Distances Metro Mirror Metro Distances Global Mirror Global Distances Global Mirror Global Distances • Metro Mirror within a metro region plus Global Mirror long distance • Local high availability or local disaster protection plus regional disaster protection • 3-site Local Site Remote Site Local Site Intermediate Site Remote Site
  • 123. © Copyright IBM Corporation 2018. Metro/Global Mirror Cascaded and Multi Target PPRC 147 • Metro Global Mirror Cascaded • Local HyperSwap capability • Asynchronous replication – Out of region disaster recovery capability • Metro Global Mirror Multi Target PPRC • Local HyperSwap capability • Asynchronous replication – Out of region disaster recovery capability • 2 MM • 2 GC • 1 MM / 1 GC • 1 MM / 1 GM • 1 GC / 1 GM • Software support • GDPS / CSM support MM and MM, MM and GM Global Mirror Global Distance Intermediate Site Remote Site Metro Mirror Metro Distance Local Site MM GM
  • 124. © Copyright IBM Corporation 2018. Metro/Global Mirror Overview • 3-site, volume-based hardware replication • 4-volume design (Global Mirror FlashCopy target may be Space Efficient) • Synchronous (Metro Mirror) + Asynchronous (Global Mirror) • Continuous + near-continuous replication • Cascaded or multi-target • Metro Distance + Global Distance • RPO as low as 0 at intermediate or remote for local failure • RPO as low as 3-5 seconds at remote for failure of both local and intermediate sites • Application response time impacted only by distance between local and intermediate • Intermediate site may be co-located at local site • Fast resynchronization of sites after failures and recoveries • Single consistency group may include open systems, IBM Z and IBM i volumes 148 Global Mirror Global Distance Intermediate Site Remote Site Metro Mirror Metro Distance Local Site Local Site Intermediate Site Remote Site
  • 125. © Copyright IBM Corporation 2018. Metro/Global Mirror Normal Operation 149 Application Server Local DS8000 Intermediate DS8000 Remote DS8000 1. Write to local DS8000 2. Copy to intermediate DS8000 (Metro Mirror) 3. Copy complete to local from intermediate 4. Write complete from local to application On user-specified interval or autonomically (asynchronously) 5. Global Mirror consistency group formed on intermediate, sent to remote, and committed on FlashCopies 6. GM consistency complete from remote to intermediate 7. GM consistency complete from intermediate to local (allows for incremental resynch from local to remote) 1 2 3 4 5 67
  • 126. © Copyright IBM Corporation 2018. 4-site topology with Metro Global Mirror 150 Metro Mirror Global Copy in secondary site converted to Metro Mirror in case of disaster or planned site switch Global Copy Region A Region B Site2 Site1 Site2 Site1 Incremental Resynchronisation in case of HyperSwap or secondary site failure
  • 127. © Copyright IBM Corporation 2018. Performance Enhancement - Bypass Extent Serialization • Certain applications, such as JES and (starting with Db2 V7) Db2, use Bypass Extent Serialization to avoid extent conflicts • However, Bypass Extent Serialization was not honored when using Metro Mirror • Starting with DS8870 R7.2 LIC, the DS8870/DS8880 honors Bypass Extent Serialization with Metro Mirror • Especially beneficial with Db2 data sharing, because the extent range for each castout I/O is unlimited • Described in Db2 11 z/OS Performance Topics, chapter 6.8, http://www.redbooks.ibm.com/abstracts/sg248222.html?Open • http://blog.intellimagic.com/eliminating-data-set-contention/ [Chart: 4KB full-track update write response time (ms) split into disconnect, connect, pend, device busy delay and queue time – extent conflict with bypass extent check set: 3,448 IOps; extent conflict with bypass extent check not set: 1,449 IOps; no extent conflict: 3,382 IOps] Performance based on measurements and projections using IBM benchmarks in a controlled environment. 151
  • 128. © Copyright IBM Corporation 2018. Disaster Recovery / Easy Tier Integration • Primary site: • Optimize the storage allocation according to the customer workload (the normal Easy Tier process, at least once every 24 hours, develops the migration plan) • Save the learning data • Transfer the learning data from the primary site to the secondary site • Secondary site: • Without learning, Easy Tier can only optimize the storage allocation according to the replication workload • With learning, Easy Tier can merge the checkpoint learning data from the primary site, following the primary storage data placement to optimize for the customer workload • Client benefits: • Performance-optimized DR sites in the event of a disaster 152 HMT software GDPS CSM
  • 129. © Copyright IBM Corporation 2018. Easy Tier Heat Map Transfer – GDPS configurations • GDPS 3.12+ provided HeatMap transfer support for GDPS/XRC and GDPS/MzGM configurations • Easy Tier HeatMap can be transferred to either the XRC secondary or FlashCopy target devices • GDPS/GM and GDPS/MGM 3/4-site supported for transferring the HeatMap to FlashCopy target devices • GDPS HeatMap Transfer supported for all GDPS configurations 153 Replication z/OS HMT software HMC H1 HMC H2 HMC H3 GDPS H4 HMC
  • 130. © Copyright IBM Corporation 2018. GDPS for IBM Z High Availability and Disaster Recovery • GDPS provides a complete solution for high availability and disaster recovery in IBM Z environments • Replication management, system management, automated workflows and deep integration with z/OS and parallel sysplex • DS8000 provides significant benefits for GDPS users with close cooperation between development teams • Over 800 GDPS installations worldwide with high penetration in financial services and some of the largest IBM Z environments • 112 3-site GDPS installations and 11 4-site GDPS installations • Over 90% of GDPS installations are currently using IBM disk subsystems 154
  • 131. © Copyright IBM Corporation 2018. GDPS Demographics (thru 5/17) 155
GDPS installations by product type: RCMF/PPRC & RCMF/XRC 77 (8.2%) • GDPS/PPRC HM 89 (10.8%) • GDPS/PPRC 437 (50.8%) • GDPS/MTMM 9 (0.5%) • GDPS/XRC 118 (14.0%) • GDPS/GM 139 (15.2%) • GDPS/A-A 4 (0.4%) • Totals 863 (100.0%)
Three/four site GDPS installations by product type: GDPS/MzGM 3-site* 49 • GDPS/MGM 3-site** 71 • GDPS/MzGM 4-site*** 4 • GDPS/MGM 4-site**** 11
GDPS solution by industry sector: Communications 48 (5.7%) • Distribution 47 (5.2%) • Finance 637 (73.8%) • Industrial 37 (4.5%) • Public 77 (8.7%) • Internal IBM 11 (1.4%) • SMB 6 (0.7%) • Total 863 (100.0%)
GDPS solution by geography: AG 264 (31.2%) • AP 116 (13.0%) • EMEA 462 (55.8%) • Totals 863 (100.0%)
* GDPS/MzGM 3-site consists of GDPS/PPRC HM or GDPS/PPRC and GDPS/XRC. 36-49 have PPRC in the same site.
** GDPS/MGM 3-site consists of GDPS/PPRC or GDPS/MTMM and GDPS/GM. 30-71 have PPRC in the same site.
*** GDPS/MzGM 4-site consists of GDPS/PPRC, GDPS/XRC, and GDPS/PPRC. 1-4 have PPRC in the same site.
**** GDPS/MGM 4-site consists of GDPS/PPRC or GDPS/MTMM, GDPS/GM, and GDPS/PPRC or GDPS/MTMM. 5-9 have PPRC in the same site.
  • 132. © Copyright IBM Corporation 2018. There are many IBM GDPS service products to help meet various business requirements 156 • GDPS/PPRC HM1 – Near-continuous availability of data within a data center • Single data center • Applications can remain active • Near-continuous access to data in the event of a storage subsystem outage • RPO equals 0 and RTO equals 0 • GDPS/PPRC1 – Near-continuous availability (CA) and disaster recovery (DR) within a metropolitan region • Two data centers • Systems can remain active • Multisite workloads can withstand site and storage failures • DR RPO equals 0 and RTO is less than 1 hour, or CA RPO equals 0 and RTO minutes • GDPS/MTMM2 – Near-continuous availability (CA) and disaster recovery (DR) within a metropolitan region • Two/three data centers (2 server sites, 3 disk locations) • Systems can remain active • Multi-site workloads can withstand site and/or storage failures • DR RPO equals 0 and RTO is less than 1 hour, or CA RPO equals 0 and RTO minutes 1Peer-to-peer remote copy (PPRC) 2Multi-Target Metro Mirror RPO – recovery point objective RTO – recovery time objective
  • 133. © Copyright IBM Corporation 2018. There are many IBM GDPS service products to help meet various business requirements (continued) 157 • GDPS/GM1 and GDPS/XRC2 – Disaster recovery at extended distance • Two data centers • More rapid systems disaster recovery with "seconds" of data loss • Disaster recovery for out-of-region interruptions • RPO seconds and RTO less than 1 hour • GDPS®/MGM3 and GDPS/MzGM4 (3 or 4-site configuration) – Near-continuous availability (CA) regionally and disaster recovery at extended distances • Three or four data centers • High availability for site disasters • Disaster recovery (DR) for regional disasters • DR RPO equals 0 and RTO less than 1 hour, or CA RPO equals 0 and RTO minutes, and RPO seconds and RTO less than 1 hour 1Global Mirror (GM) 2Extended Remote Copy (XRC) 3Metro Global Mirror (MGM) 4Metro z/OS Global Mirror (MzGM) RPO – recovery point objective RTO – recovery time objective
  • 134. © Copyright IBM Corporation 2018. There are many IBM GDPS service products to help meet various business requirements (continued) 158 • GDPS Virtual Appliance (VA) – Near-continuous availability and disaster recovery within metropolitan regions • Two data centers • z/VM and Linux on IBM z Systems can remain active • Near-continuous access to data in the event of a storage subsystem outage • RPO equals 0 and RTO is less than 1 hour • GDPS/Active-Active – Near-continuous availability, disaster recovery and cross-site workload balancing at extended distances • Two data centers • Disaster recovery for out-of-region interruptions • All sites active • RPO seconds and RTO seconds RPO – recovery point objective RTO – recovery time objective
  • 135. © Copyright IBM Corporation 2018. Global Continuous Availability and Disaster Recovery Offering for IBM Z – over 18 years and still going strong 159 First GDPS installation 1998, now more than 860 in 49 countries
Automation technology: System Automation for z/OS • NetView for z/OS • SA Multi-Platform • SA Application Manager • Multi-site Workload Lifeline
Disk & tape replication: Metro Mirror • z/OS Global Mirror • Global Mirror • DS8000/TS7700
Software replication: IBM InfoSphere Data Replication (IIDR) for DB2 • IIDR for IMS • IIDR for VSAM
Replication solutions and GDPS offerings: PPRC HyperSwap Manager – GDPS/PPRC HM • PPRC (Metro Mirror) – GDPS/PPRC • XRC (z/OS Global Mirror) – GDPS/XRC • Global Mirror – GDPS/GM • Active-Active – GDPS/A-A • Metro Global Mirror 3-site and 4-site – GDPS/MGM • Metro z Global Mirror 3-site and 4-site – GDPS/MzGM • Multi-target Metro Mirror – GDPS/MTMM • PPRC (Metro Mirror) – GDPS Appliance
Manage and Automate: • Central point of control • IBM Z and distributed servers • xDR for z/VM and Linux on z Systems • Replication infrastructure • Real-time monitoring and alert management • Automated recovery • HyperSwap for continuous availability • Planned & unplanned outages • Configuration infrastructure management • Single site, 2-site, 3-site, 4-site • Automated provisioning • IBM Z CBU / OOCoD
  • 136. © Copyright IBM Corporation 2018. IBM Copy Services Manager (CSM) • Volume level Copy Service Management • Manages Data Consistency across a set of volumes with logical dependencies • Supports multiple devices (ESS, DS6000, DS8000, XIV, A9000, SVC, Storwize, Flash System) • Coordinates Copy Service Functionalities • FlashCopy • Metro Mirror • Global Mirror • Metro Global Mirror • Multi Target PPRC (MM and GC) • Ease of Use • Single common point of control • Web browser based GUI and CLI • Persistent Store Data Base • Source / Target volume matching • SNMP Alerts • Wizard based configuration • Business Continuity • Site Awareness • High Availability Configuration – active and standby management server • No Single point of Failure • Disaster Recovery Testing • Disaster Recovery Management 160
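A hedged peek at the CSM command-line side of this (the slide lists the GUI and CLI as ease-of-use features): csmcli is the CSM CLI, and lssess / cmdsess are the commands I would expect to use to list sessions and drive them, but the action names vary by session type, so treat the example as indicative and confirm it against the CSM documentation. The session name MM_PROD is made up:
    csmcli> lssess -l                             (list the defined sessions with their type and state)
    csmcli> cmdsess -action start_h1:h2 MM_PROD   (issue the Start H1->H2 command against a Metro Mirror session)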
  • 137. © Copyright IBM Corporation 2018. CSM 6.1.1 new features and enhancements at a glance • DS8000 enhancements • HyperSwap and Hardened Freeze Enablement for DS8000 Multi-Target Metro Mirror - Global Mirror session types • Multi-Target Metro Mirror Global Mirror (MM-GM) • Multi-Target Metro Mirror - Global Mirror with Practice (MM-GM w/ Practice) • Support for target box not having the Multi-target feature for DS8000 RPQ • Support for Multi Target Migration scenario to replace pre DS8870 secondary • Common CSM improvements • New Standalone PID (5725-Z54) for distributed platform installations • available for ordering via Passport Advantage (PPA) • Small footprint offering for replication only customers (No need for Spectrum Control) • Modernized GUI Look and Feel • Setup of LDAP configuration through the CSM GUI • Support for RACF keyring certificate configuration (optionally replaces GUI certificate) 161