WHY HITACHI VIRTUAL
STORAGE PLATFORM DOES
SO WELL IN A MAINFRAME
ENVIRONMENT
HOW FAST CAN A VSP GO?
RON HAWKINS, MANAGER,
TECH OPS PERFORMANCE
NOV. 2, 2011
WEBTECH EDUCATIONAL SERIES


Why Hitachi Virtual Storage Platform (VSP) Does So Well in a Mainframe
Environment

Hitachi VSP is a new paradigm in enterprise array performance. In this session
we will discuss how the architecture of VSP enhances its box-wide performance.
The results of performance testing with synthetic host I/O generators and the
PAI/O driver will also be presented.

Attend this WebTech to learn how to:

 Improve performance in your environment with VSP

 Affect performance in mainframe environments with different RAID
  architectures

 Optimize functionality with wide striping enabled by Hitachi Dynamic
  Provisioning
UPCOMING WEBTECHS


 November and December
  ‒ Increase Your IT Agility and Cost-efficiency with HDS Cloud
    Solutions, Nov. 9, 9 a.m. PT, 12 p.m. ET
  ‒ Best Practices for Upgrading to Hitachi Device Manager
    v7, Nov. 16, 9 a.m. PT, 12 p.m. ET
  ‒ Hitachi Clinical Repository, Dec. 7, 9 a.m. PT, 12 p.m. ET


 Please check www.hds.com/webtech for
  ‒ Link to the recording, presentation and Q&A (available next
    week)
  ‒ Schedule and registration for upcoming WebTech sessions
WE ARE TOP GUN

WHAT WE WILL COVER

 Unified microprocessor
 FICON front-end director
 The numbers
 Hitachi Dynamic
  Provisioning for
  Mainframe
                              Hitachi Virtual
                             Storage Platform
UNIFIED
MICROPROCESSOR
THE HEART OF THINGS
ANOTHER VIEW OF THE
HITACHI VIRTUAL STORAGE PLATFORM
MICROPROCESSORS HAVE MOVED OFF
THE FRONT-END AND BACK-END DIRECTORS
HITACHI UNIVERSAL STORAGE PLATFORM® V
ESTRANGED MICROPROCESSOR

[Diagram: microprocessors estranged on the front-end and back-end directors, communicating through shared memory.]
UNIFIED MICROPROCESSOR


[Diagram: quad-core Xeon CPU with 4GB local RAM; package memory held in cache is written through to shared memory, and paged in on demand.]
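The control-memory model on the diagram can be sketched in a few lines. This is an illustrative model only, assuming a per-VSD local copy with write-through to the authoritative shared memory and page-in on a local miss; the names are hypothetical, not Hitachi's:

```python
class WriteThroughControlMemory:
    """Illustrative sketch of the VSD package-memory model:
    local copy for fast reads, write-through to shared memory."""

    def __init__(self, shared_memory):
        self.local = {}               # per-VSD package memory (fast, local)
        self.shared = shared_memory   # global shared memory (authoritative)

    def read(self, key):
        # Read the local copy; page in from shared memory on a miss.
        if key not in self.local:
            self.local[key] = self.shared.get(key)   # "page in"
        return self.local[key]

    def write(self, key, value):
        # Write-through: update shared memory first, then the local copy,
        # so the authoritative copy is never stale.
        self.shared[key] = value
        self.local[key] = value


shared = {}
vsd = WriteThroughControlMemory(shared)
vsd.write("lun0:meta", "owner=VSD1")
assert vsd.read("lun0:meta") == "owner=VSD1"
assert shared["lun0:meta"] == "owner=VSD1"   # shared copy always current
```

The point of the write-through choice is that any other VSD reading shared memory always sees current state, at the cost of a shared-memory update on every write.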
FICON FRONT-END
DIRECTOR
AT THE EDGE
FICON FRONT-END DIRECTOR

FICON 16-PORT FEATURE

 Hitachi transport processor
  ‒ Initial FICON protocol
  ‒ Open exchanges

 MHUB + ASIC
  ‒ Data accelerator circuit
  ‒ A programmable
    application-specific
    integrated circuit
  ‒ Route command to virtual
    storage director (VSD)
  ‒ Direct memory access
    (DMA) engine
HITACHI VIRTUAL STORAGE PLATFORM
8-PORT FICON BOARD

HITACHI TRANSPORT PROCESSOR

 Accepts initial channel
  command request from
  host
 Multi-protocol
  ‒ FICON and high-
    performance FICON
    (zHPF)

 Establishes open
  exchanges (480)
  ‒ Shared on demand by
    adjacent ports
OPEN EXCHANGES

   HITACHI TRANSPORT PROCESSOR

 64 for each host channel
  ‒ 2:1 fan-in is 128
    ‒ Adjacent ports 256
  ‒ 4:1 fan-in is 256
    ‒ Adjacent ports 512

 Open Exchange (OE)
  exhaustion increases
  Command Response (CMR)
  time
  ‒ Microprocessor busy (>80%)
  ‒ Low cache hits
  ‒ TCz synchronous
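The fan-in arithmetic above can be checked with a short sketch. The function name is hypothetical, and the adjacent-port behavior is simplified to a straight doubling of the budget, per the figures on this slide:

```python
# Open-exchange (OE) budget per the slide: 64 OEs per host channel,
# scaled by fan-in, optionally doubled by borrowing the adjacent
# port's allocation on demand.
OE_PER_CHANNEL = 64

def open_exchanges(fan_in, adjacent_port_shared=False):
    """Open exchanges available at a storage port.

    fan_in: number of host channels fanned in to the port.
    adjacent_port_shared: True if the adjacent port's allocation
    can be borrowed, doubling the budget (simplified model).
    """
    oe = OE_PER_CHANNEL * fan_in
    return oe * 2 if adjacent_port_shared else oe

assert open_exchanges(2) == 128                              # 2:1 fan-in
assert open_exchanges(2, adjacent_port_shared=True) == 256   # plus adjacent port
assert open_exchanges(4) == 256                              # 4:1 fan-in
assert open_exchanges(4, adjacent_port_shared=True) == 512   # plus adjacent port
```

When the workload needs more concurrent exchanges than this budget, requests queue and CMR time rises, which is the exhaustion effect described above.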
HITACHI VIRTUAL STORAGE PLATFORM
8-PORT FICON BOARD

MHUB + ASIC

 Two chips working
  together
  ‒ Processor plus
    programmable ASIC
  ‒ Commands to/from virtual
    storage director
  ‒ Using LDEV mapping
    tables

 DMA engine
  ‒ Read and write directly to
    cache
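The routing role of the MHUB + ASIC can be pictured as a table lookup. This is an illustrative model, not Hitachi firmware; the table contents and names are hypothetical:

```python
# Illustrative sketch: the front-end board routes each command to the
# virtual storage director (VSD) that owns the target LDEV, using an
# LDEV mapping table, while the DMA engine moves data directly to cache.
LDEV_TO_VSD = {0x00: "VSD0", 0x01: "VSD1", 0x02: "VSD2"}  # hypothetical table

def route_command(ldev_id):
    """Return the VSD that should process a command for this LDEV."""
    vsd = LDEV_TO_VSD.get(ldev_id)
    if vsd is None:
        raise ValueError(f"LDEV {ldev_id:#04x} not mapped")
    return vsd

assert route_command(0x01) == "VSD1"
```

The key design point is that the front-end board never waits on a microprocessor to move data: routing is a lookup in the ASIC, and the DMA engine transfers to and from cache directly.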
THE NUMBERS
WE ARE TOP GUN
WE ARE TOP GUN

HITACHI VIRTUAL STORAGE PLATFORM DUAL-CHASSIS CONFIGURATION

  All tests use a fully populated VSP (unless stated otherwise)
   ‒ 2 chassis
   ‒ 8 virtual storage directors
   ‒ 4 back-end directors
   ‒ 16 front-end directors (128 channels)
   ‒ 2048 HDD (10K 300GB)
   ‒ 512GB cache
WE ARE TOP GUN

HITACHI UNIVERSAL STORAGE PLATFORM® V AS AN I/O DRIVER

[Bar chart: FNP Driver Results — I/O operations per second (0 to 1,200,000) for four workloads: zero-locality write, zero-locality read, front-end write, and front-end read.]
PAI/O TESTING

THE LAB TO THE REAL WORLD

 Extended format VSAM
  ‒ 4KB and 26KB

 High overhead
  ‒ Two CCWs per block

 Growing format for DB2
 Datasets >4GB
 What customers will really
  get
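The channel-program overhead mentioned above is simple to quantify: extended-format VSAM needs two CCWs per block, so a chained read issues twice the CCWs of a one-CCW-per-block format. A minimal sketch (the function name is illustrative):

```python
# Extended-format VSAM issues two channel command words (CCWs) per block,
# doubling channel-program length versus a one-CCW-per-block format.
def ccws_for_chain(blocks, ccws_per_block=2):
    """CCWs issued for a chained transfer of `blocks` blocks."""
    return blocks * ccws_per_block

assert ccws_for_chain(30) == 60      # chain length 30, extended format
assert ccws_for_chain(30, 1) == 30   # same chain, one CCW per block
```

That doubled CCW count is why these tests are described as high overhead, and closer to what customers will actually see than the synthetic driver numbers.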
WE ARE TOP GUN

CONSTRAINED BY CPU CAPACITY
WE ARE TOP GUN

BENEFIT OF ZHPF
WE ARE TOP GUN

CONSTRAINED BY VSD 80-85% BUSY
WE ARE TOP GUN

BENEFIT OF ZHPF
WE ARE TOP GUN

RANDOM WRITE MISS HIT 70% WP
WE ARE TOP GUN

CONSTRAINED BY VSD 80% BUSY
WE ARE TOP GUN

CONSTRAINED BY VSD 80% BUSY
WE ARE TOP GUN

HIGH-PERFORMANCE BED FEATURE



[Chart: PAI/O H4, 4K read, 100% cache miss — response time (ms, 0–20) vs. I/O operations per second (0–160,000), comparing the high-performance (2 BED) and standard-performance (1 BED) configurations.]
WE ARE TOP GUN

HIGH-PERFORMANCE BED FEATURE


[Chart: PAI/O G0 sequential read, 27K chain length 30 (1 cylinder) — response time (ms, 0–100) vs. MB per second (0–8,000), comparing the high-performance (2 BED) and standard-performance (1 BED) configurations.]
HITACHI DYNAMIC
PROVISIONING
FOR MAINFRAME
SPREAD THE LOAD
HITACHI DYNAMIC PROVISIONING
FOR MAINFRAME (HDPM)

HDPM SPREADS THE LOAD

 Primary advantages of HDPM
 ‒ Wide striping eliminates skewed I/O to disk
    ‒ Greater throughput for cache miss I/O
 ‒ Configuration flexibility
    ‒ Custom volume sizes made easy
 ‒ Dynamic volume expansion
 ‒ IBM® FlashCopy® space-efficient pools

 Performance results focus on skewed I/O
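The wide-striping claim above comes down to page placement: if pool pages are allocated round-robin across all RAID groups in the pool, even a badly skewed workload touches every group. A minimal sketch of that idea, assuming round-robin allocation (page counts and sizes here are illustrative, not HDPM's actual values):

```python
# Sketch of why wide striping removes skew: allocate pool pages
# round-robin across RAID groups, so a hot volume's I/O spreads over
# every group instead of concentrating on one.
def build_page_map(num_pages, num_raid_groups):
    """Map each pool page to a RAID group, round-robin."""
    return [page % num_raid_groups for page in range(num_pages)]

page_map = build_page_map(num_pages=1000, num_raid_groups=8)

# A skewed workload hammering only the first 100 pages of a volume
# still lands on all 8 RAID groups:
hot_pages = page_map[:100]
assert len(set(hot_pages)) == 8
```

Without this mapping, the same 100 hot pages could sit on a single RAID group, which is the skewed case the C-series tests below are designed to expose.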
PAI/O DRIVER – F1 (SYMMETRICAL)

[Chart: PAI/O Driver F1, 4K cache-hit write, chain(1) — response time (ms, 0–0.20) vs. I/O per second (0–180,000) for series RD662NN, RD662HF, RD662HN, RD571NN, RD571HF, and RD571HN.]
SKEW OR ASYMMETRICAL WORKLOAD
PAI/O DRIVER C* SERIES




Figure taken from PAI/O Driver User Guide
RAID 6 6D+2P PAI/O DRIVER – C6 (SKEWED OR ASYMMETRICAL)


                      LET HITACHI DYNAMIC PROVISIONING FOR MAINFRAME (HDPM)
                      SHARE THE LOAD
[Chart: PAI/O Driver C6, 4K skewed series, chain(1), 70% read / 30% write — response time (ms, 0–3.50) vs. I/O per second (0–14,000) for RD662NN, RD662HF, and RD662HN; the HDPM benefit zone appears at the higher I/O rates.]
FINAL THOUGHT




“I feel the need–the need for speed!”
                           ―Top Gun
QUESTIONS AND
DISCUSSION
UPCOMING WEBTECHS


 November and December
  ‒ Increase Your IT Agility and Cost-efficiency with HDS Cloud
    Solutions, Nov. 9, 9 a.m. PT, 12 p.m. ET
  ‒ Best Practices for Upgrading to Hitachi Device Manager
    v7, Nov. 16, 9 a.m. PT, 12 p.m. ET
  ‒ Hitachi Clinical Repository, Dec. 7, 9 a.m. PT, 12 p.m. ET


 Please check www.hds.com/webtech for
  ‒ Link to the recording, presentation and Q&A (available next
    week)
  ‒ Schedule and registration for upcoming WebTech sessions
THANK YOU


Editor's Notes

  • #9 These slides build to show that resources must be added in sync to maintain performance; leave them in.
  • #14 MHUB refers to the chip on the FED; the expansion of the acronym is not spelled out.
  • #34 The asterisk after “C” in the title refers to the family of drivers that are part of the PAI/O performance testing package.