Why Hitachi Virtual Storage Platform Does So Well in a Mainframe Environment (Webinar)

Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented.

Slide notes
  • These builds show that you have to add resources in sync to maintain performance. They need to stay in.
  • MHUB refers to the chip on the front-end director (FED). The expansion of the acronym is unclear, but it does not need to be spelled out.
  • What does the asterisk after "C" in the title refer to? It is the name for a group of drivers that are part of the PAI/O performance testing package.

Transcript

  • 1. WHY HITACHI VIRTUAL STORAGE PLATFORM DOES SO WELL IN A MAINFRAME ENVIRONMENT. HOW FAST CAN A VSP GO? RON HAWKINS, MANAGER, TECH OPS PERFORMANCE. NOV. 2, 2011
  • 2. WEBTECH EDUCATIONAL SERIES: Why Hitachi Virtual Storage Platform (VSP) Does So Well in a Mainframe Environment. Hitachi VSP is a new paradigm in enterprise array performance. In this session we will discuss how the architecture of VSP enhances its box-wide performance. The results of performance testing with synthetic host I/O generators and the PAI/O driver will also be presented. Attend this WebTech to learn how to: improve performance in your environment with VSP; affect performance in mainframe environments with different RAID architectures; optimize functionality with wide striping enabled by Hitachi Dynamic Provisioning.
  • 3. UPCOMING WEBTECHS November and December ‒ Increase Your IT Agility and Cost-efficiency with HDS Cloud Solutions, Nov. 9, 9 a.m. PT, 12 p.m. ET ‒ Best Practices for Upgrading to Hitachi Device Manager v7, Nov. 16, 9 a.m. PT, 12 p.m. ET ‒ Hitachi Clinical Repository, Dec. 7, 9 a.m. PT, 12 p.m. ET Please check www.hds.com/webtech for ‒ Link to the recording, presentation and Q&A (available next week) ‒ Schedule and registration for upcoming WebTech sessions
  • 4. WE ARE TOP GUN: WHAT WE WILL COVER. Unified microprocessor; FICON front-end director; the numbers; Hitachi Dynamic Provisioning for Mainframe; Hitachi Virtual Storage Platform.
  • 5. UNIFIED MICROPROCESSOR: THE HEART OF THINGS
  • 6. ANOTHER VIEW OF THE HITACHI VIRTUAL STORAGE PLATFORM
  • 7. MICROPROCESSORS HAVE MOVED OFF THE FRONT-END AND BACK-END DIRECTORS
  • 8. HITACHI UNIVERSAL STORAGE PLATFORM® V: ESTRANGED MICROPROCESSOR (Shared Memory)
  • 9. HITACHI UNIVERSAL STORAGE PLATFORM® V: ESTRANGED MICROPROCESSOR (Shared Memory)
  • 10. HITACHI UNIVERSAL STORAGE PLATFORM® V: ESTRANGED MICROPROCESSOR (Shared Memory)
  • 11. UNIFIED MICROPROCESSOR. [Diagram labels: Xeon quad-core CPU package; 4GB local RAM (cache memory); write through to shared memory; page in from shared memory.]
  • 12. FICON FRONT-END DIRECTOR: AT THE EDGE
  • 13. FICON FRONT-END DIRECTOR: FICON 16-PORT FEATURE. Hitachi transport processor ‒ initial FICON protocol ‒ open exchanges. MHUB + ASIC ‒ data accelerator circuit ‒ a programmable application-specific integrated circuit ‒ routes commands to the virtual storage director (VSD) ‒ direct memory access (DMA) engine.
  • 14. HITACHI VIRTUAL STORAGE PLATFORM 8-PORT FICON BOARD: HITACHI TRANSPORT PROCESSOR. Accepts the initial channel command request from the host. Multi-protocol ‒ FICON and high-performance FICON (zHPF). Establishes open exchanges (480) ‒ shared on demand by adjacent ports. [Reviewer note: "This graphic is getting hard to read in the red. Can the type be reversed?" Not really, as it is a graphic lifted from another source; we don't have the original.]
  • 15. OPEN EXCHANGES: HITACHI TRANSPORT PROCESSOR. 64 for each host channel ‒ 2:1 fan-in is 128 (adjacent ports 256) ‒ 4:1 fan-in is 256 (adjacent ports 512). Open exchange (OE) exhaustion increases command response (CMR) time ‒ microprocessor busy (>80%) ‒ low cache hits ‒ TCz synchronous.
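The fan-in arithmetic on slide 15 can be sketched in a few lines. This is my own illustrative helper, not anything from the deck: it only multiplies out the per-channel allocation of 64 open exchanges by the fan-in ratio, doubling when a port can borrow its adjacent port's share.

```python
# A minimal sketch (hypothetical helper, not HDS code) of the open
# exchange (OE) arithmetic: each host channel can hold up to 64 OEs,
# so the demand a front-end port must absorb scales with fan-in, and
# doubles again when adjacent ports share their allocation on demand.
def oe_demand(fan_in, per_channel=64, include_adjacent=False):
    """OEs a port (or an adjacent-port pair) may have to service."""
    ports = 2 if include_adjacent else 1
    return fan_in * per_channel * ports

print(oe_demand(2))                         # 2:1 fan-in -> 128
print(oe_demand(2, include_adjacent=True))  # adjacent ports -> 256
print(oe_demand(4))                         # 4:1 fan-in -> 256
print(oe_demand(4, include_adjacent=True))  # adjacent ports -> 512
```

As the slide notes, when demand presses against the 480 open exchanges the transport processor establishes, CMR time climbs.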
  • 16. HITACHI VIRTUAL STORAGE PLATFORM 8-PORT FICON BOARD: MHUB + ASIC. Two chips working together ‒ processor plus programmable ASIC ‒ commands to/from the virtual storage director, using LDEV mapping tables. DMA engine ‒ reads and writes directly to cache.
  • 17. THE NUMBERS: WE ARE TOP GUN
  • 18. WE ARE TOP GUN: HITACHI UNIVERSAL STORAGE PLATFORM® V DUAL-CHASSIS CONFIGURATION. All tests use a fully populated VSP (unless stated otherwise) ‒ 2 chassis ‒ 8 virtual storage directors ‒ 4 back-end directors ‒ 16 front-end directors (128 channels) ‒ 2048 HDD (10K 300GB) ‒ 512GB cache.
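The test-rig totals on slide 18 are easy to sanity-check; this is my own arithmetic using only the figures on the slide.

```python
# A quick sanity check (my arithmetic, using only figures from the
# slide) of the fully populated dual-chassis test configuration.
feds, ports_per_fed = 16, 8
hdds, gb_per_hdd = 2048, 300

channels = feds * ports_per_fed      # 16 FEDs x 8 ports = 128 channels
raw_tb = hdds * gb_per_hdd / 1000    # 2048 x 300GB = 614.4 TB raw
print(channels, raw_tb)              # 128 614.4
```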
  • 19. WE ARE TOP GUN: HITACHI UNIVERSAL STORAGE PLATFORM® V AS AN I/O DRIVER. [Chart: FNP driver results, I/O rate scale 0 to 1,200,000, for zero locality write, zero locality read, front-end write and front-end read.]
  • 20. PAI/O TESTING: THE LAB TO THE REAL WORLD. Extended format VSAM ‒ 4KB and 26KB. High overhead ‒ two CCWs per block. Growing format for DB2. Datasets >4GB. What customers will really get.
  • 21. WE ARE TOP GUN: CONSTRAINED BY CPU CAPACITY
  • 22. WE ARE TOP GUN: BENEFIT OF ZHPF
  • 23. WE ARE TOP GUN: CONSTRAINED BY VSD 80-85% BUSY
  • 24. WE ARE TOP GUN: BENEFIT OF ZHPF
  • 25. WE ARE TOP GUN: RANDOM WRITE MISS HIT 70% WP
  • 26. WE ARE TOP GUN: CONSTRAINED BY VSD 80% BUSY
  • 27. WE ARE TOP GUN: CONSTRAINED BY VSD 80% BUSY
  • 28. WE ARE TOP GUN: HIGH-PERFORMANCE BED FEATURE. [Chart: PAI/O H4, 4K read, 100% cache miss; response time (ms, 0-20) vs. I/O operations per second (0-160,000); high performance (2 BED) vs. standard performance (1 BED).]
  • 29. WE ARE TOP GUN: HIGH-PERFORMANCE BED FEATURE. [Chart: PAI/O G0, sequential read, 27K, chain length 30 (1 cyl); response time (ms, 0-100) vs. MB per second (0-8,000); high performance (2 BED) vs. standard performance (1 BED).]
  • 30. HITACHI DYNAMIC PROVISIONING FOR MAINFRAME: SPREAD THE LOAD
  • 31. HITACHI DYNAMIC PROVISIONING FOR MAINFRAME (HDPM): HDPM SPREADS THE LOAD. Primary advantages of HDPM ‒ wide striping eliminates skewed I/O to disk ‒ greater throughput for cache miss I/O ‒ configuration flexibility ‒ custom volume sizes made easy ‒ dynamic volume expansion ‒ IBM® FlashCopy® space-efficient pools. Performance results focus on skewed I/O.
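The wide-striping claim above ‒ that skewed I/O to disk is eliminated ‒ can be illustrated with a toy placement model. This is purely hypothetical code, not HDS internals: it assumes simple round-robin page placement across the pool's RAID groups, which is enough to show why a hot region of one volume stops concentrating on a single array group.

```python
# A minimal sketch (hypothetical, not HDS code) of why wide striping
# evens out skewed I/O: the volume is carved into pages that are spread
# round-robin across every RAID group in the pool, so a hot region
# lands on all spindles instead of the one group backing a
# traditionally provisioned volume.
from collections import Counter

def place_pages(num_pages, pool_groups):
    """Round-robin page placement across the pool's RAID groups."""
    return [page % pool_groups for page in range(num_pages)]

# Skewed workload: I/O concentrates on the first 100 pages of a
# 1000-page volume (a classic hot spot).
placement = place_pages(1000, pool_groups=8)
hot_load = Counter(placement[p] for p in range(100))
print(sorted(hot_load.values()))  # [12, 12, 12, 12, 13, 13, 13, 13]
# The 100 hot pages spread almost evenly over the 8 RAID groups;
# without wide striping they would all sit on a single group.
```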
  • 32. PAI/O DRIVER ‒ F1 (SYMMETRICAL). [Chart: PAI/O Driver F1, 4K cache hit write, chain(1); response time (ms, 0-0.20) vs. I/O per second (0-180,000); series RD662NN, RD662HF, RD662HN, RD571NN, RD571HF, RD571HN.] HDS Confidential
  • 33. SKEW OR ASYMMETRICAL WORKLOAD: PAI/O DRIVER C* SERIES. Figure taken from the PAI/O Driver User Guide.
  • 34. RAID 6 6D+2P, PAI/O DRIVER ‒ C6 (SKEWED OR ASYMMETRICAL): LET HITACHI DYNAMIC PROVISIONING FOR MAINFRAME (HDPM) SHARE THE LOAD. [Chart: PAI/O Driver C6, 4K skewed series, chain(1), 70% read / 30% write; response time (ms, 0-3.50) vs. I/O per second (0-14,000); series RD662NN, RD662HF, RD662HN; HDPM benefit zone marked.]
  • 35. FINAL THOUGHT“I feel the need–the need for speed!” ―Top Gun
  • 36. QUESTIONS AND DISCUSSION
  • 37. UPCOMING WEBTECHS November and December ‒ Increase Your IT Agility and Cost-efficiency with HDS Cloud Solutions, Nov. 9, 9 a.m. PT, 12 p.m. ET ‒ Best Practices for Upgrading to Hitachi Device Manager v7, Nov. 16, 9 a.m. PT, 12 p.m. ET ‒ Hitachi Clinical Repository, Dec. 7, 9 a.m. PT, 12 p.m. ET Please check www.hds.com/webtech for ‒ Link to the recording, presentation and Q&A (available next week) ‒ Schedule and registration for upcoming WebTech sessions
  • 38. THANK YOU