Build the Optimal Mainframe Storage Architecture


This white paper discusses the advantages of FICON networked storage, focusing specifically on the Hitachi VSP and the Brocade 8510. It explains why a networked FICON architecture is preferable and describes the Hitachi VSP enterprise storage system and the Brocade 8510 director.


Build the Optimal Mainframe Storage Architecture With Hitachi Data Systems and Brocade

Why Choose an IBM® FICON® Switched Network?

WHITE PAPER

By Bill Martin, Hitachi Data Systems
Stephen Guendert, PhD, Brocade

March 2013
Contents

Executive Summary
Introduction
Why Networked FICON Storage Is Better Than Direct-Attached Storage
Hitachi Virtual Storage Platform
Why Brocade Gen5 DCX 8510 Is the Best FICON Director
An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510
Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage
  Technical Reasons for a Switched FICON Architecture
  Business Reasons for a Switched FICON Architecture
  Why Switched FICON: Summary
Hitachi Virtual Storage Platform
  Scalability (3-D Scaling: Out, Up, Deep)
  Performance
  IBM 3390 and FICON Support
  Hitachi Dynamic Provisioning
  Hitachi Dynamic Tiering
  Hitachi Remote Replication
  Multiplatform Support
  Cost-Savings Efficiencies
Brocade Gen5 DCX 8510 in Mainframe Environments
  Reliability, Availability and Serviceability
  Scalability
Pair the Two Platforms Together
  Linux on the Mainframe
  FICON and FCP Intermix
  Private Cloud
Conclusion
Executive Summary

The IBM® System z® and newer zEnterprise® (in other words, mainframes) continue to be a critical foundation in the IT infrastructure of many large companies today. An important element of the mainframe environment is the disk storage system (subsystem) that is connected to the mainframe via channels. The overall reliability, availability and performance of mainframe-based applications depend on this storage system.

The performance demands, capacity, reliability, flexibility, efficiency and cost-effectiveness of the storage system are important aspects of any storage acquisition and configuration decision. The increasing demands for improved performance, in other words, throughput (IOPS) and response time, make this storage system a critical element of the IT infrastructure. Another key factor in configuring the storage system is deciding how it should be connected to the mainframe channels: direct attached, or through a switched IBM FICON® network. This decision affects the flexibility, reliability and availability of the storage infrastructure and the efficiency of the storage administrators.

Hitachi Virtual Storage Platform (VSP) is an enterprise-class storage system that provides a comprehensive set of storage and data services. These provide mainframe users with a cost-effective, highly reliable and available storage platform that delivers outstanding performance, capacity and scalability. VSP supports the operating systems used with IBM zEnterprise processors: z/OS®, z/VSE®, z/VM®, and Linux on System z. This industry-leading storage system provides IBM 3390 disk drive support across a variety of disk drive types to meet the varied performance and capacity needs of mainframe environments. The platform provides an internal physical disk capacity of approximately 2.5PB per storage system. With externally attached storage, the VSP can support up to 255PB of storage capacity. It supports 8Gb/sec FICON across all front-end ports for connectivity to the mainframe and 8Gb/sec Fibre Channel for connecting external storage.

Using a FICON network configured with a switch or director to connect a storage system to the mainframe channels can significantly enhance the reliability, flexibility and availability of storage systems. At the same time, it can maximize storage performance and throughput. A switched FICON network allows the implementation of a fan-in, fan-out configuration, which maximizes resource utilization and simultaneously helps localize failures, improving availability.

The Brocade Gen5 DCX 8510 is a backbone-class FICON and Fibre Channel director. The Brocade Gen5 DCX 8510 family of FICON directors provides the industry's most powerful switching infrastructure for modern mainframe environments. It provides the most reliable, scalable, efficient, cost-effective, high-performance foundation for today's highly virtualized mainframe environments. The Brocade Gen5 DCX 8510 builds upon years of innovation and experience and leverages the core technology of Brocade systems, providing over 99.999% uptime in the world's most demanding data centers. The Gen5 DCX 8510 supports the operating systems used with zEnterprise processors: z/OS and z/OS.e, z/VSE, z/VM, Linux on System z, and zTPF for System z. This industry-leading FICON director supports 2, 4, 8, 10, and 16Gb/sec Fibre Channel links, FICON I/O traffic, and 1 gigabit Ethernet (GbE) or 10GbE links for Fibre Channel over IP (FCIP), while providing 8.2Tb/sec chassis bandwidth.

The combination of switched FICON connectivity, with Hitachi VSP connected to mainframe channels through a Brocade Gen5 DCX 8510 director, provides a powerful, flexible and highly available solution. Together, they support the storage features, performance and capacity needed for today's mainframe environments.
Introduction

This paper explores both technical and business reasons for implementing a switched FICON architecture instead of a direct-attached storage FICON architecture. It also explains why Hitachi Virtual Storage Platform and the Brocade FICON director together provide an outstanding, industry-leading solution for FICON environments.

With the many enhancements and improvements in mainframe I/O technology in the past 5 years, the question "Do I need FICON switching technology, or should I go with direct-attached storage?" is frequently asked. With up to 320 FICON Express8S channels supported on an IBM zEnterprise z114, z196 and zEC12, why not just direct-attach the control units? The short answer is that with all of the I/O improvements, switching technology is needed now more than ever. In fact, there are more reasons to use switched FICON than there were to use switched ESCON. Some of these reasons are purely technical; others are more business-related.

Why Networked FICON Storage Is Better Than Direct-Attached Storage

The raw bandwidth of FICON Express8S running on IBM zEnterprise Systems is 40 times greater than the capabilities of IBM ESCON®. The raw I/Os per second (IOPS) capacity of FICON Express8S channels is even more impressive, particularly when a channel program utilizes the z High Performance FICON (zHPF) protocol. To utilize these tremendous improvements, the FICON protocol is packet-switched and, unlike ESCON, capable of having multiple I/Os occupy the same channel simultaneously.

FICON Express8S channels on zEnterprise processors can have up to 64 concurrent I/Os (open exchanges) to different devices. FICON Express8S channels running zHPF can have up to 750 concurrent I/Os on the zEnterprise processor family.
Only when a director or switch is used between the host and storage device can the true performance potential inherent in these channel bandwidth and I/O processing gains be fully exploited.

Hitachi Virtual Storage Platform

Hitachi Virtual Storage Platform, with its vast functionality and throughput capability, is ideal for IBM mainframe environments and provides a comprehensive set of storage and data services. The flexibility in configuring and partitioning VSP makes it ideal for mainframe environments with multiple LPARs running multiple operating system images in the same SYSPLEX.

The packaging, enhanced features and improved manageability of VSP provide mainframe users with a cost-effective, highly reliable and available storage platform that delivers outstanding performance, capacity and scalability. The storage platform easily supports both mainframe and open systems environments. For mainframe environments, it supports z/OS, z/VSE and z/VM. Additionally, with many organizations considering the benefits of, or already running, Linux on IBM zEnterprise processors, VSP supports this capability for both CKD and FBA disk formats.

With support for FICON Express8S, and with 2Gb, 4Gb and 8Gb FICON and 2Gb, 4Gb and 8Gb Fibre Channel connectivity, this platform delivers industry-leading I/O performance. A VSP can have up to 24 front-end directors with a total of 176 FICON ports. Each port can support more IOPS than a single zEnterprise FICON Express8 channel can deliver. As a result, it is ideally suited for connectivity to the mainframe through a switched FICON network.

Why Brocade Gen5 DCX 8510 Is the Best FICON Director

Emerging and evolving enterprise-critical workloads and higher-density virtualization are continuing to push the limits of SAN infrastructures. This is even truer in a data center with IBM zEnterprise and its support for Microsoft® Windows® in the zEnterprise Blade Center Extension (zBX).
The Brocade Gen5 DCX 8510 family features industry-leading 16Gb/sec performance and 8.2Tb chassis bandwidth to address these next-generation I/O and bandwidth-intensive application requirements. In addition, the Brocade Gen5 DCX 8510 provides unmatched slot-to-slot and port performance, with 512Gb/sec bandwidth per slot (port card/blade). And this performance comes in the most energy-efficient FICON director in the industry, using an average of less than 1 watt per Gb/sec, which is 15 times more efficient than competitive offerings.

The Brocade Gen5 DCX 8510 family enables high-speed replication and backup solutions over metro or WAN links with native Fibre Channel (10Gb/sec or 16Gb/sec) and optional FCIP 1GbE or 10GbE extension support. These solutions are accomplished by integrating this technology via a blade (FX8-24) or standalone switch (Brocade 7800). Finally, this solution is delivered with unsurpassed levels of reliability, availability and serviceability (RAS), based upon more than 25 years of Brocade experience in the mainframe space. This experience includes defining the FICON standards and authoring or co-authoring many of the FICON patents.

An Ideal Pairing: Hitachi Virtual Storage Platform and Brocade Gen5 DCX 8510

The IBM zEnterprise architecture is the highest-performing, most scalable, cost-effective, energy-efficient platform in mainframe computing history. To get the most out of your investment in IBM zEnterprise, you need a storage infrastructure, that is, a DASD platform and FICON director, that can match the impressive capabilities of zEnterprise. Hitachi Data Systems and Brocade, via VSP and Gen5 DCX 8510, together offer the highest-performing and most reliable, scalable, cost-effective and energy-efficient products in the storage and networking industry.
The experience of these 2 companies in the mainframe market, coupled with the capabilities of VSP and Gen5 DCX 8510, makes pairing them with IBM's zEnterprise the ideal "best in industry" storage architecture for mainframe data centers.

Why IT Should Choose Networked Storage for FICON Over Direct-Attached Storage

Direct-attached FICON storage might appear to be a great way to take advantage of FICON technology. However, a closer examination will show why a switched FICON architecture is a better, more robust design for enterprise data centers than direct-attached FICON.

Technical Reasons for a Switched FICON Architecture

There are 5 key technical reasons for connecting storage control units using switched FICON:

■■ Overcome buffer credit limitations on FICON Express8 channels.
■■ Build fan-in, fan-out architecture designs for maximizing resource utilization.
■■ Localize failures for improved availability.
■■ Increase scalability and enable flexible connectivity for continued growth.
■■ Leverage new FICON technologies.

FICON Channel Buffer Credits

When IBM introduced the availability of FICON Express8 channels, one very important change was the number of buffer credits available on each port of a 4-port FICON Express8 channel card. While FICON Express4 channels had 200 buffer credits per port on a 4-port FICON Express4 channel card, this changed to 40 buffer credits per port on a FICON Express8 channel card. Organizations familiar with buffer credits will recall that the number of buffer credits required for a given distance varies directly, in a linear relationship, with link speed. In other words, doubling the link speed doubles the number of buffer credits required to achieve the same performance at the same distance. Also, organizations might recall the IBM System z10™ Statement of Direction concerning buffer credits:

"The FICON Express4 features are intended to be the last features to support extended distance without performance degradation. IBM intends to not offer FICON features with buffer credits for performance at extended distances. Future FICON features are intended to support up to 10km without performance degradation. Extended distance solutions may include FICON directors or switches (for buffer credit provision) or Dense Wave Division Multiplexers (for buffer credit simulation)."

IBM held true to its statement, and the 40 buffer credits per port on a FICON Express8/FICON Express8S channel card can support up to 10km of distance for full-frame-size I/Os (2KB frames). What happens if an organization has I/Os with smaller than full-size frames? Because buffer credits count frames rather than bytes, the distance supported by the 40 buffer credits would decrease. It is likely that at faster future link speeds, the distance supported will decrease to 5km or less.

A switched architecture allows organizations to overcome the buffer credit limitations on the FICON Express8/FICON Express8S channel card. Depending upon the specific model, FICON directors and switches can have more than 1300 buffer credits available per port for long-distance connectivity.

Fan-In, Fan-Out Architecture Designs

In the late 1990s, the open systems world started to implement Fibre Channel storage area networks (SANs) to overcome the low utilization of resources inherent in a direct-attached storage architecture. SANs addressed this issue through the use of fan-in and fan-out storage network designs. That is, multiple server host bus adapters (HBAs) could be connected through a Fibre Channel switch to a single storage port (fan-in), or a single server HBA could be connected through a Fibre Channel switch to multiple storage ports (fan-out). These same principles apply to a FICON storage network.

As a general rule, FICON Express8 and FICON Express8S channels offer different levels of performance, in terms of IOPS and bandwidth, than the storage host adapter ports to which they are connected. Therefore, a direct-attached FICON storage architecture may see very low channel or storage port utilization rates.
To overcome this issue, fan-in and fan-out storage network designs are used. A switched FICON architecture allows a single channel to fan out to multiple storage devices via switching, improving overall resource utilization. This can be especially valuable if an organization's environment has newer FICON channels, such as FICON Express8 or Express8S, but older tape drive technology. Figure 1 illustrates how a single FICON channel can concurrently keep several tape drives running at full-rated speeds. The actual fan-out ratios for connectivity to tape drives will, of course, depend on the specific tape drive and control unit; however, it is not unusual to see a FICON Express8 or Express8S channel fan out from a switch to 5 or 6 tape drives (a 1:5 or 1:6 fan-out ratio). The same principles apply for fan-out to storage systems. The exact fan-out ratio is dependent on the storage system model and host adapter capabilities for IOPS and/or bandwidth. On the other hand, several FICON channels could be connected through a director or switch to a single storage port to maximize the port utilization and increase overall I/O efficiency and throughput.
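A rough way to reason about fan-out sizing is to divide a channel's effective bandwidth by the sustained throughput of a single device. The sketch below is illustrative only; the 800 MB/sec effective channel rate and the per-device rates are assumed figures for the sake of the arithmetic, not measurements from this paper:

```python
def fan_out_ratio(channel_mb_s: float, device_mb_s: float) -> int:
    """Estimate how many devices one channel can keep busy
    (bandwidth-limited view; IOPS limits may reduce this further)."""
    return int(channel_mb_s // device_mb_s)

# Assumed figures: ~800 MB/sec effective for a FICON Express8 channel,
# ~150 MB/sec sustained for an older tape drive.
print(fan_out_ratio(800, 150))   # a 1:5 fan-out
print(fan_out_ratio(800, 130))   # a slower drive yields a 1:6 fan-out
```

The 1:5 and 1:6 results line up with the fan-out ratios cited above; real sizing would use measured device throughput and account for IOPS limits as well.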
Figure 1. Switched FICON allows one channel to keep multiple tape drives fully utilized.

Keep Failures Localized

In a direct-attached architecture, a failure anywhere in the path renders both the channel interface and the control unit port unusable. The failure could be of an entire FICON channel card, a port on the channel card, the cable, the entire storage host adapter card, or an individual port on the storage host adapter card. In other words, a failure in any of these components will affect both the mainframe connection and the storage connection. A direct-attached architecture thus provides the worst possible reliability, availability and serviceability for FICON-attached storage.

With a switched architecture, failures are localized to only the affected FICON channel interface or control unit interface, not both. The nonfailing side remains available, and if the storage side has not failed, other FICON channels can still access that host adapter port via the switch or director (see Figure 2). This failure isolation, combined with fan-in and fan-out architectures, allows for the most robust storage architectures, minimizing downtime and maximizing availability.
Figure 2. A FICON director isolates faults and improves availability.

Scalable and Flexible Connectivity

Direct-attached FICON does not easily allow for dynamic growth and scalability, since a single FICON channel card port is tied to a single dedicated storage host adapter port. In such an architecture, there is a 1:1 relationship (no fan-in or fan-out). Since there is a finite number of FICON channels available (dependent on the mainframe model or machine type), growth in a mainframe storage environment with such an architecture can pose a problem. What happens if an organization needs more FICON connectivity but has run out of FICON channels? FICON switching, and proper usage of fan-in and fan-out in the storage architecture design, will go a long way toward improving scalability.

In addition, best-practice storage architecture designs include room for growth. With a switched FICON architecture, adding a new storage system, or a new port in a storage system, is much easier: simply connect the new storage system or port to the switch. This eliminates the need to open the channel cage in the mainframe to add new channel interfaces, reducing both capital and operational costs. This also gives managers more flexible planning options when upgrades are necessary, since the urgency of upgrades is lessened.

What about the next generation of channels? The bandwidth capabilities of channels are growing at a much faster rate than those of storage devices. As channel speeds increase, switches will allow data center managers to take advantage of new technology as it becomes available, while protecting investments and minimizing costs.

Also, it is an IBM best-practice recommendation to use single-mode long-wave connections for FICON channels. Storage vendors, however, often offer both single-mode long-wave connections and multimode short-wave connections on their storage systems, allowing organizations to decide which to use.
The organization makes the decision based on the trade-off between cost and reliability. Some organizations' existing storage devices have a mix of single-mode and multimode connections. Since they cannot directly connect a single-mode FICON channel to a multimode storage host adapter, this could pose a problem. With a FICON director or switch in the path, however, organizations do not need to change the storage host adapter ports to comply with the single-mode best-practice recommendation for the FICON channels. The FICON switching device can have both types of connectivity: single-mode long-wave ports for attaching the FICON channels, and multimode short-wave ports for attaching the storage.
Furthermore, FICON switching elements at 2 different locations can be interconnected by fiber at distances up to 100km or more, creating a cascaded FICON switched architecture. This setup is typically used in disaster recovery and business continuance architectures. As previously discussed, FICON switching allows resources to be shared. With cascaded FICON switching, those resources can be shared between geographically separated locations, allowing data to be replicated or tape backups to be made at the alternate site, away from the primary site, with no performance loss. Often, workloads will be distributed such that both the local and remote sites are primary production sites, and each site uses the other as its backup.

While the fiber itself is relatively inexpensive, laying new fiber may require an expensive construction project. While dense wave division multiplexing (DWDM) can help get more out of fiber connections, inter-switch links (ISLs) with up to 16Gb/sec of bandwidth are offered by switch vendors and can reduce the cost of DWDM or even eliminate the need for it. FICON switches maximize utilization of this valuable intersite fiber by allowing multiple environments to share the same fiber link. In addition, FICON switching devices offer unique storage network management features, such as ISL trunking and preferred pathing, which are not available with DWDM equipment.

FICON switches allow data center managers to further exploit intersite fiber sharing by enabling them to intermix FICON and native Fibre Channel Protocol (FCP) traffic, which is known as Protocol Intermix Mode, or PIM. Even in data centers where there is enough fiber to separate FICON and open systems traffic, preferred pathing features on a FICON switch can be a great cost saver. With preferred paths established, certain cross-site fiber can be allocated for the mainframe environment, while other fiber can be allocated for open systems.
The ISLs can be configured such that only in the event of an ISL failure would the links be shared by both open systems and mainframe traffic.

Leverage New Technologies

Over the past 5 years, IBM has announced a series of technology enhancements that require the use of switched FICON. These include:

■■ N_Port ID Virtualization (NPIV) support for Linux on System z.
■■ Dynamic Channel-Path Management (DCM).
■■ z/OS FICON Discovery and Auto-Configuration (zDAC).

NPIV allows for full support of LUN masking and zoning by virtualizing the Fibre Channel identifiers. This allows each Linux on System z image to appear as if it has its own individual HBA, when those images are, in fact, sharing FCP channels. IBM announced support for NPIV for Linux on System z in 2005; today, NPIV is supported on the System z9®, z10, z196, and z114. Until NPIV was supported on System z, adoption of Linux on System z had been relatively slow. Since IBM began supporting NPIV on System z, adoption of Linux on System z has grown significantly: IBM believes approximately 19% of MIPS shipping on new z196s are for Linux on System z implementations. Implementation of NPIV on System z requires a switched architecture.

DCM is another feature that requires a switched FICON architecture. DCM provides the ability to have System z automatically manage FICON I/O paths connected to storage systems in response to changing workload demands. Use of DCM helps simplify I/O configuration planning and definition, reduces the complexity of managing I/O, dynamically balances I/O channel resources, and enhances availability. DCM can best be summarized as a feature that allows for more flexible channel configurations, by designating channels as "managed," and for proactive performance management. DCM requires a switched FICON architecture because topology information is communicated via the switch or director.
The FICON switch must have a control unit port (CUP) license and be configured or defined as a control unit in the hardware configuration definition (HCD).
z/OS FICON Discovery and Auto-Configuration (zDAC) is the latest technology enhancement for FICON. IBM introduced zDAC as a follow-on to an earlier enhancement in which the FICON channels log into the Fibre Channel name server on a FICON director. zDAC enables the automatic discovery and configuration of FICON-attached DASD and tape devices. Essentially, zDAC automates a portion of the HCD Sysgen process. zDAC uses intelligent analysis to help validate the compatibility of the System z and storage definitions, and uses built-in best practices to help configure for high availability and avoid single points of failure. zDAC is transparent to existing configurations and settings. It is invoked by and integrated with the z/OS HCD and z/OS Hardware Configuration Manager (HCM). zDAC also requires a switched FICON architecture.

IBM also introduced support for transport-mode FICON (known as z High Performance FICON, or zHPF) in October 2008 and announced further enhancements in July 2011. While not required for zHPF, a switched architecture is recommended.

Business Reasons for a Switched FICON Architecture

In addition to the technical reasons described earlier, the following business reasons support implementing a switched FICON architecture:

■■ Enable massive consolidation in order to reduce capital and operating expenses.
■■ Improve application performance at long distances.
■■ Support growth and enable effective resource sharing.

Massive Consolidation

With NPIV support on System z, server and I/O consolidation is very compelling (see Figure 3). IBM undertook a well-publicized project at its internal data centers (Project Big Green) and consolidated 3900 open systems servers onto 30 System z mainframes running Linux. IBM's total cost of ownership (TCO) savings was calculated taking into account footprint reductions, power and cooling, and management simplification costs. The result was nearly 80% TCO savings over a 5-year period.
This scale of TCO savings is why 19% of new IBM mainframe processor shipments are now being used for Linux.

Implementation of NPIV requires connectivity from the FICON (FCP) channel to a switching device (director or smaller port-count switch) that supports NPIV. A special microcode load is installed on the FICON channel to enable it to function as an FCP channel. NPIV allows the consolidation of up to 255 Linux on System z images ("servers") behind each FCP channel, using one port on a channel card and one port on the attached switching device for connecting these virtual servers. This enables massive consolidation of many HBAs, each formerly attached to its own switch port in the SAN. As a best practice, IBM currently recommends configuring no more than 32 Linux images per FCP channel. Although this level of I/O consolidation was possible prior to NPIV support on System z, implementing LUN masking and zoning in the same manner as with open systems servers, SAN and storage was not possible until NPIV was supported with Linux on System z.

NPIV implementation on System z has also resulted in consolidation onto, and adoption of, a common SAN for distributed or open systems (FCP) and mainframe (FICON) traffic, commonly known as Protocol Intermix Mode (PIM). While IBM has supported PIM in System z environments since 2003, adoption rates were low until NPIV implementations for Linux on System z picked up with the introduction of System z10 in 2008. With z10, enhanced segregation and security beyond simple zoning became possible through switch partitioning or virtual fabrics and logical switches. With 19% of new mainframes being shipped for use with Linux on System z, it is safe to say that at least 19% of mainframe environments are now running a shared PIM environment.
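The consolidation arithmetic behind these numbers can be sketched as follows. The 32-images-per-channel ceiling is the IBM best practice cited above; the 2-HBAs-per-server figure for the discrete-server case is an assumption for illustration only:

```python
import math

def ports_with_npiv(linux_images: int, images_per_fcp_channel: int = 32) -> int:
    """FCP channel ports (and matching switch ports) needed when many
    virtual Linux images share one physical channel via NPIV."""
    return math.ceil(linux_images / images_per_fcp_channel)

def ports_without_consolidation(servers: int, hbas_per_server: int = 2) -> int:
    """Physical HBA ports (one switch port each) for standalone servers."""
    return servers * hbas_per_server

# 256 Linux images consolidated behind NPIV-enabled FCP channels:
print(ports_with_npiv(256))              # 8 channel/switch port pairs
print(ports_without_consolidation(256))  # 512 discrete HBA ports before consolidation
```

Even at the conservative 32-image best practice, hundreds of discrete HBA and switch ports collapse into a handful of shared FCP channels, which is the consolidation effect described above.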
Leveraging enhancements in switching technology, performance and management, PIM users can now fully populate the latest high-density directors with minimal or no oversubscription. They can use management capabilities such as virtual fabrics or logical switches to fully isolate open systems ports and FICON ports in the same physical director chassis. Rather than having more partially populated switching platforms that are dedicated to either mainframe (FICON) or open systems (FCP), PIM allows for consolidation onto fewer physical switching devices, reducing management complexity and improving resource utilization. This, in turn, leads to lower operating costs and a lower TCO for the storage network. It also allows for a consolidated, simplified cabling infrastructure.

Figure 3. Organizations implement NPIV to consolidate I/O in z Linux environments.

Application Performance Over Distance

As previously discussed, the number of buffer credits per port on a 4-port FICON Express8 channel card has been reduced to 40, supporting up to 10km without performance degradation. What happens if an organization needs to go beyond 10km with a direct-attached storage configuration? It will likely see performance degradation due to insufficient buffer credits. Without a sufficient quantity of buffer credits, the "pipe" cannot be kept full with streaming frames of data.

Switched FICON avoids this problem (see Figure 4). FICON directors and switches have a sufficient quantity of buffer credits available on their ports to allow them to stream frames at full-line rates with no bandwidth degradation. IT organizations that implement a cascaded FICON configuration between sites can, with the latest FICON director platforms, stream frames at 16Gb/sec rates with no performance degradation between sites that are 100km apart.
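A back-of-the-envelope model shows why 40 credits line up with roughly 10km at 8Gb/sec, and why doubling the link speed doubles the credits required for the same distance. This sketch assumes full-size 2KB frames, roughly 5 microseconds per km of propagation in fiber, and approximates the line encoding at 10 bits per byte; it ignores switch and protocol latency:

```python
def buffer_credits_needed(link_gbps: float, distance_km: float,
                          frame_bytes: int = 2048) -> float:
    """Credits ~= bytes in flight over the round trip / frame size.
    One credit lets one frame be outstanding; to keep the pipe full,
    enough frames must be in flight to cover the round-trip time."""
    fiber_us_per_km = 5.0                      # ~5 us/km propagation in glass
    rtt_s = 2 * distance_km * fiber_us_per_km * 1e-6
    # ~10 bits/byte on the wire (8b/10b; 16G uses 64b/66b, close enough here)
    bytes_per_s = link_gbps * 1e9 / 10
    return rtt_s * bytes_per_s / frame_bytes

print(round(buffer_credits_needed(8, 10)))    # ~39: why 40 credits cover 10km at 8G
print(round(buffer_credits_needed(16, 100)))  # ~781: a 100km cascaded link needs far more
```

The model also shows why director ports with more than 1300 credits comfortably sustain 16Gb/sec over 100km, while a 40-credit channel port cannot.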
Switched FICON technology also allows organizations to take advantage of hardware-based FICON protocol acceleration or emulation techniques for tape (reads and writes), as well as for zGM (z/OS Global Mirror, formerly known as XRC, or Extended Remote Copy). This emulation technology is available on standalone extension switches or on a blade in FICON directors. It allows the z/OS-initiated channel programs to be acknowledged locally at each site and avoids the back-and-forth protocol handshakes that normally travel between remote sites. It also reduces the impact of latency on application performance and delivers local-like performance over unlimited distances. In addition, this acceleration or emulation technology optimizes bandwidth utilization.

Why is bandwidth efficiency so important? Bandwidth is typically the most expensive budget component in an organization's multisite disaster recovery or business continuity architecture. Anything that can be done to improve the utilization of, and/or reduce the bandwidth requirements between, sites will likely lead to significant TCO savings.

Figure 4. Switched FICON with emulation allows optimized performance and bandwidth utilization over extended distance.

Enable Growth and Resource Sharing

Direct-attached storage forces a 1:1 relationship between host connectivity and storage connectivity. In other words, each storage port on a storage system host adapter requires its own physical port connection on a FICON Express8 channel card. These channel cards are typically very expensive on a per-port basis, typically 4 to 6 times the cost of a FICON director port. Also, there is a finite number of FICON Express8S channels available on a zEnterprise 196 (a maximum of 320), as well as a finite number of host adapter ports in the storage system. If an organization has a large configuration and a direct-attached FICON storage architecture, how does it plan to scale its environment? What happens if an organization acquires a company and needs additional channel ports? A switched FICON infrastructure allows cost-effective, seamless expansion to meet growth requirements.

Direct-attached FICON storage also typically results in underutilized host channel card ports and host adapter ports in storage systems. FICON Express8 and FICON Express8S channels can comfortably perform at high channel utilization rates, yet a direct-attached storage architecture typically sees channel utilization rates of 10% or less. As illustrated in Figure 5, leveraging FICON directors or switches allows organizations to maximize channel utilization.

Figure 5. Switched FICON drives improved channel utilization, while preserving CHPIDs for growth.

It also is very important to keep traffic for tape drives streaming, and to avoid stopping and starting the tape drives, as this leads to unwanted wear and tear on tape heads, cartridges, and the tape media itself. Using FICON acceleration or emulation techniques, as described earlier, this can be accomplished with a configuration similar to the one shown in Figure 6. Such a configuration requires solid analysis and planning, but it will pay dividends for an organization's FICON tape environment.
Figure 6. A well-planned configuration can maximize CHPID capacity utilization for FICON tape efficiency.

Finally, switches facilitate fan-in, which allows different hosts and LPARs whose I/O subsystems are not shared to share the same assets. While some benefits may be realized immediately, the potential for value in future equipment planning can be even greater. With the ability to share assets, equipment that would be too expensive for a single environment can be deployed in a cost-saving manner. The most common example is to replace tape farms with virtual tape systems. By reducing the number of individual tape drives, maintenance (service contracts), floor space, power, tape handling and cooling costs are all reduced. Virtual tape also improves reliable data recovery, allows for significantly shorter recovery time objectives (RTO) and nearer recovery point objectives (RPO), and offers features such as peer-to-peer copies. However, without the ability to share these systems, it may be difficult to amass sufficient cost savings to justify the initial cost of virtual tape. And the only practical way to share these standalone tape systems or tape libraries is through a switch.

With disk storage systems, in addition to sharing the asset, it is sometimes desirable to share the data across multiple systems. The port limitations on a storage system may prohibit or limit this capability using direct-attached (point-to-point) FICON channels. Again, the switch can provide a solution to this issue.

Even when there is no need to share devices during normal production, this capability can be very valuable in the event of a failure. Data sets stored on tape can quickly be read by CPUs picking up workload that is already attached to the same switch as the tape drives.
Similarly, data stored on a storage system can be made available as soon as a fault is determined. Switch features, such as preconfigured port prohibit/allow matrix tables, can ensure that access intended only for a disaster scenario is prohibited during normal production.

Why Switched FICON: Summary

Direct-attached FICON might appear to be a great way to take advantage of FICON technology's advances over ESCON. However, a closer examination shows that switched FICON, like switched ESCON, is a better, more robust architecture for enterprise data centers. Switched FICON offers:

■■ Better utilization of host channels and their performance capabilities.
■■ Scalability to meet growth requirements.
■■ Improved reliability, problem isolation and availability.
■■ Flexible connectivity to support evolving infrastructures.
■■ More robust business continuity implementations via cascaded FICON.
■■ Improved distance connectivity, with improved performance over extended distances.
■■ New mainframe I/O technology enhancements such as NPIV, FICON DCM, zDAC and zHPF.

Switched FICON also provides many business advantages and potential cost savings, including:

■■ The ability to perform massive server, I/O and SAN consolidation, dramatically reducing capital and operating expenses.
■■ Local-like application performance over any distance, allowing host and storage resources to reside wherever business dictates.
■■ More effective resource sharing, improved utilization, reduced costs and improved recovery time.

With the growing trend toward increased usage of Linux on System z, and the cost advantages of NPIV implementations and PIM SAN architectures, direct-attached storage in a mainframe environment is becoming a thing of the past. Investments made in switches for disaster recovery and business continuance are likely to pay the largest dividends. Having access to alternative resources and multiple paths to those resources can result in significant savings in the event of a failure. The advantages of a switched FICON infrastructure are simply too great to ignore.

Hitachi Virtual Storage Platform

Hitachi Data Systems has over 20 years of experience supporting IBM mainframe environments. A large portion of the installed base of Hitachi storage systems connects to IBM z/OS and S/390® mainframes via ESCON and FICON networks.

Hitachi Virtual Storage Platform builds on this experience and introduces new features and packaging to improve performance while lowering TCO. In addition to its new 3-D scaling architecture, it features lower power and cooling requirements, high-density packaging based on industry-standard 19-inch racks, faster microprocessors and a choice of disk drive types, including solid state disk (SSD), serial attached SCSI (SAS) and nearline SAS. This storage platform provides an industry-leading, reliable and highly available storage system for mainframes in IBM z/OS environments.

It supports z/OS, z/VSE and z/VM for zEnterprise. Additionally, with many organizations considering the benefits of, or already running, Linux on IBM zEnterprise processors, Virtual Storage Platform supports this capability for both count key data (CKD) and fixed block architecture (FBA) disk formats. Hitachi has implemented support for many key performance features for these operating systems running on zEnterprise, including PAV, HyperPAV, z/HPF, Multiple Allegiance, MIDAW and Priority I/O Queuing. It also provides a unique mainframe storage management solution to deliver functionally compatible extended address volumes (EAV) for z/OS, data volume expansion (DVE), and IBM FlashCopy® SE (with space efficiency capability).

Hitachi Virtual Storage Platform is designed to be highly available and resilient. All critical components are implemented in pairs; if a component fails, the paired component can take over the workload without an outage. With its support of multiple RAID configurations, an organization's data is protected in the event of a disk drive problem. Additionally, with its industry-leading replication software and support of FlashCopy, FlashCopy SE and Hitachi Compatible Software for IBM XRC® providing the functionality of IBM z/OS Metro/Global Mirror, copies of data can be maintained locally and at remote locations. This ensures availability in case the primary copy becomes unusable or inaccessible.

Scalability (3-D Scaling: Out, Up, Deep)

Hitachi Virtual Storage Platform can scale up to provide increased performance, capacity, throughput and connectivity. It can scale out by dynamically combining multiple units into a single logical system with shared resources. It can
also scale deep by dynamically virtualizing new and existing external storage systems. This 3-D scaling means that VSP can grow nondisruptively to meet changing needs within the data center. It minimizes outages to extend the platform and enhance functionality, while providing flexibility in the configuration and choice of disk technology to meet the specific needs of each environment.

The ability to scale deep is provided by Hitachi controller-based storage virtualization, which supports connectivity to external storage. This enables organizations to further extend the life of existing storage assets, including storage from a variety of other vendors. It also provides IBM mainframes the ability to connect to both enterprise and midrange storage platforms, some of which can be configured with lower cost nearline SAS or SATA drives. This virtualization of external storage can potentially extend the life of existing storage assets and reduce costs.

Three important benefits of scaling deep are:

■■ Enables the reuse of existing or legacy assets for less critical or less frequently accessed data.
■■ Simplifies management of external storage, with common management and data protection for internal and external storage.
■■ Supports the reuse of existing or legacy assets across data centers within a metro area network distance, and across global distances with the replication capabilities of the scale-up storage system.

Performance

Hitachi Virtual Storage Platform ushers in a new level of I/O throughput, response and scalability. It supports 8Gb FICON (FICON Express8 and FICON Express8S) and enables a single VSP FICON 8Gb port to handle the higher traffic rates that can be delivered by a single zEnterprise FICON Express8 or FICON Express8S channel.
This storage networking is critical to optimizing performance and maximizing throughput in mainframe environments.

IBM 3390 and FICON Support

This industry-leading storage system provides 3390 disk drive support through emulation across a variety of disk drive types to meet the variety of performance and capacity needs of mainframe environments. The platform supports SSD flash drives, providing ultra-high-speed response with capacities of 200GB and 400GB, as well as 2.5-inch SAS drives and nearline SAS drives. It can control up to 65,280 logical volumes and provides an internal physical disk capacity of approximately 2.5PB per storage system. With externally attached storage, Hitachi Virtual Storage Platform can support up to 255PB of storage capacity.

VSP supports 8Gb/sec FICON (FICON Express8 and FICON Express8S) across all front-end ports for connectivity to the mainframe, and 8Gb/sec Fibre Channel for connecting external storage. VSP supports high-performance FICON (z/HPF) for z/OS. On the back end, it supports SAS, SATA and SSD drives, which are connected using the SAS-2 protocol with 6Gb/sec connectivity per back-end port.

Hitachi Dynamic Provisioning

Hitachi Dynamic Provisioning for Mainframe optimizes performance through extremely wide striping, and makes more effective use of storage through thin provisioning (see Figure 7). In other words, it allocates storage to an application without actually mapping the corresponding physical storage until it is used. This separation of allocation from physical mapping results in more effective use of physical storage, with higher overall performance and rates of storage utilization. Dynamic Provisioning also enables Dynamic Volume Expansion (DVE) of 3390 volumes and FlashCopy SE for more efficient use of storage when creating local copies.
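The allocate-on-first-write idea behind thin provisioning can be sketched in a few lines. This is a conceptual model only, not the Hitachi Dynamic Provisioning implementation: the page size, class names and pool sizes below are invented for illustration.

```python
# Minimal thin-provisioning sketch: physical pages are drawn from a shared
# pool only when a virtual address is first written. Page size and names
# are illustrative, not the actual Hitachi Dynamic Provisioning design.

PAGE_SIZE = 42 * 1024 * 1024  # bytes per pool page (illustrative)

class Pool:
    """A shared pool of physical pages backing many thin volumes."""
    def __init__(self, physical_pages):
        self.free = list(range(physical_pages))

    def allocate(self):
        return self.free.pop()

class ThinVolume:
    def __init__(self, pool, virtual_capacity):
        self.pool = pool
        self.virtual_capacity = virtual_capacity   # what the host sees
        self.page_map = {}                         # virtual page -> physical page

    def write(self, offset, data):
        page = offset // PAGE_SIZE
        if page not in self.page_map:              # map physical storage lazily,
            self.page_map[page] = self.pool.allocate()  # on first write only
        # ... data would be written to self.page_map[page] here ...

pool = Pool(physical_pages=100)
vol = ThinVolume(pool, virtual_capacity=400 * PAGE_SIZE)  # oversubscribed 4:1
vol.write(0, b"x")
vol.write(3 * PAGE_SIZE, b"y")
print(len(vol.page_map))   # only 2 physical pages consumed so far
```

The volume presents 400 pages of capacity to the host while consuming physical pages only as data is written, which is the "separation of allocation from physical mapping" described above.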
Figure 7. Hitachi Dynamic Provisioning for Mainframe optimizes performance.

Hitachi Dynamic Tiering

Hitachi Dynamic Tiering (HDT) for Mainframe enables the automatic movement of data between tiers. HDT moves highly accessed blocks of data to the highest tier of storage and migrates less frequently accessed data to the lowest tiers. This significantly reduces the time storage administrators have to spend analyzing storage usage and managing the movement of data to optimize performance. HDT complements z/OS System Managed Storage and can move pages of data to the appropriate tier when needed, rather than moving entire data sets.

Hitachi Remote Replication

Business continuity is more important than ever in today's business environment, as demonstrated by the natural disasters and the physical intrusion and destruction of IT resources of the last few years. A loss of business-critical data can force a company to its knees, and even into bankruptcy. In addition, regulatory compliance requirements demand a business continuity and disaster recovery plan, and an infrastructure to support that plan, or companies face stiff fines and business restrictions. Hitachi remote replication offerings provide the ability to copy critical data to off-site facilities, either within a metropolitan area or to distant remote locations. The combination of the enterprise-level Hitachi Virtual Storage Platform with Brocade's solutions to extend and optimize fabric connectivity facilitates the movement of your business-critical data over longer distances. Together, they enable and enhance your ability to support business continuity and disaster recovery.
Hitachi TrueCopy®

Hitachi TrueCopy synchronous software provides a continuous, nondisruptive, host-independent remote data replication solution for disaster recovery or data migration over distances within the same metropolitan area. It provides a no-data-loss, rapid-restart solution (see Figure 8). For enterprise environments, TrueCopy synchronous software combined with Hitachi Universal Replicator on Virtual Storage Platform allows for advanced 3 data center configurations, including consistency across up to 12 storage systems in 1 site for optimal data protection.

Figure 8. Hitachi TrueCopy synchronous supports business continuity and disaster recovery efforts.

TrueCopy synchronous supports business continuity and disaster recovery efforts, improving business resilience. It improves service levels by reducing planned and unplanned downtime of customer-facing applications. It enables frequent, nondisruptive disaster recovery testing with an online copy of current and accurate production data. TrueCopy synchronous can be seamlessly integrated into existing z/OS environments and controlled with familiar PPRC commands or with Hitachi Business Continuity Manager software.

Hitachi Universal Replicator

Hitachi Universal Replicator provides asynchronous data replication across any distance, for both internal Virtual Storage Platform storage and external storage managed by VSP (see Figure 9). Universal Replicator provides the enterprise-class performance associated with storage system-based replication. At the same time, it provides resilient business continuity without the need for remote host involvement, redundant servers or replication appliances. Universal Replicator maintains the integrity of replicated copies without impacting processing, even when replication network outages occur or optimal bandwidth is not available.
When compared to traditional methods of storage-system-based replication, Universal Replicator leverages performance-optimized, disk-based journals, resulting in significantly reduced cache utilization and improved bandwidth utilization.

Universal Replicator ensures availability of up-to-date copies of data in up to 3 dispersed locations by leveraging the synchronous capabilities of Hitachi TrueCopy synchronous. In the event of a disaster at the primary data center, the delta resync feature of Universal Replicator enables fast failover and restart of the application without loss of data, whether at the local or remote data center.
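The disk-journal approach to asynchronous replication can be modeled abstractly: a host write completes as soon as it is captured in a local journal, and a separate drain process ships journal entries to the remote copy in sequence order, so a link outage simply lets the journal grow without affecting host I/O. This is a conceptual sketch with invented names, not the Universal Replicator implementation.

```python
# Conceptual journal-based asynchronous replication. The host write is
# acknowledged once it lands in the local journal; a drain step ships
# entries to the remote replica in order. All names are illustrative.

from collections import deque

class JournalReplicator:
    def __init__(self):
        self.journal = deque()     # disk-based journal (modeled in memory here)
        self.remote = {}           # remote replica: block -> data
        self.seq = 0

    def host_write(self, block, data):
        self.seq += 1
        self.journal.append((self.seq, block, data))
        return "ack"               # host I/O completes without waiting on the link

    def drain(self, link_up=True):
        # Ship journal entries in sequence order. During a link outage the
        # journal simply grows, preserving write ordering for the remote site.
        while link_up and self.journal:
            _, block, data = self.journal.popleft()
            self.remote[block] = data

r = JournalReplicator()
r.host_write(7, "A")
r.host_write(7, "B")
r.drain(link_up=False)   # outage: nothing ships, host writes were still acked
r.drain(link_up=True)
print(r.remote[7])       # "B": last write wins, ordering preserved
```

The buffering role played here by the in-memory deque is what the performance-optimized disk journals provide in practice: host I/O is decoupled from link latency and outages, with far less pressure on cache.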
Figure 9. Hitachi Universal Replicator ensures availability of current copies of data in up to 3 dispersed locations.

Universal Replicator can be integrated into an IBM GDPS® environment, providing a much more cost-effective and complete recovery solution than the IBM alternative of z/OS Global Mirror. With Universal Replicator and TrueCopy synchronous support of a 3 data center replication solution, VSP supports delta resync, which is similar to, but more efficient than, z/OS Metro/Global Mirror Incremental Resync.

Hitachi Virtual Storage Platform also supports IBM z/OS Basic HyperSwap®, which is enabled by IBM Tivoli Storage Productivity Center for Replication for System z Basic Edition (TPC-R). TPC-R enables the administrator to develop a z/OS Basic HyperSwap configuration using VSP. Using VSP, the organization can create a z/OS Basic HyperSwap plan for a 2 data center configuration with TrueCopy synchronous, or a 3 data center configuration with TrueCopy synchronous, Universal Replicator and Business Continuity Manager. Initially, VSP will support a maximum of 3 storage systems at each site.

VSP with Universal Replicator will support a 4 data center configuration, allowing 2 long-distance asynchronous data paths and 2 synchronous paths. This solution offers the ability to create multiple copies of data in many locations and to reduce the impact of data migration.

Hitachi Compatible Software for IBM® XRC®

Hitachi Compatible Replication software for IBM XRC is a cross-license technology between Hitachi Data Systems and IBM that provides support for z/OS Global Mirror. This Hitachi software is fully compatible with IBM XRC and lets administrators create and share server-based remote copies between Hitachi Virtual Storage Platform, the Hitachi Universal Storage Platform family and IBM enterprise storage systems, such as the DS8000® system.
Hitachi Data Systems is the only 3rd-party storage vendor capable of fully supporting IBM XRC command sets.

Hitachi Business Continuity Manager

Hitachi Business Continuity Manager enables centralized, enterprise-wide replication management for IBM z/OS mainframe environments. Through a single, consistent interface based on the Time Sharing Option/Interactive System Productivity Facility (TSO/ISPF), it uses full-screen panels to automate Hitachi Universal Replicator, Hitachi TrueCopy
synchronous (including multisite topologies) and in-system Hitachi ShadowImage Heterogeneous Replication software operations.

This software feature automates complex disaster recovery and planned outage functions, resulting in reduced recovery times. It also enables advanced 3 data center disaster recovery configurations and extended consistency group capabilities. Business Continuity Manager provides built-in capabilities for monitoring and managing critical performance metrics and thresholds for proactive problem avoidance. It also delivers autodiscovery of enterprise-wide storage configuration and replication objects, eliminating the tedious, error-prone data entry that can cause outages. Hitachi Business Continuity Manager integrates with the Hitachi replication management framework, Hitachi Replication Manager software, for replication monitoring and continuous operations in mainframe (and open systems) environments.

Multiplatform Support

Hitachi Virtual Storage Platform can support multiple operating systems at the same time. Although many mainframe organizations have been reluctant to share their storage platforms with open systems servers, the need to share storage is becoming more important as organizations implement Linux on System z. In addition, the introduction of the IBM zEnterprise BladeCenter Extension (zBX) for mainframe processors enables Microsoft Windows to operate as part of zEnterprise servers. VSP can be configured to facilitate the isolation of disparate types of data. Additionally, the FICON and Fibre Channel ports are completely separate and help ensure that critical mainframe data cannot be accessed directly by open systems servers or clients.

Cost-Savings Efficiencies

This storage system is designed to lower TCO wherever possible. The physical packaging has been designed to use standard-size racks and chassis.
The internal layout supports front-to-back airflow to facilitate the use of hot and cold aisles and maximize the efficiency of data center cooling. In combination with very fast processors, denser packaging and smaller batteries, the reduced physical floor space and heating and cooling requirements result in very low power per square foot (kVA/sq ft). Operating expenditure (opex) is lower than on previous systems thanks to denser packaging, blade architecture, low-power memory, small-form-factor disks, SSD disks and flash-protected cache with its smaller batteries. Hitachi Data Systems is committed to continuing to deliver more efficient packaging, resulting in more sustainable products.

Brocade Gen5 DCX 8510 in Mainframe Environments

Now on its 5th generation (1G, 2G, 4G, 8G and 16G) of switching technology (Gen5), Brocade has the experience to rely on. The company has been in the mainframe storage networking business for more than 20 years, as far back as the parallel channel extension technology of the late 1980s. Brocade has a history of thought leadership. It holds 4 of its own FICON patents, as well as 5 joint FICON patents with IBM, on technologies such as the FICON bridge card and control unit port (CUP). Brocade helped IBM develop Fibre Connection (FICON), and in 2000 the 1st IBM-certified FICON network infrastructure, using 1Gb/sec ED5000 Directors, was deployed. Brocade has the only FICON architecture certification program (BCAF) in the industry. Brocade manufactured the 9032-5 ESCON director for IBM, and pioneered ESCON channel extension emulation technology. Brocade has continued its heritage of mainframe storage networking thought leadership with 9 generations of FICON directors. These products include the current industry-leading FICON directors, such as the DCX and DCX 8510, and FICON channel extension products, such as the Brocade 7800 and the FX8-24 extension blade.
Reliability, Availability and Serviceability

The largest corporations in the world literally run their businesses on mainframes. Government institutions in many countries worldwide also rely on the mainframe for their critical computing needs. RAS qualities for these mission-critical environments are of the utmost importance. Mainframe practitioners in these organizations avoid risk at all costs. They never want to suffer an unscheduled outage, and they want to minimize, if not outright eliminate, scheduled or planned outages. Mainframes such as the IBM zEnterprise have historically been the rock-solid pillar of computing RAS. Mainframe practitioners have a history of creating I/O infrastructures with "five nines" availability. For FICON channel connectivity to mainframe-attached storage, these same organizations require a FICON director platform that offers the same levels of RAS as the mainframe itself. The Brocade Gen5 DCX 8510 is the ideal FICON director for these RAS requirements.

The Brocade Gen5 DCX 8510 FICON Director features a modular, high-availability architecture that supports these mission-critical mainframe environments. The Brocade Gen5 DCX 8510 chassis has been engineered from inception for "five nines" of availability by providing multiple fans (supporting hot aisle/cold aisle layouts), multiple fan connectors, dual core blade internal connectivity, dual control processors, dual power supplies, a passive backplane and dual I/O timing clocks. These features and the switching design of the Brocade Gen5 DCX 8510 result in leading mean time between failures (MTBF) and mean time to recovery or repair (MTTR) numbers. In a recent study performed with a sample size of 26,593 Brocade products, the average yearly downtime was 0.53 minutes per year, for an availability rate of 99.99984%.
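"Five nines" has a concrete meaning in minutes of downtime per year; the quick calculation below (assuming a 365-day year) shows the annual downtime budget that 99.999% availability allows, and that the study's 0.53 minutes per year sits comfortably within it.

```python
# What "five nines" means in annual downtime, and where the cited study's
# 0.53 minutes/year figure lands. Simple arithmetic; 365-day year assumed.

MIN_PER_YEAR = 365 * 24 * 60              # 525,600 minutes in a 365-day year

def downtime_minutes(availability):
    """Annual downtime (minutes) implied by an availability fraction."""
    return (1 - availability) * MIN_PER_YEAR

five_nines_budget = downtime_minutes(0.99999)
print(f"five nines allows {five_nines_budget:.2f} min/year of downtime")
print(f"study figure of 0.53 min/year is within budget: {0.53 < five_nines_budget}")
```

A five-nines budget is roughly 5.26 minutes of downtime per year, so the measured 0.53 minutes is about an order of magnitude better than the five-nines threshold.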
It is this kind of availability that consistently leads OEM partners such as HDS to praise Brocade products for their quality.

Scalability

With the advent of the zBX and the zEnterprise Unified Resource Manager, private cloud computing centered on the IBM zEnterprise has emerged as a "hot topic." Cloud computing requires a highly scalable (hyper-scale) storage networking architecture to support it. Hyper-Scale Inter-Chassis Link (ICL) is a unique Brocade Gen5 DCX 8510 feature that provides connectivity among 2 or more Brocade 8510-4 or 8510-8 chassis. This is the 2nd generation of ICL technology from Brocade, using optical QSFP (quad small form-factor pluggable) connections; the 1st generation used a copper connector. Each ICL connects the core routing blades of two 8510 chassis and provides up to 64Gb/sec of throughput within a single cable. The Brocade 8510-8 allows up to 32 QSFP ports, and the 8510-4 allows up to 16 QSFP ports, to help preserve switch ports for end devices.

This 2nd generation of Brocade optical ICL technology, based on QSFP, provides a number of benefits to the organization. Brocade has improved ICL connectivity over the copper connectors by upgrading to an optical form factor, and has also increased the distance of the connection from 2 meters to 50 meters. QSFP combines 4 cables into 1 cable per port, significantly reducing the number of ISL cables the customer needs to run. Since the QSFP connections reside on the core blades within each 8510, they do not use up connections on the slot line cards. This improvement frees up to 33% of the available ports for additional server and storage connectivity.

Dual-chassis backbone topologies connected through low-latency ICL connections are ideal in a FICON environment. The majority of FICON installations have switches that are connected in dual or triangular topologies, using ISLs to meet the FICON requirement for low latency between switches.
New 64Gb/sec QSFP-based ICLs enable simpler, flatter, low-latency chassis topologies spanning a distance of up to 50 meters with off-the-shelf cables. They reduce interswitch cables by 75% and preserve 33% of front-end ports for servers and storage, leading to fewer cables and more usable ports in a smaller footprint.
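The 75% cable-reduction claim follows directly from QSFP packing four links into one cable. A quick illustrative calculation (the 32-link count is an assumption for the example; the 4-links-per-cable and 16Gb/sec figures come from the ICL description above):

```python
# Why QSFP ICLs cut interswitch cabling by 75%: each QSFP cable carries
# four 16Gb/sec links (4 x 16 = 64Gb/sec). The link count is illustrative.

links_needed = 32                 # individual 16Gb/sec links between chassis
links_per_qsfp = 4                # QSFP aggregates 4 links per cable

isl_cables = links_needed                      # classic ISLs: one cable per link
icl_cables = links_needed // links_per_qsfp    # QSFP ICLs: 4 links per cable

reduction = 1 - icl_cables / isl_cables
print(f"cables: {isl_cables} -> {icl_cables} ({reduction:.0%} fewer)")
print(f"throughput per ICL cable: {links_per_qsfp * 16}Gb/sec")
```

For any link count, the 4:1 aggregation yields the 75% reduction quoted above, independent of how many links a given installation actually runs.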
Pair the Two Platforms Together

Traditional (z/OS) Mainframe Environments

In a "traditional" z/OS mainframe environment, RAS and performance are the key concerns for most organizations. These characteristics provide the stability for the mainframe-based applications on which the largest companies in the world run their businesses. Dr. Thomas E. Bell, winner of the Computer Measurement Group (CMG) Michelson Award for lifetime achievement in the computer performance field, once famously commented that "all CPUs wait at the same speed." Likewise, Dr. Steve Guendert, a CMG board member, has commented in his blog that "The IBM zEnterprise is a hungry machine, and its users need to feed the I/O beast." Response time means money in these environments: the ability to process transactions more rapidly gives companies a competitive advantage in today's financial industry. Hitachi Virtual Storage Platform and Brocade DCX 8510, together, make sure the "I/O beast is fed."

Linux on the Mainframe

A 2011 IDC report indicated that, of all the mainframes being shipped, approximately 19% of the processing power is intended for Linux. And IBM has been quoted as saying that 32% of IBM's zEnterprise installed base is running Integrated Facility for Linux (IFL) specialty engines. Regardless of whether Linux is running as a guest under z/VM or natively in an LPAR, it is an important trend that cannot be ignored. This trend has been growing since the 2005 introduction of support for NPIV on System z. IT organizations are realizing that there are significant cost savings to be gained by moving to Linux on System z, in terms of hardware acquisition, software licensing and operational costs such as power and cooling. Hitachi VSP and Brocade Gen5 DCX 8510 are the ideal choice for these Linux environments. VSP offers very powerful virtualization, support for NPIV, and both Dynamic Provisioning and Dynamic Tiering.
Brocade Gen5 DCX 8510 offers full support for NPIV, and its Virtual Fabrics functionality allows for highly secure separation of z/OS data traffic from Linux traffic on the FICON director.

FICON and FCP Intermix

FICON and FCP intermix, or protocol intermix mode (PIM), is another growing trend in mainframe environments. Linux on System z has been the major driver of this trend, as its very nature often leads mainframe end users to run both FCP channels and FICON channels on the mainframe. IBM's recent announcement and general availability of support for Windows blade servers on the zEnterprise BladeCenter Extension (zBX) is likely to drive even further adoption of PIM as a storage networking architecture. The virtualization, performance, scalability and tiering capabilities of Hitachi VSP make it an ideal disk storage platform for a PIM storage architecture. The performance and Virtual Fabrics capabilities, coupled with the immense open systems SAN experience at Brocade, make the DCX 8510 the ideal director platform to go along with VSP in a PIM architecture.

Private Cloud

The ideas behind cloud computing are well known to experienced mainframers, who remember "service bureau computing." Private cloud computing is a "hot topic." It is seeing a lot of adoption, and the concept of IBM zEnterprise Systems at the center of a private cloud is gaining a lot of traction. Private cloud computing relies on extensive virtualization, not just of servers and applications but of everything in the data center, most notably the storage devices and the network. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 creates the ideal architecture for a mainframe-centric private cloud.
Conclusion

A networked FICON storage architecture for your mainframe is a well-documented industry best practice, for a wide variety of reasons both technical and financial. Networked storage architectures beat direct-attached architectures in terms of RAS, performance, scalability and long-run costs. The latest I/O enhancements to IBM mainframes, such as Dynamic Channel-path Management (DCM) and z/OS Discovery and Auto-Configuration (zDAC), require a networked storage architecture (with FICON directors) if the end user wishes to take advantage of them.

The IBM zEnterprise offers unprecedented performance, scalability and innovative new features, such as the zBX, as well as support for Windows. Taking full advantage of a zEnterprise requires an equally capable storage system and FICON director platform for connectivity. Hitachi Virtual Storage Platform paired with Brocade Gen5 DCX 8510 is the ideal combination for zEnterprise mainframes, whether intended for a traditional z/OS, Linux, PIM or private cloud environment. Hitachi Data Systems and Brocade have the experience to rely on, and VSP and DCX 8510 are the best platforms in the industry for mainframe data centers.
© Hitachi Data Systems Corporation 2013. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Universal Storage Platform, ShadowImage and TrueCopy are trademarks or registered trademarks of Hitachi Data Systems Corporation. IBM, FICON, ESCON, System z, z/OS, zEnterprise, z/VM, z9, z10, S/390, z/VSE, FlashCopy, XRC, GDPS, HyperSwap and DS8000 are trademarks or registered trademarks of International Business Machines. Microsoft and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks and company names are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

WP-432-C DG March 2013

Corporate Headquarters
2845 Lafayette Street
Santa Clara, CA 96050-2639 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or info@hds.com
Europe, Middle East and Africa: +44 (0) 1753 618000 or info.emea@hds.com
Asia Pacific: +852 3189 7900 or