Simple layouts for ECKD and zfcp disk configurations on Linux on System z

Thorsten Diehl
Linux on System z System Evaluation
thorsten.diehl@de.ibm.com

© 2011 IBM Corporation
Linux on System z Performance Evaluation


FICON/ECKD dasd I/O to a single disk

• Assume that subchannel a corresponds to disk 2 in rank 1
• The full choice of host adapters can be used
• Only one I/O can be issued at a time through subchannel a
• All other I/Os need to be queued in the dasd driver and in the block device
  layer until the subchannel is no longer busy with the preceding I/O

[Diagram: application program → VFS → block device layer (page cache,
I/O scheduler) → dasd driver → channel subsystem (subchannel a) → chpids 1–4 →
switch → HBAs 1–4 → storage server (Server 0 / Server 1, device adapters,
ranks 1, 3, 5, 7)]

Introduction to Linux features for disk I/O
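As a sketch, the single-disk setup above would typically be brought online with the standard s390-tools commands; the device bus-ID, device node, and format options below are placeholders, not values from the slides:

```shell
# Set the DASD online via the common CCW device interface
# (bus-ID 0.0.4711 is hypothetical; substitute your own)
chccwdev -e 0.0.4711

# List the device and confirm its status (s390-tools)
lsdasd 0.0.4711

# Low-level format and partition before first use
dasdfmt -b 4096 -d cdl -y /dev/dasda
fdasd -a /dev/dasda
```

These commands require a real DASD attached to the system; they are shown only to make the slide's single-subchannel path concrete.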
FICON/ECKD dasd I/O to a single disk with HyperPAV

• VFS sees one device
• The dasd driver sees the real device and all alias devices
• Load balancing with HyperPAV is done in the dasd driver; the aliases only
  need to be added to Linux
• The remaining bottleneck is that only one disk in the storage server is
  used, which implies the use of only one rank, one device adapter, and one
  server

[Diagram: as before, but subchannels a–d (base device plus aliases) now feed
chpids 1–4 in parallel]
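Adding the aliases to Linux is, as the slide notes, the only administrative step; a plausible sequence with s390-tools (all bus-IDs are hypothetical):

```shell
# Set the base device online
chccwdev -e 0.0.4711

# Set the HyperPAV alias devices online; the dasd driver then
# balances I/O across base and aliases automatically
chccwdev -e 0.0.47f0-0.0.47f3

# lsdasd -u marks alias devices and shows that they share
# the base device's UID
lsdasd -u
```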
FICON/ECKD dasd I/O to a linear or striped logical volume

• VFS sees one device (the logical volume)
• The device mapper sees the logical volume and the physical volumes
• With a striped logical volume the I/Os can be balanced well over the entire
  storage server, overcoming the limits of a single rank, a single device
  adapter, or a single server
• To ensure that I/O to one physical disk is not limited by one subchannel,
  PAV or HyperPAV should be used in combination with logical volumes

[Diagram: as before, with LVM and dm layers between VFS and the block device
layer; subchannels a–d feed chpids 1–4]
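A striped logical volume over several DASDs could be built roughly like this; the partition names, volume group name, size, and stripe parameters are illustrative only:

```shell
# Register the DASD partitions as LVM physical volumes
pvcreate /dev/dasda1 /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# Group them into one volume group
vgcreate eckd_vg /dev/dasda1 /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# -i: number of stripes (one per physical volume)
# -I: stripe size in KiB
lvcreate -i 4 -I 64 -L 20G -n striped_lv eckd_vg
```

A linear (non-striped) volume would simply omit `-i` and `-I`, but then loses the balancing effect the slide describes.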
FCP/SCSI LUN I/O to a single disk

• Assume that disk 3 in rank 8 is reachable via channel 6 and host bus
  adapter 6
• Up to 32 (the default value) I/O requests can be sent out to disk 3 before
  the first completion is required
• The throughput will be limited by the rank and/or the device adapter
• There is no high availability provided for the connection between the host
  and the storage server

[Diagram: application program → VFS → block device layer (page cache,
I/O scheduler) → SCSI driver → zFCP driver → qdio driver → chpids 5–8 →
switch → HBAs 5–8 → storage server (Server 0 / Server 1, device adapters,
ranks 2, 4, 6, 8)]
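One plausible way to attach such a single LUN over one FCP channel on a system without automatic LUN scanning; every bus-ID, WWPN, and LUN below is made up, and on newer distributions with NPIV the LUN may appear without the explicit `unit_add`:

```shell
# Set the FCP device (channel) online
chccwdev -e 0.0.5000

# Register the LUN behind the remote port with the zfcp driver
echo 0x4010400000000000 > \
  /sys/bus/ccw/drivers/zfcp/0.0.5000/0x500507630300c562/unit_add

# The per-device queue depth (default 32) limits how many requests
# can be outstanding before the first completion is required
cat /sys/bus/scsi/devices/0:0:0:0/queue_depth
```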
FCP/SCSI LUN I/O to a single disk with multipathing

• VFS sees one device
• The device mapper sees the multibus or failover alternatives to the same
  disk
• Administrative effort is required to define all paths to one disk
• Additional processor cycles are spent in the device mapper to map each
  request to the desired path for the disk

[Diagram: as before, with multipath and dm layers between VFS and the block
device layer]
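The multibus/failover choice is made in the device-mapper multipath configuration; an illustrative `/etc/multipath.conf` fragment (values are examples, not recommendations):

```
# /etc/multipath.conf (illustrative fragment)
defaults {
    # multibus: spread I/O round-robin over all paths
    # failover: use one path, switch only on error
    path_grouping_policy    multibus
    user_friendly_names     yes
}
```

`multipath -ll` then shows the resulting path groups for each LUN.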
FCP/SCSI LUN I/O to a linear or striped logical volume

• VFS sees one device (the logical volume)
• The device mapper sees the logical volume and the physical volumes
• With a striped logical volume the I/Os can be balanced well over the entire
  storage server, overcoming the limits of a single rank, a single device
  adapter, or a single server
• To ensure high availability, the logical volume should be used in
  combination with multipathing

[Diagram: as before, with LVM and dm layers between VFS and the block device
layer]
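Combining the two layers, the multipath devices serve as the physical volumes under the striped logical volume; all device and volume names here are placeholders:

```shell
# Physical volumes on multipath devices, so each stripe member
# is itself highly available
pvcreate /dev/mapper/mpatha /dev/mapper/mpathb
vgcreate scsi_vg /dev/mapper/mpatha /dev/mapper/mpathb

# Stripe across both multipath devices
lvcreate -i 2 -I 64 -L 20G -n striped_lv scsi_vg
```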
