

IBM Flex System
Interoperability Guide
Quick reference for IBM Flex System
Interoperability

Covers internal components and
external connectivity

Latest updates as of
30 January 2013




                                                    David Watts
                                                     Ilya Krutov




ibm.com/redbooks                        Redpaper
International Technical Support Organization

IBM Flex System Interoperability Guide

30 January 2013




                                               REDP-FSIG-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.




This edition applies to:

IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System x220 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System x440 Compute Node
IBM Flex System p260 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
IBM 42U 1100 mm Enterprise V2 Dynamic Rack

© Copyright International Business Machines Corporation 2012, 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

                 Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
                 Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi

                 Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
                 The team who wrote this paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
                 Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
                 Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
                 Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

                 Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 30 January 2013 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 8 December 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 29 November 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 13 November 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
                 2 October 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x

                 Chapter 1. Chassis interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
                 1.1 Chassis to compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
                 1.2 Switch to adapter interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
                    1.2.1 Ethernet switches and adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
                    1.2.2 Fibre Channel switches and adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
                    1.2.3 InfiniBand switches and adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
                 1.3 Switch to transceiver interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
                    1.3.1 Ethernet switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
                    1.3.2 Fibre Channel switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
                    1.3.3 InfiniBand switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
                 1.4 Switch upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
                    1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch . . . . . . . . . . . 9
                    1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch . . . . . . . . . 10
                    1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch . . . . . . . . . . . . . . . . . . 11
                    1.4.4 IBM Flex System IB6131 InfiniBand Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
                    1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch. . . . . . . . . . . . . . . . . . . . . 12
                 1.5 vNIC and UFP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
                 1.6 Chassis power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
                 1.7 Rack to chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

                 Chapter 2. Compute node component compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . 17
                 2.1 Compute node-to-card interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
                 2.2 Memory DIMM compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
                    2.2.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
                    2.2.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
                 2.3 Internal storage compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
                    2.3.1 x86 compute nodes: 2.5-inch drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
                    2.3.2 x86 compute nodes: 1.8-inch drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
                    2.3.3 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
                 2.4 Embedded virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
                 2.5 Expansion node compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
                    2.5.1 Compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
                    2.5.2 Flex System I/O adapters - PCIe Expansion Node . . . . . . . . . . . . . . . . . . . . . . . 26


                    2.5.3 PCIe I/O adapters - PCIe Expansion Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
                   2.5.4 Internal storage - Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
                   2.5.5 RAID upgrades - Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

                 Chapter 3. Software compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
                 3.1 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
                    3.1.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
                    3.1.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
                 3.2 IBM Fabric Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

                 Chapter 4. Storage interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
                 4.1 Unified NAS storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
                 4.2 FCoE support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
                 4.3 iSCSI support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
                 4.4 NPIV support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
                 4.5 Fibre Channel support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
                    4.5.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
                    4.5.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

               Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

                 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
                 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
                 Other publications and online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
                 Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46




Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.




Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
     AIX®                                 POWER7+™                             Redbooks (logo)®
     BladeCenter®                         POWER7®                              RETAIN®
     DS8000®                              PowerVM®                             ServerProven®
     IBM Flex System™                     POWER®                               Storwize®
     IBM Flex System Manager™             PureFlex™                            System Storage®
     IBM®                                 RackSwitch™                          System x®
     Netfinity®                           Redbooks®                            XIV®
     Power Systems™                       Redpaper™

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

Other company, product, or service names may be trademarks or service marks of others.




Preface

                 To meet today’s complex and ever-changing business demands, you need a solid foundation
                 of compute, storage, networking, and software resources. This system must be simple to
                 deploy, and be able to quickly and automatically adapt to changing conditions. You also need
                 to be able to take advantage of broad expertise and proven guidelines in systems
                 management, applications, hardware maintenance, and more.

                 The IBM® PureFlex™ System combines no-compromise system designs along with built-in
                 expertise and integrates them into complete and optimized solutions. At the heart of PureFlex
                 System is the IBM Flex System™ Enterprise Chassis. This fully integrated infrastructure
                 platform supports a mix of compute, storage, and networking resources to meet the demands
                 of your applications.

                 The solution is easily scalable with the addition of another chassis with the required nodes.
                 With the IBM Flex System Manager™, multiple chassis can be monitored from a single panel.
                  The 14-node, 10U chassis delivers high-speed performance complete with integrated servers,
                 storage, and networking. This flexible chassis is simple to deploy, and scales to meet your
                 needs in the future.

                 This IBM Redpaper™ publication is a reference to compatibility and interoperability of
                 components inside and connected to IBM PureFlex System and IBM Flex System solutions.

                 The latest version of this document can be downloaded from:
                 http://www.redbooks.ibm.com/fsig



The team who wrote this paper
                 This paper was produced by a team of specialists from around the world working at the
                 International Technical Support Organization, Raleigh Center.

                 David Watts is a Consulting IT Specialist at the ITSO Center in Raleigh. He manages
                 residencies and produces IBM Redbooks® publications for hardware and software topics that
                 are related to IBM System x® and IBM BladeCenter® servers and associated client
                 platforms. He has authored over 300 books, papers, and web documents. David has worked
                 for IBM both in the US and Australia since 1989. He is an IBM Certified IT Specialist and a
                 member of the IT Specialist Certification Review Board. David holds a Bachelor of
                 Engineering degree from the University of Queensland (Australia).

                 Ilya Krutov is a Project Leader at the ITSO Center in Raleigh and has been with IBM since
                  1998. Before joining the ITSO, Ilya served at IBM as a Run Rate Team Leader, Portfolio
                 Manager, Brand Manager, Technical Sales Specialist, and Certified Instructor. Ilya has
                 expertise in IBM System x and BladeCenter products, server operating systems, and
                 networking solutions. He has a Bachelor’s degree in Computer Engineering from the Moscow
                 Engineering and Physics Institute.

                 Special thanks to Ashish Jain, the former author of this document.




Now you can become a published author, too!
                Here’s an opportunity to spotlight your skills, grow your career, and become a published
                author—all at the same time! Join an ITSO residency project and help write a book in your
                area of expertise, while honing your experience using leading-edge technologies. Your efforts
                will help to increase product acceptance and customer satisfaction, as you expand your
                network of technical contacts and relationships. Residencies run from two to six weeks in
                length, and you can participate either in person or as a remote resident working from your
                home base.

                Find out more about the residency program, browse the residency index, and apply online at:
                ibm.com/redbooks/residencies.html



Comments welcome
                Your comments are important to us!

                We want our papers to be as helpful as possible. Send us your comments about this paper or
                other IBM Redbooks publications in one of the following ways:
                   Use the online Contact us review Redbooks form found at:
                   ibm.com/redbooks
                   Send your comments in an email to:
                   redbooks@us.ibm.com
                   Mail your comments to:
                   IBM Corporation, International Technical Support Organization
                   Dept. HYTD Mail Station P099
                   2455 South Road
                   Poughkeepsie, NY 12601-5400



Stay connected to IBM Redbooks
                   Find us on Facebook:
                   http://www.facebook.com/IBMRedbooks
                   Follow us on Twitter:
                   http://twitter.com/ibmredbooks
                   Look for us on LinkedIn:
                   http://www.linkedin.com/groups?home=&gid=2130806
                   Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
                   weekly newsletter:
                   https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
                   Stay current on recent Redbooks publications with RSS Feeds:
                   http://www.redbooks.ibm.com/rss.html




Summary of changes

                 This section describes the technical changes made in this edition of the paper and in previous
                 editions. This edition might also include minor corrections and editorial changes that are not
                 identified.



30 January 2013
                 New information
                     More specifics about configuration support for chassis power supplies, Table 1-17 on
                     page 15.
                     Windows Server 2012 support, Table 3-1 on page 32.
                     Red Hat Enterprise Linux 5 support for the p260 model 23X, Table 3-2 on page 33

                 Changed information
                      The x440 restriction regarding the use of the ServeRAID M5115 is now removed with the
                      release of IMM2 firmware build 40a.
                     Updated the Fibre Channel support section, 4.5, “Fibre Channel support” on page 41.



8 December 2012
                 New information
                      Added Table 2-2 on page 19, which indicates the slots in which I/O adapters are supported
                      in Power Systems compute nodes.
                     The x440 now supports UDIMMs, Table 2-3 on page 20



29 November 2012
                 Changed information
                     Clarified that the use of expansion nodes requires that the second processor be installed
                     in the compute node, Table 2-10 on page 26.
                     Corrected the NPIV information, 4.4, “NPIV support” on page 41.
                      Clarified the supported NAS systems, 4.1, “Unified NAS storage” on page 38.



13 November 2012
                 This revision reflects the addition, deletion, or modification of new and changed information
                 described below.




New information
                  Added information about these new products:
                   –   IBM Flex System p260 Compute Node, 7895-23X
                   –   IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
                   –   IBM Flex System Fabric EN4093R 10Gb Scalable Switch
                   –   IBM Flex System CN4058 8-port 10Gb Converged Adapter
                   –   IBM Flex System EN4132 2-port 10Gb RoCE Adapter
                   –   IBM Flex System Storage® Expansion Node
                   –   IBM Flex System PCIe Expansion Node
                   –   IBM PureFlex System 42U Rack
                   –   IBM Flex System V7000 Storage Node
                   The x220 now supports 32 GB LRDIMMs, Table 2-3 on page 20.
                  The Power Systems™ compute nodes support new DIMMs, Table 2-4 on page 21.
                  New 2100W power supply option for the Enterprise Chassis, 1.6, “Chassis power
                  supplies” on page 14.
                  New section covering Features on Demand upgrades for scalable switches, 1.4, “Switch
                  upgrades” on page 9.

               Changed information
                  Moved the FCoE and NPIV tables to Chapter 4, “Storage interoperability” on page 37.
                  Added machine types & models (MTMs) for the x220 and x440 when ordered via AAS
                  (e-config), Table 1-1 on page 2
                  Added footnote regarding power management and the use of 14 Power Systems compute
                  nodes with 32 GB DIMMs, Table 1-1 on page 2
                   Added AAS (e-config) feature codes to various tables of x86 compute node options. Note
                   that the AAS feature codes for the x220 and x440 are the same as those used in the HVEC
                   system (x-config). However, the AAS feature codes for the x240 differ from the equivalent
                   HVEC feature codes. This is noted in the table.
                  Updated the FCoE table, 4.2, “FCoE support” on page 39
                  Updated the vNIC table, Table 1-14 on page 13
                  Clarified that the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) and x240 USB
                  Enablement Kit (49Y8119) cannot be installed at the same time, Table 2-6 on page 23.
                  Updated the table of supported 2.5-inch drives, Table 2-5 on page 22.
                  Updated the operating system table, Table 3-1 on page 32



2 October 2012
               This revision reflects the addition, deletion, or modification of new and changed information
               described below.

               New information
                  Temporary restrictions on the use of network and storage adapters with the x440, page 18

               Changed information
                  Updated the x86 memory table, Table 2-3 on page 20
                  Updated the FCoE table, 4.2, “FCoE support” on page 39


                   Updated the operating system table, Table 3-1 on page 32
                   Clarified the support of the Pass-thru module and Fibre Channel switches with IBM Fabric
                   Manager, Table 3-4 on page 35.






    Chapter 1.   Chassis interoperability
                 The IBM Flex System Enterprise Chassis is a 10U next-generation server platform with
                 integrated chassis management. It is a compact, high-density, high-performance,
                 rack-mountable, and scalable server platform. It supports up to 14 one-bay compute nodes
                 that share common resources, such as power, cooling, management, and I/O, within a single
                 Enterprise Chassis. It can also support up to seven 2-bay compute nodes or three 4-bay
                 compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and
                 4-bay compute nodes to meet your specific hardware needs.

                 Topics in this chapter are:
                     1.1, “Chassis to compute node” on page 2
                     1.2, “Switch to adapter interoperability” on page 3
                     1.3, “Switch to transceiver interoperability” on page 5
                     1.4, “Switch upgrades” on page 9
                     1.5, “vNIC and UFP support” on page 13
                     1.6, “Chassis power supplies” on page 14
                     1.7, “Rack to chassis” on page 16




1.1 Chassis to compute node
                    Table 1-1 lists the maximum number of compute nodes that can be installed in the chassis.

Table 1-1 Maximum number of compute nodes installed in the chassis
    Compute nodes                                                 Machine type            Maximum number of
                                                                                          compute nodes in the
                                                          System x     Power System       Enterprise Chassis
                                                          (x-config)   (e-config)
                                                                                          8721-A1x       7893-92X
                                                                                          (x-config)     (e-config)

    x86 compute nodes

    IBM Flex System x220 Compute Node                     7906         7906-25X           14             14

    IBM Flex System x240 Compute Node                     8737         7863-10X           14             14

    IBM Flex System x440 Compute Node                     7917         7917-45X           7              7

    IBM Power Systems compute nodes

    IBM Flex System p24L Compute Node                     None          1457-7FL          14a            14a

    IBM Flex System p260 Compute Node (POWER7®)           None         7895-22X           14a            14a

    IBM Flex System p260 Compute Node (POWER7+™)          None         7895-23X           14a            14a

    IBM Flex System p460 Compute Node                     None         7895-42X           7a             7a

    Management node

    IBM Flex System Manager                               8731-A1x     7955-01M           1b             1b
     a. For Power Systems compute nodes: if the chassis is configured with the power management policy “AC Power
        Source Redundancy with Compute Node Throttling Allowed”, some maximum chassis configurations that contain
        Power Systems compute nodes with large populations of 32 GB DIMMs can leave the chassis with insufficient
        power to power on all 14 compute node bays. In such circumstances, only 13 of the 14 bays can be powered on.
     b. One Flex System Manager management node can manage up to four chassis.




1.2 Switch to adapter interoperability
                   In this section, we describe switch-to-adapter interoperability.


1.2.1 Ethernet switches and adapters
                  Table 1-2 lists Ethernet switch to card compatibility.

                   Switch upgrades: To maximize the usable port count on the adapters, the switches may
                   need additional license upgrades. See 1.4, “Switch upgrades” on page 9 for details.


Table 1-2 Ethernet switch to card compatibility
                                                           CN4093      EN4093R      EN4093      EN4091        EN2092
                                                           10Gb        10Gb         10Gb        10Gb          1Gb
                                                           Switch      Switch       Switch      Pass-thru     Switch

                                           Part number     00D5823     95Y3309      49Y4270     88Y6043       49Y4294

 Part         Feature                                      A3HH /      A3J6 /       A0TB /      A1QV /        A0TF /
 number       codesa                    Feature codesa     ESW2        ESW7         3593        3700          3598

 None         None             x220 Embedded 1 Gb          Yesb        Yes          Yes         No            Yes

 None         None             x240 Embedded 10 Gb         Yes         Yes          Yes         Yes           Yes

 None         None             x440 Embedded 10 Gb         Yes         Yes          Yes         Yes           Yes

 49Y7900      A1BR / 1763      EN2024 4-port 1Gb           Yes         Yes          Yes         Yesc          Yes
                               Ethernet Adapter

 90Y3466      A1QY / EC2D      EN4132 2-port 10 Gb         No          Yes          Yes         Yes           No
                               Ethernet Adapter

 None         None / 1762      EN4054 4-port 10Gb          Yes         Yes          Yes         Yesc          Yes
                               Ethernet Adapter

 90Y3554      A1R1 / 1759      CN4054 10Gb Virtual         Yes         Yes          Yes         Yesc          Yes
                               Fabric Adapter

 None         None / EC24      CN4058 8-port 10Gb          Yesd        Yesd         Yesd        Yesc          Yese
                               Converged Adapter

 None         None / EC26      EN4132 2-port 10Gb          No          Yes          Yes         Yes           No
                               RoCE Adapter
   a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
      feature code is for configurations ordered through the IBM Power Systems channel (e-config)
   b. 1 Gb is supported on the CN4093’s two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support
      1 GbE speeds.
   c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
   d. Only six of the eight ports of the CN4058 adapter are connected when used with the CN4093, EN4093R, and
      EN4093 switches.
   e. Only four of the eight ports of the CN4058 adapter are connected when used with the EN2092 switch.




1.2.2 Fibre Channel switches and adapters
                   Table 1-3 lists Fibre Channel switch to card compatibility.

Table 1-3 Fibre Channel switch to card compatibility
                                                                FC5022      FC5022      FC5022          FC3171      FC3171
                                                                16Gb        16Gb        16Gb            8Gb         8Gb
                                                                12-port     24-port     24-port         switch      Pass-thru
                                                                                        ESB

                                               Part number      88Y6374     00Y3324     90Y9356         69Y1930     69Y1934

    Part       Feature                      Feature codesa      A1EH /      A3DP /      A2RQ /          A0TD /      A0TJ /
    number     codesa                                           3770        ESW5        3771            3595        3591

    69Y1938    A1BM / 1764     FC3172 2-port 8Gb FC             Yes         Yes         Yes             Yes         Yes
                               Adapter

    95Y2375    A2N5 / EC25     FC3052 2-port 8Gb FC             Yes         Yes         Yes             Yes         Yes
                               Adapter

    88Y6370    A1BP / EC2B     FC5022 2-port 16Gb FC            Yes         Yes         Yes             No          No
                               Adapter
     a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
        feature code is for configurations ordered through the IBM Power Systems channel (e-config)


1.2.3 InfiniBand switches and adapters
                   Table 1-4 lists InfiniBand switch to card compatibility.

Table 1-4 InfiniBand switch to card compatibility
                                                                                                          IB6131 InfiniBand
                                                                                                          Switch

                                                                                       Part number        90Y3450
    Part         Feature
    number       codesa                                                               Feature codesa     A1EK / 3699

    90Y3454      A1QZ / EC2C          IB6132 2-port FDR InfiniBand Adapter                                Yesb

    None         None / 1761          IB6132 2-port QDR InfiniBand Adapter                                Yes
     a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
        feature code is for configurations ordered through the IBM Power Systems channel (e-config)
     b. To operate at FDR speeds, the IB6131 switch will need the FDR upgrade, as described in 1.4, “Switch upgrades” on
        page 9




1.3 Switch to transceiver interoperability
                This section specifies the transceivers and direct-attach copper (DAC) cables supported by
                the various IBM Flex System I/O modules.


1.3.1 Ethernet switches
                Support for transceivers and cables for Ethernet switch modules is shown in Table 1-5.

Table 1-5 Modules and cables supported in Ethernet I/O modules
                                                        CN4093    EN4093R      EN4093       EN4091       EN2092
                                                        10Gb      10Gb         10Gb         10Gb         1Gb
                                                        Switch    Switch       Switch       Pass-thru    Switch

                                         Part number    00D5823   95Y3309      49Y4270      88Y6043      49Y4294

 Part        Feature                  Feature codesa    A3HH /    A3J6 /       A0TB /       A1QV /       A0TF /
 number      codesa                                     ESW2      ESW7         3593         3700         3598

 SFP transceivers - 1 Gbps

 81Y1622     3269 /    IBM SFP SX Transceiver           Yes       Yes          Yes          Yes          Yes
             EB2A      (1000Base-SX)

 81Y1618     3268 /    IBM SFP RJ45 Transceiver         Yes       Yes          Yes          Yes          Yes
             EB29      (1000Base-T)

 90Y9424     A1PN /    IBM SFP LX Transceiver           Yes       Yes          Yes          Yes          Yes
             ECB8      (1000Base-LX)

 SFP+ transceivers - 10 Gbps

 44W4408     4942 /    10 GBase-SR SFP+ (MMFiber)       Yes       Yes          Yes          Yes          Yes
             3282

 46C3447     5053 /    IBM SFP+ SR Transceiver          Yes       Yes          Yes          Yes          Yes
             EB28      (10GBase-SR)

 90Y9412     A1PM /    IBM SFP+ LR Transceiver          Yes       Yes          Yes          Yes          Yes
             ECB9      (10GBase-LR)

 QSFP+ transceivers - 40 Gbps

 49Y7884     A1DR /    IBM QSFP+ SR Transceiver         Yes       Yes          Yes          No           No
             EB27      (40Gb)

 8 Gb Fibre Channel SFP+ transceivers

 44X1964     5075 /    IBM 8 Gb SFP+ SW Optical         Yes       No           No           No           No
             3286      Transceiver

 SFP+ direct-attach copper (DAC) cables

 90Y9427     A1PH /    1m IBM Passive DAC SFP+          Yes       Yes          Yes          No           Yes
             None

 90Y9430     A1PJ /    3m IBM Passive DAC SFP+          Yes       Yes          Yes          No           Yes
             None

 90Y9433     A1PK /    5m IBM Passive DAC SFP+          Yes       Yes          Yes          No           Yes
             ECB6





    49Y7886     A1DL /     1m 40 Gb QSFP+ to 4 x 10 Gb        Yes         Yes          Yes          No            No
                EB24       SFP+ Cable

    49Y7887     A1DM /     3m 40 Gb QSFP+ to 4 x 10 Gb        Yes         Yes          Yes          No            No
                EB25       SFP+ Cable

    49Y7888     A1DN /     5m 40 Gb QSFP+ to 4 x 10 Gb        Yes         Yes          Yes          No            No
                EB26       SFP+ Cable

    95Y0323     A25A /     IBM 1m 10 GBase Copper             No          No           No           Yes           No
                None       SFP+ Twinax (Active)

    95Y0326     A25B /     IBM 3m 10 GBase Copper             No          No           No           Yes           No
                None       SFP+ Twinax (Active)

    95Y0329     A25C /     IBM 5m 10 GBase Copper             No          No           No           Yes           No
                None       SFP+ Twinax (Active)

    81Y8295     A18M /     1m 10 GbE Twinax Act Copper        No          No           No           Yes           No
                None       SFP+ DAC (active)

    81Y8296     A18N /     3m 10 GE Twinax Act Copper         No          No           No           Yes           No
                None       SFP+ DAC (active)

    81Y8297     A18P /     5m 10 GE Twinax Act Copper         No          No           No           Yes           No
                None       SFP+ DAC (active)

    QSFP cables

    49Y7890     A1DP /     1m IBM QSFP+ to QSFP+              Yes         Yes          Yes          No            No
                EB2B       Cable

    49Y7891     A1DQ /     3m IBM QSFP+ to QSFP+              Yes         Yes          Yes          No            No
                EB2H       Cable

    Fiber optic cables

    90Y3519     A1MM /     10m IBM MTP Fiber Optical          Yes         Yes          Yes          No            No
                EB2J       Cable

    90Y3521     A1MN /     30m IBM MTP Fiber Optical          Yes         Yes          Yes          No            No
             EC2K       Cable
     a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
        feature code is for configurations ordered through the IBM Power Systems channel (e-config)




1.3.2 Fibre Channel switches
                 Support for transceivers and cables for Fibre Channel switch modules is shown in Table 1-6.

Table 1-6 Modules and cables supported in Fibre Channel I/O modules
                                                             FC5022      FC5022       FC5022      FC3171      FC3171
                                                             16Gb        16Gb         16Gb        8Gb         8Gb
                                                             12-port     24-port      24-port     switch      Pass-thru
                                                                                      ESB

                                            Part number      88Y6374     00Y3324      90Y9356     69Y1930     69Y1934

 Part        Feature                     Feature codesa      A1EH /      A3DP /       A2RQ /      A0TD /      A0TJ /
 number      codesa                                          3770        ESW5         3771        3595        3591

 16 Gb transceivers

 88Y6393     A22R /     Brocade 16 Gb SFP+ Optical           Yes         Yes          Yes         No          No
             5371       Transceiver

 8 Gb transceivers

 88Y6416     A2B9 /     Brocade 8 Gb SFP+ SW Optical         Yes         Yes          Yes         No          No
             5370       Transceiver

 44X1964     5075 /     IBM 8 Gb SFP+ SW Optical             No          No           No          Yes         Yes
             3286       Transceiver

 4 Gb transceivers

 39R6475     4804 /     4 Gb SFP Transceiver Option          No          No           No          Yes         Yes
             3238
   a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
      feature code is for configurations ordered through the IBM Power Systems channel (e-config)




1.3.3 InfiniBand switches
                   Support for transceivers and cables for InfiniBand switch modules is shown in Table 1-7.

                     Compliant cables: The IB6131 switch supports all cables compliant to the InfiniBand
                     Architecture specification.


Table 1-7 Modules and cables supported in InfiniBand I/O modules
                                                                                                     IB6131 InfiniBand
                                                                                                     Switch

                                                                                    Part number      90Y3450
    Part         Feature
    number       codesa                                                          Feature codesa      A1EK / 3699

    49Y9980      3866 / 3249                IB QDR 3m QSFP Cable Option (passive)                    Yes

    90Y3470      A227 / ECB1                3m FDR InfiniBand Cable (passive)                        Yes
     a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
        feature code is for configurations ordered through the IBM Power Systems channel (e-config)




1.4 Switch upgrades
                 Various IBM Flex System switches can be upgraded via software licenses to enable
                 additional ports or features.

                 Switches covered in this section:
                     1.4.1, “IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch” on page 9
                     1.4.2, “IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch” on page 10
                     1.4.3, “IBM Flex System EN2092 1Gb Ethernet Scalable Switch” on page 11
                     1.4.4, “IBM Flex System IB6131 InfiniBand Switch” on page 11
                     1.4.5, “IBM Flex System FC5022 16Gb SAN Scalable Switch” on page 12


1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
                  The CN4093 switch is initially licensed with fourteen 10 GbE internal ports, two external
                  10 GbE SFP+ ports, and six external Omni Ports enabled.

                  Further ports can be activated: Upgrade 1 (00D5845) enables 14 additional internal ports and
                  two external 40 GbE QSFP+ uplink ports, and Upgrade 2 (00D5847) enables 14 additional
                  internal ports and six additional external Omni Ports.

                  Upgrade 1 and Upgrade 2 can be applied to the switch independently of each other, or in
                  combination for full feature capability.

                 Table 1-8 shows the part numbers for ordering the switches and the upgrades.

Table 1-8 CN4093 10Gb Converged Scalable Switch part numbers and port upgrades
 Part        Feature          Description                                         Total ports enabled
 number      codea
                                                                 Internal   External       External       External
                                                                 10Gb       10Gb SFP+      10Gb Omni      40Gb QSFP+

 00D5823     A3HH / ESW2      Base switch (no upgrades)          14         2              6              0

 00D5845     A3HL / ESU1      Add Upgrade 1                      28         2              6               2

 00D5847     A3HM / ESU2      Add Upgrade 2                      28         2              12             0

 00D5845     A3HL / ESU1      Add both Upgrade 1 and             42         2              12             2
 00D5847     A3HM / ESU2      Upgrade 2
   a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
      code is for configurations ordered through the IBM Power Systems channel.

                  Each upgrade license enables additional internal ports. To take full advantage of those ports,
                  each compute node needs the appropriate I/O adapter installed (see the sketch after this list):
                     The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
                     each of two switches)
                     Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the
                     adapter to each switch) to use all internal ports
                     Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to
                     each switch) to use all internal ports
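
                  The mapping from upgrade licenses to enabled ports, and from enabled internal ports to the
                  adapter ports that are needed in each compute node, is simple arithmetic. The following
                  minimal Python sketch (our illustration only; it is not part of any IBM tool) reproduces the
                  CN4093 totals that are shown in Table 1-8. The same pattern applies to the EN4093,
                  EN4093R, and EN2092 upgrades that are described in the following sections.

# Minimal sketch: how the CN4093 port totals in Table 1-8 accumulate as the
# Upgrade 1 and Upgrade 2 Features on Demand licenses are applied.
BASE = {"internal_10gb": 14, "external_sfp": 2, "external_omni": 6, "external_qsfp_40gb": 0}
UPGRADE_1 = {"internal_10gb": 14, "external_qsfp_40gb": 2}  # 00D5845
UPGRADE_2 = {"internal_10gb": 14, "external_omni": 6}       # 00D5847

def enabled_ports(*upgrades):
    """Return the total ports enabled on the base switch plus any applied upgrades."""
    totals = dict(BASE)
    for upgrade in upgrades:
        for port_type, count in upgrade.items():
            totals[port_type] += count
    return totals

print(enabled_ports())                      # base: 14 internal, 2 SFP+, 6 Omni
print(enabled_ports(UPGRADE_1))             # Upgrade 1: 28 internal, two 40 Gb QSFP+ uplinks
print(enabled_ports(UPGRADE_1, UPGRADE_2))  # both: 42 internal, 12 Omni, two 40 Gb QSFP+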




1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch
                    The EN4093 and EN4093R are initially licensed with fourteen 10 Gb internal ports and ten
                    10 Gb external uplink ports enabled. Further ports can be enabled: Upgrade 1 adds 14
                    additional internal ports and the two 40 Gb external uplink ports, and Upgrade 2 adds
                    another 14 internal ports and four additional 10 Gb SFP+ uplink ports.

                    Upgrade 1 must be applied before Upgrade 2 can be applied. Both are IBM Features on
                    Demand license upgrades.

                   Table 1-9 lists the available parts and upgrades.

Table 1-9 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades
 Part          Feature          Product description                                        Total ports enabled
 number        codea
                                                                                Internal    10 Gb uplink     40 Gb uplink

 49Y4270       A0TB / 3593      IBM Flex System Fabric EN4093 10Gb              14          10               0
                                Scalable Switch
                                   10x external 10 Gb uplinks
                                   14x internal 10 Gb ports

 95Y3309       A3J6 / ESW7      IBM Flex System Fabric EN4093R 10Gb             14          10               0
                                Scalable Switch
                                   10x external 10 Gb uplinks
                                   14x internal 10 Gb ports

 49Y4798       A1EL / 3596      IBM Flex System Fabric EN4093 10Gb              28          10               2
                                Scalable Switch (Upgrade 1)
                                   Adds 2x external 40 Gb uplinks
                                   Adds 14x internal 10 Gb ports

 88Y6037       A1EM / 3597      IBM Flex System Fabric EN4093 10Gb              42          14               2
                                Scalable Switch (Upgrade 2) (requires
                                Upgrade 1):
                                   Adds 4x external 10 Gb uplinks
                                   Add 14x internal 10 Gb ports
     a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
        code is for configurations ordered through the IBM Power Systems channel.

                   Each upgrade license enables additional internal ports. To take full advantage of those ports,
                   each compute node needs the appropriate I/O adapter installed:
                       The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
                       each of two switches)
                       Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch)
                       to use all internal ports
                       Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all
                       internal ports

                     Consideration: Adding Upgrade 2 enables an additional 14 internal ports. This allows a
                     total of 42 internal ports -- three ports connected to each of the 14 compute nodes. To take
                     full advantage of all 42 internal ports, a 6-port or 8-port adapter is required, such as the
                     CN4058 8-port 10Gb Converged Adapter.

                     Upgrade 2 still provides a benefit with a 4-port adapter because this upgrade enables an
                     extra four external 10 Gb uplinks as well.


1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
            The EN2092 comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled.
            Further ports can be enabled with IBM Features on Demand license upgrades: Upgrade 1
            adds 14 internal and 10 external 1 Gb ports, and the 10 Gb Uplinks upgrade adds the four
            external 10 Gb uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in
            either order.

           Table 1-10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades
            Part number      Feature codea       Product description

            49Y4294          A0TF / 3598         IBM Flex System EN2092 1Gb Ethernet Scalable Switch
                                                    14 internal 1 Gb ports
                                                    10 external 1 Gb ports

            90Y3562          A1QW / 3594         IBM Flex System EN2092 1Gb Ethernet Scalable Switch
                                                 (Upgrade 1)
                                                    Adds 14 internal 1 Gb ports
                                                    Adds 10 external 1 Gb ports

            49Y4298          A1EN / 3599         IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb
                                                 Uplinks)
                                                    Adds 4 external 10 Gb uplinks
              a. The first feature code listed is for configurations ordered through System x sales channels. The
                 second feature code is for configurations ordered through the IBM Power Systems channel.


           The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional
           internal ports. To take full advantage of those ports, each compute node needs the
           appropriate I/O adapter installed:
              The base switch requires a two-port Ethernet adapter installed in each compute node (one
              port of the adapter goes to each of two switches)
              Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports
              of the adapter to each switch)


1.4.4 IBM Flex System IB6131 InfiniBand Switch
           The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18
           FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for
           connections to nodes.

           This switch ships standard with quad data rate (QDR) and can be upgraded to fourteen data
           rate (FDR) with an IBM Features on Demand license upgrade. Ordering information is listed
           in Table 1-11.

           Table 1-11 IBM Flex System IB6131 InfiniBand Switch Part Number and upgrade option
            Part number     Feature codesa      Product Name

            90Y3450         A1EK / 3699         IBM Flex System IB6131 InfiniBand Switch
                                                   18 external QDR ports
                                                   14 QDR internal ports

            90Y3462         A1QX / ESW1         IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade)
                                                   Upgrades all ports to FDR speeds
              a. The first feature code listed is for configurations ordered through System x sales channels. The
                 second feature code is for configurations ordered through the IBM Power Systems channel.




1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch
                   Table 1-12 lists the available port and feature upgrades for the FC5022 16Gb SAN Scalable
                   Switches. These upgrades are all IBM Features on Demand license upgrades.

Table 1-12 FC5022 switch upgrades
 Part          Feature                                                                24-port 16 Gb   24-port 16 Gb   16 Gb SAN
 number        codesa           Description                                          ESB switch      SAN switch      switch
                                                                                      90Y9356         00Y3324         88Y6374

 88Y6382       A1EP / 3772      FC5022 16Gb SAN Scalable Switch (Upgrade 1)          No              No             Yes

 88Y6386       A1EQ / 3773      FC5022 16Gb SAN Scalable Switch (Upgrade 2)          Yes             Yes            Yes

 00Y3320       A3HN / ESW3      FC5022 16Gb Fabric Watch Upgrade                     No              Yes            Yes

 00Y3322       A3HP / ESW4      FC5022 16Gb ISL/Trunking Upgrade                     No              Yes            Yes
     a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is
        for configurations ordered through the IBM Power Systems channel.

                   Table 1-13 shows the total number of active ports on the switch after applying compatible port
                   upgrades.

                   Table 1-13 Total port counts after applying upgrades
                                                                              Total number of active ports

                                                              24-port 16 Gb         24-port 16 Gb     16 Gb SAN switch
                                                              ESB SAN switch        SAN switch

                     Ports on Demand upgrade                  90Y9356               00Y3324           88Y6374

                     Included with base switch                24                    24                12

                     Upgrade 1, 88Y6382 (adds 12 ports)       Not supported         Not supported     24

                     Upgrade 2, 88Y6386 (adds 24 ports)       48                    48                48
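
                    The totals in Table 1-13 follow a simple Ports on Demand calculation: start with the ports
                    included with the base switch and add the ports enabled by each upgrade that the switch
                    supports. A minimal sketch follows (Python; the data structures are illustrative, with
                    values taken from Table 1-12 and Table 1-13):

                    # Minimal sketch: active FC5022 ports after Ports on Demand upgrades.
                    # Base ports and upgrade compatibility are taken from Tables 1-12 and 1-13.
                    BASE_PORTS = {"90Y9356": 24, "00Y3324": 24, "88Y6374": 12}
                    UPGRADES = {
                        "Upgrade 1 (88Y6382)": {"ports": 12, "supported_on": {"88Y6374"}},
                        "Upgrade 2 (88Y6386)": {"ports": 24, "supported_on": {"90Y9356", "00Y3324", "88Y6374"}},
                    }

                    def active_ports(switch, applied_upgrades):
                        total = BASE_PORTS[switch]
                        for name in applied_upgrades:
                            upgrade = UPGRADES[name]
                            if switch not in upgrade["supported_on"]:
                                raise ValueError(f"{name} is not supported on switch {switch}")
                            total += upgrade["ports"]
                        return total

                    # 12-port base switch 88Y6374 with both upgrades: 12 + 12 + 24 = 48 ports
                    print(active_ports("88Y6374", ["Upgrade 1 (88Y6382)", "Upgrade 2 (88Y6386)"]))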




1.5 vNIC and UFP support
                 Table 1-14 lists vNIC (virtual NIC) and UFP (Universal Fabric Port) support by combinations
                 of switch, adapter, and operating system.

                 In the table, we use the following abbreviations for the vNIC modes:
                    vNIC1 = IBM Virtual Fabric Mode
                    vNIC2 = Switch Independent Mode

                   10 GbE adapters only: Only 10 Gb Ethernet adapters support vNIC and UFP. 1 GbE
                   adapters do not support these features.


Table 1-14 Supported vNIC modes
              Flex System I/O module      EN4093 10Gb Scalable Switch           EN4091 10Gb Ethernet Pass-thru
                                             EN4093R 10Gb Switch
                                         CN4093 10Gb Converged Switch

                   Top-of-rack switch                  None                        IBM RackSwitch™ G8124E
                                                                                     IBM RackSwitch G8264

                    Operating system     Windows      Linuxab    VMwarec      Windows       Linuxab      VMwarec

 10Gb onboard LOM (x240 and x440)        vNIC1        vNIC1      vNIC1        vNIC1         vNIC1        vNIC1
                                         vNIC2        vNIC2      vNIC2        vNIC2         vNIC2        vNIC2
                                         UFPd         UFPd       UFP          UFP           UFP          UFP

 CN4054 10Gb Virtual Fabric Adapter      vNIC1        vNIC1      vNIC1        vNIC1         vNIC1        vNIC1
 90Y3554 (e-config #1759)                vNIC2        vNIC2      vNIC2        vNIC2         vNIC2        vNIC2
                                         UFPd         UFPd       UFPd         UFP           UFP          UFP

 EN4054 4-port 10Gb Ethernet Adapter     The EN4054 4-port 10Gb Ethernet Adapter does not support vNIC or UFP.
 (e-config #1762)

 EN4132 2-port 10 Gb Ethernet Adapter    The EN4132 2-port 10 Gb Ethernet Adapter does not support vNIC or UFP.
 90Y3466 (e-config #EC2D)

 CN4058 8-port 10Gb Converged            The CN4058 8-port 10Gb Converged Adapter does not support vNIC or UFP.
 Adapter, (e-config #EC24)

 EN4132 2-port 10Gb RoCE Adapter,        The EN4132 2-port 10Gb RoCE Adapter does not support vNIC or UFP.
 (e-config #EC26)
    a. Linux kernels with Xen are not supported with either vNIC1 or vNIC2. For support information, see IBM RETAIN®
       Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
    b. The combination of vNIC2 and iBoot is not supported for legacy booting with Linux.
    c. The combination of vNIC2 with VMware ESX 4.1 and storage protocols (FCoE and iSCSI) is not supported.
    d. The CN4093 10Gb Converged Switch is planned to support Universal Fabric Port (UFP) in 2Q/2013.




1.6 Chassis power supplies
                    Power supplies are available in either 2500W or 2100W capacities. The standard chassis
                    ships with two 2500W power supplies, and a maximum of six power supplies can be installed.
                    The 2100W power supplies are available only via CTO and through the System x ordering
                    channel.

                   Table 1-15 shows the ordering information for the Enterprise Chassis power supplies. Power
                   supplies cannot be mixed in the same chassis.

Table 1-15 Power supply module option part numbers
 Part           Feature          Description                                                        Chassis models
 number         codesa                                                                              where standard

 43W9049        A0UC / 3590      IBM Flex System Enterprise Chassis 2500W Power Module              8721-A1x (x-config)
                                                                                                    7893-92X (e-config)

 47C7633        A3JH / None      IBM Flex System Enterprise Chassis 2100W Power Module              None
     a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
        code is for configurations ordered through the IBM Power Systems channel.

                   A chassis powered by the 2100W power supplies cannot provide N+N redundant power
                   unless all the compute nodes are configured with 95W or lower Intel processors. N+1
                   redundancy is possible with any processors.
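
                    N+N and N+1 describe how much of the installed power-supply capacity is held in reserve:
                    with N+N, half of the installed supplies can fail without dropping load; with N+1, only one
                    supply is redundant. The following sketch illustrates that capacity arithmetic (Python; the
                    simple "usable capacity" model and the example values are assumptions for illustration, not
                    IBM power-sizing rules; use the IBM Power Configurator for real configurations):

                    # Minimal sketch: usable chassis power capacity under N+1 and N+N policies.
                    # Illustrative only; real sizing depends on throttling, oversubscription,
                    # and per-node load, so use the IBM Power Configurator for actual designs.
                    def usable_capacity_watts(installed_supplies, watts_per_supply, policy):
                        if policy == "N+1":
                            redundant = 1                          # one supply can fail
                        elif policy == "N+N":
                            redundant = installed_supplies // 2    # half of the supplies can fail
                        else:
                            raise ValueError("policy must be 'N+1' or 'N+N'")
                        return (installed_supplies - redundant) * watts_per_supply

                    # Example: six 2500W supplies compared with six 2100W supplies
                    for watts in (2500, 2100):
                        for policy in ("N+1", "N+N"):
                            print(f"6 x {watts}W, {policy}: {usable_capacity_watts(6, watts, policy)} W usable")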

                    Table 1-16 shows the nodes that are supported in a chassis powered by either the 2100W or
                    the 2500W modules.

Table 1-16 Compute nodes supported by the power supplies
 Node                                                                                 2100W               2500W
                                                                                      power supply        power supply

 IBM Flex System Manager management node                                              Yes                 Yes

 x220 (with or without Storage Expansion Node or PCIe Expansion Node)                 Yes                 Yes

 x240 (with or without Storage Expansion Node or PCIe Expansion Node)                 Yesa                Yesa

 x440                                                                                 Yesa                Yesa

 p24L                                                                                 No                  Yesa

 p260                                                                                 No                  Yesa

 p460                                                                                 No                  Yesa

 V7000 Storage Node (either primary or expansion node)                                Yes                 Yes
     a. Some restrictions based on the TDP power of the processors installed or the power policy enabled. See Table 1-17
        on page 15.




                    Table 1-17 on page 15 lists the number of compute nodes supported, based on the type and
                    number of power supplies installed in the chassis and the power policy enabled (N+N or
                    N+1).

                    In this table, an entry that shows the full complement for that node type (for example,
                    14 x240 nodes or 7 x440 nodes) indicates support with no restrictions on the number of
                    compute nodes that can be installed. A lower value indicates that the configuration is
                    supported but the number of compute nodes that can be installed is restricted.

Table 1-17 Specific number of compute nodes supported based on installed power supplies
 Compute     CPU                   2100W power supplies                             2500W power supplies
 node        TDP
             rating     N+1, N=5    N+1, N=4    N+1, N=3    N+N, N=3    N+1, N=5    N+1, N=4     N+1, N=3     N+N, N=3
                        6 total     5 total     4 total     6 total     6 total     5 total      4 total      6 total

 x240        60W        14          14          14          14          14          14           14           14

             70W        14          14          13          14          14          14           14           14

             80W        14          14          13          14          14          14           14           14

             95W        14          14          12          13          14          14           14           14

             115W       14          14          11          12          14          14           14           14

             130W       14          14          11          11          14          14           14           14

             135W       14          14          11          11          14          14           13           14

 x440        95W        7           7           6           6           7           7            7            7

             115W       7           7           5           6           7           7            7            7

             130W       7           7           5           5           7           7            6            7

 p24L        All                         Not supported                  14          14           12           13

 p260        All                         Not supported                  14          14           12           13

 p460        All                         Not supported                  7           7            6            6

 x220        50W        14          14          14          14          14          14           14           14

             60W        14          14          14          14          14          14           14           14

             70W        14          14          14          14          14          14           14           14

             80W        14          14          14          14          14          14           14           14

             95W        14          14          14          14          14          14           14           14

 FSM         95W        2           2           2           2           2           2            2            2

 V7000       N/A        3           3           3           3           3           3            3            3


                    Assumptions:
                       All compute nodes are fully configured
                       Throttling and oversubscription are enabled

                    Tip: Consult the Power configurator for exact configuration support:
                    http://ibm.com/systems/bladecenter/resources/powerconfig.html



1.7 Rack to chassis
                 IBM offers an extensive range of industry-standard and EIA-compatible rack enclosures and
                 expansion units. The flexible rack solutions help you consolidate servers and save space,
                 while allowing easy access to crucial components and cable management.

                 Table 1-18 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.

Table 1-18 The chassis supported in each rack cabinet
 Part number            Rack cabinet                                           Supports the
                                                                               Enterprise Chassis

 93634CX                IBM PureFlex System 42U Rack                           Yes (recommended)

 93634DX                IBM PureFlex System 42U Expansion Rack                 Yes (recommended)

 93634PX                IBM 42U 1100 mm Deep Dynamic rack                      Yes (recommended)

 201886X                IBM 11U Office Enablement Kit                          Yes

 93072PX                IBM S2 25U Static standard rack                        Yes

 93072RX                IBM S2 25U Dynamic standard rack                       Yes

 93074RX                IBM S2 42U standard rack                               Yes

 99564RX                IBM S2 42U Dynamic standard rack                       Yes

 93084PX                IBM 42U Enterprise rack                                Yes

 93604PX                IBM 42U 1200 mm Deep Dynamic Rack                      Yes

 93614PX                IBM 42U 1200 mm Deep Static rack                       Yes

 93624PX                IBM 47U 1200 mm Deep Static rack                       Yes

 9306-900               IBM Netfinity® 42U Rack                                No

 9306-910               IBM Netfinity 42U Rack                                 No

 9308-42P               IBM Netfinity Enterprise Rack                          No

 9308-42X               IBM Netfinity Enterprise Rack                          No

 Varies                 IBM NetBay 22U                                         No






    Chapter 2.   Compute node component
                 compatibility
                 This chapter lists the compatibility of components installed internally to each compute node.

                 Topics in this chapter are:
                     2.1, “Compute node-to-card interoperability” on page 18
                     2.2, “Memory DIMM compatibility” on page 20
                     2.3, “Internal storage compatibility” on page 22
                     2.4, “Embedded virtualization” on page 25
                     2.5, “Expansion node compatibility” on page 26




2.1 Compute node-to-card interoperability
                   Table 2-1 lists the available I/O adapters and their compatibility with compute nodes.

                     Power Systems compute nodes: Some I/O adapters supported by Power Systems
                     compute nodes are restricted to only some of the available slots. See Table 2-2 on page 19
                     for specifics.


Table 2-1 I/O adapter compatibility matrix - compute nodes
                                                                                           Supported servers
 System x       x-config    e-config
 part           feature     feature
 number         code        codea            I/O adapters                                 x220   x240   x440b   p24L   p260 22X   p260 23X   p460

 Ethernet adapters

 49Y7900        A1BR        1763 / A10Y      EN2024 4-port 1Gb Ethernet Adapter            Y      Y      Y       Y      Y          Y          Y

 90Y3466        A1QY        EC2D / A1QY      EN4132 2-port 10 Gb Ethernet Adapter          Y      Y      Y       N      N          N          N

 None           None        1762 / None      EN4054 4-port 10Gb Ethernet Adapter           N      N      N       Y      Y          Y          Y

 90Y3554        A1R1        1759 / A1R1      CN4054 10Gb Virtual Fabric Adapter            Y      Y      Y       N      N          N          N

 90Y3558        A1R0        1760 / A1R0      CN4054 Virtual Fabric Adapter Upgradec        Y      Y      Y       N      N          N          N

 None           None        EC24 / None      CN4058 8-port 10Gb Converged Adapter          N      N      N       Y      Y          Y          Y

 None           None        EC26 / None      EN4132 2-port 10Gb RoCE Adapter               N      N      N       Y      Y          Y          Y

 Fibre Channel adapters

 69Y1938        A1BM        1764 / A1BM      FC3172 2-port 8Gb FC Adapter                  Y      Y      Y       Y      Y          Y          Y

 95Y2375        A2N5        EC25 / A2N5      FC3052 2-port 8Gb FC Adapter                  Y      Y      Y       N      N          N          N

 88Y6370        A1BP        EC2B / A1BP      FC5022 2-port 16Gb FC Adapter                 Y      Y      Y       N      N          N          N

 InfiniBand adapters

 90Y3454        A1QZ        EC2C / A1QZ      IB6132 2-port FDR InfiniBand Adapter          Y      Y      Y       N      N          N          N

 None           None        1761 / None      IB6132 2-port QDR InfiniBand Adapter          N      N      N       Y      Y          Y          Y

 SAS

 90Y4390        A2XW        None / A2XW      ServeRAID M5115 SAS/SATA Controllerd          Y      Y      Yb      N      N          N          N
     a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
        supported). The second is for the x220 and x440.
      b. For compatibility as listed here, ensure that the x440 is running IMM2 firmware Build 40a or later.
     c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade needed per
        adapter.
     d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. See the
        ServeRAID M5115 Product Guide, http://www.redbooks.ibm.com/abstracts/tips0884.html?Open




                   For Power Systems compute nodes, Table 2-2 shows the specific I/O expansion slots in which
                   each of the supported adapters can be installed. Yes in the table means that the adapter is
                   supported in that I/O expansion slot.

                    Tip: Table 2-2 applies to Power Systems compute nodes only.


Table 2-2 Slot locations supported by I/O expansion cards in Power Systems compute nodes
 Feature      Description                                                      Slot 1   Slot 2   Slot 3   Slot 4
 code                                                                                            (p460)   (p460)

 10 Gb Ethernet

 EC24         IBM Flex System CN4058 8-port 10Gb Converged Adapter             Yes      Yes      Yes      Yes

 EC26         IBM Flex System EN4132 2-port 10Gb RoCE Adapter                  No       Yes      Yes      Yes

 1762         IBM Flex System EN4054 4-port 10Gb Ethernet Adapter              Yes      Yes      Yes      Yes

 1 Gb Ethernet

 1763         IBM Flex System EN2024 4-port 1Gb Ethernet Adapter               Yes      Yes      Yes      Yes

 InfiniBand

 1761         IBM Flex System IB6132 2-port QDR InfiniBand Adapter             No       Yes      No       Yes

 Fibre Channel

 1764         IBM Flex System FC3172 2-port 8Gb FC Adapter                     No       Yes      No       Yes




2.2 Memory DIMM compatibility
                This section covers memory DIMMs for both compute node families. It covers the
                following topics:
                   2.2.1, “x86 compute nodes” on page 20
                   2.2.2, “Power Systems compute nodes” on page 21


2.2.1 x86 compute nodes
                Table 2-3 lists the memory DIMM options for the x86 compute nodes.

Table 2-3 Supported memory DIMMs - x86 compute nodes
 Part       x-config   e-config        Description                                    x220   x240   x440
 number     feature    featurea,b

 Unbuffered DIMM (UDIMM) modules

 49Y1403    A0QS       EEM2 / A0QS     2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC    Yes    No     No
                                       DDR3 1333MHz LP UDIMM

 49Y1404    8648       EEM3 / 8648     4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC    Yes    Yes    Yes
                                       DDR3 1333MHz LP UDIMM

 Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz

 49Y1405    8940       EM05 / None     2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC    No     Yes    No
                                       DDR3 1333MHz LP RDIMM

 49Y1406    8941       EEM4 / 8941     4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC    Yes    Yes    Yes
                                       DDR3 1333MHz LP RDIMM

 49Y1407    8942       EM09 / 8942     4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC    Yes    Yes    Yes
                                       DDR3 1333MHz LP RDIMM

 49Y1397    8923       EM17 / 8923     8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC    Yes    Yes    Yes
                                       DDR3 1333MHz LP RDIMM

 49Y1563    A1QT       EM33 / A1QT     16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9      Yes    Yes    Yes
                                       ECC DDR3 1333MHz LP RDIMM

 49Y1400    8939       EEM1 / 8939     16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC   Yes    Yes    No
                                       DDR3 1066MHz LP RDIMM

 90Y3101    A1CP       EEM7 / None     32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC   No     No     No
                                       DDR3 1066MHz LP RDIMM

 Registered DIMMs (RDIMMs) - 1600 MHz

 49Y1559    A28Z       EEM5 / A28Z     4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC     Yes    Yes    Yes
                                       DDR3 1600MHz LP RDIMM

 90Y3178    A24L       EEMC / A24L     4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC     Yes    Yes    No
                                       DDR3 1600MHz LP RDIMM

 90Y3109    A292       EEM9 / A292     8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC     Yes    Yes    Yes
                                       DDR3 1600MHz LP RDIMM

 00D4968    A2U5       EEMB / A2U5     16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC   Yes    Yes    Yes
                                       DDR3 1600MHz LP RDIMM




Part        x-config    e-config          Description                                             x220     x240    x440
 number      feature     featurea,b

 Load-reduced DIMMs (LRDIMMs)

 49Y1567     A290        EEM6 / A290       16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9               No       Yes     Yes
                                           ECC DDR3 1333MHz LP LRDIMM

 90Y3105     A291        EEM8 / A291       32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9               Yes      Yes     Yes
                                           ECC DDR3 1333MHz LP LRDIMM
   a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
      supported). The second is for the x220 and x440.
   b. For memory DIMMs, the first feature code listed will result in two DIMMs each, whereas the second feature code
      listed contains only one DIMM each.


2.2.2 Power Systems compute nodes
                 Table 2-4 lists the supported memory DIMMs for Power Systems compute nodes.

Table 2-4 Supported memory DIMMs - Power Systems compute nodes
 Part        e-config    Description                                                       p24L    p260     p260    p460
 number      feature                                                                               22X      23X

 78P1011     EM04        2x 2 GB DDR3 RDIMM 1066 MHz                                       Yes     Yes      No      Yes

 78P0501     8196        2x 4 GB DDR3 RDIMM 1066 MHz                                       Yes     Yes      Yes     Yes

 78P0502     8199        2x 8 GB DDR3 RDIMM 1066 MHz                                       Yes     Yes      No      Yes

 78P1917     EEMD        2x 8 GB DDR3 RDIMM 1066 MHz                                       Yes     Yes      Yes     Yes

 78P0639     8145        2x 16 GB DDR3 RDIMM 1066 MHz                                      Yes     Yes      No      Yes

 78P1915     EEME        2x 16 GB DDR3 RDIMM 1066 MHz                                      Yes     Yes      Yes     Yes

 78P1539     EEMF        2x 32 GB DDR3 RDIMM 1066 MHz                                      Yes     Yes      Yes     Yes




2.3 Internal storage compatibility
                 This section covers supported internal storage for both compute node families. It covers the
                 following topics:
                    2.3.1, “x86 compute nodes: 2.5-inch drives” on page 22
                    2.3.2, “x86 compute nodes: 1.8-inch drives” on page 23
                    2.3.3, “Power Systems compute nodes” on page 24


2.3.1 x86 compute nodes: 2.5-inch drives
                 Table 2-5 lists the 2.5-inch drives for x86 compute nodes.

Table 2-5 Supported 2.5-inch SAS and SATA drives
 Part        x-config   e-config       Description                                        x220   x240   x440
 number      feature    featurea

 10K SAS hard disk drives

 90Y8877     A2XC       None / A2XC    IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD          N      N      Y

 42D0637     5599       3743 / 5599    IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD       Y      Y      N

 44W2264     5413       None / 5599    IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED       N      N      Y

 90Y8872     A2XD       None / A2XD    IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD          N      N      Y

 49Y2003     5433       3766 / 5433    IBM 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD       Y      Y      N

 81Y9650     A282       EHD4 / A282    IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD            Y      Y      Y

 15K SAS hard disk drives

 90Y8926     A2XB       None / A2XB    IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD          N      N      Y

 42D0677     5536       EHD1 / 5536    IBM 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD       Y      Y      N

 81Y9670     A283       EHD5 / A283    IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD            Y      Y      Y

 NL SAS hard disk drives

 81Y9690     A1P3       EHD6 / A1P3    IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD          Y      Y      Y

 90Y8953     A2XE       None / A2XE    IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD      N      N      Y

 42D0707     5409       EHD2 / 5409    IBM 500GB 7200 6Gbps NL SAS 2.5" SFF HS HDD        Y      Y      N

 NL SATA hard disk drives

 81Y9730     A1AV       EHD9 / A1AV    IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD         Y      Y      Y

 81Y9722     A1NX       EHD7 / A1NX    IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD       Y      Y      Y

 81Y9726     A1NZ       EHD8 / A1NZ    IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD       Y      Y      Y

 Solid-state drives - Enterprise

 00W1125     A3HR       None / A3HR    IBM 100GB SATA 2.5" MLC HS Enterprise SSD          Y      Y      Y

 43W7746     5420       None / 5420    IBM 200GB SATA 1.8" MLC SSD                        Y      Y      Y

 43W7718     A2FN       EHD3 / A2FN    IBM 200GB SATA 2.5" MLC HS SSD                     Y      Y      Y

 43W7726     5428       None / 5428    IBM 50GB SATA 1.8" MLC SSD                         Y      Y      Y


Part        x-config    e-config          Description                                              x220    x240    x440
 number      feature     featurea

 Solid-state drives - Enterprise value

 49Y5839     A3AS        None / A3AS       IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD           Y       Y       N

 90Y8648     A2U4        EHDD / A2U4       IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD          Y       Y       Y

 90Y8643     A2U3        EHDC / A2U3       IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD          Y       Y       Y

 49Y5844     A3AU        None / A3AU       IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD          Y       Y       N
   a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
      supported). The second is for the x220 and x440.


2.3.2 x86 compute nodes: 1.8-inch drives
                 The x86 compute nodes support 1.8-inch solid-state drives with the addition of the
                 ServeRAID M5115 RAID controller plus the appropriate enablement kits. For details about
                 configurations, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884.

                     Tip: The ServeRAID M5115 RAID controller is installed in I/O expansion slot 1, but it can
                     be installed along with the Compute Node Fabric Connector (also known as the periscope
                     connector) that connects the onboard Ethernet controller to the chassis midplane.

                 Table 2-6 lists the supported enablement kits and Features on Demand activation upgrades
                 available for use with the ServeRAID M5115.

Table 2-6 ServeRAID M5115 compatibility
 Part        Feature     Description                                                                x220    x240    x440
 number      codea

 90Y4390     A2XW        ServeRAID M5115 SAS/SATA Controller for IBM Flex System                    Yes     Yes     Yes

 Hardware enablement kits - IBM Flex System x220 Compute Node

 90Y4424     A35L        ServeRAID M5100 Series Enablement Kit for IBM Flex System x220             Yes     No      No

 90Y4425     A35M        ServeRAID M5100 Series IBM Flex System Flash Kit for x220                  Yes     No      No

 90Y4426     A35N        ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220          Yes     No      No

 Hardware enablement kits - IBM Flex System x240 Compute Node

 90Y4342     A2XX        ServeRAID M5100 Series Enablement Kit for IBM Flex System x240             No      Yes     No

 90Y4341     A2XY        ServeRAID M5100 Series IBM Flex System Flash Kit for x240                  No      Yes     No

 90Y4391     A2XZ        ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240          No      Yesb    No

 Hardware enablement kits - IBM Flex System x440 Compute Node

 46C9030     A3DS        ServeRAID M5100 Series Enablement Kit for IBM Flex System x440             No      No      Yes

 46C9031     A3DT        ServeRAID M5100 Series IBM Flex System Flash Kit for x440                  No      No      Yes

 46C9032     A3DU        ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440          No      No      Yes




Part          Feature     Description                                                                 x220    x240    x440
 number        codea

 Feature on-demand licenses (for all three compute nodes)

 90Y4410       A2Y1        ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System                   Yes     Yes     Yes

 90Y4412       A2Y2        ServeRAID M5100 Series Performance Upgrade for IBM Flex System              Yes     Yes     Yes

 90Y4447       A36G        ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System              Yes     Yes     Yes
     a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the
        x240 which are for HVEC only.
      b. If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit
         (49Y8119) cannot also be installed. The x240 USB Enablement Kit and the SSD Expansion Kit both include
         special air baffles that cannot be installed at the same time.

                   Table 2-7 lists the drives supported in conjunction with the ServeRAID M5115 RAID
                   controller.

Table 2-7 Supported 1.8-inch solid-state drives
 Part           Feature     Description                                                                x220    x240    x440
 number         codea

 43W7746        5420        IBM 200GB SATA 1.8" MLC SSD                                                Yes     Yes     Yes

 43W7726        5428        IBM 50GB SATA 1.8" MLC SSD                                                 Yes     Yes     Yes

 49Y5993        A3AR        IBM 512GB SATA 1.8" MLC Enterprise Value SSD                               No      No      No

 49Y5834        A3AQ        IBM 64GB SATA 1.8" MLC Enterprise Value SSD                                No      No      No
     a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the
        x240 which are for HVEC only.


2.3.3 Power Systems compute nodes
                    Local storage options for Power Systems compute nodes are shown in Table 2-8. None of the
                    available drives are hot-swappable. The local drives (HDD or SSD) are mounted to the top
                    cover of the system. If you use local drives, you must order the appropriate cover with
                    connections for the drive type that you want. The maximum number of drives that can be
                    installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.

Table 2-8 Local storage options for Power Systems compute nodes
 e-config     Description                                                                            p24L     p260    p460
 feature

 2.5 inch SAS HDDs

 8274         300 GB 10K RPM non-hot-swap 6 Gbps SAS                                                 Yes      Yes     Yes

 8276         600 GB 10K RPM non-hot-swap 6 Gbps SAS                                                 Yes      Yes     Yes

 8311         900 GB 10K RPM non-hot-swap 6 Gbps SAS                                                 Yes      Yes     Yes

 7069         Top cover with HDD connectors for the p260 and p24L                                    Yes      Yes     No

 7066         Top cover with HDD connectors for the p460                                             No       No      Yes

 1.8 inch SSDs

 8207         177 GB SATA non-hot-swap SSD                                                           Yes      Yes     Yes


e-config    Description                                                                           p24L    p260     p460
 feature

 7068        Top cover with SSD connectors for the p260 and p24L                                   Yes     Yes      No

 7065        Top cover with SSD connectors for the p460                                            No      No       Yes

 No drives

 7067        Top cover for no drives on the p260 and p24L                                          Yes     Yes      No

 7005        Top cover for no drives on the p460                                                   No      No       Yes



2.4 Embedded virtualization
                  The x86 compute nodes support an IBM standard USB flash drive (USB Memory Key) option
                  preinstalled with VMware ESXi or VMware vSphere. The hypervisor is fully contained on the
                  flash drive and does not require any disk space.

                 On the x240 the USB memory keys plug into the USB ports on the optional x240 USB
                 Enablement Kit. On the x220 and x440, the USB memory keys plug directly into USB ports on
                 the system board.

                 Table 2-9 lists the ordering information for the VMware hypervisor options.

Table 2-9 IBM USB Memory Key for VMware hypervisors
 Part        x-config    e-config         Description                                               x220    x240    x440
 number      feature     featurea

 49Y8119     A33M        None / None      x240 USB Enablement Kit                                   No      Yesb    No

 41Y8300     A2VC        EBK3 / A2VC      IBM USB Memory Key for VMware ESXi 5.0                    Yes     Yes     Yes

 41Y8307     A383        None / A383      IBM USB Memory Key for VMware ESXi 5.0 Update1            Yes     Yes     Yes

 41Y8298     A2G0        None / A2G0      IBM Blank USB Memory Key for VMware ESXi                  Yes     Yes     Yes
                                          Downloads
   a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
      supported). The second is for the x220 and x440.
    b. If the x240 USB Enablement Kit (49Y8119) is installed, the ServeRAID M5100 Series SSD Expansion Kit
       (90Y4391) cannot also be installed. The x240 USB Enablement Kit and the SSD Expansion Kit both include
       special air baffles that cannot be installed at the same time.

                  You can use the Blank USB Memory Key, 41Y8298, to load any available IBM customized
                  version of the VMware hypervisor. The VMware vSphere hypervisor with IBM customizations
                 can be downloaded from the following website:
                 http://ibm.com/systems/x/os/vmware/esxi

                 Power Systems compute nodes do not support VMware ESXi installed on a USB Memory
                 Key. Power Systems compute nodes support IBM PowerVM® as standard.

                  These servers support virtual servers, also known as logical partitions or LPARs. The
                  maximum number of virtual servers is 10 times the number of cores in the compute node:
                     p24L: Up to 160 virtual servers (10 x 16 cores)
                     p260: Up to 160 virtual servers (10 x 16 cores)
                     p460: Up to 320 virtual servers (10 x 32 cores)
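
                  The limits above are simply ten times the core count of each node. A one-line check of that
                  arithmetic follows (Python; the dictionary is illustrative, using the core counts from this
                  section):

                  # Maximum virtual servers (LPARs) = 10 x number of cores in the node
                  cores = {"p24L": 16, "p260": 16, "p460": 32}   # core counts from this section
                  for node, n in cores.items():
                      print(f"{node}: up to {10 * n} virtual servers")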


2.5 Expansion node compatibility
                   This section describes the two expansion nodes and the components that are compatible with
                   each.
                      2.5.1, “Compute nodes” on page 26
                      2.5.2, “Flex System I/O adapters - PCIe Expansion Node” on page 26
                      2.5.3, “PCIe I/O adapters - PCIe Expansion Node” on page 27
                      2.5.4, “Internal storage - Storage Expansion Node” on page 28
                      2.5.5, “RAID upgrades - Storage Expansion Node” on page 29


2.5.1 Compute nodes
                   Table 2-10 lists the expansion nodes and their compatibility with compute nodes.

Table 2-10 Expansion node compatibility matrix - compute nodes
                                                                                            Supported servers
 System x      x-config    e-config
 part          feature     feature
 number        code        code       Description                                          x220   x240   x440   p24L   p260 22X   p260 23X   p460

 81Y8983       A1BV        A1BV       IBM Flex System PCIe Expansion Node            Ya      Ya          N      N      N          N          N

 68Y8588       A3JF        A3JF       IBM Flex System Storage Expansion Node         Ya      Ya          N      N      N          N          N
      a. The x220 and x240 both require that the second processor be installed.


2.5.2 Flex System I/O adapters - PCIe Expansion Node
                   The PCIe Expansion Node supports the adapters listed in Table 2-11.

                    Storage Expansion Node: The Storage Expansion Node does not include connectors for
                    additional I/O adapters.


Table 2-11 I/O adapter compatibility matrix - expansion nodes
 System x         x-config         e-config                                                         Supported in PCIe
 part number      feature code     feature codea    I/O adapters                                    Expansion Node

 Ethernet adapters

 49Y7900          A1BR             1763 / A1BR      EN2024 4-port 1Gb Ethernet Adapter              Yes

 90Y3466          A1QY             EC2D / A1QY      EN4132 2-port 10 Gb Ethernet Adapter            Yesb

 None             None             1762 / None      EN4054 4-port 10Gb Ethernet Adapter             No

 90Y3554          A1R1             1759 / A1R1      CN4054 10Gb Virtual Fabric Adapter              Yesb

 90Y3558          A1R0             1760 / A1R0      CN4054 Virtual Fabric Adapter Upgradec          Yes

 None             None             EC24 / None      CN4058 8-port 10Gb Converged Adapter            No

 None             None             EC26 / None      EN4132 2-port 10Gb RoCE Adapter                 No




System x        x-config         e-config                                                       Supported in PCIe
 part number     feature code     feature codea     I/O adapters                                 Expansion Node

 Fibre Channel adapters

 69Y1938         A1BM             1764 / A1BM       FC3172 2-port 8Gb FC Adapter                 Yes

 95Y2375         A2N5             EC25 / A2N5       FC3052 2-port 8Gb FC Adapter                 Yes

 88Y6370         A1BP             EC2B / A1BP       FC5022 2-port 16Gb FC Adapter                Yes

 InfiniBand adapters

 90Y3454         A1QZ             EC2C / A1QZ       IB6132 2-port FDR InfiniBand Adapter         Yes

 None            None             1761 / None       IB6132 2-port QDR InfiniBand Adapter         No

 SAS

 90Y4390         A2XW             None / A2XW       ServeRAID M5115 SAS/SATA Controller          No
   a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
      supported). The second is for the x220 and x440.
    b. Operates at PCIe 2.0 speeds when installed in the PCIe Expansion Node. For best performance, install the
       adapter directly in the compute node.
   c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade needed per
      adapter.


2.5.3 PCIe I/O adapters - PCIe Expansion Node
                  The PCIe Expansion Node supports up to four standard PCIe 2.0 adapters:
                     Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and
                     16x adapters supported)
                     Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters
                     supported)

                   Storage Expansion Node: The Storage Expansion Node does not include connectors for
                   PCIe I/O adapters.

                 Table 2-12 lists the supported adapters. Some adapters must be installed in one of the
                 full-height slots as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, then
                 an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex
                 System I/O expansion slots can still be used, however.

Table 2-12 Supported adapter cards
 System x     x-config    e-config    Description                                                            Maximum
 part         feature     feature                                                                            supported
 number       code        code

 46C9078      A3J3        A3J3        IBM 365GB High IOPS MLC Mono Adapter (low-profile adapter)             4

 46C9081      A3J4        A3J4        IBM 785GB High IOPS MLC Mono Adapter (low-profile adapter)             4

 81Y4519      5985        5985        640GB High IOPS MLC Duo Adapter (full-height adapter)                  2

 81Y4527      A1NB        A1NB        1.28TB High IOPS MLC Duo Adapter (full-height adapter)                 2

 90Y4377      A3DY        A3DY        IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter)             4

 90Y4397      A3DZ        A3DZ        IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter)              2


System x       x-config    e-config   Description                                                            Maximum
 part           feature     feature                                                                           supported
 number         code        code

 94Y5960        A1R4        A1R4       NVIDIA Tesla M2090 (full-height adapter)                               1a
      a. If the NVIDIA Tesla M2090 is installed in the Expansion Node, then an adapter cannot be installed in the other
         full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.
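
                    The placement rules described before Table 2-12 (two full-height and two low-profile
                    slots, and no other full-height adapter when the Tesla M2090 is installed) can be checked
                    mechanically. The following is a hypothetical helper, not an IBM tool; it encodes only the
                    constraints stated in this section and the maxima shown in Table 2-12 (Python):

                    # Minimal sketch: check a PCIe Expansion Node adapter layout against the
                    # rules in this section. Hypothetical helper for illustration only;
                    # consult ServerProven to confirm support for a real configuration.
                    def validate_pen(adapters):
                        """adapters: list of (name, form_factor) with form_factor 'FH' or 'LP'."""
                        full_height = [name for name, ff in adapters if ff == "FH"]
                        errors = []
                        if len(adapters) > 4:                     # four slots in total
                            errors.append("more than four adapters")
                        if len(full_height) > 2:                  # only two full-height slots
                            errors.append("more than two full-height adapters")
                        if any("M2090" in name for name in full_height) and len(full_height) > 1:
                            errors.append("Tesla M2090 must be the only full-height adapter")
                        return errors or ["configuration looks valid"]

                    print(validate_pen([("NVIDIA Tesla M2090", "FH"),
                                        ("IBM 365GB High IOPS MLC Mono Adapter", "LP")]))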

                   Consult the IBM ServerProven® site for the current list of adapter cards that are supported in
                   the Expansion Node:
                   http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

                    Note: Although the design of the Expansion Node allows for a much greater set of standard
                    PCIe adapter cards, the preceding table lists only the adapters that are specifically
                    supported. If the PCI Express adapter that you require is not listed on the ServerProven
                    website, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to
                    confirm compatibility in the desired configuration.


2.5.4 Internal storage - Storage Expansion Node
                   The Storage Expansion Node adds 12 drive bays to the attached compute node. The
                   expansion node supports 2.5-inch drives, either HDDs or SSDs.

                     PCIe Expansion Node: The PCIe Expansion Node does not support any HDDs or SSDs.

                    Table 2-13 shows the hard disk drives and solid-state drives supported within the Storage
                    Expansion Node. Both SSDs and HDDs can be installed in the unit at the same time, although
                    as a best practice, create each logical drive from disks of the same type. For example,
                    for a RAID 1 pair, choose identical drive types (SSD or HDD).

Table 2-13 HDDs and SSDs supported in Storage Expansion Node
 System x          x-config        e-config       Description
 part              feature         feature
 number            code            code

 NL SATA HDDs

 81Y9722           A1NX            A1NX           IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD

 81Y9726           A1NZ            A1NZ           IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD

 81Y9730           A1AV            A1AV           IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD

 10K SAS HDDs

 81Y9650           A282            A282           IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD

 90Y8872           A2XD            A2XD           IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD

 90Y8877           A2XC            A2XC           IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD

 Solid state drives (SSD)

 90Y8643           A2U3            A2U3           IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD




2.5.5 RAID upgrades - Storage Expansion Node
                The Storage Expansion Node supports the RAID upgrades listed in Table 2-14.

                  PCIe Expansion Node: The PCIe Expansion Node does not support any of these
                  upgrades.


Table 2-14 FOD options available for the Storage Expansion Node
 System x     x-config    e-config   Description
 part         feature     feature
 number       code        code

 Hardware upgrades

 81Y4559      A1WY        A1WY       ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x

 81Y4487      A1J4        A1J4       ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x

 Features on Demand upgrades (license only)

 90Y4410      A2Y1        A2Y1       ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System

 90Y4447      A36G        A36G       ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System

 90Y4412      A2Y2        A2Y2       ServeRAID M5100 Series Performance Accelerator for IBM Flex System






    Chapter 3.   Software compatibility
                 This chapter describes aspects of software compatibility.

                 Topics in this chapter are:
                     3.1, “Operating system support” on page 32
                     3.2, “IBM Fabric Manager” on page 34

                  Unless otherwise specified, updates or service packs at the same level or higher within the
                  same operating system release family and version are also supported. However, newer major
                  versions are not supported unless specifically identified.

                  For customers interested in deploying operating systems not listed here, IBM can provide
                  server hardware-only warranty support. Customers must obtain the operating system and any
                  operating system software support directly from the operating system vendor or community.
                  For more information, see "Additional OS Information" on the IBM ServerProven web page.




3.1 Operating system support
                  For the latest information, see IBM ServerProven at the following website:
                  http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml


3.1.1 x86 compute nodes
                  Table 3-1 lists the operating systems supported by the x86 compute nodes.

Table 3-1 Operating system support - x86 compute nodes
 Model                                                                     x220           x240          x440

 Microsoft Windows Server 2012                                             Yes            Yes           Yes

 Microsoft Windows Server 2008 R2                                          Yes (SP1)      Yes (SP1)     Yes (SP1)

 Microsoft Windows Server 2008 HPC Edition                                 Yes (SP1)      Yes (SP1)     No

 Microsoft Windows Server 2008, Datacenter x64 Edition                     Yes (SP2)      Yes (SP2)     Yes (SP2)

 Microsoft Windows Server 2008, Enterprise x64 Edition                     Yes (SP2)      Yes (SP2)     Yes (SP2)

 Microsoft Windows Server 2008, Standard x64 Edition                       Yes (SP2)      Yes (SP2)     Yes (SP2)

 Microsoft Windows Server 2008, Web x64 Edition                            Yes (SP2)      Yes (SP2)     Yes (SP2)

 Red Hat Enterprise Linux 6 Server x64 Edition                             Yes (U2)       Yes (U2)      Yes (U3)

 Red Hat Enterprise Linux 5 Server with Xen x64 Edition                    Yes (U7)ab     Yes (U7)b     Yes (U8)b

 Red Hat Enterprise Linux 5 Server x64 Edition                             Yes (U7)       Yes (U7)      Yes (U8)

 SUSE Linux Enterprise Server 11 for AMD64/EM64T SP2                       Yes (SP2)      Yes (SP1)     Yes (SP2)

 SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T SP2              Yes (SP2)ab    Yes (SP1)b    Yes (SP2)b

 SUSE Linux Enterprise Server 10 for AMD64/EM64T SP4                       Yes (SP4)      Yes (SP4)     Yes (SP4)

 VMware ESXi 4.1                                                            Yes (U2)a      Yes (U2)c     Yes (U2)

 VMware ESX 4.1                                                            Yes (U2)a      Yes (U2)c     Yes (U2)

 VMware vSphere 5                                                          Yesa           Yesc          Yes (U1)

 VMware vSphere 5.1                                                        Yesa           Yesc          Yes
     a. Xen and VMware hypervisors are not supported with ServeRAID C105 (software RAID), but are supported with
        ServeRAID H1135 Controller 90Y4750 and ServeRAID M5115 Controller 90Y4390.
     b. Only pNIC mode is supported with Xen kernels. For support information, see RETAIN Tip H205800 at
        http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
     c. The IMM2 Ethernet over USB must be disabled using the IMM2 web interface. For support information, see
        RETAIN Tip H205897 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090620.




3.1.2 Power Systems compute nodes
                 Table 3-2 lists the operating systems supported by the Power Systems compute nodes.

Table 3-2 Operating system support - Power Systems compute nodes
 Operating system                                                   p24L         p260 22X     p260 23X     p460

 IBM AIX® Version 7.1                                               No           Yes          Yes          Yes

 IBM AIX Version 6.1                                                No           Yes          Yes          Yes

 IBM i 7.1                                                          No           Yes          Yes          Yes

 IBM i 6.1                                                          No           Yesa         Yesa         Yesa

 IBM Virtual I/O Server (VIOS) 2.2.1.4                              Yes          Yes          No           Yes

 IBM Virtual I/O Server (VIOS) 2.2.2.0                              Yes          Yes          Yes          Yes

 Red Hat Enterprise Linux 5 for IBM POWER®                          Yes (U7)     Yes (U7)     Yes (U9)     Yes (U7)

 Red Hat Enterprise Linux 6 for IBM POWER                           Yes (U2)     Yes (U2)     Yes (U3)     Yes (U2)

 SUSE Linux Enterprise Server 11 for IBM POWERb                     Yes (SP2)    Yes (SP2)    Yes (SP2)    Yes (SP2)
   a. IBM i 6.1 is supported but cannot be ordered preinstalled from IBM Manufacturing.
   b. With current maintenance updates available from SUSE to enable all planned functionality.

                  Specific technology levels, service packs, and APAR levels are as follows:
                        For the p260 (model 22X) and p460:
                       –   IBM i 6.1 with i 6.1.1 machine code, or later
                       –   IBM i 7.1 TR4, or later
                       –   VIOS 2.2.1.4, or later
                       –   AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
                       –   AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
                       –   AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
                       –   AIX V6.1 with the 6100-07 Technology Level, with Service Pack 3 with APAR IV14283
                       –   AIX V6.1 with the 6100-07 Technology Level, with Service Pack 4, or later
                       –   AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
                       –   AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later. An IBM
                           AIX 5L V5.3 Service Extension is also required.
                       For the p260 (model 23X):
                       –   IBM i 6.1 with i 6.1.1 machine code, or later
                       –   IBM i 7.1, or later
                       –   VIOS 2.2.2.0 or later
                       –   AIX V7.1 with the 7100-02 Technology Level or later
                       –   AIX V6.1 with the 6100-08 Technology Level or later
                        –   AIX V6.1 with the 6100-07 Technology Level, with Service Pack 7¹, or later
                        –   AIX V6.1 with the 6100-06 Technology Level with Service Pack 11¹, or later
                       –   AIX V5.3 with the 5300-12 Technology Level with Service Pack 7, or later. An IBM
                           AIX 5L V5.3 Service Extension is required.




                  ¹ Planned availability: March 29, 2013


3.2 IBM Fabric Manager
                   IBM Fabric Manager is a solution that you can use to quickly replace and recover compute
                   nodes in your environment. It accomplishes this task by assigning Ethernet MAC, Fibre
                   Channel WWN, and SAS WWN addresses to chassis bays so that any compute node that is
                   plugged into a bay takes on the addresses that are assigned to that bay. This configuration
                   enables the Ethernet and Fibre Channel infrastructure to be configured once, before any
                   compute nodes are connected to the chassis.
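
                   The following Python sketch is only a conceptual illustration of this bay-based address
                   virtualization; it is not IBM Fabric Manager code, and the class, function, and address
                   values are hypothetical placeholders.

   # Conceptual sketch: identities are bound to chassis bays, not to node hardware,
   # so a replacement node inserted into a bay inherits the same MAC and WWN addresses.
   # All names and values below are illustrative placeholders.
   from dataclasses import dataclass

   @dataclass
   class BayProfile:
       ethernet_mac: str   # virtual MAC address assigned to the bay
       fc_wwpn: str        # virtual Fibre Channel WWPN assigned to the bay
       sas_wwn: str        # virtual SAS WWN assigned to the bay

   # Pre-assigned identities, one profile per bay (made-up example values)
   bay_profiles = {
       1: BayProfile("40:F2:E9:00:00:01", "21:00:00:1B:32:00:00:01", "50:05:07:60:00:00:00:01"),
       2: BayProfile("40:F2:E9:00:00:02", "21:00:00:1B:32:00:00:02", "50:05:07:60:00:00:00:02"),
   }

   def identities_for_inserted_node(bay_number: int) -> BayProfile:
       # Whatever compute node is plugged into the bay presents the bay's addresses,
       # so the Ethernet and Fibre Channel fabrics never see the hardware change.
       return bay_profiles[bay_number]

   if __name__ == "__main__":
       # A failed node in bay 1 is replaced; the new node inherits the same identities.
       print(identities_for_inserted_node(1))

                   Because the fabrics only ever see the bay-level addresses, zoning and VLAN assignments
                   that were configured against those addresses continue to work after a node is swapped.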

                   For information about IBM Fabric Manager, see the following website:
                   http://www.ibm.com/systems/flex/fabricmanager

                   The operating systems that IBM Fabric Manager supports are listed in the IBM Flex System
                   Information Center at the following website:
                   http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.iof
                   m.doc/dw1li_supported_os.html

                   Table 3-3 lists the adapters that support IBM Fabric Manager and the compute nodes that
                   they can be installed in.

Table 3-3 IBM Fabric Manager support - adapters
 Part          Feature          Description                                  x220    x240    x440     p24L    p260     p460
 number        codesa

 Ethernet expansion cards

 None          None / 1762      EN4054 4-port 10Gb Ethernet Adapter          N/Ab    N/Ab    N/Ab     Yes     Yes      Yes

 90Y3554       A1R1 / 1759      CN4054 10Gb Virtual Fabric Adapter           Yes     Yes     Yes      N/Ab    N/Ab     N/Ab

 49Y7900       A1BR / 1763      EN2024 4-port 1Gb Ethernet Adapter           Yes     Yes     Yes      Yes     Yes      Yes

 90Y3466       A1QY / EC2D      EN4132 2-port 10Gb Ethernet Adapter          Yes     Yes     Yes      N/Ab    N/Ab     N/Ab

 None          None / EC24      CN4058 8-port 10Gb Converged Adapter         N/Ab    N/Ab    N/Ab     Yes     Yes      Yes

 None          None / EC26      EN4132 2-port 10Gb RoCE Adapter              N/Ab    N/Ab    N/Ab     Yes     Yes      Yes

 Fibre Channel expansion cards

 95Y2375       A2N5 / EC25      FC3052 2-port 8Gb FC Adapter                 Yes     Yes     Yes      N/Ab    N/Ab     N/Ab

 69Y1938       A1BM / 1764      FC3172 2-port 8Gb FC Adapter                 Yes     Yes     Yes      Yes     Yes      Yes

 88Y6370       A1BP / EC2B      FC5022 2-port 16Gb FC Adapter                Yes     Yes     Yes      N/Ab    N/Ab     N/Ab

 InfiniBand expansion cards

 None          None / 1761      IB6132 2-port QDR InfiniBand Adapter         N/Ab    N/Ab    N/Ab     No      No       No

 90Y3454       A1QZ / EC2C      IB6132 2-port FDR InfiniBand Adapter         No      No      No       N/Ab    N/Ab     N/Ab
      a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
         feature code is for configurations ordered through the IBM Power Systems channel (e-config).
     b. Not applicable. This combination of adapter and compute node is not supported.




Table 3-4 lists the supported switches.

Table 3-4 IBM Fabric Manager support - switches
 Description                                                     Part        Feature          IBM Fabric Manager
                                                                 number      codes            support

 Flex System Fabric CN4093 10Gb Converged Scalable Switch        00D5823     A3HH / ESW2      No

 Flex System Fabric EN4093R 10Gb Scalable Switch                 95Y3309     A3J6 / ESW7      No

 Flex System Fabric EN4093 10Gb Scalable Switch                  49Y4270     A0TB / 3593      Yes - VLAN failovera

 Flex System EN2092 1Gb Ethernet Switch                          49Y4294     A0TF / 3598      Yes - VLAN failovera

 Flex System EN4091 10Gb Ethernet Pass-thru                      88Y6043     A1QV / 3700      Yesb

 Flex System FC5022 16Gb SAN Scalable Switch                     88Y6374     A1EH / 3770      Yesb

 Flex System FC5022 16Gb 24-port SAN Scalable Switch             00Y3324     A3DP / ESW5      Yesb

 Flex System FC5022 16Gb ESB Switch                              90Y9356     A2RQ / 3771      Yesb

 Flex System FC3171 8Gb SAN Switch                               69Y1930     A0TD / 3595      Yesb

 Flex System FC3171 8Gb SAN Pass-thru                            69Y1934     A0TJ / 3591      Yesb

 Flex System IB6131 InfiniBand Switch                            90Y3450     A1EK / 3699      No
    a. VLAN failover (port-based or untagged only) is supported.
   b. IBM Fabric Manager is transparent to pass-thru and Fibre Channel switch modules. There is no dependency
      between IBM Fabric Manager and these modules.

                IBM Fabric Manager V3.0 is supported on the following operating systems (see 3.1,
                “Operating system support” on page 32 for operating systems supported by each compute
                node):
                    Microsoft Windows 7 (client only)
                    Microsoft Windows Server 2003
                    Microsoft Windows Server 2003 R2
                    Microsoft Windows Server 2008
                    Microsoft Windows Server 2008 R2
                    Red Hat Enterprise Linux 5
                    Red Hat Enterprise Linux 6
                    SUSE Linux Enterprise Server 10
                    SUSE Linux Enterprise Server 11

                IBM Fabric Manager V3.0 is supported on the following web browsers:
                    Internet Explorer 8
                    Internet Explorer 9
                    Firefox 14

                 IBM Fabric Manager V3.0 is supported on Java Runtime Environment (JRE) 1.6.




4


    Chapter 4.   Storage interoperability
                 This chapter describes storage subsystem compatibility.

                 Topics in this chapter are:
                     4.1, “Unified NAS storage” on page 38
                     4.2, “FCoE support” on page 39
                     4.3, “iSCSI support” on page 40
                     4.4, “NPIV support” on page 41
                     4.5, “Fibre Channel support” on page 41

                   Tip: Use these tables only as a starting point. Configuration support must be verified
                   through the IBM System Storage Interoperation Center (SSIC) found at the following
                   website:
                   http://ibm.com/systems/support/storage/ssic/interoperability.wss

                  The tables in this chapter and in SSIC are used primarily to document Fibre Channel SAN
                  and FCoE-attached block storage interoperability, and iSCSI storage interoperability when
                  hardware iSCSI initiator host adapters are used.




4.1 Unified NAS storage
                The NFS, CIFS, and iSCSI protocols on storage products such as IBM N series, IBM Storwize®
                V7000 Unified, and SONAS are supported with IBM Flex System, subject to the requirements
                of those products, including operating system levels.

                See the following interoperability documentation for those products for specific support:
                   N series interoperability:
                   http://ibm.com/support/docview.wss?uid=ssg1S7003897
                   IBM Storwize V7000 Unified:
                   http://ibm.com/support/docview.wss?uid=ssg1S1003911
                   IBM Storwize V7000:
                   SVC 6.4: http://ibm.com/support/docview.wss?uid=ssg1S1004113
                   SVC 6.3: http://ibm.com/support/docview.wss?uid=ssg1S1003908
                   SONAS:
                   http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fovr_nfssupportmatrix.html

                 Software iSCSI: Generally, iSCSI is supported with all types of storage as long as
                 software iSCSI initiators are used on servers running supported operating system and
                 device driver levels.
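
                 As a hedged illustration of the software iSCSI initiator approach mentioned in the note
                 above, the following Python sketch drives the standard open-iscsi command-line tool
                 (iscsiadm) on a Linux compute node. It assumes the open-iscsi package is installed, and
                 the portal address and target IQN are hypothetical placeholders, not values from this guide.

   # Illustrative sketch of a software iSCSI initiator workflow using open-iscsi.
   # Assumptions: Linux with the open-iscsi initiator installed; placeholder portal and IQN.
   import subprocess

   PORTAL = "192.0.2.50"                                   # hypothetical iSCSI target portal
   TARGET_IQN = "iqn.1986-03.com.example:storage.target0"  # hypothetical target IQN

   def run(cmd):
       # Run a command, raise on a non-zero exit code, and return its stdout.
       return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

   # Discover the targets that the portal presents (standard open-iscsi invocation).
   print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

   # Log in to the discovered target; its LUNs then appear as ordinary block devices.
   run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])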




4.2 FCoE support
                  This section lists FCoE support. Table 4-1 lists FCoE support using Fibre Channel targets.
                  Table 4-2 on page 40 lists FCoE support using native FCoE targets (that is, end-to-end
                  FCoE).

                   Tip: Use these tables only as a starting point. Configuration support must be verified
                   through the IBM System Storage Interoperation Center (SSIC) web site:
                   http://ibm.com/systems/support/storage/ssic/interoperability.wss


Table 4-1 FCoE support using FC targets

 Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310;
                    10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310;
                    CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558

    With the EN4091 10Gb Ethernet Pass-thru (vNIC2 and pNIC):
       FC Forwarder (FCF): Cisco Nexus 5010, Cisco Nexus 5020
       Supported SAN fabric: Cisco MDS 9124, Cisco MDS 9148, Cisco MDS 9513
       Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
       Storage targets: DS8000, SVC, IBM Storwize V7000, V7000 Storage Node (FC), TS3200, TS3310, TS3500

    With the EN4093 10Gb Switch or EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC):
       FC Forwarder (FCF): Brocade VDX 6730 (IBM B-type fabric); Cisco Nexus 5548 or Nexus 5596 (Cisco MDS fabric)
       Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
       Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV

    With the CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC), which provides the FCF function:
       Supported SAN fabric: IBM B-type, Cisco MDS
       Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
       Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV

 Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24

    With the EN4093 10Gb Switch or EN4093R 10Gb Switch (pNIC only):
       FC Forwarder (FCF): Brocade VDX 6730 (IBM B-type fabric); Cisco Nexus 5548 or Nexus 5596 (Cisco MDS fabric)
       Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
       Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV

    With the CN4093 10Gb Converged Switch (pNIC only), which provides the FCF function:
       Supported SAN fabric: IBM B-type, Cisco MDS
       Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
       Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV




Table 4-2 FCoE support using FCoE targets (end-to-end FCoE)

 Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310;
                    10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310;
                    CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558
    Flex System I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC)
    Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
    Storage targets: V7000 Storage Node (FCoE)

 Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24
    Flex System I/O module: CN4093 10Gb Converged Switch (pNIC only)
    Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
    Storage targets: V7000 Storage Node (FCoE)




4.3 iSCSI support
                  Table 4-3 lists iSCSI support using a hardware-based iSCSI initiator.

                  The IBM System Storage Interoperation Center normally lists support only for iSCSI storage
                  that is attached by using hardware iSCSI offload adapters in the servers. Flex System
                  compute nodes support any type of iSCSI (1 Gb or 10 Gb) storage as long as software iSCSI
                  initiators are used and the storage requirements for operating system and device driver
                  levels are met.

                    Tip: Use these tables only as a starting point. Configuration support must be verified
                    through the IBM System Storage Interoperation Center (SSIC) web site:
                    http://ibm.com/systems/support/storage/ssic/interoperability.wss


Table 4-3 Hardware-based iSCSI support

 Ethernet adapters: 10Gb onboard LOM (x240)a;
                    10Gb onboard LOM (x440)a;
                    CN4054 10Gb Virtual Fabric Adapter, 90Y3554b
    Flex System I/O module: EN4093 10Gb Switch or EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
    Operating systems: Windows Server 2008 R2, SLES 10 & 11, RHEL 5 & 6, ESX 4.1, vSphere 5.0
    Storage targets: SVC, Storwize V7000, V7000 Storage Node (iSCSI), IBM XIV
 a. iSCSI/FCoE upgrade is required: IBM Virtual Fabric Advanced Software Upgrade (LOM), 90Y9310
 b. iSCSI/FCoE upgrade is required: IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, 90Y3558




4.4 NPIV support
                NPIV is supported on all Fibre Channel and FCoE adapters that are supported in the compute
                nodes. See Table 2-1 on page 18 for the list of supported adapters.

                 IBM i support: IBM i 6.1 and i 7.1 NPIV attachment for SAN volumes requires 520-byte
                 sectors on those volumes. At this time, only the DS8000 series, DS5100, and DS5300
                 systems have this capability.



4.5 Fibre Channel support
                This section discusses Fibre Channel support for IBM Flex System. The following topics are
                covered:
                   4.5.1, “x86 compute nodes”
                   4.5.2, “Power Systems compute nodes” on page 42

                 Tip: Use these tables only as a starting point. Not all combinations may be supported.
                 Configuration support must be verified through the IBM System Storage Interoperation
                 Center (SSIC) web site:
                 http://ibm.com/systems/support/storage/ssic/interoperability.wss


4.5.1 x86 compute nodes
                Table 4-4 lists Fibre Channel storage support for x86 compute nodes.

Table 4-4 Fibre Channel support: x86 compute nodes

 FC adapters: FC3172 2-port 8Gb FC Adapter, 69Y1938;
              FC3052 2-port 8Gb FC Adapter, 95Y2375
    Flex System I/O modules: FC3171 8Gb switch, 69Y1930; FC3171 8Gb Pass-thru, 69Y1934;
                             FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324;
                             FC5022 16Gb 24-port ESB, 90Y9356
    External SAN fabric: Cisco MDS; IBM b-type (Brocade)
    Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1,
                       vSphere 5.0, vSphere 5.1
    FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape

 FC adapter: FC5022 2-port 16Gb FC Adapter, 88Y6370
    Flex System I/O modules: FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324;
                             FC5022 16Gb 24-port ESB, 90Y9356
    External SAN fabric: IBM b-type (Brocade)
    Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1,
                       vSphere 5.0, vSphere 5.1
    FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape




4.5.2 Power Systems compute nodes
                Table 4-5 lists Fibre Channel storage support for Power Systems compute nodes.

Table 4-5 Fibre Channel support: Power Systems compute nodes

 FC expansion card: FC3172 2-port 8Gb FC Adapter, 1764
    Flex System I/O modules: FC3171 8Gb switch, 3595; FC3171 8Gb Pass-thru, 3591;
                             FC5022 16Gb 12-port, 3770; FC5022 16Gb 24-port, ESW5;
                             FC5022 16Gb 24-port ESB, 3771
    External SAN fabric: IBM b-type (Brocade); Cisco MDS
    Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11, RHEL 5, RHEL 6
    FC storage targets: V7000 Storage Node (FC), DS8000, SVC, V7000, V3500, V3700, XIV, Tape




Abbreviations and acronyms
APAR       Authorized Problem Analysis Reports
DAC        dual address cycle
DIMM       dual inline memory module
ECC        error checking and correcting
ESB        Enterprise Switch Bundle
FC         Fibre Channel
FDR        fourteen data rate
GB         gigabyte
HDD        hard disk drive
HH         half-high
HPC        high performance computing
HS         hot swap
I/O        input/output
IB         InfiniBand
IBM        International Business Machines
IT         information technology
ITSO       International Technical Support Organization
LOM        LAN on motherboard
LP         low profile
LR         long range
LRDIMM     load-reduced DIMM
MAC        media access control
MDS        Multilayer Director Switch
MLC        multilevel cell
MTP        Multi-fiber Termination Push-on
N/A        not applicable
NL         nearline
NPIV       N_Port ID Virtualization
OS         operating system
QDR        quad data rate
QSFP       Quad Small Form-factor Pluggable
RAID       redundant array of independent disks
RDIMM      registered DIMM
RETAIN     Remote Electronic Technical Assistance Information Network
RHEL       Red Hat Enterprise Linux
RPM        revolutions per minute
RSS        receive-side scaling
SAN        storage area network
SAS        Serial Attached SCSI
SATA       Serial ATA
SDD        Subsystem Device Driver
SED        self-encrypting drive
SFF        Small Form Factor
SFP        small form-factor pluggable
SLES       SUSE Linux Enterprise Server
SR         short range
SSD        solid-state drive
SSIC       System Storage Interoperation Center
SVC        SAN Volume Controller
TOR        top of rack
UDIMM      unbuffered DIMM
USB        universal serial bus
VIOS       Virtual I/O Server
WWN        worldwide name
Related publications

                 The publications listed in this section are considered particularly suitable for a more detailed
                 discussion of the topics covered in this paper.



IBM Redbooks
                 The following IBM Redbooks publications provide additional information about the topics in this
                 document. Note that some publications referenced in this list might be available in softcopy
                 only.

                 You can search for, view, download or order these documents and other Redbooks,
                 Redpapers, Web Docs, draft and additional materials, at the following website:
                 ibm.com/redbooks
                     IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984
                     IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
                     IBM Flex System Networking in an Enterprise Data Center, REDP-4834
                     Overview of IBM PureSystems, TIPS0892

                 IBM Redbooks Product Guides are also available for the following IBM Flex System
                 components:
                     Chassis and compute nodes
                     Switches and pass-through modules
                     Adapter cards

                 You can find these publications at the following web page:
                 http://www.redbooks.ibm.com/portals/puresystems?Open&page=pgbycat



Other publications and online resources
                 These publications and websites are also relevant as further information sources:
                     Configuration and Option Guide, found at:
                     http://www.ibm.com/systems/xbc/cog/
                     IBM Flex System Enterprise Chassis Power Guide, found at:
                     http://ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111
                     IBM Flex System Information Center:
                     http://publib.boulder.ibm.com/infocenter/flexsys/information
                     IBM System Storage Interoperation Center:
                     http://www.ibm.com/systems/support/storage/ssic




                      ServerProven hardware compatibility page for IBM Flex System:
                      http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/
                      xREF: IBM x86 Server Reference:
                      http://www.redbooks.ibm.com/xref
                      IBM System x and Cluster Solutions configurator (x-config):
                      https://ibm.com/products/hardware/configurator/americas/bhui/asit/install.html
                      IBM Configurator for e-business (e-config):
                      http://ibm.com/services/econfig/



Help from IBM
               IBM Support and downloads
               ibm.com/support

               IBM Global Services
               ibm.com/services




Back cover

IBM Flex System Interoperability Guide

Quick reference for IBM Flex System Interoperability

Covers internal components and external connectivity

Latest updates as of 30 January 2013

To meet today’s complex and ever-changing business demands, you need a solid foundation of
compute, storage, networking, and software resources. This system must be simple to deploy, and
be able to quickly and automatically adapt to changing conditions. You also need to be able to
take advantage of broad expertise and proven guidelines in systems management, applications,
hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and
integrates them into complete and optimized solutions. At the heart of PureFlex System is the
IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix
of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes.
With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The
14 node, 10U chassis delivers high speed performance complete with integrated servers, storage,
and networking. This flexible chassis is simple to deploy, and scales to meet your needs in the
future.

This IBM Redpaper publication is a reference to compatibility and interoperability of components
inside and connected to IBM PureFlex System and IBM Flex System solutions.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from
IBM, Customers and Partners from around the world create timely technical information based on
realistic scenarios. Specific recommendations are provided to help you implement IT solutions
more effectively in your environment.

For more information:
ibm.com/redbooks

REDP-FSIG-00

IBM Flex System Interoperability Guide

  • 1.
    Front cover IBM FlexSystem Interoperability Guide Quick reference for IBM Flex System Interoperability Covers internal components and external connectivity Latest updates as of 30 January 2013 David Watts Ilya Krutov ibm.com/redbooks Redpaper
  • 3.
    International Technical SupportOrganization IBM Flex System Interoperability Guide 30 January 2013 REDP-FSIG-00
  • 4.
    Note: Before usingthis information and the product it supports, read the information in “Notices” on page v. This edition applies to: IBM PureFlex System IBM Flex System Enterprise Chassis IBM Flex System Manager IBM Flex System x220 Compute Node IBM Flex System x240 Compute Node IBM Flex System x440 Compute Node IBM Flex System p260 Compute Node IBM Flex System p24L Compute Node IBM Flex System p460 Compute Node IBM 42U 1100 mm Enterprise V2 Dynamic Rack © Copyright International Business Machines Corporation 2012, 2013. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
  • 5.
    Contents Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii The team who wrote this paper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 30 January 2013 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 8 December 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 29 November 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 13 November 2012. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix 2 October 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x Chapter 1. Chassis interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Chassis to compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.2 Switch to adapter interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.1 Ethernet switches and adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2.2 Fibre Channel switches and adapters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.2.3 InfiniBand switches and adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.3 Switch to transceiver interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.3.1 Ethernet switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.3.2 Fibre Channel switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.3 InfiniBand switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.4 Switch upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch . . . . . . . . . . . 9 1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch . . . . . . . . . 10 1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch . . . . . . . . . . . . . . . . . . 11 1.4.4 IBM Flex System IB6131 InfiniBand Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch. . . . . . . . . . . . . . . . 
. . . . . 12 1.5 vNIC and UFP support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 1.6 Chassis power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 1.7 Rack to chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Chapter 2. Compute node component compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.1 Compute node-to-card interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.2 Memory DIMM compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.2.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.2.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 2.3 Internal storage compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.3.1 x86 compute nodes: 2.5-inch drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.3.2 x86 compute nodes: 1.8-inch drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 2.3.3 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.4 Embedded virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 2.5 Expansion node compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.5.1 Compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.5.2 Flex System I/O adapters - PCIe Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . 26 © Copyright IBM Corp. 2012, 2013. All rights reserved. iii
  • 6.
    2.5.3 PCIe I/Oadapters - PCIe Expansion Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 2.5.4 Internal storage - Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.5.5 RAID upgrades - Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Chapter 3. Software compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.1 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.1.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.1.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 3.2 IBM Fabric Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Chapter 4. Storage interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 4.1 Unified NAS storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4.2 FCoE support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 4.3 iSCSI support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.4 NPIV support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 4.5 Fibre Channel support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 4.5.1 x86 compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 4.5.2 Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Other publications and online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 iv IBM Flex System Interoperability Guide
  • 7.
    Notices This information wasdeveloped for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. 
All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. © Copyright IBM Corp. 2012, 2013. All rights reserved. v
  • 8.
    Trademarks IBM, the IBMlogo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX® POWER7+™ Redbooks (logo) ® BladeCenter® POWER7® RETAIN® DS8000® PowerVM® ServerProven® IBM Flex System™ POWER® Storwize® IBM Flex System Manager™ PureFlex™ System Storage® IBM® RackSwitch™ System x® Netfinity® Redbooks® XIV® Power Systems™ Redpaper™ The following terms are trademarks of other companies: Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Other company, product, or service names may be trademarks or service marks of others. vi IBM Flex System Interoperability Guide
  • 9.
    Preface To meet today’s complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more. The IBM® PureFlex™ System combines no-compromise system designs along with built-in expertise and integrates them into complete and optimized solutions. At the heart of PureFlex System is the IBM Flex System™ Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications. The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager™, multiple chassis can be monitored from a single panel. The 14 node, 10U chassis delivers high speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy, and scales to meet your needs in the future. This IBM Redpaper™ publication is a reference to compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions. The latest version of this document can be downloaded from: http://www.redbooks.ibm.com/fsig The team who wrote this paper This paper was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center. David Watts is a Consulting IT Specialist at the ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks® publications for hardware and software topics that are related to IBM System x® and IBM BladeCenter® servers and associated client platforms. He has authored over 300 books, papers, and web documents. David has worked for IBM both in the US and Australia since 1989. He is an IBM Certified IT Specialist and a member of the IT Specialist Certification Review Board. David holds a Bachelor of Engineering degree from the University of Queensland (Australia). Ilya Krutov is a Project Leader at the ITSO Center in Raleigh and has been with IBM since 1998. Before joining the ITSO, Ilya served in IBM as a Run Rate Team Leader, Portfolio Manager, Brand Manager, Technical Sales Specialist, and Certified Instructor. Ilya has expertise in IBM System x and BladeCenter products, server operating systems, and networking solutions. He has a Bachelor’s degree in Computer Engineering from the Moscow Engineering and Physics Institute. Special thanks to Ashish Jain, the former author of this document. © Copyright IBM Corp. 2012, 2013. All rights reserved. vii
  • 10.
    Now you canbecome a published author, too! Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at: ibm.com/redbooks/residencies.html Comments welcome Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks Send your comments in an email to: redbooks@us.ibm.com Mail your comments to: IBM Corporation, International Technical Support Organization Dept. HYTD Mail Station P099 2455 South Road Poughkeepsie, NY 12601-5400 Stay connected to IBM Redbooks Find us on Facebook: http://www.facebook.com/IBMRedbooks Follow us on Twitter: http://twitter.com/ibmredbooks Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806 Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html viii IBM Flex System Interoperability Guide
  • 11.
    Summary of changes This section describes the technical changes made in this edition of the paper and in previous editions. This edition might also include minor corrections and editorial changes that are not identified. 30 January 2013 New information More specifics about configuration support for chassis power supplies, Table 1-17 on page 15. Windows Server 2012 support, Table 3-1 on page 32. Red Hat Enterprise Linux 5 support for the p260 model 23X, Table 3-2 on page 33 Changed information x440 restriction regarding the use of the ServeRAID M5115 is now removed with the release of IMM2 firmware build 40a, Updated the Fibre Channel support section, 4.5, “Fibre Channel support” on page 41. 8 December 2012 New information Added Table 2-2 on page 19 indicating which slots I/O adapters are supported in with Power Systems compute nodes. The x440 now supports UDIMMs, Table 2-3 on page 20 29 November 2012 Changed information Clarified that the use of expansion nodes requires that the second processor be installed in the compute node, Table 2-10 on page 26. Corrected the NPIV information, 4.4, “NPIV support” on page 41. Clarified NAS supported, 4.1, “Unified NAS storage” on page 38. 13 November 2012 This revision reflects the addition, deletion, or modification of new and changed information described below. © Copyright IBM Corp. 2012, 2013. All rights reserved. ix
  • 12.
    New information Added information about these new products: – IBM Flex System p260 Compute Node, 7895-23X – IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch – IBM Flex System Fabric EN4093R 10Gb Scalable Switch – IBM Flex System CN4058 8-port 10Gb Converged Adapter – IBM Flex System EN4132 2-port 10Gb RoCE Adapter – IBM Flex System Storage® Expansion Node – IBM Flex System PCIe Expansion Node – IBM PureFlex System 42U Rack – IBM Flex System V7000 Storage Node The x220 now supports 32 GB LRDIMM, page Table 2-3 on page 20 The Power Systems™ compute nodes support new DIMMs, Table 2-4 on page 21. New 2100W power supply option for the Enterprise Chassis, 1.6, “Chassis power supplies” on page 14. New section covering Features on Demand upgrades for scalable switches, 1.4, “Switch upgrades” on page 9. Changed information Moved the FCoE and NPIV tables to Chapter 4, “Storage interoperability” on page 37. Added machine types & models (MTMs) for the x220 and x440 when ordered via AAS (e-config), Table 1-1 on page 2 Added footnote regarding power management and the use of 14 Power Systems compute nodes with 32 GB DIMMs, Table 1-1 on page 2 Added AAS (e-config) feature codes to various tables of x86 compute node options. Note that AAS feature codes for the x220 and x440 are the same as those used in the HVEC system (x-config). However the AAS feature codes for the x240 are different than the equivalent HVEC feature codes. This is noted in the table. Updated the FCoE table, 4.2, “FCoE support” on page 39 Updated the vNIC table, Table 1-14 on page 13 Clarified that the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) and x240 USB Enablement Kit (49Y8119) cannot be installed at the same time, Table 2-6 on page 23. Updated the table of supported 2.5-inch drives, Table 2-5 on page 22. Updated the operating system table, Table 3-1 on page 32 2 October 2012 This revision reflects the addition, deletion, or modification of new and changed information described below. New information Temporary restrictions on the use of network and storage adapters with the x440, page 18 Changed information Updated the x86 memory table, Table 2-3 on page 20 Updated the FCoE table, 4.2, “FCoE support” on page 39 x IBM Flex System Interoperability Guide
- Updated the operating system table, Table 3-1 on page 32.
- Clarified the support of the Pass-thru module and Fibre Channel switches with IBM Fabric Manager, Table 3-4 on page 35.
Chapter 1. Chassis interoperability

The IBM Flex System Enterprise Chassis is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mounted, and scalable server platform. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources, within a single Enterprise Chassis. It can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet your specific hardware needs.

Topics in this chapter are:
- 1.1, "Chassis to compute node" on page 2
- 1.2, "Switch to adapter interoperability" on page 3
- 1.3, "Switch to transceiver interoperability" on page 5
- 1.4, "Switch upgrades" on page 9
- 1.5, "vNIC and UFP support" on page 13
- 1.6, "Chassis power supplies" on page 14
- 1.7, "Rack to chassis" on page 16
1.1 Chassis to compute node

Table 1-1 lists the maximum number of compute nodes that can be installed in the chassis.

Table 1-1   Maximum number of compute nodes installed in the chassis
Columns: machine type, System x (x-config) / Power Systems (e-config); maximum number of compute nodes in the Enterprise Chassis, 8721-A1x (x-config) / 7893-92X (e-config).

x86 compute nodes
- IBM Flex System x220 Compute Node: 7906 / 7906-25X; 14 / 14
- IBM Flex System x240 Compute Node: 8737 / 7863-10X; 14 / 14
- IBM Flex System x440 Compute Node: 7917 / 7917-45X; 7 / 7

IBM Power Systems compute nodes
- IBM Flex System p24L Compute Node: None / 1457-7FL; 14 (a) / 14 (a)
- IBM Flex System p260 Compute Node (POWER7®): None / 7895-22X; 14 (a) / 14 (a)
- IBM Flex System p260 Compute Node (POWER7+™): None / 7895-23X; 14 (a) / 14 (a)
- IBM Flex System p460 Compute Node: None / 7895-42X; 7 (a) / 7 (a)

Management node
- IBM Flex System Manager: 8731-A1x / 7955-01M; 1 (b) / 1 (b)

a. For Power Systems compute nodes: if the chassis is configured with the power management policy "AC Power Source Redundancy with Compute Node Throttling Allowed", some maximum chassis configurations containing Power Systems compute nodes with large populations of 32 GB DIMMs may leave the chassis with insufficient power to power on all 14 compute node bays. In such circumstances, only 13 of the 14 bays can be powered on.
b. One Flex System Manager management node can manage up to four chassis.
1.2 Switch to adapter interoperability

In this section, we describe switch to adapter interoperability.

1.2.1 Ethernet switches and adapters

Table 1-2 lists Ethernet switch to card compatibility.

Switch upgrades: To maximize the usable port count on the adapters, the switches may need additional license upgrades. See 1.4, "Switch upgrades" on page 9 for details.

Table 1-2   Ethernet switch to card compatibility
Columns (switches): CN4093 10Gb Switch (00D5823, A3HH / ESW2), EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7), EN4093 10Gb Switch (49Y4270, A0TB / 3593), EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700), EN2092 1Gb Switch (49Y4294, A0TF / 3598). Adapter feature codes (a) are listed as x-config / e-config.

- x220 Embedded 1 Gb (no part number or feature code): Yes (b), Yes, Yes, No, Yes
- x240 Embedded 10 Gb (no part number or feature code): Yes, Yes, Yes, Yes, Yes
- x440 Embedded 10 Gb (no part number or feature code): Yes, Yes, Yes, Yes, Yes
- EN2024 4-port 1Gb Ethernet Adapter, 49Y7900 (A1BR / 1763): Yes, Yes, Yes, Yes (c), Yes
- EN4132 2-port 10 Gb Ethernet Adapter, 90Y3466 (A1QY / EC2D): No, Yes, Yes, Yes, No
- EN4054 4-port 10Gb Ethernet Adapter, no part number (None / 1762): Yes, Yes, Yes, Yes (c), Yes
- CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (A1R1 / 1759): Yes, Yes, Yes, Yes (c), Yes
- CN4058 8-port 10Gb Converged Adapter, no part number (None / EC24): Yes (d), Yes (d), Yes (d), Yes (c), Yes (e)
- EN4132 2-port 10Gb RoCE Adapter, no part number (None / EC26): No, Yes, Yes, Yes, No

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093R, and EN4093 switches.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
1.2.2 Fibre Channel switches and adapters

Table 1-3 lists Fibre Channel switch to card compatibility.

Table 1-3   Fibre Channel switch to card compatibility
Columns (switches): FC5022 16Gb 12-port (88Y6374, A1EH / 3770), FC5022 16Gb 24-port (00Y3324, A3DP / ESW5), FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771), FC3171 8Gb switch (69Y1930, A0TD / 3595), FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591). Adapter feature codes (a) are listed as x-config / e-config.

- FC3172 2-port 8Gb FC Adapter, 69Y1938 (A1BM / 1764): Yes, Yes, Yes, Yes, Yes
- FC3052 2-port 8Gb FC Adapter, 95Y2375 (A2N5 / EC25): Yes, Yes, Yes, Yes, Yes
- FC5022 2-port 16Gb FC Adapter, 88Y6370 (A1BP / EC2B): Yes, Yes, Yes, No, No

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).

1.2.3 InfiniBand switches and adapters

Table 1-4 lists InfiniBand switch to card compatibility.

Table 1-4   InfiniBand switch to card compatibility
Column (switch): IB6131 InfiniBand Switch (90Y3450, A1EK / 3699). Adapter feature codes (a) are listed as x-config / e-config.

- IB6132 2-port FDR InfiniBand Adapter, 90Y3454 (A1QZ / EC2C): Yes (b)
- IB6132 2-port QDR InfiniBand Adapter, no part number (None / 1761): Yes

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade, as described in 1.4, "Switch upgrades" on page 9.
1.3 Switch to transceiver interoperability

This section specifies the transceivers and direct-attach copper (DAC) cables supported by the various IBM Flex System I/O modules.

1.3.1 Ethernet switches

Support for transceivers and cables for Ethernet switch modules is shown in Table 1-5.

Table 1-5   Modules and cables supported in Ethernet I/O modules
Columns (I/O modules): CN4093 10Gb Switch (00D5823, A3HH / ESW2), EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7), EN4093 10Gb Switch (49Y4270, A0TB / 3593), EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700), EN2092 1Gb Switch (49Y4294, A0TF / 3598). Transceiver and cable feature codes (a) are listed as x-config / e-config.

SFP transceivers - 1 Gbps
- IBM SFP SX Transceiver (1000Base-SX), 81Y1622 (3269 / EB2A): Yes, Yes, Yes, Yes, Yes
- IBM SFP RJ45 Transceiver (1000Base-T), 81Y1618 (3268 / EB29): Yes, Yes, Yes, Yes, Yes
- IBM SFP LX Transceiver (1000Base-LX), 90Y9424 (A1PN / ECB8): Yes, Yes, Yes, Yes, Yes

SFP+ transceivers - 10 Gbps
- 10GBase-SR SFP+ (MM Fiber), 44W4408 (4942 / 3282): Yes, Yes, Yes, Yes, Yes
- IBM SFP+ SR Transceiver (10GBase-SR), 46C3447 (5053 / EB28): Yes, Yes, Yes, Yes, Yes
- IBM SFP+ LR Transceiver (10GBase-LR), 90Y9412 (A1PM / ECB9): Yes, Yes, Yes, Yes, Yes

QSFP+ transceivers - 40 Gbps
- IBM QSFP+ SR Transceiver (40Gb), 49Y7884 (A1DR / EB27): Yes, Yes, Yes, No, No

8 Gb Fibre Channel SFP+ transceivers
- IBM 8 Gb SFP+ SW Optical Transceiver, 44X1964 (5075 / 3286): Yes, No, No, No, No

SFP+ direct-attach copper (DAC) cables
- 1m IBM Passive DAC SFP+, 90Y9427 (A1PH / None): Yes, Yes, Yes, No, Yes
- 3m IBM Passive DAC SFP+, 90Y9430 (A1PJ / None): Yes, Yes, Yes, No, Yes
- 5m IBM Passive DAC SFP+, 90Y9433 (A1PK / ECB6): Yes, Yes, Yes, No, Yes
- 1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable, 49Y7886 (A1DL / EB24): Yes, Yes, Yes, No, No
- 3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable, 49Y7887 (A1DM / EB25): Yes, Yes, Yes, No, No
- 5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable, 49Y7888 (A1DN / EB26): Yes, Yes, Yes, No, No
- IBM 1m 10 GBase Copper SFP+ Twinax (Active), 95Y0323 (A25A / None): No, No, No, Yes, No
- IBM 3m 10 GBase Copper SFP+ Twinax (Active), 95Y0326 (A25B / None): No, No, No, Yes, No
- IBM 5m 10 GBase Copper SFP+ Twinax (Active), 95Y0329 (A25C / None): No, No, No, Yes, No
- 1m 10 GbE Twinax Act Copper SFP+ DAC (active), 81Y8295 (A18M / None): No, No, No, Yes, No
- 3m 10 GE Twinax Act Copper SFP+ DAC (active), 81Y8296 (A18N / None): No, No, No, Yes, No
- 5m 10 GE Twinax Act Copper SFP+ DAC (active), 81Y8297 (A18P / None): No, No, No, Yes, No

QSFP cables
- 1m IBM QSFP+ to QSFP+ Cable, 49Y7890 (A1DP / EB2B): Yes, Yes, Yes, No, No
- 3m IBM QSFP+ to QSFP+ Cable, 49Y7891 (A1DQ / EB2H): Yes, Yes, Yes, No, No

Fiber optic cables
- 10m IBM MTP Fiber Optical Cable, 90Y3519 (A1MM / EB2J): Yes, Yes, Yes, No, No
- 30m IBM MTP Fiber Optical Cable, 90Y3521 (A1MN / EC2K): Yes, Yes, Yes, No, No

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.3.2 Fibre Channel switches

Support for transceivers and cables for Fibre Channel switch modules is shown in Table 1-6.

Table 1-6   Modules and cables supported in Fibre Channel I/O modules
Columns (I/O modules): FC5022 16Gb 12-port (88Y6374, A1EH / 3770), FC5022 16Gb 24-port (00Y3324, A3DP / ESW5), FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771), FC3171 8Gb switch (69Y1930, A0TD / 3595), FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591). Transceiver feature codes (a) are listed as x-config / e-config.

16 Gb transceivers
- Brocade 16 Gb SFP+ Optical Transceiver, 88Y6393 (A22R / 5371): Yes, Yes, Yes, No, No

8 Gb transceivers
- Brocade 8 Gb SFP+ SW Optical Transceiver, 88Y6416 (A2B9 / 5370): Yes, Yes, Yes, No, No
- IBM 8 Gb SFP+ SW Optical Transceiver, 44X1964 (5075 / 3286): No, No, No, Yes, Yes

4 Gb transceivers
- 4 Gb SFP Transceiver Option, 39R6475 (4804 / 3238): No, No, No, Yes, Yes

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.3.3 InfiniBand switches

Support for transceivers and cables for InfiniBand switch modules is shown in Table 1-7.

Compliant cables: The IB6131 switch supports all cables that are compliant with the InfiniBand Architecture specification.

Table 1-7   Modules and cables supported in InfiniBand I/O modules
Column (I/O module): IB6131 InfiniBand Switch (90Y3450, A1EK / 3699). Cable feature codes (a) are listed as x-config / e-config.

- IB QDR 3m QSFP Cable Option (passive), 49Y9980 (3866 / 3249): Yes
- 3m FDR InfiniBand Cable (passive), 90Y3470 (A227 / ECB1): Yes

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.4 Switch upgrades

Various IBM Flex System switches can be upgraded via software licenses to enable additional ports or features. Switches covered in this section:
- 1.4.1, "IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch" on page 9
- 1.4.2, "IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch" on page 10
- 1.4.3, "IBM Flex System EN2092 1Gb Ethernet Scalable Switch" on page 11
- 1.4.4, "IBM Flex System IB6131 InfiniBand Switch" on page 11
- 1.4.5, "IBM Flex System FC5022 16Gb SAN Scalable Switch" on page 12

1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

The CN4093 switch is initially licensed with fourteen 10 GbE internal ports, two external 10 GbE SFP+ ports, and six external Omni Ports enabled. Further ports can be enabled: 14 additional internal ports and two external 40 GbE QSFP+ uplink ports with Upgrade 1 (00D5845), and 14 additional internal ports and six additional external Omni Ports with Upgrade 2 (00D5847). Upgrade 1 and Upgrade 2 can be applied on the switch independently of each other, or in combination for full feature capability. Table 1-8 shows the part numbers for ordering the switches and the upgrades.

Table 1-8   CN4093 10Gb Converged Scalable Switch part numbers and port upgrades
Columns: total ports enabled (internal 10Gb, external 10Gb SFP+, external 10Gb Omni, external 40Gb QSFP+). Feature codes (a) are listed as x-config / e-config.

- 00D5823 (A3HH / ESW2), Base switch (no upgrades): 14, 2, 6, 0
- 00D5845 (A3HL / ESU1), Add Upgrade 1: 28, 2, 6, 2
- 00D5847 (A3HM / ESU2), Add Upgrade 2: 28, 2, 12, 0
- 00D5845 plus 00D5847, Add both Upgrade 1 and Upgrade 2: 42, 2, 12, 2

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed (see the sketch that follows):
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports.
- Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports.
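The relationship between enabled internal ports and the adapter ports needed per compute node is simple arithmetic: each block of 14 internal ports gives every one of the 14 node bays one more port on that switch, and with a pair of switches the adapter needs twice that many ports. The following Python sketch is an illustration only, not an IBM tool; the port counts are taken from Table 1-8.

# Illustrative sketch only: derive CN4093 internal port counts and the
# adapter ports a compute node needs, based on the figures in Table 1-8.

NODE_BAYS = 14           # half-wide node bays in the Enterprise Chassis
SWITCHES_PER_FABRIC = 2  # switches are normally installed in pairs

def cn4093_internal_ports(upgrade1: bool, upgrade2: bool) -> int:
    """Internal 10 GbE ports enabled on one CN4093 (14 base + 14 per upgrade)."""
    return 14 + (14 if upgrade1 else 0) + (14 if upgrade2 else 0)

def adapter_ports_needed(upgrade1: bool, upgrade2: bool) -> int:
    """Adapter ports per compute node to use all internal ports on a switch pair."""
    ports_per_node_per_switch = cn4093_internal_ports(upgrade1, upgrade2) // NODE_BAYS
    return ports_per_node_per_switch * SWITCHES_PER_FABRIC

for u1, u2 in [(False, False), (True, False), (False, True), (True, True)]:
    print(f"Upgrade1={u1}, Upgrade2={u2}: "
          f"{cn4093_internal_ports(u1, u2)} internal ports per switch, "
          f"{adapter_ports_needed(u1, u2)}-port adapter per node")

Running the sketch reproduces the guidance above: the base switch pair is fully used by a 2-port adapter, one upgrade calls for a 4-port adapter, and both upgrades call for a 6-port adapter.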
1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch

The EN4093 and EN4093R are initially licensed with fourteen 10 Gb internal ports and ten 10 Gb external uplink ports enabled. Further ports can be enabled, including the two 40 Gb external uplink ports with the Upgrade 1 license option and four additional SFP+ 10 Gb ports with the Upgrade 2 license option. Upgrade 1 must be applied before Upgrade 2 can be applied. These are IBM Features on Demand license upgrades. Table 1-9 lists the available parts and upgrades.

Table 1-9   IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades
Columns: total ports enabled (internal, 10 Gb uplink, 40 Gb uplink). Feature codes (a) are listed as x-config / e-config.

- 49Y4270 (A0TB / 3593), IBM Flex System Fabric EN4093 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports): 14, 10, 0
- 95Y3309 (A3J6 / ESW7), IBM Flex System Fabric EN4093R 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports): 14, 10, 0
- 49Y4798 (A1EL / 3596), IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1) (adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports): 28, 10, 2
- 88Y6037 (A1EM / 3597), IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2, requires Upgrade 1) (adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports): 42, 14, 2

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports.
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports.

Consideration: Adding Upgrade 2 enables an additional 14 internal ports, for a total of 42 internal ports, with three ports connected to each of the 14 compute nodes. To take full advantage of all 42 internal ports, a 6-port or 8-port adapter is required, such as the CN4058 8-port 10Gb Converged Adapter. Upgrade 2 still provides a benefit with a 4-port adapter because this upgrade also enables an extra four external 10 Gb uplinks.
1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

The EN2092 comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports, with IBM Features on Demand license upgrades. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order. Table 1-10 lists the part numbers.

Table 1-10   IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades
Feature codes (a) are listed as x-config / e-config.

- 49Y4294 (A0TF / 3598), IBM Flex System EN2092 1Gb Ethernet Scalable Switch: 14 internal 1 Gb ports, 10 external 1 Gb ports
- 90Y3562 (A1QW / 3594), IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports and 10 external 1 Gb ports
- 49Y4298 (A1EN / 3599), IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:
- The base switch requires a two-port Ethernet adapter installed in each compute node (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports of the adapter to each switch).

1.4.4 IBM Flex System IB6131 InfiniBand Switch

The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. This switch ships standard with quad data rate (QDR) and can be upgraded to fourteen data rate (FDR) with an IBM Features on Demand license upgrade. Ordering information is listed in Table 1-11.

Table 1-11   IBM Flex System IB6131 InfiniBand Switch part number and upgrade option
Feature codes (a) are listed as x-config / e-config.

- 90Y3450 (A1EK / 3699), IBM Flex System IB6131 InfiniBand Switch: 18 external QDR ports, 14 internal QDR ports
- 90Y3462 (A1QX / ESW1), IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.
1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch

Table 1-12 lists the available port and feature upgrades for the FC5022 16Gb SAN Scalable Switches. These upgrades are all IBM Features on Demand license upgrades.

Table 1-12   FC5022 switch upgrades
Columns (switches): FC5022 16Gb ESB Switch, 24-port (90Y9356); FC5022 16Gb 24-port SAN Scalable Switch (00Y3324); FC5022 16Gb SAN Scalable Switch, 12-port base (88Y6374). Upgrade feature codes (a) are listed as x-config / e-config.

- 88Y6382 (A1EP / 3772), FC5022 16Gb SAN Scalable Switch (Upgrade 1): No, No, Yes
- 88Y6386 (A1EQ / 3773), FC5022 16Gb SAN Scalable Switch (Upgrade 2): Yes, Yes, Yes
- 00Y3320 (A3HN / ESW3), FC5022 16Gb Fabric Watch Upgrade: No, Yes, Yes
- 00Y3322 (A3HP / ESW4), FC5022 16Gb ISL/Trunking Upgrade: No, Yes, Yes

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Table 1-13 shows the total number of active ports on the switch after applying compatible port upgrades.

Table 1-13   Total port counts after applying upgrades
Columns (switches): FC5022 16Gb ESB Switch, 24-port (90Y9356); FC5022 16Gb 24-port SAN Scalable Switch (00Y3324); FC5022 16Gb SAN Scalable Switch, 12-port base (88Y6374).

- Included with base switch: 24, 24, 12
- Upgrade 1, 88Y6382 (adds 12 ports): Not supported, Not supported, 24
- Upgrade 2, 88Y6386 (adds 24 ports): 48, 48, 48
1.5 vNIC and UFP support

Table 1-14 lists vNIC (virtual NIC) and UFP (Universal Fabric Port) support by combinations of switch, adapter, and operating system. In the table, we use the following abbreviations for the vNIC modes:
- vNIC1 = IBM Virtual Fabric Mode
- vNIC2 = Switch Independent Mode

10 GbE adapters only: Only 10 Gb Ethernet adapters support vNIC and UFP. 1 GbE adapters do not support these features.

Table 1-14   Supported vNIC modes
Two configurations are listed:
- Configuration 1: the Flex System I/O module is the EN4093 10Gb Scalable Switch, EN4093R 10Gb Switch, or CN4093 10Gb Converged Switch; no top-of-rack switch.
- Configuration 2: the Flex System I/O module is the EN4091 10Gb Ethernet Pass-thru; the top-of-rack switch is the IBM RackSwitch™ G8124E or IBM RackSwitch G8264.
For each configuration, support is shown for Windows, Linux (a)(b), and VMware (c).

- 10Gb onboard LOM (x240 and x440): Configuration 1 - vNIC1, vNIC2, and UFP (d) with Windows, Linux, and VMware; Configuration 2 - vNIC1, vNIC2, and UFP with Windows, Linux, and VMware
- CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (e-config #1759): Configuration 1 - vNIC1, vNIC2, and UFP (d) with Windows, Linux, and VMware; Configuration 2 - vNIC1, vNIC2, and UFP with Windows, Linux, and VMware
- EN4054 4-port 10Gb Ethernet Adapter (e-config #1762): does not support vNIC or UFP
- EN4132 2-port 10 Gb Ethernet Adapter, 90Y3466 (e-config #EC2D): does not support vNIC or UFP
- CN4058 8-port 10Gb Converged Adapter (e-config #EC24): does not support vNIC or UFP
- EN4132 2-port 10Gb RoCE Adapter (e-config #EC26): does not support vNIC or UFP

a. Linux kernels with Xen are not supported with either vNIC1 or vNIC2. For support information, see IBM RETAIN® Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
b. The combination of vNIC2 and iBoot is not supported for legacy booting with Linux.
c. The combination of vNIC2 with VMware ESX 4.1 and storage protocols (FCoE and iSCSI) is not supported.
d. The CN4093 10Gb Converged Switch is planned to support Universal Fabric Port (UFP) in 2Q/2013.
1.6 Chassis power supplies

Power supplies are available in either 2500W or 2100W capacities. The standard chassis ships with two 2500W power supplies. A maximum of six power supplies can be installed. The 2100W power supplies are available only via CTO and through the System x ordering channel. Power supplies cannot be mixed in the same chassis. Table 1-15 shows the ordering information for the Enterprise Chassis power supplies.

Table 1-15   Power supply module option part numbers
Feature codes (a) are listed as x-config / e-config.

- 43W9049 (A0UC / 3590), IBM Flex System Enterprise Chassis 2500W Power Module: standard in chassis models 8721-A1x (x-config) and 7893-92X (e-config)
- 47C7633 (A3JH / None), IBM Flex System Enterprise Chassis 2100W Power Module: standard in no chassis models

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

A chassis powered by the 2100W power supplies cannot provide N+N redundant power unless all the compute nodes are configured with 95W or lower Intel processors. N+1 redundancy is possible with any processors. Table 1-16 shows the nodes that are supported in the chassis when powered by either the 2100W or 2500W modules.

Table 1-16   Compute nodes supported by the power supplies
Columns: 2100W power supply, 2500W power supply.

- IBM Flex System Manager management node: Yes, Yes
- x220 (with or without Storage Expansion Node or PCIe Expansion Node): Yes, Yes
- x240 (with or without Storage Expansion Node or PCIe Expansion Node): Yes (a), Yes (a)
- x440: Yes (a), Yes (a)
- p24L: No, Yes (a)
- p260: No, Yes (a)
- p460: No, Yes (a)
- V7000 Storage Node (either primary or expansion node): Yes, Yes

a. Some restrictions apply based on the TDP power of the processors installed or the power policy enabled. See Table 1-17 on page 15.
Table 1-17 lists the number of compute nodes supported, based on the type and number of power supplies installed in the chassis and the power policy enabled (N+N or N+1). Entries show the maximum number of each compute node that can be installed; a value below the chassis maximum indicates a restriction on the number of compute nodes for that configuration.

Table 1-17   Specific number of compute nodes supported based on installed power supplies
Columns for each power supply type: N+1 with N=5 (6 total), N+1 with N=4 (5 total), N+1 with N=3 (4 total), N+N with N=3 (6 total).

2100W power supplies
- x240, 60W CPU: 14, 14, 14, 14
- x240, 70W CPU: 14, 14, 13, 14
- x240, 80W CPU: 14, 14, 13, 14
- x240, 95W CPU: 14, 14, 12, 13
- x240, 115W CPU: 14, 14, 11, 12
- x240, 130W CPU: 14, 14, 11, 11
- x240, 135W CPU: 14, 14, 11, 11
- x440, 95W CPU: 7, 7, 6, 6
- x440, 115W CPU: 7, 7, 5, 6
- x440, 130W CPU: 7, 7, 5, 5
- p24L, p260, p460 (all CPUs): not supported
- x220, 50W to 95W CPU: 14, 14, 14, 14
- FSM, 95W CPU: 2, 2, 2, 2
- V7000: 3, 3, 3, 3

2500W power supplies
- x240, 60W to 130W CPU: 14, 14, 14, 14
- x240, 135W CPU: 14, 14, 13, 14
- x440, 95W or 115W CPU: 7, 7, 7, 7
- x440, 130W CPU: 7, 7, 6, 7
- p24L (all CPUs): 14, 14, 12, 13
- p260 (all CPUs): 14, 14, 12, 13
- p460 (all CPUs): 7, 7, 6, 6
- x220, 50W to 95W CPU: 14, 14, 14, 14
- FSM, 95W CPU: 2, 2, 2, 2
- V7000: 3, 3, 3, 3

Assumptions:
- All compute nodes are fully configured.
- Throttling and oversubscription are enabled.

Tip: Consult the Power Configurator for exact configuration support:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
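As a rough illustration of why the power policy matters, usable chassis power is what remains after setting aside the redundant supplies: with N+1 one supply is held in reserve, while with N+N half of them are. The sketch below is illustrative only; it is not a substitute for Table 1-17 or the IBM Power Configurator, and the simple capacity model (which ignores throttling, oversubscription, and chassis overhead) is an assumption.

# Illustrative sketch only: usable chassis power under N+1 and N+N policies.
# Real support limits come from Table 1-17 and the IBM Power Configurator;
# this simple model ignores throttling, oversubscription, and module loads.

def usable_power_watts(installed: int, supply_watts: int, policy: str) -> int:
    """Capacity left after reserving supplies for the chosen redundancy policy."""
    if policy == "N+1":
        reserved = 1                  # one supply can fail
    elif policy == "N+N":
        reserved = installed // 2     # half the supplies can fail (one feed)
    else:
        raise ValueError("policy must be 'N+1' or 'N+N'")
    return (installed - reserved) * supply_watts

for watts in (2100, 2500):
    for installed, policy in [(6, "N+1"), (5, "N+1"), (4, "N+1"), (6, "N+N")]:
        print(f"{installed} x {watts}W, {policy}: "
              f"{usable_power_watts(installed, watts, policy)} W usable")

For example, six 2100W supplies leave roughly 10500 W usable under N+1 but only about 6300 W under N+N, which is why N+N configurations with the 2100W modules carry tighter node-count restrictions in Table 1-17.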
1.7 Rack to chassis

IBM offers an extensive range of industry-standard and EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management. Table 1-18 lists the IBM Flex System Enterprise Chassis support in each rack cabinet.

Table 1-18   The chassis supported in each rack cabinet
Column: supports the Enterprise Chassis.

- 93634CX, IBM PureFlex System 42U Rack: Yes (recommended)
- 93634DX, IBM PureFlex System 42U Expansion Rack: Yes (recommended)
- 93634PX, IBM 42U 1100 mm Deep Dynamic rack: Yes (recommended)
- 201886X, IBM 11U Office Enablement Kit: Yes
- 93072PX, IBM S2 25U Static standard rack: Yes
- 93072RX, IBM S2 25U Dynamic standard rack: Yes
- 93074RX, IBM S2 42U standard rack: Yes
- 99564RX, IBM S2 42U Dynamic standard rack: Yes
- 93084PX, IBM 42U Enterprise rack: Yes
- 93604PX, IBM 42U 1200 mm Deep Dynamic Rack: Yes
- 93614PX, IBM 42U 1200 mm Deep Static rack: Yes
- 93624PX, IBM 47U 1200 mm Deep Static rack: Yes
- 9306-900, IBM Netfinity® 42U Rack: No
- 9306-910, IBM Netfinity 42U Rack: No
- 9308-42P, IBM Netfinity Enterprise Rack: No
- 9308-42X, IBM Netfinity Enterprise Rack: No
- Varies, IBM NetBay 22U: No
Chapter 2. Compute node component compatibility

This chapter lists the compatibility of components installed internally to each compute node.

Topics in this chapter are:
- 2.1, "Compute node-to-card interoperability" on page 18
- 2.2, "Memory DIMM compatibility" on page 20
- 2.3, "Internal storage compatibility" on page 22
- 2.4, "Embedded virtualization" on page 25
- 2.5, "Expansion node compatibility" on page 26
2.1 Compute node-to-card interoperability

Table 2-1 lists the available I/O adapters and their compatibility with compute nodes.

Power Systems compute nodes: Some I/O adapters supported by Power Systems compute nodes are restricted to only some of the available slots. See Table 2-2 on page 19 for specifics.

Table 2-1   I/O adapter compatibility matrix - compute nodes
Columns (supported servers, in order): x220, x240, x440 (b), p24L, p260 22X, p260 23X, p460. Feature codes are listed as x-config feature; e-config feature (a).

Ethernet adapters
- EN2024 4-port 1Gb Ethernet Adapter, 49Y7900 (x-config A1BR; e-config 1763 / A10Y): Y, Y, Y, Y, Y, Y, Y
- EN4132 2-port 10 Gb Ethernet Adapter, 90Y3466 (x-config A1QY; e-config EC2D / A1QY): Y, Y, Y, N, N, N, N
- EN4054 4-port 10Gb Ethernet Adapter, no part number (x-config None; e-config 1762 / None): N, N, N, Y, Y, Y, Y
- CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (x-config A1R1; e-config 1759 / A1R1): Y, Y, Y, N, N, N, N
- CN4054 Virtual Fabric Adapter Upgrade (c), 90Y3558 (x-config A1R0; e-config 1760 / A1R0): Y, Y, Y, N, N, N, N
- CN4058 8-port 10Gb Converged Adapter, no part number (x-config None; e-config EC24 / None): N, N, N, Y, Y, Y, Y
- EN4132 2-port 10Gb RoCE Adapter, no part number (x-config None; e-config EC26 / None): N, N, N, Y, Y, Y, Y

Fibre Channel adapters
- FC3172 2-port 8Gb FC Adapter, 69Y1938 (x-config A1BM; e-config 1764 / A1BM): Y, Y, Y, Y, Y, Y, Y
- FC3052 2-port 8Gb FC Adapter, 95Y2375 (x-config A2N5; e-config EC25 / A2N5): Y, Y, Y, N, N, N, N
- FC5022 2-port 16Gb FC Adapter, 88Y6370 (x-config A1BP; e-config EC2B / A1BP): Y, Y, Y, N, N, N, N

InfiniBand adapters
- IB6132 2-port FDR InfiniBand Adapter, 90Y3454 (x-config A1QZ; e-config EC2C / A1QZ): Y, Y, Y, N, N, N, N
- IB6132 2-port QDR InfiniBand Adapter, no part number (x-config None; e-config 1761 / None): N, N, N, Y, Y, Y, Y

SAS
- ServeRAID M5115 SAS/SATA Controller (d), 90Y4390 (x-config A2XW; e-config None / A2XW): Y, Y, Y (b), N, N, N, N

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. For compatibility as listed here, ensure the x440 is running IMM2 firmware Build 40a or later.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.
d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. See the ServeRAID M5115 Product Guide, http://www.redbooks.ibm.com/abstracts/tips0884.html?Open
For Power Systems compute nodes, Table 2-2 shows which specific I/O expansion slots each of the supported adapters can be installed in. Yes in the table means the adapter is supported in that I/O expansion slot.

Tip: Table 2-2 applies to Power Systems compute nodes only.

Table 2-2   Slot locations supported by I/O expansion cards in Power Systems compute nodes
Columns: Slot 1, Slot 2, Slot 3 (p460), Slot 4 (p460).

10 Gb Ethernet
- EC24, IBM Flex System CN4058 8-port 10Gb Converged Adapter: Yes, Yes, Yes, Yes
- EC26, IBM Flex System EN4132 2-port 10Gb RoCE Adapter: No, Yes, Yes, Yes
- 1762, IBM Flex System EN4054 4-port 10Gb Ethernet Adapter: Yes, Yes, Yes, Yes

1 Gb Ethernet
- 1763, IBM Flex System EN2024 4-port 1Gb Ethernet Adapter: Yes, Yes, Yes, Yes

InfiniBand
- 1761, IBM Flex System IB6132 2-port QDR InfiniBand Adapter: No, Yes, No, Yes

Fibre Channel
- 1764, IBM Flex System FC3172 2-port 8Gb FC Adapter: No, Yes, No, Yes
2.2 Memory DIMM compatibility

This section covers memory DIMMs for both compute node families. It covers the following topics:
- 2.2.1, "x86 compute nodes" on page 20
- 2.2.2, "Power Systems compute nodes" on page 21

2.2.1 x86 compute nodes

Table 2-3 lists the memory DIMM options for the x86 compute nodes.

Table 2-3   Supported memory DIMMs - x86 compute nodes
Columns: x220, x240, x440. Feature codes are listed as x-config feature; e-config feature (a)(b).

Unbuffered DIMM (UDIMM) modules
- 49Y1403 (A0QS; EEM2 / A0QS), 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM: Yes, No, No
- 49Y1404 (8648; EEM3 / 8648), 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM: Yes, Yes, Yes

Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz
- 49Y1405 (8940; EM05 / None), 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM: No, Yes, No
- 49Y1406 (8941; EEM4 / 8941), 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM: Yes, Yes, Yes
- 49Y1407 (8942; EM09 / 8942), 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM: Yes, Yes, Yes
- 49Y1397 (8923; EM17 / 8923), 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM: Yes, Yes, Yes
- 49Y1563 (A1QT; EM33 / A1QT), 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM: Yes, Yes, Yes
- 49Y1400 (8939; EEM1 / 8939), 16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM: Yes, Yes, No
- 90Y3101 (A1CP; EEM7 / None), 32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM: No, No, No

Registered DIMMs (RDIMMs) - 1600 MHz
- 49Y1559 (A28Z; EEM5 / A28Z), 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM: Yes, Yes, Yes
- 90Y3178 (A24L; EEMC / A24L), 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM: Yes, Yes, No
- 90Y3109 (A292; EEM9 / A292), 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM: Yes, Yes, Yes
- 00D4968 (A2U5; EEMB / A2U5), 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM: Yes, Yes, Yes

Load-reduced DIMMs (LRDIMMs)
- 49Y1567 (A290; EEM6 / A290), 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM: No, Yes, Yes
- 90Y3105 (A291; EEM8 / A291), 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM: Yes, Yes, Yes

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. For memory DIMMs, the first feature code listed will result in two DIMMs each, whereas the second feature code listed contains only one DIMM each.

2.2.2 Power Systems compute nodes

Table 2-4 lists the supported memory DIMMs for Power Systems compute nodes.

Table 2-4   Supported memory DIMMs - Power Systems compute nodes
Columns: p24L, p260 22X, p260 23X, p460.

- 78P1011 (EM04), 2x 2 GB DDR3 RDIMM 1066 MHz: Yes, Yes, No, Yes
- 78P0501 (8196), 2x 4 GB DDR3 RDIMM 1066 MHz: Yes, Yes, Yes, Yes
- 78P0502 (8199), 2x 8 GB DDR3 RDIMM 1066 MHz: Yes, Yes, No, Yes
- 78P1917 (EEMD), 2x 8 GB DDR3 RDIMM 1066 MHz: Yes, Yes, Yes, Yes
- 78P0639 (8145), 2x 16 GB DDR3 RDIMM 1066 MHz: Yes, Yes, No, Yes
- 78P1915 (EEME), 2x 16 GB DDR3 RDIMM 1066 MHz: Yes, Yes, Yes, Yes
- 78P1539 (EEMF), 2x 32 GB DDR3 RDIMM 1066 MHz: Yes, Yes, Yes, Yes
2.3 Internal storage compatibility

This section covers supported internal storage for both compute node families. It covers the following topics:
- 2.3.1, "x86 compute nodes: 2.5-inch drives" on page 22
- 2.3.2, "x86 compute nodes: 1.8-inch drives" on page 23
- 2.3.3, "Power Systems compute nodes" on page 24

2.3.1 x86 compute nodes: 2.5-inch drives

Table 2-5 lists the 2.5-inch drives for x86 compute nodes.

Table 2-5   Supported 2.5-inch SAS and SATA drives
Columns: x220, x240, x440. Feature codes are listed as x-config feature; e-config feature (a).

10K SAS hard disk drives
- 90Y8877 (A2XC; None / A2XC), IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD: N, N, Y
- 42D0637 (5599; 3743 / 5599), IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD: Y, Y, N
- 44W2264 (5413; None / 5599), IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED: N, N, Y
- 90Y8872 (A2XD; None / A2XD), IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD: N, N, Y
- 49Y2003 (5433; 3766 / 5433), IBM 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD: Y, Y, N
- 81Y9650 (A282; EHD4 / A282), IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD: Y, Y, Y

15K SAS hard disk drives
- 90Y8926 (A2XB; None / A2XB), IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD: N, N, Y
- 42D0677 (5536; EHD1 / 5536), IBM 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD: Y, Y, N
- 81Y9670 (A283; EHD5 / A283), IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD: Y, Y, Y

NL SAS hard disk drives
- 81Y9690 (A1P3; EHD6 / A1P3), IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD: Y, Y, Y
- 90Y8953 (A2XE; None / A2XE), IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD: N, N, Y
- 42D0707 (5409; EHD2 / 5409), IBM 500GB 7200 6Gbps NL SAS 2.5" SFF HS HDD: Y, Y, N

NL SATA hard disk drives
- 81Y9730 (A1AV; EHD9 / A1AV), IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD: Y, Y, Y
- 81Y9722 (A1NX; EHD7 / A1NX), IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD: Y, Y, Y
- 81Y9726 (A1NZ; EHD8 / A1NZ), IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD: Y, Y, Y

Solid-state drives - Enterprise
- 00W1125 (A3HR; None / A3HR), IBM 100GB SATA 2.5" MLC HS Enterprise SSD: Y, Y, Y
- 43W7746 (5420; None / 5420), IBM 200GB SATA 1.8" MLC SSD: Y, Y, Y
- 43W7718 (A2FN; EHD3 / A2FN), IBM 200GB SATA 2.5" MLC HS SSD: Y, Y, Y
- 43W7726 (5428; None / 5428), IBM 50GB SATA 1.8" MLC SSD: Y, Y, Y

Solid-state drives - Enterprise value
- 49Y5839 (A3AS; None / A3AS), IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD: Y, Y, N
- 90Y8648 (A2U4; EHDD / A2U4), IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD: Y, Y, Y
- 90Y8643 (A2U3; EHDC / A2U3), IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD: Y, Y, Y
- 49Y5844 (A3AU; None / A3AU), IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD: Y, Y, N

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.

2.3.2 x86 compute nodes: 1.8-inch drives

The x86 compute nodes support 1.8-inch solid-state drives with the addition of the ServeRAID M5115 RAID controller plus the appropriate enablement kits. For details about configurations, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884.

Tip: The ServeRAID M5115 RAID controller is installed in I/O expansion slot 1, but it can be installed along with the Compute Node Fabric Connector (also known as the periscope connector) used to connect the onboard Ethernet controller to the chassis midplane.

Table 2-6 lists the supported enablement kits and Features on Demand activation upgrades available for use with the ServeRAID M5115.

Table 2-6   ServeRAID M5115 compatibility
Columns: x220, x240, x440. Feature codes (a).

- 90Y4390 (A2XW), ServeRAID M5115 SAS/SATA Controller for IBM Flex System: Yes, Yes, Yes

Hardware enablement kits - IBM Flex System x220 Compute Node
- 90Y4424 (A35L), ServeRAID M5100 Series Enablement Kit for IBM Flex System x220: Yes, No, No
- 90Y4425 (A35M), ServeRAID M5100 Series IBM Flex System Flash Kit for x220: Yes, No, No
- 90Y4426 (A35N), ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220: Yes, No, No

Hardware enablement kits - IBM Flex System x240 Compute Node
- 90Y4342 (A2XX), ServeRAID M5100 Series Enablement Kit for IBM Flex System x240: No, Yes, No
- 90Y4341 (A2XY), ServeRAID M5100 Series IBM Flex System Flash Kit for x240: No, Yes, No
- 90Y4391 (A2XZ), ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240: No, Yes (b), No

Hardware enablement kits - IBM Flex System x440 Compute Node
- 46C9030 (A3DS), ServeRAID M5100 Series Enablement Kit for IBM Flex System x440: No, No, Yes
- 46C9031 (A3DT), ServeRAID M5100 Series IBM Flex System Flash Kit for x440: No, No, Yes
- 46C9032 (A3DU), ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440: No, No, Yes

Features on Demand licenses (for all three compute nodes)
- 90Y4410 (A2Y1), ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System: Yes, Yes, Yes
- 90Y4412 (A2Y2), ServeRAID M5100 Series Performance Upgrade for IBM Flex System: Yes, Yes, Yes
- 90Y4447 (A36G), ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System: Yes, Yes, Yes

a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.
b. If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119) cannot also be installed. Both the x240 USB Enablement Kit and the SSD Expansion Kit include special air baffles that cannot be installed at the same time.

Table 2-7 lists the drives supported in conjunction with the ServeRAID M5115 RAID controller.

Table 2-7   Supported 1.8-inch solid-state drives
Columns: x220, x240, x440. Feature codes (a).

- 43W7746 (5420), IBM 200GB SATA 1.8" MLC SSD: Yes, Yes, Yes
- 43W7726 (5428), IBM 50GB SATA 1.8" MLC SSD: Yes, Yes, Yes
- 49Y5993 (A3AR), IBM 512GB SATA 1.8" MLC Enterprise Value SSD: No, No, No
- 49Y5834 (A3AQ), IBM 64GB SATA 1.8" MLC Enterprise Value SSD: No, No, No

a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.

2.3.3 Power Systems compute nodes

Local storage options for Power Systems compute nodes are shown in Table 2-8. None of the available drives are hot-swappable. The local drives (HDD or SSD) are mounted to the top cover of the system. If you use local drives, you must order the appropriate cover with connections for the drive type you want. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.

Table 2-8   Local storage options for Power Systems compute nodes
Columns: p24L, p260, p460.

2.5-inch SAS HDDs
- 8274, 300 GB 10K RPM non-hot-swap 6 Gbps SAS: Yes, Yes, Yes
- 8276, 600 GB 10K RPM non-hot-swap 6 Gbps SAS: Yes, Yes, Yes
- 8311, 900 GB 10K RPM non-hot-swap 6 Gbps SAS: Yes, Yes, Yes
- 7069, Top cover with HDD connectors for the p260 and p24L: Yes, Yes, No
- 7066, Top cover with HDD connectors for the p460: No, No, Yes

1.8-inch SSDs
- 8207, 177 GB SATA non-hot-swap SSD: Yes, Yes, Yes
- 7068, Top cover with SSD connectors for the p260 and p24L: Yes, Yes, No
- 7065, Top cover with SSD connectors for the p460: No, No, Yes

No drives
- 7067, Top cover for no drives on the p260 and p24L: Yes, Yes, No
- 7005, Top cover for no drives on the p460: No, No, Yes

2.4 Embedded virtualization

The x86 compute nodes support an IBM standard USB flash drive (USB Memory Key) option preinstalled with VMware ESXi or VMware vSphere. It is fully contained on the flash drive, without requiring any disk space. On the x240, the USB memory keys plug into the USB ports on the optional x240 USB Enablement Kit. On the x220 and x440, the USB memory keys plug directly into USB ports on the system board. Table 2-9 lists the ordering information for the VMware hypervisor options.

Table 2-9   IBM USB Memory Key for VMware hypervisors
Columns: x220, x240, x440. Feature codes are listed as x-config feature; e-config feature (a).

- 49Y8119 (A33M; None / None), x240 USB Enablement Kit: No, Yes (b), No
- 41Y8300 (A2VC; EBK3 / A2VC), IBM USB Memory Key for VMware ESXi 5.0: Yes, Yes, Yes
- 41Y8307 (A383; None / A383), IBM USB Memory Key for VMware ESXi 5.0 Update 1: Yes, Yes, Yes
- 41Y8298 (A2G0; None / A2G0), IBM Blank USB Memory Key for VMware ESXi Downloads: Yes, Yes, Yes

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. If the x240 USB Enablement Kit (49Y8119) is installed, the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) cannot also be installed. Both the x240 USB Enablement Kit and the SSD Expansion Kit include special air baffles that cannot be installed at the same time.

You can use the Blank USB Memory Key, 41Y8298, to use any available IBM-customized version of the VMware hypervisor. The VMware vSphere hypervisor with IBM customizations can be downloaded from the following website:
http://ibm.com/systems/x/os/vmware/esxi

Power Systems compute nodes do not support VMware ESXi installed on a USB Memory Key. Power Systems compute nodes support IBM PowerVM® as standard. These servers support virtual servers, also known as logical partitions (LPARs). The maximum number of virtual servers is 10 times the number of cores in the compute node (a worked example follows):
- p24L: up to 160 virtual servers (10 x 16 cores)
- p260: up to 160 virtual servers (10 x 16 cores)
- p460: up to 320 virtual servers (10 x 32 cores)
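Because the limit scales directly with core count, the maximum virtual server count for any Power Systems compute node follows from a one-line calculation. The short Python sketch below is illustrative only; the core counts shown are the configurations cited above.

# Illustrative sketch only: maximum virtual servers (LPARs) per Power Systems
# compute node, using the rule of 10 virtual servers per processor core.

VIRTUAL_SERVERS_PER_CORE = 10

def max_virtual_servers(cores: int) -> int:
    return cores * VIRTUAL_SERVERS_PER_CORE

# Core counts as cited in the text above.
for node, cores in [("p24L", 16), ("p260", 16), ("p460", 32)]:
    print(f"{node}: up to {max_virtual_servers(cores)} virtual servers "
          f"({VIRTUAL_SERVERS_PER_CORE} x {cores} cores)")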
2.5 Expansion node compatibility

This section describes the two expansion nodes and the components that are compatible with each:
- 2.5.1, "Compute nodes" on page 26
- 2.5.2, "Flex System I/O adapters - PCIe Expansion Node" on page 26
- 2.5.3, "PCIe I/O adapters - PCIe Expansion Node" on page 27
- 2.5.4, "Internal storage - Storage Expansion Node" on page 28
- 2.5.5, "RAID upgrades - Storage Expansion Node" on page 29

2.5.1 Compute nodes

Table 2-10 lists the expansion nodes and their compatibility with compute nodes.

Table 2-10   Expansion node compatibility matrix - compute nodes
Columns (supported servers, in order): x220, x240, x440, p24L, p260 22X, p260 23X, p460.

- 81Y8983 (A1BV / A1BV), IBM Flex System PCIe Expansion Node: Y (a), Y (a), N, N, N, N, N
- 68Y8588 (A3JF / A3JF), IBM Flex System Storage Expansion Node: Y (a), Y (a), N, N, N, N, N

a. The x220 and x240 both require that the second processor be installed.

2.5.2 Flex System I/O adapters - PCIe Expansion Node

The PCIe Expansion Node supports the adapters listed in Table 2-11.

Storage Expansion Node: The Storage Expansion Node does not include connectors for additional I/O adapters.

Table 2-11   I/O adapter compatibility matrix - expansion nodes
Column: supported in the PCIe Expansion Node. Feature codes are listed as x-config feature; e-config feature (a).

Ethernet adapters
- 49Y7900 (A1BR; 1763 / A1BR), EN2024 4-port 1Gb Ethernet Adapter: Yes
- 90Y3466 (A1QY; EC2D / A1QY), EN4132 2-port 10 Gb Ethernet Adapter: Yes (b)
- No part number (None; 1762 / None), EN4054 4-port 10Gb Ethernet Adapter: No
- 90Y3554 (A1R1; 1759 / A1R1), CN4054 10Gb Virtual Fabric Adapter: Yes (b)
- 90Y3558 (A1R0; 1760 / A1R0), CN4054 Virtual Fabric Adapter Upgrade (c): Yes
- No part number (None; EC24 / None), CN4058 8-port 10Gb Converged Adapter: No
- No part number (None; EC26 / None), EN4132 2-port 10Gb RoCE Adapter: No

Fibre Channel adapters
- 69Y1938 (A1BM; 1764 / A1BM), FC3172 2-port 8Gb FC Adapter: Yes
- 95Y2375 (A2N5; EC25 / A2N5), FC3052 2-port 8Gb FC Adapter: Yes
- 88Y6370 (A1BP; EC2B / A1BP), FC5022 2-port 16Gb FC Adapter: Yes

InfiniBand adapters
- 90Y3454 (A1QZ; EC2C / A1QZ), IB6132 2-port FDR InfiniBand Adapter: Yes
- No part number (None; 1761 / None), IB6132 2-port QDR InfiniBand Adapter: No

SAS
- 90Y4390 (A2XW; None / A2XW), ServeRAID M5115 SAS/SATA Controller: No

a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
b. Operates at PCIe 2.0 speeds when installed in the PCIe Expansion Node. For best performance, install the adapter directly on the compute node.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.

2.5.3 PCIe I/O adapters - PCIe Expansion Node

The PCIe Expansion Node supports up to four standard PCIe 2.0 adapters:
- Two PCIe 2.0 x16 slots that support full-length, full-height adapters (1x, 2x, 4x, 8x, and 16x adapters supported)
- Two PCIe 2.0 x8 slots that support low-profile adapters (1x, 2x, 4x, and 8x adapters supported)

Storage Expansion Node: The Storage Expansion Node does not include connectors for PCIe I/O adapters.

Table 2-12 lists the supported adapters. Some adapters must be installed in one of the full-height slots as noted. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used, however.

Table 2-12   Supported adapter cards
Column: maximum supported.

- 46C9078 (A3J3 / A3J3), IBM 365GB High IOPS MLC Mono Adapter (low-profile adapter): 4
- 46C9081 (A3J4 / A3J4), IBM 785GB High IOPS MLC Mono Adapter (low-profile adapter): 4
- 81Y4519 (5985 / 5985), 640GB High IOPS MLC Duo Adapter (full-height adapter): 2
- 81Y4527 (A1NB / A1NB), 1.28TB High IOPS MLC Duo Adapter (full-height adapter): 2
- 90Y4377 (A3DY / A3DY), IBM 1.2TB High IOPS MLC Mono Adapter (low-profile adapter): 4
- 90Y4397 (A3DZ / A3DZ), IBM 2.4TB High IOPS MLC Duo Adapter (full-height adapter): 2
- 94Y5960 (A1R4 / A1R4), NVIDIA Tesla M2090 (full-height adapter): 1 (a)

a. If the NVIDIA Tesla M2090 is installed in the Expansion Node, an adapter cannot be installed in the other full-height slot. The low-profile slots and Flex System I/O expansion slots can still be used.

Consult the IBM ServerProven® site for the current list of adapter cards that are supported in the Expansion Node:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/flexsystems.html

Note: Although the design of the Expansion Node allows for a much greater set of standard PCIe adapter cards, the preceding table lists the adapters that are specifically supported. If the PCI Express adapter that you require is not on the ServerProven web site, use the IBM ServerProven Opportunity Request for Evaluation (SPORE) process to confirm compatibility in the desired configuration.

2.5.4 Internal storage - Storage Expansion Node

The Storage Expansion Node adds 12 drive bays to the attached compute node. The expansion node supports 2.5-inch drives, either HDDs or SSDs.

PCIe Expansion Node: The PCIe Expansion Node does not support any HDDs or SSDs.

Table 2-13 shows the hard disk drives and solid-state drives supported within the Storage Expansion Node. Both SSDs and HDDs can be installed inside the unit at the same time, although as a best practice, logical drives should be created from disks of the same type; for example, for a RAID 1 pair, choose identical drive types (SSD or HDD).

Table 2-13   HDDs and SSDs supported in the Storage Expansion Node

NL SATA HDDs
- 81Y9722 (A1NX / A1NX), IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD
- 81Y9726 (A1NZ / A1NZ), IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD
- 81Y9730 (A1AV / A1AV), IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD

10K SAS HDDs
- 81Y9650 (A282 / A282), IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD
- 90Y8872 (A2XD / A2XD), IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD
- 90Y8877 (A2XC / A2XC), IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD

Solid-state drives (SSD)
- 90Y8643 (A2U3 / A2U3), IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD
2.5.5 RAID upgrades - Storage Expansion Node

The Storage Expansion Node supports the RAID upgrades listed in Table 2-14.

PCIe Expansion Node: The PCIe Expansion Node does not support any of these upgrades.

Table 2-14   FoD options available for the Storage Expansion Node

Hardware upgrades
- 81Y4559 (A1WY / A1WY), ServeRAID M5100 Series 1GB Flash/RAID 5 Upgrade for IBM System x
- 81Y4487 (A1J4 / A1J4), ServeRAID M5100 Series 512MB Flash/RAID 5 Upgrade for IBM System x

Features on Demand upgrades (license only)
- 90Y4410 (A2Y1 / A2Y1), ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System
- 90Y4447 (A36G / A36G), ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System
- 90Y4412 (A2Y2 / A2Y2), ServeRAID M5100 Series Performance Accelerator for IBM Flex System
Chapter 3. Software compatibility

This chapter describes aspects of software compatibility. Topics in this chapter are:
- 3.1, "Operating system support" on page 32
- 3.2, "IBM Fabric Manager" on page 34

Unless otherwise specified, updates or service packs equal to or higher within the same operating system release family and version are also supported. However, newer major versions are not supported unless specifically identified.

For customers interested in deploying operating systems not listed here, IBM can provide server hardware warranty support only. For operating system and software support, customers must contact the operating system vendor or community. Customers must obtain the operating system and OS software support directly from the operating system vendor or community. For more information, see "Additional OS Information" on the IBM ServerProven web page.
3.1 Operating system support

For the latest information, see IBM ServerProven at the following website:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

3.1.1 x86 compute nodes

Table 3-1 lists the operating systems supported by the x86 compute nodes.

Table 3-1   Operating system support - x86 compute nodes
Columns: x220, x240, x440.

- Microsoft Windows Server 2012: Yes, Yes, Yes
- Microsoft Windows Server 2008 R2: Yes (SP1), Yes (SP1), Yes (SP1)
- Microsoft Windows Server 2008 HPC Edition: Yes (SP1), Yes (SP1), No
- Microsoft Windows Server 2008, Datacenter x64 Edition: Yes (SP2), Yes (SP2), Yes (SP2)
- Microsoft Windows Server 2008, Enterprise x64 Edition: Yes (SP2), Yes (SP2), Yes (SP2)
- Microsoft Windows Server 2008, Standard x64 Edition: Yes (SP2), Yes (SP2), Yes (SP2)
- Microsoft Windows Server 2008, Web x64 Edition: Yes (SP2), Yes (SP2), Yes (SP2)
- Red Hat Enterprise Linux 6 Server x64 Edition: Yes (U2), Yes (U2), Yes (U3)
- Red Hat Enterprise Linux 5 Server with Xen x64 Edition: Yes (U7) (a)(b), Yes (U7) (b), Yes (U8) (b)
- Red Hat Enterprise Linux 5 Server x64 Edition: Yes (U7), Yes (U7), Yes (U8)
- SUSE Linux Enterprise Server 11 for AMD64/EM64T SP2: Yes (SP2), Yes (SP1), Yes (SP2)
- SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T SP2: Yes (SP2) (a)(b), Yes (SP1) (b), Yes (SP2) (b)
- SUSE Linux Enterprise Server 10 for AMD64/EM64T SP4: Yes (SP4), Yes (SP4), Yes (SP4)
- VMware ESXi 4.1: Yes (U2) (a), Yes (U2) (c), Yes (U2)
- VMware ESX 4.1: Yes (U2) (a), Yes (U2) (c), Yes (U2)
- VMware vSphere 5: Yes (a), Yes (c), Yes (U1)
- VMware vSphere 5.1: Yes (a), Yes (c), Yes

a. Xen and VMware hypervisors are not supported with ServeRAID C105 (software RAID), but are supported with the ServeRAID H1135 Controller (90Y4750) and the ServeRAID M5115 Controller (90Y4390).
b. Only pNIC mode is supported with Xen kernels. For support information, see RETAIN Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
c. The IMM2 Ethernet over USB interface must be disabled using the IMM2 web interface. For support information, see RETAIN Tip H205897 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090620.
3.1.2 Power Systems compute nodes

Table 3-2 lists the operating systems supported by the Power Systems compute nodes.

Table 3-2   Operating system support - Power Systems compute nodes
Columns: p24L, p260 22X, p260 23X, p460.

- IBM AIX® Version 7.1: No, Yes, Yes, Yes
- IBM AIX Version 6.1: No, Yes, Yes, Yes
- IBM i 7.1: No, Yes, Yes, Yes
- IBM i 6.1: No, Yes (a), Yes (a), Yes (a)
- IBM Virtual I/O Server (VIOS) 2.2.1.4: Yes, Yes, No, Yes
- IBM Virtual I/O Server (VIOS) 2.2.2.0: Yes, Yes, Yes, Yes
- Red Hat Enterprise Linux 5 for IBM POWER®: Yes (U7), Yes (U7), Yes (U9), Yes (U7)
- Red Hat Enterprise Linux 6 for IBM POWER: Yes (U2), Yes (U2), Yes (U3), Yes (U2)
- SUSE Linux Enterprise Server 11 for IBM POWER (b): Yes (SP2), Yes (SP2), Yes (SP2), Yes (SP2)

a. IBM i 6.1 is supported but cannot be ordered preinstalled from IBM Manufacturing.
b. With current maintenance updates available from SUSE to enable all planned functionality.

Specific technology levels, service pack, and APAR levels are as follows.

For the p260 (model 22X) and p460:
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1 TR4, or later
- VIOS 2.2.1.4, or later
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 3 with APAR IV14284
- AIX V7.1 with the 7100-01 Technology Level with Service Pack 4, or later
- AIX V7.1 with the 7100-00 Technology Level with Service Pack 6, or later
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 3 with APAR IV14283
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 4, or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 8, or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 6, or later. An IBM AIX 5L V5.3 Service Extension is also required.

For the p260 (model 23X):
- IBM i 6.1 with i 6.1.1 machine code, or later
- IBM i 7.1, or later
- VIOS 2.2.2.0, or later
- AIX V7.1 with the 7100-02 Technology Level, or later
- AIX V6.1 with the 6100-08 Technology Level, or later
- AIX V6.1 with the 6100-07 Technology Level with Service Pack 7 (1), or later
- AIX V6.1 with the 6100-06 Technology Level with Service Pack 11 (1), or later
- AIX V5.3 with the 5300-12 Technology Level with Service Pack 7, or later. An IBM AIX 5L V5.3 Service Extension is required.

1. Planned availability: March 29, 2013.
3.2 IBM Fabric Manager

IBM Fabric Manager is a solution that you can use to quickly replace and recover compute nodes in your environment. It accomplishes this task by assigning Ethernet MAC, Fibre Channel WWN, and SAS WWN addresses so that any compute nodes plugged into those bays take on the assigned addresses (a conceptual sketch of this idea appears at the end of this section). This configuration enables the Ethernet and Fibre Channel infrastructure to be configured once, before any compute nodes are connected to the chassis.

For information about IBM Fabric Manager, see the following website:
http://www.ibm.com/systems/flex/fabricmanager

The operating systems that IBM Fabric Manager supports are listed in the IBM Flex System Information Center at the following website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.iofm.doc/dw1li_supported_os.html

Table 3-3 lists the adapters that support IBM Fabric Manager and the compute nodes that they can be installed in.

Table 3-3   IBM Fabric Manager support - adapters
Columns: x220, x240, x440, p24L, p260, p460. Feature codes (a) are listed as x-config / e-config.

Ethernet expansion cards
- EN4054 4-port 10Gb Ethernet Adapter, no part number (None / 1762): N/A (b), N/A (b), N/A (b), Yes, Yes, Yes
- CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (A1R1 / 1759): Yes, Yes, Yes, N/A (b), N/A (b), N/A (b)
- EN2024 4-port 1Gb Ethernet Adapter, 49Y7900 (A1BR / 1763): Yes, Yes, Yes, Yes, Yes, Yes
- EN4132 2-port 10Gb Ethernet Adapter, 90Y3466 (A1QY / EC2D): Yes, Yes, Yes, N/A (b), N/A (b), N/A (b)
- CN4058 8-port 10Gb Converged Adapter, no part number (None / EC24): N/A (b), N/A (b), N/A (b), Yes, Yes, Yes
- EN4132 2-port 10Gb RoCE Adapter, no part number (None / EC26): N/A (b), N/A (b), N/A (b), Yes, Yes, Yes

Fibre Channel expansion cards
- FC3052 2-port 8Gb FC Adapter, 95Y2375 (A2N5 / EC25): Yes, Yes, Yes, N/A (b), N/A (b), N/A (b)
- FC3172 2-port 8Gb FC Adapter, 69Y1938 (A1BM / 1764): Yes, Yes, Yes, Yes, Yes, Yes
- FC5022 2-port 16Gb FC Adapter, 88Y6370 (A1BP / EC2B): Yes, Yes, Yes, N/A (b), N/A (b), N/A (b)

InfiniBand expansion cards
- IB6132 2-port QDR InfiniBand Adapter, no part number (None / 1761): N/A (b), N/A (b), N/A (b), No, No, No
- IB6132 2-port FDR InfiniBand Adapter, 90Y3454 (A1QZ / EC2C): No, No, No, N/A (b), N/A (b), N/A (b)

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. Not applicable. This combination of adapter and compute node is not supported.
Table 3-4 lists the supported switches.

Table 3-4   IBM Fabric Manager support - switches

Description                                                Part number  Feature codes  IBM Fabric Manager support
Flex System Fabric CN4093 10Gb Converged Scalable Switch   00D5823      A3HH / ESW2    No
Flex System Fabric EN4093R 10Gb Scalable Switch            95Y3309      A3J6 / ESW7    No
Flex System Fabric EN4093 10Gb Scalable Switch             49Y4270      A0TB / 3593    Yes - VLAN failover (a)
Flex System EN2092 1Gb Ethernet Switch                     49Y4294      A0TF / 3598    Yes - VLAN failover (a)
Flex System EN4091 10Gb Ethernet Pass-thru                 88Y6043      A1QV / 3700    Yes (b)
Flex System FC5022 16Gb SAN Scalable Switch                88Y6374      A1EH / 3770    Yes (b)
Flex System FC5022 16Gb 24-port SAN Scalable Switch        00Y3324      A3DP / ESW5    Yes (b)
Flex System FC5022 16Gb ESB Switch                         90Y9356      A2RQ / 3771    Yes (b)
Flex System FC3171 8Gb SAN Switch                          69Y1930      A0TD / 3595    Yes (b)
Flex System FC3171 8Gb SAN Pass-thru                       69Y1934      A0TJ / 3591    Yes (b)
Flex System IB6131 InfiniBand Switch                       90Y3450      A1EK / 3699    No

a. VLAN failover (port-based or untagged only) is supported.
b. IBM Fabric Manager is transparent to pass-thru and Fibre Channel switch modules. There is no dependency between IBM Fabric Manager and these modules.

IBM Fabric Manager V3.0 is supported on the following operating systems (see 3.1, “Operating system support” on page 32 for the operating systems supported by each compute node):
– Microsoft Windows 7 (client only)
– Microsoft Windows Server 2003
– Microsoft Windows Server 2003 R2
– Microsoft Windows Server 2008
– Microsoft Windows Server 2008 R2
– Red Hat Enterprise Linux 5
– Red Hat Enterprise Linux 6
– SUSE Linux Enterprise Server 10
– SUSE Linux Enterprise Server 11

IBM Fabric Manager V3.0 is supported on the following web browsers:
– Internet Explorer 8
– Internet Explorer 9
– Firefox 14

IBM Fabric Manager V3.0 is supported on Java Runtime Environment (JRE) 1.6.
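The bay-based address assignment described at the start of this section can be pictured as a simple mapping from chassis bays to pre-provisioned virtual addresses. The following Python sketch is purely illustrative: the data structure, names, and address values are hypothetical examples and do not represent IBM Fabric Manager's actual configuration format or programming interface.

# Illustrative sketch only: models the idea of pre-assigning virtual
# Ethernet MAC and Fibre Channel WWPN addresses to chassis bays so that
# whichever compute node occupies a bay inherits those addresses.
# All names and address values below are hypothetical.

from dataclasses import dataclass

@dataclass
class BayProfile:
    chassis: str        # chassis identifier
    bay: int            # node bay number (1-14 in an Enterprise Chassis)
    virtual_mac: str    # pre-provisioned Ethernet MAC address
    virtual_wwpn: str   # pre-provisioned Fibre Channel WWPN

# Addresses are assigned to bays once, before any node is installed.
profiles = {
    ("chassis-1", 1): BayProfile("chassis-1", 1, "00:1A:64:00:00:01", "21:00:00:1B:32:00:00:01"),
    ("chassis-1", 2): BayProfile("chassis-1", 2, "00:1A:64:00:00:02", "21:00:00:1B:32:00:00:02"),
}

def addresses_for(chassis: str, bay: int) -> BayProfile:
    """Return the profile for a bay. Any node inserted into this bay takes
    on these addresses, so SAN zoning and upstream switch configuration do
    not change when the node hardware is replaced."""
    return profiles[(chassis, bay)]

if __name__ == "__main__":
    p = addresses_for("chassis-1", 1)
    print(f"Bay {p.bay}: MAC={p.virtual_mac}, WWPN={p.virtual_wwpn}")

The point of the sketch is the design choice: because identities belong to bays rather than to node hardware, a failed node can be swapped without touching LAN or SAN configuration.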
Chapter 4. Storage interoperability

This chapter describes storage subsystem compatibility. Topics in this chapter are:
– 4.1, “Unified NAS storage” on page 38
– 4.2, “FCoE support” on page 39
– 4.3, “iSCSI support” on page 40
– 4.4, “NPIV support” on page 41
– 4.5, “Fibre Channel support” on page 41

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) found at the following website:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

The tables in this chapter and in SSIC are used primarily to document Fibre Channel SAN and FCoE-attached block storage interoperability, and iSCSI storage when hardware iSCSI initiator host adapters are used.
4.1 Unified NAS storage

NFS, CIFS, and iSCSI protocols on storage products such as IBM N series, IBM Storwize® V7000 Unified, and SONAS are supported with IBM Flex System, subject to each product's requirements, including operating system levels. See the interoperability documentation provided for those products for specific support:

– N series interoperability:
  http://ibm.com/support/docview.wss?uid=ssg1S7003897
– IBM Storwize V7000 Unified:
  http://ibm.com/support/docview.wss?uid=ssg1S1003911
– IBM Storwize V7000:
  SVC 6.4: http://ibm.com/support/docview.wss?uid=ssg1S1004113
  SVC 6.3: http://ibm.com/support/docview.wss?uid=ssg1S1003908
– SONAS:
  http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fovr_nfssupportmatrix.html

Software iSCSI: In general, iSCSI is supported with all types of storage as long as software iSCSI initiators are used on servers running supported operating system and device driver levels.
4.2 FCoE support

This section lists FCoE support. Table 4-1 lists FCoE support using Fibre Channel targets. Table 4-2 on page 40 lists FCoE support using native FCoE targets (that is, end-to-end FCoE).

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

Table 4-1   FCoE support using FC targets

Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558
Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
– Flex System I/O module: EN4091 10Gb Ethernet Pass-thru (vNIC2 and pNIC)
  FC Forwarder (FCF): Cisco Nexus 5010, Cisco Nexus 5020
  Supported SAN fabric: Cisco MDS 9124, Cisco MDS 9148, Cisco MDS 9513
  Storage targets: DS8000®, SVC, IBM Storwize V7000, V7000 Storage Node (FC), TS3200, TS3310, TS3500
– Flex System I/O modules: EN4093 10Gb Switch, EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
  FC Forwarder (FCF) / SAN fabric: Brocade VDX 6730 with IBM B-type, or Cisco Nexus 5548 or 5596 with Cisco MDS
  Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV®
– Flex System I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC)
  Supported SAN fabric: IBM B-type, Cisco MDS
  Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV

Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24
Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
– Flex System I/O modules: EN4093 10Gb Switch, EN4093R 10Gb Switch (pNIC only)
  FC Forwarder (FCF) / SAN fabric: Brocade VDX 6730 with IBM B-type, or Cisco Nexus 5548 or 5596 with Cisco MDS
  Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
– Flex System I/O module: CN4093 10Gb Converged Switch (pNIC only)
  Supported SAN fabric: IBM B-type, Cisco MDS
  Storage targets: DS8000, SVC, Storwize V7000, V7000 Storage Node (FC), IBM XIV
Table 4-2   FCoE support using FCoE targets (end-to-end FCoE)

Ethernet adapters: 10Gb onboard LOM (x240) + FCoE upgrade, 90Y9310; 10Gb onboard LOM (x440) + FCoE upgrade, 90Y9310; CN4054 10Gb Adapter, 90Y3554 + FCoE upgrade, 90Y3558
Flex System I/O module: CN4093 10Gb Converged Switch (vNIC1, vNIC2, and pNIC)
Operating systems: Windows Server 2008 R2, SLES 10, SLES 11, RHEL 5, RHEL 6, ESX 4.1, vSphere 5.0
Storage targets: V7000 Storage Node (FCoE)

Ethernet adapter: CN4058 8-port 10Gb Converged Adapter, EC24
Flex System I/O module: CN4093 10Gb Converged Switch (pNIC only)
Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11.2, RHEL 6.3
Storage targets: V7000 Storage Node (FCoE)

4.3 iSCSI support

Table 4-3 lists iSCSI support using a hardware-based iSCSI initiator. The IBM System Storage Interoperation Center normally lists support only for iSCSI storage attached through hardware iSCSI offload adapters in the servers. Flex System compute nodes also support any type of iSCSI storage (1Gb or 10Gb) as long as software iSCSI initiators are used and the storage system's requirements for operating system and device driver levels are met. (A minimal software-initiator attach example follows Table 4-3.)

Tip: Use these tables only as a starting point. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

Table 4-3   Hardware-based iSCSI support

Ethernet adapters: 10Gb onboard LOM (x240) (a); 10Gb onboard LOM (x440) (a); CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (b)
Flex System I/O modules: EN4093 10Gb Switch, EN4093R 10Gb Switch (vNIC1, vNIC2, UFP, and pNIC)
Operating systems: Windows Server 2008 R2, SLES 10 and 11, RHEL 5 and 6, ESX 4.1, vSphere 5.0
Storage targets: SVC, Storwize V7000, V7000 Storage Node (iSCSI), IBM XIV

a. iSCSI/FCoE upgrade is required: IBM Virtual Fabric Advanced Software Upgrade (LOM), 90Y9310.
b. iSCSI/FCoE upgrade is required: IBM Flex System CN4054 Virtual Fabric Adapter Upgrade, 90Y3558.
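As a practical illustration of the software iSCSI path mentioned above, the following Python sketch wraps the standard open-iscsi command-line tool (iscsiadm) on a Linux compute node. It assumes a Linux host with the open-iscsi package installed and IP connectivity to the storage system; the portal address and target IQN are placeholders, and this is a generic Linux procedure rather than an IBM-specific one.

# Minimal sketch of a software iSCSI attach on a Linux compute node using
# the open-iscsi tools. Assumes open-iscsi is installed and the node can
# reach the storage portal. Portal address and target IQN are placeholders.

import subprocess

PORTAL = "192.0.2.10"                        # storage system iSCSI portal (placeholder)
TARGET = "iqn.1986-03.com.ibm:2145.example"  # target IQN (placeholder)

def run(cmd):
    """Echo and run a command, raising an exception on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets presented by the portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to the discovered target; the LUNs then appear as block devices.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

Whichever initiator is used, the operating system and device driver levels must still satisfy the storage system's published requirements.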
4.4 NPIV support

NPIV is supported on all Fibre Channel and FCoE adapters that are supported in the compute nodes. See Table 2-1 on page 18 for the list of supported adapters.

IBM i support: IBM i 6.1 and i 7.1 NPIV attachment for SAN volumes requires 520-byte sectors on those volumes. At this time, only the DS8000 series, DS5100, and DS5300 storage systems have this capability.

4.5 Fibre Channel support

This section describes Fibre Channel support for the IBM Flex System compute nodes. The following topics are covered:
– 4.5.1, “x86 compute nodes”
– 4.5.2, “Power Systems compute nodes” on page 42

Tip: Use these tables only as a starting point. Not all combinations may be supported. Configuration support must be verified through the IBM System Storage Interoperation Center (SSIC) web site:
http://ibm.com/systems/support/storage/ssic/interoperability.wss

4.5.1 x86 compute nodes

Table 4-4 lists Fibre Channel storage support for x86 compute nodes.

Table 4-4   Fibre Channel support: x86 compute nodes

FC adapters: FC3172 2-port 8Gb FC Adapter, 69Y1938; FC3052 2-port 8Gb FC Adapter, 95Y2375
Flex System I/O modules: FC3171 8Gb switch, 69Y1930; FC3171 8Gb Pass-thru, 69Y1934; FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324; FC5022 16Gb 24-port ESB, 90Y9356
External SAN fabric: Cisco MDS; IBM b-type (Brocade)
Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1, vSphere 5.0, vSphere 5.1
FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape

FC adapter: FC5022 2-port 16Gb FC Adapter, 88Y6370
Flex System I/O modules: FC5022 16Gb 12-port, 88Y6374; FC5022 16Gb 24-port, 00Y3324; FC5022 16Gb 24-port ESB, 90Y9356
External SAN fabric: IBM b-type (Brocade)
Operating systems: Microsoft Windows Server 2008, RHEL 5, RHEL 6, SLES 10, SLES 11, ESX 4.1, vSphere 5.0, vSphere 5.1
FC storage targets: V7000 Storage Node (FC), DS3000, DS5000, DS8000, SVC, V7000, V3500, V3700, XIV, Tape
4.5.2 Power Systems compute nodes

Table 4-5 lists Fibre Channel storage support for Power Systems compute nodes.

Table 4-5   Fibre Channel support: Power Systems compute nodes

FC expansion card: FC3172 2-port 8Gb FC Adapter, 1764
Flex System I/O modules: FC3171 8Gb switch, 3595; FC3171 8Gb Pass-thru, 3591; FC5022 16Gb 12-port, 3770; FC5022 16Gb 24-port, ESW5; FC5022 16Gb 24-port ESB, 3771
External SAN fabric: IBM b-type (Brocade); Cisco MDS
Operating systems: AIX 6.1, AIX 7.1, VIOS 2.2, SLES 11, RHEL 5, RHEL 6
FC storage targets: V7000 Storage Node (FC), DS8000, SVC, V7000, V3500, V3700, XIV, Tape
Abbreviations and acronyms

APAR      Authorized Problem Analysis Reports
DAC       dual address cycle
DIMM      dual inline memory module
ECC       error checking and correcting
ESB       Enterprise Switch Bundle
FC        Fibre Channel
FDR       fourteen data rate
GB        gigabyte
HDD       hard disk drive
HH        half-high
HPC       high performance computing
HS        hot swap
I/O       input/output
IB        InfiniBand
IBM       International Business Machines
IT        information technology
ITSO      International Technical Support Organization
LOM       LAN on motherboard
LP        low profile
LR        long range
LRDIMM    load-reduced DIMM
MAC       media access control
MDS       Multilayer Director Switch
MLC       multilevel cell
MTP       Multi-fiber Termination Push-on
N/A       not applicable
NL        nearline
NPIV      N_Port ID Virtualization
OS        operating system
QDR       quad data rate
QSFP      Quad Small Form-factor Pluggable
RAID      redundant array of independent disks
RDIMM     registered DIMM
RETAIN    Remote Electronic Technical Assistance Information Network
RHEL      Red Hat Enterprise Linux
RPM       revolutions per minute
RSS       receive-side scaling
SAN       storage area network
SAS       Serial Attached SCSI
SATA      Serial ATA
SDD       Subsystem Device Driver
SED       self-encrypting drive
SFF       Small Form Factor
SFP       small form-factor pluggable
SLES      SUSE Linux Enterprise Server
SR        short range
SSD       solid-state drive
SSIC      System Storage Interoperation Center
SVC       SAN Volume Controller
TOR       top of rack
UDIMM     unbuffered DIMM
USB       universal serial bus
VIOS      Virtual I/O Server
WWN       worldwide name
Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this paper.

IBM Redbooks

The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only. You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks

– IBM PureFlex System and IBM Flex System Products and Technology, SG24-7984
– IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
– IBM Flex System Networking in an Enterprise Data Center, REDP-4834
– Overview of IBM PureSystems, TIPS0892

IBM Redbooks Product Guides are also available for the following IBM Flex System components:
– Chassis and compute nodes
– Switches and pass-through modules
– Adapter cards

You can find these publications at the following web page:
http://www.redbooks.ibm.com/portals/puresystems?Open&page=pgbycat

Other publications and online resources

These publications and websites are also relevant as further information sources:
– Configuration and Option Guide:
  http://www.ibm.com/systems/xbc/cog/
– IBM Flex System Enterprise Chassis Power Guide:
  http://ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111
– IBM Flex System Information Center:
  http://publib.boulder.ibm.com/infocenter/flexsys/information
– IBM System Storage Interoperation Center:
  http://www.ibm.com/systems/support/storage/ssic
– ServerProven hardware compatibility page for IBM Flex System:
  http://www.ibm.com/systems/info/x86servers/serverproven/compat/us/
– xREF: IBM x86 Server Reference:
  http://www.redbooks.ibm.com/xref
– IBM System x and Cluster Solutions configurator (x-config):
  https://ibm.com/products/hardware/configurator/americas/bhui/asit/install.html
– IBM Configurator for e-business (e-config):
  http://ibm.com/services/econfig/

Help from IBM

IBM Support and downloads:
ibm.com/support

IBM Global Services:
ibm.com/services
Back cover

IBM Flex System Interoperability Guide

To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM PureFlex System combines no-compromise system designs along with built-in expertise and integrates them into complete and optimized solutions. At the heart of PureFlex System is the IBM Flex System Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy, and scales to meet your needs in the future.

This IBM Redpaper publication is a reference to compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions.

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

REDP-FSIG-00