Front cover

IBM Flex System Interoperability Guide

Quick reference for IBM Flex System interoperability

Covers internal components and external connectivity

Latest updates as of 30 January 2013

David Watts
Ilya Krutov

ibm.com/redbooks

Redpaper
International Technical Support Organization

IBM Flex System Interoperability Guide

30 January 2013

REDP-FSIG-00
Note: Before using this information and the product it supports, read the information in "Notices" on page v.

This edition applies to:
- IBM PureFlex System
- IBM Flex System Enterprise Chassis
- IBM Flex System Manager
- IBM Flex System x220 Compute Node
- IBM Flex System x240 Compute Node
- IBM Flex System x440 Compute Node
- IBM Flex System p260 Compute Node
- IBM Flex System p24L Compute Node
- IBM Flex System p460 Compute Node
- IBM 42U 1100 mm Enterprise V2 Dynamic Rack

© Copyright International Business Machines Corporation 2012, 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents

Notices  v
Trademarks  vi

Preface  vii
The team who wrote this paper  vii
Now you can become a published author, too!  viii
Comments welcome  viii
Stay connected to IBM Redbooks  viii

Summary of changes  ix
30 January 2013  ix
8 December 2012  ix
29 November 2012  ix
13 November 2012  ix
2 October 2012  x

Chapter 1. Chassis interoperability  1
1.1 Chassis to compute node  2
1.2 Switch to adapter interoperability  3
1.2.1 Ethernet switches and adapters  3
1.2.2 Fibre Channel switches and adapters  4
1.2.3 InfiniBand switches and adapters  4
1.3 Switch to transceiver interoperability  5
1.3.1 Ethernet switches  5
1.3.2 Fibre Channel switches  7
1.3.3 InfiniBand switches  8
1.4 Switch upgrades  9
1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch  9
1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch  10
1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch  11
1.4.4 IBM Flex System IB6131 InfiniBand Switch  11
1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch  12
1.5 vNIC and UFP support  13
1.6 Chassis power supplies  14
1.7 Rack to chassis  16

Chapter 2. Compute node component compatibility  17
2.1 Compute node-to-card interoperability  18
2.2 Memory DIMM compatibility  20
2.2.1 x86 compute nodes  20
2.2.2 Power Systems compute nodes  21
2.3 Internal storage compatibility  22
2.3.1 x86 compute nodes: 2.5-inch drives  22
2.3.2 x86 compute nodes: 1.8-inch drives  23
2.3.3 Power Systems compute nodes  24
2.4 Embedded virtualization  25
2.5 Expansion node compatibility  26
2.5.1 Compute nodes  26
2.5.2 Flex System I/O adapters - PCIe Expansion Node  26
2.5.3 PCIe I/O adapters - PCIe Expansion Node  27
2.5.4 Internal storage - Storage Expansion Node  28
2.5.5 RAID upgrades - Storage Expansion Node  29

Chapter 3. Software compatibility  31
3.1 Operating system support  32
3.1.1 x86 compute nodes  32
3.1.2 Power Systems compute nodes  33
3.2 IBM Fabric Manager  34

Chapter 4. Storage interoperability  37
4.1 Unified NAS storage  38
4.2 FCoE support  39
4.3 iSCSI support  40
4.4 NPIV support  41
4.5 Fibre Channel support  41
4.5.1 x86 compute nodes  41
4.5.2 Power Systems compute nodes  42

Abbreviations and acronyms  43

Related publications  45
IBM Redbooks  45
Other publications and online resources  45
Help from IBM  46
Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright IBM Corp. 2012, 2013. All rights reserved.
Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX®, BladeCenter®, DS8000®, IBM®, IBM Flex System™, IBM Flex System Manager™, Netfinity®, Power Systems™, POWER7+™, POWER7®, PowerVM®, POWER®, PureFlex™, RackSwitch™, Redbooks®, Redbooks (logo)®, Redpaper™, RETAIN®, ServerProven®, Storwize®, System Storage®, System x®, XIV®

The following terms are trademarks of other companies:

Intel, the Intel logo, the Intel Inside logo, and the Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.
Preface

To meet today's complex and ever-changing business demands, you need a solid foundation of compute, storage, networking, and software resources. This system must be simple to deploy, and be able to quickly and automatically adapt to changing conditions. You also need to be able to take advantage of broad expertise and proven guidelines in systems management, applications, hardware maintenance, and more.

The IBM® PureFlex™ System combines no-compromise system designs along with built-in expertise and integrates them into complete and optimized solutions. At the heart of PureFlex System is the IBM Flex System™ Enterprise Chassis. This fully integrated infrastructure platform supports a mix of compute, storage, and networking resources to meet the demands of your applications.

The solution is easily scalable with the addition of another chassis with the required nodes. With the IBM Flex System Manager™, multiple chassis can be monitored from a single panel. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is simple to deploy, and scales to meet your needs in the future.

This IBM Redpaper™ publication is a reference to compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions. The latest version of this document can be downloaded from:

http://www.redbooks.ibm.com/fsig

The team who wrote this paper

This paper was produced by a team of specialists from around the world working at the International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the ITSO Center in Raleigh. He manages residencies and produces IBM Redbooks® publications for hardware and software topics that are related to IBM System x® and IBM BladeCenter® servers and associated client platforms. He has authored over 300 books, papers, and web documents. David has worked for IBM both in the US and Australia since 1989. He is an IBM Certified IT Specialist and a member of the IT Specialist Certification Review Board. David holds a Bachelor of Engineering degree from the University of Queensland (Australia).

Ilya Krutov is a Project Leader at the ITSO Center in Raleigh and has been with IBM since 1998. Before joining the ITSO, Ilya served in IBM as a Run Rate Team Leader, Portfolio Manager, Brand Manager, Technical Sales Specialist, and Certified Instructor. Ilya has expertise in IBM System x and BladeCenter products, server operating systems, and networking solutions. He has a Bachelor's degree in Computer Engineering from the Moscow Engineering and Physics Institute.

Special thanks to Ashish Jain, the former author of this document.
Now you can become a published author, too!

Here's an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks publications in one of the following ways:

- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an email to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks

- Find us on Facebook: http://www.facebook.com/IBMRedbooks
- Follow us on Twitter: http://twitter.com/ibmredbooks
- Look for us on LinkedIn: http://www.linkedin.com/groups?home=&gid=2130806
- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter: https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
- Stay current on recent Redbooks publications with RSS Feeds: http://www.redbooks.ibm.com/rss.html
Summary of changes

This section describes the technical changes made in this edition of the paper and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

30 January 2013

New information
- More specifics about configuration support for chassis power supplies, Table 1-17 on page 15.
- Windows Server 2012 support, Table 3-1 on page 32.
- Red Hat Enterprise Linux 5 support for the p260 model 23X, Table 3-2 on page 33.

Changed information
- The x440 restriction regarding the use of the ServeRAID M5115 is now removed with the release of IMM2 firmware build 40a.
- Updated the Fibre Channel support section, 4.5, "Fibre Channel support" on page 41.

8 December 2012

New information
- Added Table 2-2 on page 19 indicating which slots I/O adapters are supported in with Power Systems compute nodes.
- The x440 now supports UDIMMs, Table 2-3 on page 20.

29 November 2012

Changed information
- Clarified that the use of expansion nodes requires that the second processor be installed in the compute node, Table 2-10 on page 26.
- Corrected the NPIV information, 4.4, "NPIV support" on page 41.
- Clarified NAS support, 4.1, "Unified NAS storage" on page 38.

13 November 2012

This revision reflects the addition, deletion, or modification of new and changed information described below.
New information
- Added information about these new products:
  – IBM Flex System p260 Compute Node, 7895-23X
  – IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
  – IBM Flex System Fabric EN4093R 10Gb Scalable Switch
  – IBM Flex System CN4058 8-port 10Gb Converged Adapter
  – IBM Flex System EN4132 2-port 10Gb RoCE Adapter
  – IBM Flex System Storage® Expansion Node
  – IBM Flex System PCIe Expansion Node
  – IBM PureFlex System 42U Rack
  – IBM Flex System V7000 Storage Node
- The x220 now supports 32 GB LRDIMM, Table 2-3 on page 20.
- The Power Systems™ compute nodes support new DIMMs, Table 2-4 on page 21.
- New 2100W power supply option for the Enterprise Chassis, 1.6, "Chassis power supplies" on page 14.
- New section covering Features on Demand upgrades for scalable switches, 1.4, "Switch upgrades" on page 9.

Changed information
- Moved the FCoE and NPIV tables to Chapter 4, "Storage interoperability" on page 37.
- Added machine types and models (MTMs) for the x220 and x440 when ordered via AAS (e-config), Table 1-1 on page 2.
- Added a footnote regarding power management and the use of 14 Power Systems compute nodes with 32 GB DIMMs, Table 1-1 on page 2.
- Added AAS (e-config) feature codes to various tables of x86 compute node options. Note that AAS feature codes for the x220 and x440 are the same as those used in the HVEC system (x-config). However, the AAS feature codes for the x240 are different than the equivalent HVEC feature codes. This is noted in the table.
- Updated the FCoE table, 4.2, "FCoE support" on page 39.
- Updated the vNIC table, Table 1-14 on page 13.
- Clarified that the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) and x240 USB Enablement Kit (49Y8119) cannot be installed at the same time, Table 2-6 on page 23.
- Updated the table of supported 2.5-inch drives, Table 2-5 on page 22.
- Updated the operating system table, Table 3-1 on page 32.

2 October 2012

This revision reflects the addition, deletion, or modification of new and changed information described below.

New information
- Temporary restrictions on the use of network and storage adapters with the x440, page 18.

Changed information
- Updated the x86 memory table, Table 2-3 on page 20.
- Updated the FCoE table, 4.2, "FCoE support" on page 39.
- Updated the operating system table, Table 3-1 on page 32.
- Clarified the support of the Pass-thru module and Fibre Channel switches with IBM Fabric Manager, Table 3-4 on page 35.
Chapter 1. Chassis interoperability

The IBM Flex System Enterprise Chassis is a 10U next-generation server platform with integrated chassis management. It is a compact, high-density, high-performance, rack-mount, and scalable server platform system. It supports up to 14 one-bay compute nodes that share common resources, such as power, cooling, management, and I/O resources, within a single Enterprise Chassis. It can also support up to seven 2-bay compute nodes or three 4-bay compute nodes when the shelves are removed. You can mix and match 1-bay, 2-bay, and 4-bay compute nodes to meet your specific hardware needs, as the bay-accounting sketch after this list illustrates.

Topics in this chapter are:
- 1.1, "Chassis to compute node" on page 2
- 1.2, "Switch to adapter interoperability" on page 3
- 1.3, "Switch to transceiver interoperability" on page 5
- 1.4, "Switch upgrades" on page 9
- 1.5, "vNIC and UFP support" on page 13
- 1.6, "Chassis power supplies" on page 14
- 1.7, "Rack to chassis" on page 16
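The bay arithmetic above is a simple additive constraint: a mix of node form factors fits when the bays they occupy total at most the 14 standard bays. The following minimal sketch (illustrative only; the function name and the example mix are not from the guide) encodes that check:

```python
# Bay accounting for the Enterprise Chassis, as described above:
# 14 standard bays; a 2-bay node occupies two of them, a 4-bay node four.
BAYS_PER_CHASSIS = 14
BAY_WIDTHS = {"1-bay": 1, "2-bay": 2, "4-bay": 4}

def fits_in_chassis(nodes):
    """Return True if the given mix of node form factors fits one chassis.

    nodes maps a form factor ("1-bay", "2-bay", "4-bay") to a count.
    """
    used = sum(BAY_WIDTHS[form] * count for form, count in nodes.items())
    return used <= BAYS_PER_CHASSIS

# Example: four 1-bay nodes, three 2-bay nodes, one 4-bay node -> 4+6+4 = 14 bays.
print(fits_in_chassis({"1-bay": 4, "2-bay": 3, "4-bay": 1}))  # True
```

Bay count is a necessary condition only; power supply type and power policy can reduce the supported node count further, as 1.6, "Chassis power supplies" on page 14 describes.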
1.1 Chassis to compute node

Table 1-1 lists the maximum number of compute nodes that can be installed in the chassis.

Table 1-1 Maximum number of compute nodes installed in the chassis

| Compute node | Machine type (x-config) | Machine type (e-config) | Max per 8721-A1x chassis (x-config) | Max per 7893-92X chassis (e-config) |
|---|---|---|---|---|
| x86 compute nodes | | | | |
| IBM Flex System x220 Compute Node | 7906 | 7906-25X | 14 | 14 |
| IBM Flex System x240 Compute Node | 8737 | 7863-10X | 14 | 14 |
| IBM Flex System x440 Compute Node | 7917 | 7917-45X | 7 | 7 |
| IBM Power Systems compute nodes | | | | |
| IBM Flex System p24L Compute Node | None | 1457-7FL | 14 (a) | 14 (a) |
| IBM Flex System p260 Compute Node (POWER7®) | None | 7895-22X | 14 (a) | 14 (a) |
| IBM Flex System p260 Compute Node (POWER7+™) | None | 7895-23X | 14 (a) | 14 (a) |
| IBM Flex System p460 Compute Node | None | 7895-42X | 7 (a) | 7 (a) |
| Management node | | | | |
| IBM Flex System Manager | 8731-A1x | 7955-01M | 1 (b) | 1 (b) |

a. For Power Systems compute nodes: if the chassis is configured with the power management policy "AC Power Source Redundancy with Compute Node Throttling Allowed", some maximum chassis configurations containing Power Systems compute nodes with large populations of 32 GB DIMMs may result in the chassis having insufficient power to power on all 14 compute node bays. In such circumstances, only 13 of the 14 bays can be powered on.
b. One Flex System Manager management node can manage up to four chassis.
1.2 Switch to adapter interoperability

In this section, we describe switch to adapter interoperability.

1.2.1 Ethernet switches and adapters

Table 1-2 lists Ethernet switch to card compatibility; a small lookup sketch follows the table.

Switch upgrades: To maximize the usable port count on the adapters, the switches may need additional license upgrades. See 1.4, "Switch upgrades" on page 9 for details.

Table 1-2 Ethernet switch to card compatibility

| Adapter (part number, feature codes (a)) | CN4093 10Gb Switch (00D5823, A3HH / ESW2) | EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7) | EN4093 10Gb Switch (49Y4270, A0TB / 3593) | EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700) | EN2092 1Gb Switch (49Y4294, A0TF / 3598) |
|---|---|---|---|---|---|
| x220 Embedded 1 Gb (None, None) | Yes (b) | Yes | Yes | No | Yes |
| x240 Embedded 10 Gb (None, None) | Yes | Yes | Yes | Yes | Yes |
| x440 Embedded 10 Gb (None, None) | Yes | Yes | Yes | Yes | Yes |
| EN2024 4-port 1Gb Ethernet Adapter (49Y7900, A1BR / 1763) | Yes | Yes | Yes | Yes (c) | Yes |
| EN4132 2-port 10 Gb Ethernet Adapter (90Y3466, A1QY / EC2D) | No | Yes | Yes | Yes | No |
| EN4054 4-port 10Gb Ethernet Adapter (None, None / 1762) | Yes | Yes | Yes | Yes (c) | Yes |
| CN4054 10Gb Virtual Fabric Adapter (90Y3554, A1R1 / 1759) | Yes | Yes | Yes | Yes (c) | Yes |
| CN4058 8-port 10Gb Converged Adapter (None, None / EC24) | Yes (d) | Yes (d) | Yes (d) | Yes (c) | Yes (e) |
| EN4132 2-port 10Gb RoCE Adapter (None, None / EC26) | No | Yes | Yes | Yes | No |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. 1 Gb is supported on the CN4093's two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support 1 GbE speeds.
c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, and EN4093R switches.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
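Matrices such as Table 1-2 translate naturally into lookup structures. The sketch below is illustrative only: it encodes three rows of the table under abbreviated names, so a support question becomes a set-membership test.

```python
# A few rows of Table 1-2, keyed by adapter; values are the switches that
# support the adapter. Names are abbreviated for readability.
SUPPORTED_SWITCHES = {
    "EN2024 4-port 1Gb": {"CN4093", "EN4093R", "EN4093", "EN4091", "EN2092"},
    "EN4132 2-port 10Gb": {"EN4093R", "EN4093", "EN4091"},
    "CN4054 10Gb VFA": {"CN4093", "EN4093R", "EN4093", "EN4091", "EN2092"},
}

def is_supported(adapter, switch):
    """Return True if Table 1-2 lists the adapter as working with the switch."""
    return switch in SUPPORTED_SWITCHES.get(adapter, set())

print(is_supported("EN4132 2-port 10Gb", "CN4093"))  # False
print(is_supported("EN2024 4-port 1Gb", "EN2092"))   # True
```

A plain yes/no set only captures the headline compatibility; the port-count footnotes (such as only two ports being connected through the EN4091 Pass-thru) would need extra fields.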
1.2.2 Fibre Channel switches and adapters

Table 1-3 lists Fibre Channel switch to card compatibility.

Table 1-3 Fibre Channel switch to card compatibility

| Adapter (part number, feature codes (a)) | FC5022 16Gb 12-port (88Y6374, A1EH / 3770) | FC5022 16Gb 24-port (00Y3324, A3DP / ESW5) | FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771) | FC3171 8Gb switch (69Y1930, A0TD / 3595) | FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591) |
|---|---|---|---|---|---|
| FC3172 2-port 8Gb FC Adapter (69Y1938, A1BM / 1764) | Yes | Yes | Yes | Yes | Yes |
| FC3052 2-port 8Gb FC Adapter (95Y2375, A2N5 / EC25) | Yes | Yes | Yes | Yes | Yes |
| FC5022 2-port 16Gb FC Adapter (88Y6370, A1BP / EC2B) | Yes | Yes | Yes | No | No |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).

1.2.3 InfiniBand switches and adapters

Table 1-4 lists InfiniBand switch to card compatibility.

Table 1-4 InfiniBand switch to card compatibility

| Adapter (part number, feature codes (a)) | IB6131 InfiniBand Switch (90Y3450, A1EK / 3699) |
|---|---|
| IB6132 2-port FDR InfiniBand Adapter (90Y3454, A1QZ / EC2C) | Yes (b) |
| IB6132 2-port QDR InfiniBand Adapter (None, None / 1761) | Yes |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade, as described in 1.4, "Switch upgrades" on page 9.
1.3 Switch to transceiver interoperability

This section specifies the transceivers and direct-attach copper (DAC) cables supported by the various IBM Flex System I/O modules.

1.3.1 Ethernet switches

Support for transceivers and cables for Ethernet switch modules is shown in Table 1-5.

Table 1-5 Modules and cables supported in Ethernet I/O modules

| Transceiver or cable (part number, feature codes (a)) | CN4093 10Gb Switch (00D5823, A3HH / ESW2) | EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7) | EN4093 10Gb Switch (49Y4270, A0TB / 3593) | EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700) | EN2092 1Gb Switch (49Y4294, A0TF / 3598) |
|---|---|---|---|---|---|
| SFP transceivers - 1 Gbps | | | | | |
| IBM SFP SX Transceiver (1000Base-SX) (81Y1622, 3269 / EB2A) | Yes | Yes | Yes | Yes | Yes |
| IBM SFP RJ45 Transceiver (1000Base-T) (81Y1618, 3268 / EB29) | Yes | Yes | Yes | Yes | Yes |
| IBM SFP LX Transceiver (1000Base-LX) (90Y9424, A1PN / ECB8) | Yes | Yes | Yes | Yes | Yes |
| SFP+ transceivers - 10 Gbps | | | | | |
| 10GBase-SR SFP+ (MM Fiber) (44W4408, 4942 / 3282) | Yes | Yes | Yes | Yes | Yes |
| IBM SFP+ SR Transceiver (10GBase-SR) (46C3447, 5053 / EB28) | Yes | Yes | Yes | Yes | Yes |
| IBM SFP+ LR Transceiver (10GBase-LR) (90Y9412, A1PM / ECB9) | Yes | Yes | Yes | Yes | Yes |
| QSFP+ transceivers - 40 Gbps | | | | | |
| IBM QSFP+ SR Transceiver (40Gb) (49Y7884, A1DR / EB27) | Yes | Yes | Yes | No | No |
| 8 Gb Fibre Channel SFP+ transceivers | | | | | |
| IBM 8 Gb SFP+ SW Optical Transceiver (44X1964, 5075 / 3286) | Yes | No | No | No | No |
| SFP+ direct-attach copper (DAC) cables | | | | | |
| 1m IBM Passive DAC SFP+ (90Y9427, A1PH / None) | Yes | Yes | Yes | No | Yes |
| 3m IBM Passive DAC SFP+ (90Y9430, A1PJ / None) | Yes | Yes | Yes | No | Yes |
| 5m IBM Passive DAC SFP+ (90Y9433, A1PK / ECB6) | Yes | Yes | Yes | No | Yes |
| 1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable (49Y7886, A1DL / EB24) | Yes | Yes | Yes | No | No |
| 3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable (49Y7887, A1DM / EB25) | Yes | Yes | Yes | No | No |
| 5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable (49Y7888, A1DN / EB26) | Yes | Yes | Yes | No | No |
| IBM 1m 10 GBase Copper SFP+ Twinax (Active) (95Y0323, A25A / None) | No | No | No | Yes | No |
| IBM 3m 10 GBase Copper SFP+ Twinax (Active) (95Y0326, A25B / None) | No | No | No | Yes | No |
| IBM 5m 10 GBase Copper SFP+ Twinax (Active) (95Y0329, A25C / None) | No | No | No | Yes | No |
| 1m 10 GbE Twinax Act Copper SFP+ DAC (active) (81Y8295, A18M / None) | No | No | No | Yes | No |
| 3m 10 GbE Twinax Act Copper SFP+ DAC (active) (81Y8296, A18N / None) | No | No | No | Yes | No |
| 5m 10 GbE Twinax Act Copper SFP+ DAC (active) (81Y8297, A18P / None) | No | No | No | Yes | No |
| QSFP cables | | | | | |
| 1m IBM QSFP+ to QSFP+ Cable (49Y7890, A1DP / EB2B) | Yes | Yes | Yes | No | No |
| 3m IBM QSFP+ to QSFP+ Cable (49Y7891, A1DQ / EB2H) | Yes | Yes | Yes | No | No |
| Fiber optic cables | | | | | |
| 10m IBM MTP Fiber Optical Cable (90Y3519, A1MM / EB2J) | Yes | Yes | Yes | No | No |
| 30m IBM MTP Fiber Optical Cable (90Y3521, A1MN / EC2K) | Yes | Yes | Yes | No | No |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.3.2 Fibre Channel switches

Support for transceivers and cables for Fibre Channel switch modules is shown in Table 1-6.

Table 1-6 Modules and cables supported in Fibre Channel I/O modules

| Transceiver (part number, feature codes (a)) | FC5022 16Gb 12-port (88Y6374, A1EH / 3770) | FC5022 16Gb 24-port (00Y3324, A3DP / ESW5) | FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771) | FC3171 8Gb switch (69Y1930, A0TD / 3595) | FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591) |
|---|---|---|---|---|---|
| 16 Gb transceivers | | | | | |
| Brocade 16 Gb SFP+ Optical Transceiver (88Y6393, A22R / 5371) | Yes | Yes | Yes | No | No |
| 8 Gb transceivers | | | | | |
| Brocade 8 Gb SFP+ SW Optical Transceiver (88Y6416, A2B9 / 5370) | Yes | Yes | Yes | No | No |
| IBM 8 Gb SFP+ SW Optical Transceiver (44X1964, 5075 / 3286) | No | No | No | Yes | Yes |
| 4 Gb transceivers | | | | | |
| 4 Gb SFP Transceiver Option (39R6475, 4804 / 3238) | No | No | No | Yes | Yes |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.3.3 InfiniBand switches

Support for transceivers and cables for InfiniBand switch modules is shown in Table 1-7.

Compliant cables: The IB6131 switch supports all cables compliant to the InfiniBand Architecture specification.

Table 1-7 Modules and cables supported in InfiniBand I/O modules

| Cable (part number, feature codes (a)) | IB6131 InfiniBand Switch (90Y3450, A1EK / 3699) |
|---|---|
| IB QDR 3m QSFP Cable Option (passive) (49Y9980, 3866 / 3249) | Yes |
| 3m FDR InfiniBand Cable (passive) (90Y3470, A227 / ECB1) | Yes |

a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second feature code is for configurations ordered through the IBM Power Systems channel (e-config).
1.4 Switch upgrades

Various IBM Flex System switches can be upgraded via software licenses to enable additional ports or features. Switches covered in this section:

- 1.4.1, "IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch" on page 9
- 1.4.2, "IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch" on page 10
- 1.4.3, "IBM Flex System EN2092 1Gb Ethernet Scalable Switch" on page 11
- 1.4.4, "IBM Flex System IB6131 InfiniBand Switch" on page 11
- 1.4.5, "IBM Flex System FC5022 16Gb SAN Scalable Switch" on page 12

1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

The CN4093 switch is initially licensed with fourteen 10 GbE internal ports, two external 10 GbE SFP+ ports, and six external Omni Ports enabled. Further ports can be enabled: 14 additional internal ports and two external 40 GbE QSFP+ uplink ports with the Upgrade 1 (00D5845) license option, and 14 additional internal ports and six additional external Omni Ports with Upgrade 2 (00D5847). Upgrade 1 and Upgrade 2 can be applied on the switch independently of each other, or in combination for full feature capability. Table 1-8 shows the part numbers for ordering the switches and the upgrades.

Table 1-8 CN4093 10Gb Converged Scalable Switch part numbers and port upgrades

| Part number | Feature code (a) | Description | Internal 10Gb | External 10Gb SFP+ | External 10Gb Omni | External 40Gb QSFP+ |
|---|---|---|---|---|---|---|
| 00D5823 | A3HH / ESW2 | Base switch (no upgrades) | 14 | 2 | 6 | 0 |
| 00D5845 | A3HL / ESU1 | Add Upgrade 1 | 28 | 2 | 6 | 2 |
| 00D5847 | A3HM / ESU2 | Add Upgrade 2 | 28 | 2 | 12 | 0 |
| 00D5845 + 00D5847 | A3HL / ESU1 + A3HM / ESU2 | Add both Upgrade 1 and Upgrade 2 | 42 | 2 | 12 | 2 |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Each upgrade license enables additional internal ports; a short sketch after this list shows how the counts compose. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:

- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports.
- Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports.
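Because Upgrade 1 and Upgrade 2 are independent on the CN4093, the port counts in Table 1-8 compose additively from the base configuration. The sketch below (illustrative only; the names are invented) encodes that rule and reproduces the rows of the table:

```python
# CN4093 port enablement per Table 1-8: start from the base counts and add
# the ports each installed upgrade license enables.
BASE = {"internal_10gb": 14, "external_sfp_10gb": 2, "omni_10gb": 6, "qsfp_40gb": 0}
UPGRADES = {
    "upgrade_1": {"internal_10gb": 14, "qsfp_40gb": 2},  # 00D5845
    "upgrade_2": {"internal_10gb": 14, "omni_10gb": 6},  # 00D5847
}

def cn4093_ports(installed):
    """Return enabled port counts for a set of installed upgrade licenses."""
    totals = dict(BASE)
    for upgrade in installed:
        for port_type, added in UPGRADES[upgrade].items():
            totals[port_type] += added
    return totals

print(cn4093_ports({"upgrade_1", "upgrade_2"}))
# {'internal_10gb': 42, 'external_sfp_10gb': 2, 'omni_10gb': 12, 'qsfp_40gb': 2}
```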
1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch

The EN4093 and EN4093R are initially licensed with fourteen 10 Gb internal ports and ten 10 Gb external uplink ports enabled. Further ports can be enabled: the two 40 Gb external uplink ports with the Upgrade 1 license option, and four additional SFP+ 10 Gb ports with Upgrade 2. Upgrade 1 must be applied before Upgrade 2 can be applied. These are IBM Features on Demand license upgrades. Table 1-9 lists the available parts and upgrades.

Table 1-9 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades

| Part number | Feature code (a) | Product description | Internal ports | 10 Gb uplinks | 40 Gb uplinks |
|---|---|---|---|---|---|
| 49Y4270 | A0TB / 3593 | IBM Flex System Fabric EN4093 10Gb Scalable Switch: 10x external 10 Gb uplinks, 14x internal 10 Gb ports | 14 | 10 | 0 |
| 95Y3309 | A3J6 / ESW7 | IBM Flex System Fabric EN4093R 10Gb Scalable Switch: 10x external 10 Gb uplinks, 14x internal 10 Gb ports | 14 | 10 | 0 |
| 49Y4798 | A1EL / 3596 | IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1): adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports | 28 | 10 | 2 |
| 88Y6037 | A1EM / 3597 | IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2, requires Upgrade 1): adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports | 42 | 14 | 2 |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Each upgrade license enables additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed (a short validation sketch appears at the end of this section):

- The base switch requires a two-port Ethernet adapter (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch) to use all internal ports.
- Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all internal ports.

Consideration: Adding Upgrade 2 enables an additional 14 internal ports. This allows a total of 42 internal ports -- three ports connected to each of the 14 compute nodes. To take full advantage of all 42 internal ports, a 6-port or 8-port adapter is required, such as the CN4058 8-port 10Gb Converged Adapter. Upgrade 2 still provides a benefit with a 4-port adapter, because this upgrade also enables an extra four external 10 Gb uplinks.
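Unlike the CN4093, the EN4093/EN4093R upgrades are ordered: Upgrade 2 is only valid on top of Upgrade 1. The sketch below (illustrative only; the names are invented) checks that prerequisite and returns the adapter port count needed per compute node to use every enabled internal port:

```python
# EN4093/EN4093R upgrade rules from 1.4.2: Upgrade 2 requires Upgrade 1, and
# the adapter ports needed per node grow with each upgrade level (2, 4, 6).
REQUIRED_ADAPTER_PORTS = {0: 2, 1: 4, 2: 6}

def en4093_adapter_ports(upgrades):
    """Return adapter ports needed per node; raise if the upgrade set is invalid."""
    if "upgrade_2" in upgrades and "upgrade_1" not in upgrades:
        raise ValueError("Upgrade 2 requires Upgrade 1 to be applied first")
    return REQUIRED_ADAPTER_PORTS[len(upgrades)]

print(en4093_adapter_ports({"upgrade_1"}))               # 4
print(en4093_adapter_ports({"upgrade_1", "upgrade_2"}))  # 6
```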
1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

The EN2092 comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb uplink ports, with IBM Features on Demand license upgrades. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order. Table 1-10 lists the part numbers.

Table 1-10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades

| Part number | Feature code (a) | Product description |
|---|---|---|
| 49Y4294 | A0TF / 3598 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch: 14 internal 1 Gb ports, 10 external 1 Gb ports |
| 90Y3562 | A1QW / 3594 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch (Upgrade 1): adds 14 internal 1 Gb ports and 10 external 1 Gb ports |
| 49Y4298 | A1EN / 3599 | IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb Uplinks): adds 4 external 10 Gb uplinks |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional internal ports. To take full advantage of those ports, each compute node needs the appropriate I/O adapter installed:

- The base switch requires a two-port Ethernet adapter installed in each compute node (one port of the adapter goes to each of two switches).
- Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports of the adapter to each switch).

1.4.4 IBM Flex System IB6131 InfiniBand Switch

The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes. This switch ships standard with quad data rate (QDR) support and can be upgraded to fourteen data rate (FDR) with an IBM Features on Demand license upgrade. Ordering information is listed in Table 1-11.

Table 1-11 IBM Flex System IB6131 InfiniBand Switch part number and upgrade option

| Part number | Feature codes (a) | Product name |
|---|---|---|
| 90Y3450 | A1EK / 3699 | IBM Flex System IB6131 InfiniBand Switch: 18 external QDR ports, 14 internal QDR ports |
| 90Y3462 | A1QX / ESW1 | IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade): upgrades all ports to FDR speeds |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.
1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch

Table 1-12 lists the available port and feature upgrades for the FC5022 16Gb SAN Scalable Switches. These upgrades are all IBM Features on Demand license upgrades.

Table 1-12 FC5022 switch upgrades

| Part number | Feature codes (a) | Description | 24-port 16 Gb ESB switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (12-port base, 88Y6374) |
|---|---|---|---|---|---|
| 88Y6382 | A1EP / 3772 | FC5022 16Gb SAN Scalable Switch (Upgrade 1) | No | No | Yes |
| 88Y6386 | A1EQ / 3773 | FC5022 16Gb SAN Scalable Switch (Upgrade 2) | Yes | Yes | Yes |
| 00Y3320 | A3HN / ESW3 | FC5022 16Gb Fabric Watch Upgrade | No | Yes | Yes |
| 00Y3322 | A3HP / ESW4 | FC5022 16Gb ISL/Trunking Upgrade | No | Yes | Yes |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

Table 1-13 shows the total number of active ports on the switch after applying compatible port upgrades.

Table 1-13 Total port counts after applying upgrades

| Ports on Demand upgrade | 24-port 16 Gb ESB SAN switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (88Y6374) |
|---|---|---|---|
| Included with base switch | 24 | 24 | 12 |
| Upgrade 1, 88Y6382 (adds 12 ports) | Not supported | Not supported | 24 |
| Upgrade 2, 88Y6386 (adds 24 ports) | 48 | 48 | 48 |
1.5 vNIC and UFP support

Table 1-14 lists vNIC (virtual NIC) and UFP (Universal Fabric Port) support by combinations of switch, adapter, and operating system. In the table, we use the following abbreviations for the vNIC modes:

- vNIC1 = IBM Virtual Fabric Mode
- vNIC2 = Switch Independent Mode

10 GbE adapters only: Only 10 Gb Ethernet adapters support vNIC and UFP. 1 GbE adapters do not support these features.

Table 1-14 Supported vNIC modes

Column group A applies when the Flex System I/O module is the EN4093, EN4093R, or CN4093 10Gb scalable switch (no top-of-rack switch). Column group B applies when the I/O module is the EN4091 10Gb Ethernet Pass-thru connected to an IBM RackSwitch™ G8124E or IBM RackSwitch G8264 top-of-rack switch.

| Adapter | A: Windows | A: Linux (a)(b) | A: VMware (c) | B: Windows | B: Linux (a)(b) | B: VMware (c) |
|---|---|---|---|---|---|---|
| 10Gb onboard LOM (x240 and x440) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP |
| CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (e-config #1759) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP (d) | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP | vNIC1, vNIC2, UFP |
| EN4054 4-port 10Gb Ethernet Adapter (e-config #1762) | Does not support vNIC or UFP | | | | | |
| EN4132 2-port 10 Gb Ethernet Adapter, 90Y3466 (e-config #EC2D) | Does not support vNIC or UFP | | | | | |
| CN4058 8-port 10Gb Converged Adapter (e-config #EC24) | Does not support vNIC or UFP | | | | | |
| EN4132 2-port 10Gb RoCE Adapter (e-config #EC26) | Does not support vNIC or UFP | | | | | |

a. Linux kernels with Xen are not supported with either vNIC1 or vNIC2. For support information, see IBM RETAIN® Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
b. The combination of vNIC2 and iBoot is not supported for legacy booting with Linux.
c. The combination of vNIC2 with VMware ESX 4.1 and storage protocols (FCoE and iSCSI) is not supported.
d. The CN4093 10Gb Converged Switch is planned to support Universal Fabric Port (UFP) in 2Q/2013.
1.6 Chassis power supplies

Power supplies are available in either 2500W or 2100W capacities. The standard chassis ships with two 2500W power supplies. A maximum of six power supplies can be installed. The 2100W power supplies are only available via CTO and through the System x ordering channel. Table 1-15 shows the ordering information for the Enterprise Chassis power supplies. Power supplies cannot be mixed in the same chassis.

Table 1-15 Power supply module option part numbers

| Part number | Feature codes (a) | Description | Chassis models where standard |
|---|---|---|---|
| 43W9049 | A0UC / 3590 | IBM Flex System Enterprise Chassis 2500W Power Module | 8721-A1x (x-config), 7893-92X (e-config) |
| 47C7633 | A3JH / None | IBM Flex System Enterprise Chassis 2100W Power Module | None |

a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is for configurations ordered through the IBM Power Systems channel.

A chassis powered by the 2100W power supplies cannot provide N+N redundant power unless all the compute nodes are configured with 95W or lower Intel processors. N+1 redundancy is possible with any processors. Table 1-16 shows the nodes that are supported in the chassis when powered by either the 2100W or 2500W modules.

Table 1-16 Compute nodes supported by the power supplies

| Node | 2100W power supply | 2500W power supply |
|---|---|---|
| IBM Flex System Manager management node | Yes | Yes |
| x220 (with or without Storage Expansion Node or PCIe Expansion Node) | Yes | Yes |
| x240 (with or without Storage Expansion Node or PCIe Expansion Node) | Yes (a) | Yes (a) |
| x440 | Yes (a) | Yes (a) |
| p24L | No | Yes (a) |
| p260 | No | Yes (a) |
| p460 | No | Yes (a) |
| V7000 Storage Node (either primary or expansion node) | Yes | Yes |

a. Some restrictions apply based on the TDP power of the processors installed or the power policy enabled. See Table 1-17 on page 15.
Table 1-17 on page 15 lists the number of compute nodes supported based on the type and number of power supplies installed in the chassis and the power policy enabled (N+N or N+1). In the original publication, cell shading distinguishes configurations supported with no restrictions from configurations supported with restrictions; in the table below, any value lower than the platform maximum indicates a restriction on the number of compute nodes that can be installed. A lookup sketch follows the table.

Table 1-17 Specific number of compute nodes supported based on installed power supplies

For each power supply capacity, the four columns are: N+1, N=5 (6 total); N+1, N=4 (5 total); N+1, N=3 (4 total); and N+N, N=3 (6 total).

| Compute node | CPU TDP rating | 2100W: N+1, N=5 | N+1, N=4 | N+1, N=3 | N+N, N=3 | 2500W: N+1, N=5 | N+1, N=4 | N+1, N=3 | N+N, N=3 |
|---|---|---|---|---|---|---|---|---|---|
| x240 | 60W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x240 | 70W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14 |
| x240 | 80W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14 |
| x240 | 95W | 14 | 14 | 12 | 13 | 14 | 14 | 14 | 14 |
| x240 | 115W | 14 | 14 | 11 | 12 | 14 | 14 | 14 | 14 |
| x240 | 130W | 14 | 14 | 11 | 11 | 14 | 14 | 14 | 14 |
| x240 | 135W | 14 | 14 | 11 | 11 | 14 | 14 | 13 | 14 |
| x440 | 95W | 7 | 7 | 6 | 6 | 7 | 7 | 7 | 7 |
| x440 | 115W | 7 | 7 | 5 | 6 | 7 | 7 | 7 | 7 |
| x440 | 130W | 7 | 7 | 5 | 5 | 7 | 7 | 6 | 7 |
| p24L | All | Not supported | | | | 14 | 14 | 12 | 13 |
| p260 | All | Not supported | | | | 14 | 14 | 12 | 13 |
| p460 | All | Not supported | | | | 7 | 7 | 6 | 6 |
| x220 | 50W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 60W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 70W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 80W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 95W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| FSM | 95W | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| V7000 | N/A | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |

Assumptions: all compute nodes are fully configured, and throttling and oversubscription are enabled.

Tip: Consult the Power Configurator for exact configuration support:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
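For scripted capacity planning, rows of Table 1-17 can be encoded as a lookup keyed by node type, processor TDP, power supply capacity, and power policy. The sketch below is illustrative only (it holds just a handful of the rows above, and the key layout is invented); the Power Configurator remains the authoritative check.

```python
# A handful of rows from Table 1-17: (node, TDP watts, PSU watts, policy)
# maps to the maximum number of compute nodes supported in the chassis.
MAX_NODES = {
    ("x240", 95, 2100, "N+1, N=3"): 12,
    ("x240", 95, 2100, "N+N, N=3"): 13,
    ("x240", 95, 2500, "N+N, N=3"): 14,
    ("x440", 130, 2100, "N+N, N=3"): 5,
    ("x440", 130, 2500, "N+N, N=3"): 7,
}

def max_supported(node, tdp_watts, psu_watts, policy):
    """Return the supported node count, or None if the row is not encoded."""
    return MAX_NODES.get((node, tdp_watts, psu_watts, policy))

print(max_supported("x240", 95, 2100, "N+N, N=3"))  # 13
```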
1.7 Rack to chassis

IBM offers an extensive range of industry-standard, EIA-compatible rack enclosures and expansion units. The flexible rack solutions help you consolidate servers and save space, while allowing easy access to crucial components and cable management. Table 1-18 lists which rack cabinets support the IBM Flex System Enterprise Chassis.

Table 1-18 The chassis supported in each rack cabinet

| Part number | Rack cabinet | Supports the Enterprise Chassis |
|---|---|---|
| 93634CX | IBM PureFlex System 42U Rack | Yes (recommended) |
| 93634DX | IBM PureFlex System 42U Expansion Rack | Yes (recommended) |
| 93634PX | IBM 42U 1100 mm Deep Dynamic rack | Yes (recommended) |
| 201886X | IBM 11U Office Enablement Kit | Yes |
| 93072PX | IBM S2 25U Static standard rack | Yes |
| 93072RX | IBM S2 25U Dynamic standard rack | Yes |
| 93074RX | IBM S2 42U standard rack | Yes |
| 99564RX | IBM S2 42U Dynamic standard rack | Yes |
| 93084PX | IBM 42U Enterprise rack | Yes |
| 93604PX | IBM 42U 1200 mm Deep Dynamic Rack | Yes |
| 93614PX | IBM 42U 1200 mm Deep Static rack | Yes |
| 93624PX | IBM 47U 1200 mm Deep Static rack | Yes |
| 9306-900 | IBM Netfinity® 42U Rack | No |
| 9306-910 | IBM Netfinity 42U Rack | No |
| 9308-42P | IBM Netfinity Enterprise Rack | No |
| 9308-42X | IBM Netfinity Enterprise Rack | No |
| Varies | IBM NetBay 22U | No |
Chapter 2. Compute node component compatibility

This chapter lists the compatibility of components installed internally to each compute node.

Topics in this chapter are:
- 2.1, "Compute node-to-card interoperability" on page 18
- 2.2, "Memory DIMM compatibility" on page 20
- 2.3, "Internal storage compatibility" on page 22
- 2.4, "Embedded virtualization" on page 25
- 2.5, "Expansion node compatibility" on page 26
2.1 Compute node-to-card interoperability

Table 2-1 lists the available I/O adapters and their compatibility with compute nodes.

Power Systems compute nodes: Some I/O adapters supported by Power Systems compute nodes are restricted to only some of the available slots. See Table 2-2 on page 19 for specifics.

Table 2-1   I/O adapter compatibility matrix - compute nodes

  Part     x-config  e-config           I/O adapters                               Supported servers
  number   feature   feature code (a)                                              x220  x240  x440(b)  p24L  p260 22X  p260 23X  p460
  Ethernet adapters
  49Y7900  A1BR      1763 / A10Y        EN2024 4-port 1Gb Ethernet Adapter         Y     Y     Y        Y     Y         Y         Y
  90Y3466  A1QY      EC2D / A1QY        EN4132 2-port 10Gb Ethernet Adapter        Y     Y     Y        N     N         N         N
  None     None      1762 / None        EN4054 4-port 10Gb Ethernet Adapter        N     N     N        Y     Y         Y         Y
  90Y3554  A1R1      1759 / A1R1        CN4054 10Gb Virtual Fabric Adapter         Y     Y     Y        N     N         N         N
  90Y3558  A1R0      1760 / A1R0        CN4054 Virtual Fabric Adapter Upgrade (c)  Y     Y     Y        N     N         N         N
  None     None      EC24 / None        CN4058 8-port 10Gb Converged Adapter       N     N     N        Y     Y         Y         Y
  None     None      EC26 / None        EN4132 2-port 10Gb RoCE Adapter            N     N     N        Y     Y         Y         Y
  Fibre Channel adapters
  69Y1938  A1BM      1764 / A1BM        FC3172 2-port 8Gb FC Adapter               Y     Y     Y        Y     Y         Y         Y
  95Y2375  A2N5      EC25 / A2N5        FC3052 2-port 8Gb FC Adapter               Y     Y     Y        N     N         N         N
  88Y6370  A1BP      EC2B / A1BP        FC5022 2-port 16Gb FC Adapter              Y     Y     Y        N     N         N         N
  InfiniBand adapters
  90Y3454  A1QZ      EC2C / A1QZ        IB6132 2-port FDR InfiniBand Adapter       Y     Y     Y        N     N         N         N
  None     None      1761 / None        IB6132 2-port QDR InfiniBand Adapter       N     N     N        Y     Y         Y         Y
  SAS
  90Y4390  A2XW      None / A2XW        ServeRAID M5115 SAS/SATA Controller (d)    Y     Y     Y (b)    N     N         N         N

  a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
  b. For compatibility as listed here, ensure the x440 is running IMM2 firmware Build 40a or later.
  c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade is needed per adapter.
  d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. See the ServeRAID M5115 Product Guide:
  http://www.redbooks.ibm.com/abstracts/tips0884.html?Open
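A matrix like Table 2-1 is straightforward to query in configuration-validation scripts. The following Python sketch is a hypothetical illustration, not IBM tooling; only two adapters are transcribed as examples, and the column order follows the table above.

  # Column order follows Table 2-1.
  NODES = ("x220", "x240", "x440", "p24L", "p260 22X", "p260 23X", "p460")

  # Transcribed subset of Table 2-1: adapter -> per-node support flags.
  ADAPTER_SUPPORT = {
      "EN2024 4-port 1Gb Ethernet Adapter": (True,) * 7,   # supported everywhere
      "FC5022 2-port 16Gb FC Adapter":      (True, True, True, False, False, False, False),
  }

  def supports(adapter, node):
      """Return True if Table 2-1 lists the adapter as supported on the node."""
      return ADAPTER_SUPPORT[adapter][NODES.index(node)]

  print(supports("FC5022 2-port 16Gb FC Adapter", "p460"))  # False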
For Power Systems compute nodes, Table 2-2 shows the specific I/O expansion slots into which each of the supported adapters can be installed. Yes in the table means the adapter is supported in that I/O expansion slot.

Tip: Table 2-2 applies to Power Systems compute nodes only.

Table 2-2   Slot locations supported by I/O expansion cards in Power Systems compute nodes

  Feature  Description                                                Slot 1  Slot 2  Slot 3  Slot 4
  code                                                                                (p460)  (p460)
  10 Gb Ethernet
  EC24     IBM Flex System CN4058 8-port 10Gb Converged Adapter       Yes     Yes     Yes     Yes
  EC26     IBM Flex System EN4132 2-port 10Gb RoCE Adapter            No      Yes     Yes     Yes
  1762     IBM Flex System EN4054 4-port 10Gb Ethernet Adapter        Yes     Yes     Yes     Yes
  1 Gb Ethernet
  1763     IBM Flex System EN2024 4-port 1Gb Ethernet Adapter         Yes     Yes     Yes     Yes
  InfiniBand
  1761     IBM Flex System IB6132 2-port QDR InfiniBand Adapter       No      Yes     No      Yes
  Fibre Channel
  1764     IBM Flex System FC3172 2-port 8Gb FC Adapter               No      Yes     No      Yes
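The slot restrictions in Table 2-2 reduce to a set-membership test. The Python sketch below transcribes the table's rules; the function and data structure are illustrative assumptions, not an IBM interface.

  # Feature code -> allowed I/O expansion slots (slots 3 and 4 exist on the p460 only),
  # transcribed from Table 2-2.
  ALLOWED_SLOTS = {
      "EC24": {1, 2, 3, 4},   # CN4058 8-port 10Gb Converged Adapter
      "EC26": {2, 3, 4},      # EN4132 2-port 10Gb RoCE Adapter
      "1762": {1, 2, 3, 4},   # EN4054 4-port 10Gb Ethernet Adapter
      "1763": {1, 2, 3, 4},   # EN2024 4-port 1Gb Ethernet Adapter
      "1761": {2, 4},         # IB6132 2-port QDR InfiniBand Adapter
      "1764": {2, 4},         # FC3172 2-port 8Gb FC Adapter
  }

  def slot_ok(feature_code, slot):
      """Return True if the adapter may be installed in the given slot."""
      return slot in ALLOWED_SLOTS.get(feature_code, set())

  print(slot_ok("1764", 1))  # False: the FC3172 goes only in slots 2 and 4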
2.2 Memory DIMM compatibility

This section covers memory DIMMs for both compute node families. It covers the following topics:

  2.2.1, “x86 compute nodes” on page 20
  2.2.2, “Power Systems compute nodes” on page 21

2.2.1 x86 compute nodes

Table 2-3 lists the memory DIMM options for the x86 compute nodes.

Table 2-3   Supported memory DIMMs - x86 compute nodes

  Part     x-config  e-config
  number   feature   feature (a)(b)  Description                                                               x220  x240  x440
  Unbuffered DIMM (UDIMM) modules
  49Y1403  A0QS      EEM2 / A0QS     2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM         Yes   No    No
  49Y1404  8648      EEM3 / 8648     4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM         Yes   Yes   Yes
  Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz
  49Y1405  8940      EM05 / None     2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM         No    Yes   No
  49Y1406  8941      EEM4 / 8941     4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM         Yes   Yes   Yes
  49Y1407  8942      EM09 / 8942     4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM         Yes   Yes   Yes
  49Y1397  8923      EM17 / 8923     8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM         Yes   Yes   Yes
  49Y1563  A1QT      EM33 / A1QT     16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM       Yes   Yes   Yes
  49Y1400  8939      EEM1 / 8939     16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM        Yes   Yes   No
  90Y3101  A1CP      EEM7 / None     32GB (1x32GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM        No    No    No
  Registered DIMMs (RDIMMs) - 1600 MHz
  49Y1559  A28Z      EEM5 / A28Z     4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM          Yes   Yes   Yes
  90Y3178  A24L      EEMC / A24L     4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM          Yes   Yes   No
  90Y3109  A292      EEM9 / A292     8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM          Yes   Yes   Yes
  00D4968  A2U5      EEMB / A2U5     16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM        Yes   Yes   Yes
  Load-reduced DIMMs (LRDIMMs)
  49Y1567  A290      EEM6 / A290     16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM      No    Yes   Yes
  90Y3105  A291      EEM8 / A291     32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM      Yes   Yes   Yes

  a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
  b. For memory DIMMs, the first feature code listed results in two DIMMs each, whereas the second feature code listed contains only one DIMM each.

2.2.2 Power Systems compute nodes

Table 2-4 lists the supported memory DIMMs for Power Systems compute nodes.

Table 2-4   Supported memory DIMMs - Power Systems compute nodes

  Part     e-config
  number   feature   Description                     p24L  p260 22X  p260 23X  p460
  78P1011  EM04      2x 2 GB DDR3 RDIMM 1066 MHz     Yes   Yes       No        Yes
  78P0501  8196      2x 4 GB DDR3 RDIMM 1066 MHz     Yes   Yes       Yes       Yes
  78P0502  8199      2x 8 GB DDR3 RDIMM 1066 MHz     Yes   Yes       No        Yes
  78P1917  EEMD      2x 8 GB DDR3 RDIMM 1066 MHz     Yes   Yes       Yes       Yes
  78P0639  8145      2x 16 GB DDR3 RDIMM 1066 MHz    Yes   Yes       No        Yes
  78P1915  EEME      2x 16 GB DDR3 RDIMM 1066 MHz    Yes   Yes       Yes       Yes
  78P1539  EEMF      2x 32 GB DDR3 RDIMM 1066 MHz    Yes   Yes       Yes       Yes
2.3 Internal storage compatibility

This section covers supported internal storage for both compute node families. It covers the following topics:

  2.3.1, “x86 compute nodes: 2.5-inch drives” on page 22
  2.3.2, “x86 compute nodes: 1.8-inch drives” on page 23
  2.3.3, “Power Systems compute nodes” on page 24

2.3.1 x86 compute nodes: 2.5-inch drives

Table 2-5 lists the 2.5-inch drives for x86 compute nodes.

Table 2-5   Supported 2.5-inch SAS and SATA drives

  Part     x-config  e-config
  number   feature   feature (a)   Description                                            x220  x240  x440
  10K SAS hard disk drives
  90Y8877  A2XC      None / A2XC   IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD              N     N     Y
  42D0637  5599      3743 / 5599   IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD           Y     Y     N
  44W2264  5413      None / 5599   IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED           N     N     Y
  90Y8872  A2XD      None / A2XD   IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD              N     N     Y
  49Y2003  5433      3766 / 5433   IBM 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD           Y     Y     N
  81Y9650  A282      EHD4 / A282   IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD                Y     Y     Y
  15K SAS hard disk drives
  90Y8926  A2XB      None / A2XB   IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD              N     N     Y
  42D0677  5536      EHD1 / 5536   IBM 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD           Y     Y     N
  81Y9670  A283      EHD5 / A283   IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD                Y     Y     Y
  NL SAS hard disk drives
  81Y9690  A1P3      EHD6 / A1P3   IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD              Y     Y     Y
  90Y8953  A2XE      None / A2XE   IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD          N     N     Y
  42D0707  5409      EHD2 / 5409   IBM 500GB 7200 6Gbps NL SAS 2.5" SFF HS HDD            Y     Y     N
  NL SATA hard disk drives
  81Y9730  A1AV      EHD9 / A1AV   IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD             Y     Y     Y
  81Y9722  A1NX      EHD7 / A1NX   IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD           Y     Y     Y
  81Y9726  A1NZ      EHD8 / A1NZ   IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD           Y     Y     Y
  Solid-state drives - Enterprise
  00W1125  A3HR      None / A3HR   IBM 100GB SATA 2.5" MLC HS Enterprise SSD              Y     Y     Y
  43W7746  5420      None / 5420   IBM 200GB SATA 1.8" MLC SSD                            Y     Y     Y
  43W7718  A2FN      EHD3 / A2FN   IBM 200GB SATA 2.5" MLC HS SSD                         Y     Y     Y
  43W7726  5428      None / 5428   IBM 50GB SATA 1.8" MLC SSD                             Y     Y     Y
  Solid-state drives - Enterprise value
  49Y5839  A3AS      None / A3AS   IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD         Y     Y     N
  90Y8648  A2U4      EHDD / A2U4   IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD        Y     Y     Y
  90Y8643  A2U3      EHDC / A2U3   IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD        Y     Y     Y
  49Y5844  A3AU      None / A3AU   IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD        Y     Y     N

  a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.

2.3.2 x86 compute nodes: 1.8-inch drives

The x86 compute nodes support 1.8-inch solid-state drives with the addition of the ServeRAID M5115 RAID controller plus the appropriate enablement kits. For details about configurations, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884.

Tip: The ServeRAID M5115 RAID controller is installed in I/O expansion slot 1, but it can be installed along with the Compute Node Fabric Connector (also known as the periscope connector), which is used to connect the onboard Ethernet controller to the chassis midplane.

Table 2-6 lists the supported enablement kits and Features on Demand activation upgrades available for use with the ServeRAID M5115.

Table 2-6   ServeRAID M5115 compatibility

  Part     Feature
  number   code (a)  Description                                                             x220  x240     x440
  90Y4390  A2XW      ServeRAID M5115 SAS/SATA Controller for IBM Flex System                 Yes   Yes      Yes
  Hardware enablement kits - IBM Flex System x220 Compute Node
  90Y4424  A35L      ServeRAID M5100 Series Enablement Kit for IBM Flex System x220          Yes   No       No
  90Y4425  A35M      ServeRAID M5100 Series IBM Flex System Flash Kit for x220               Yes   No       No
  90Y4426  A35N      ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220       Yes   No       No
  Hardware enablement kits - IBM Flex System x240 Compute Node
  90Y4342  A2XX      ServeRAID M5100 Series Enablement Kit for IBM Flex System x240          No    Yes      No
  90Y4341  A2XY      ServeRAID M5100 Series IBM Flex System Flash Kit for x240               No    Yes      No
  90Y4391  A2XZ      ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240       No    Yes (b)  No
  Hardware enablement kits - IBM Flex System x440 Compute Node
  46C9030  A3DS      ServeRAID M5100 Series Enablement Kit for IBM Flex System x440          No    No       Yes
  46C9031  A3DT      ServeRAID M5100 Series IBM Flex System Flash Kit for x440               No    No       Yes
  46C9032  A3DU      ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440       No    No       Yes
  Features on Demand licenses (for all three compute nodes)
  90Y4410  A2Y1      ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System               Yes   Yes      Yes
  90Y4412  A2Y2      ServeRAID M5100 Series Performance Upgrade for IBM Flex System          Yes   Yes      Yes
  90Y4447  A36G      ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System          Yes   Yes      Yes

  a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.
  b. If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit (49Y8119) cannot also be installed. The two kits include special air baffles that cannot be installed at the same time.

Table 2-7 lists the drives supported in conjunction with the ServeRAID M5115 RAID controller.

Table 2-7   Supported 1.8-inch solid-state drives

  Part     Feature
  number   code (a)  Description                                        x220  x240  x440
  43W7746  5420      IBM 200GB SATA 1.8" MLC SSD                        Yes   Yes   Yes
  43W7726  5428      IBM 50GB SATA 1.8" MLC SSD                         Yes   Yes   Yes
  49Y5993  A3AR      IBM 512GB SATA 1.8" MLC Enterprise Value SSD       No    No    No
  49Y5834  A3AQ      IBM 64GB SATA 1.8" MLC Enterprise Value SSD        No    No    No

  a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the x240, which are for HVEC only.

2.3.3 Power Systems compute nodes

Local storage options for Power Systems compute nodes are shown in Table 2-8. None of the available drives are hot-swappable. The local drives (HDD or SSD) are mounted to the top cover of the system. If you use local drives, you must order the appropriate cover with connections for the drive type you want. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.

Table 2-8   Local storage options for Power Systems compute nodes

  e-config
  feature  Description                                           p24L  p260  p460
  2.5-inch SAS HDDs
  8274     300 GB 10K RPM non-hot-swap 6 Gbps SAS                Yes   Yes   Yes
  8276     600 GB 10K RPM non-hot-swap 6 Gbps SAS                Yes   Yes   Yes
  8311     900 GB 10K RPM non-hot-swap 6 Gbps SAS                Yes   Yes   Yes
  7069     Top cover with HDD connectors for the p260 and p24L   Yes   Yes   No
  7066     Top cover with HDD connectors for the p460            No    No    Yes
  1.8-inch SSDs
  8207     177 GB SATA non-hot-swap SSD                          Yes   Yes   Yes
  7068     Top cover with SSD connectors for the p260 and p24L   Yes   Yes   No
  7065     Top cover with SSD connectors for the p460            No    No    Yes
  No drives
  7067     Top cover for no drives on the p260 and p24L          Yes   Yes   No
  7005     Top cover for no drives on the p460                   No    No    Yes

2.4 Embedded virtualization

The x86 compute nodes support an IBM standard USB flash drive (USB Memory Key) option preinstalled with VMware ESXi or VMware vSphere. The hypervisor is fully contained on the flash drive and does not require any disk space. On the x240, the USB memory keys plug into the USB ports on the optional x240 USB Enablement Kit. On the x220 and x440, the USB memory keys plug directly into USB ports on the system board. Table 2-9 lists the ordering information for the VMware hypervisor options.

Table 2-9   IBM USB Memory Key for VMware hypervisors

  Part     x-config  e-config
  number   feature   feature (a)   Description                                          x220  x240     x440
  49Y8119  A33M      None / None   x240 USB Enablement Kit                              No    Yes (b)  No
  41Y8300  A2VC      EBK3 / A2VC   IBM USB Memory Key for VMware ESXi 5.0               Yes   Yes      Yes
  41Y8307  A383      None / A383   IBM USB Memory Key for VMware ESXi 5.0 Update 1      Yes   Yes      Yes
  41Y8298  A2G0      None / A2G0   IBM Blank USB Memory Key for VMware ESXi Downloads   Yes   Yes      Yes

  a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260, and p460 (when supported). The second is for the x220 and x440.
  b. If the x240 USB Enablement Kit (49Y8119) is installed, the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) cannot also be installed. The two kits include special air baffles that cannot be installed at the same time.

You can use the Blank USB Memory Key, 41Y8298, with any available IBM-customized version of the VMware hypervisor. The VMware vSphere hypervisor with IBM customizations can be downloaded from the following website:

http://ibm.com/systems/x/os/vmware/esxi

Power Systems compute nodes do not support VMware ESXi installed on a USB Memory Key. Power Systems compute nodes support IBM PowerVM® as standard. These servers support virtual servers, also known as logical partitions or LPARs. The maximum number of virtual servers is 10 times the number of cores in the compute node:

  p24L: Up to 160 virtual servers (10 x 16 cores)
  p260: Up to 160 virtual servers (10 x 16 cores)
  p460: Up to 320 virtual servers (10 x 32 cores)
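The PowerVM sizing rule above (maximum virtual servers = 10 x cores) is easy to verify. The following Python sketch is illustrative only; the core counts are those given in the text, and the function name is our own.

  # Maximum core counts per Power Systems compute node, from the list above.
  CORES = {"p24L": 16, "p260": 16, "p460": 32}

  def max_virtual_servers(node):
      """Maximum LPARs: ten per core, per the rule stated above."""
      return 10 * CORES[node]

  for node in CORES:
      print(node, max_virtual_servers(node))  # p24L 160, p260 160, p460 320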
