This IBM Redpaper publication is a reference to the compatibility and interoperability of components inside and connected to IBM PureFlex System and IBM Flex System solutions. For more information on IBM PureSystems, visit http://ibm.co/J7Zb1v.
Front cover
IBM Flex System Interoperability Guide
Quick reference for IBM Flex System interoperability
Covers internal components and external connectivity
Latest updates as of 30 January 2013
David Watts
Ilya Krutov
ibm.com/redbooks
Redpaper
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, BladeCenter®, DS8000®, IBM Flex System™, IBM Flex System Manager™, IBM®, Netfinity®, Power Systems™, POWER7+™, POWER7®, PowerVM®, POWER®, PureFlex™, RackSwitch™, Redbooks®, Redpaper™, Redbooks (logo)®, RETAIN®, ServerProven®, Storwize®, System Storage®, System x®, XIV®
The following terms are trademarks of other companies:
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Other company, product, or service names may be trademarks or service marks of others.
Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
New information
Added information about these new products:
– IBM Flex System p260 Compute Node, 7895-23X
– IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
– IBM Flex System Fabric EN4093R 10Gb Scalable Switch
– IBM Flex System CN4058 8-port 10Gb Converged Adapter
– IBM Flex System EN4132 2-port 10Gb RoCE Adapter
– IBM Flex System Storage® Expansion Node
– IBM Flex System PCIe Expansion Node
– IBM PureFlex System 42U Rack
– IBM Flex System V7000 Storage Node
The x220 now supports 32 GB LRDIMMs, Table 2-3 on page 20.
The Power Systems™ compute nodes support new DIMMs, Table 2-4 on page 21.
New 2100W power supply option for the Enterprise Chassis, 1.6, “Chassis power
supplies” on page 14.
New section covering Features on Demand upgrades for scalable switches, 1.4, “Switch
upgrades” on page 9.
Changed information
Moved the FCoE and NPIV tables to Chapter 4, “Storage interoperability” on page 37.
Added machine types & models (MTMs) for the x220 and x440 when ordered via AAS
(e-config), Table 1-1 on page 2
Added footnote regarding power management and the use of 14 Power Systems compute
nodes with 32 GB DIMMs, Table 1-1 on page 2
Added AAS (e-config) feature codes to various tables of x86 compute node options. Note
that the AAS feature codes for the x220 and x440 are the same as those used in the HVEC
system (x-config). However, the AAS feature codes for the x240 differ from the equivalent
HVEC feature codes, as noted in the table.
Updated the FCoE table, 4.2, “FCoE support” on page 39
Updated the vNIC table, Table 1-14 on page 13
Clarified that the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) and x240 USB
Enablement Kit (49Y8119) cannot be installed at the same time, Table 2-6 on page 23.
Updated the table of supported 2.5-inch drives, Table 2-5 on page 22.
Updated the operating system table, Table 3-1 on page 32
2 October 2012
This revision reflects the addition, deletion, or modification of new and changed information
described below.
New information
Temporary restrictions on the use of network and storage adapters with the x440, page 18
Changed information
Updated the x86 memory table, Table 2-3 on page 20
Updated the FCoE table, 4.2, “FCoE support” on page 39
Updated the operating system table, Table 3-1 on page 32
Clarified the support of the Pass-thru module and Fibre Channel switches with IBM Fabric
Manager, Table 3-4 on page 35.
1.1 Chassis to compute node
Table 1-1 lists the maximum number of compute nodes installed in the chassis.
Table 1-1 Maximum number of compute nodes installed in the chassis
Columns: compute node; machine type, System x (x-config); machine type, Power Systems (e-config); maximum number of compute nodes in the Enterprise Chassis 8721-A1x (x-config); maximum number in the Enterprise Chassis 7893-92X (e-config).
x86 compute nodes
IBM Flex System x220 Compute Node 7906 7906-25X 14 14
IBM Flex System x240 Compute Node 8737 7863-10X 14 14
IBM Flex System x440 Compute Node 7917 7917-45X 7 7
IBM Power Systems compute nodes
IBM Flex System p24L Compute Node None 1457-7FL 14a 14a
IBM Flex System p260 Compute Node (POWER7®) None 7895-22X 14a 14a
IBM Flex System p260 Compute Node (POWER7+™) None 7895-23X 14a 14a
IBM Flex System p460 Compute Node None 7895-42X 7a 7a
Management node
IBM Flex System Manager 8731-A1x 7955-01M 1b 1b
a. For Power Systems compute nodes: if the chassis is configured with the power management policy “AC Power Source Redundancy with Compute Node Throttling Allowed”, some maximum chassis configurations containing Power Systems compute nodes with large populations of 32 GB DIMMs may leave the chassis with insufficient power to power on all 14 compute node bays. In such circumstances, only 13 of the 14 bays can be powered on.
b. One Flex System Manager management node can manage up to four chassis
1.2 Switch to adapter interoperability
In this section, we describe switch to adapter interoperability.
1.2.1 Ethernet switches and adapters
Table 1-2 lists Ethernet switch to card compatibility.
Switch upgrades: To maximize the usable port count on the adapters, the switches may
need additional license upgrades. See 1.4, “Switch upgrades” on page 9 for details.
Table 1-2 Ethernet switch to card compatibility
Columns (I/O modules), with part number and feature codes(a): CN4093 10Gb Switch (00D5823, A3HH / ESW2); EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7); EN4093 10Gb Switch (49Y4270, A0TB / 3593); EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700); EN2092 1Gb Switch (49Y4294, A0TF / 3598). Rows list the adapter part number, feature codes(a), and description, followed by support (Yes/No) for each module in that order.
None (None) - x220 Embedded 1 Gb: Yes(b), Yes, Yes, No, Yes
None (None) - x240 Embedded 10 Gb: Yes, Yes, Yes, Yes, Yes
None (None) - x440 Embedded 10 Gb: Yes, Yes, Yes, Yes, Yes
49Y7900 (A1BR / 1763) - EN2024 4-port 1Gb Ethernet Adapter: Yes, Yes, Yes, Yes(c), Yes
90Y3466 (A1QY / EC2D) - EN4132 2-port 10 Gb Ethernet Adapter: No, Yes, Yes, Yes, No
None (None / 1762) - EN4054 4-port 10Gb Ethernet Adapter: Yes, Yes, Yes, Yes(c), Yes
90Y3554 (A1R1 / 1759) - CN4054 10Gb Virtual Fabric Adapter: Yes, Yes, Yes, Yes(c), Yes
None (None / EC24) - CN4058 8-port 10Gb Converged Adapter: Yes(d), Yes(d), Yes(d), Yes(c), Yes(e)
None (None / EC26) - EN4132 2-port 10Gb RoCE Adapter: No, Yes, Yes, Yes, No
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
b. 1 Gb is supported on the CN4093’s two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support
1 GbE speeds.
c. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
d. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093R, and EN4093 switches.
e. Only four of the eight ports of the CN4058 adapter are connected with the EN2092 switch.
1.2.2 Fibre Channel switches and adapters
Table 1-3 lists Fibre Channel switch to card compatibility.
Table 1-3 Fibre Channel switch to card compatibility
Columns (I/O modules), with part number and feature codes(a): FC5022 16Gb 12-port (88Y6374, A1EH / 3770); FC5022 16Gb 24-port (00Y3324, A3DP / ESW5); FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771); FC3171 8Gb switch (69Y1930, A0TD / 3595); FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591). Rows list the adapter part number, feature codes(a), and description, followed by support (Yes/No) for each module in that order.
69Y1938 (A1BM / 1764) - FC3172 2-port 8Gb FC Adapter: Yes, Yes, Yes, Yes, Yes
95Y2375 (A2N5 / EC25) - FC3052 2-port 8Gb FC Adapter: Yes, Yes, Yes, Yes, Yes
88Y6370 (A1BP / EC2B) - FC5022 2-port 16Gb FC Adapter: Yes, Yes, Yes, No, No
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
1.2.3 InfiniBand switches and adapters
Table 1-4 lists InfiniBand switch to card compatibility.
Table 1-4 InfiniBand switch to card compatibility
Column (I/O module), with part number and feature codes(a): IB6131 InfiniBand Switch (90Y3450, A1EK / 3699). Rows list the adapter part number, feature codes(a), and description, followed by support (Yes/No).
90Y3454 A1QZ / EC2C IB6132 2-port FDR InfiniBand Adapter Yesb
None None / 1761 IB6132 2-port QDR InfiniBand Adapter Yes
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
b. To operate at FDR speeds, the IB6131 switch will need the FDR upgrade, as described in 1.4, “Switch upgrades” on
page 9
1.3 Switch to transceiver interoperability
This section specifies the transceivers and direct-attach copper (DAC) cables supported by
the various IBM Flex System I/O modules.
1.3.1 Ethernet switches
Support for transceivers and cables for Ethernet switch modules is shown in Table 1-5.
Table 1-5 Modules and cables supported in Ethernet I/O modules
Columns (I/O modules), with part number and feature codes(a): CN4093 10Gb Switch (00D5823, A3HH / ESW2); EN4093R 10Gb Switch (95Y3309, A3J6 / ESW7); EN4093 10Gb Switch (49Y4270, A0TB / 3593); EN4091 10Gb Pass-thru (88Y6043, A1QV / 3700); EN2092 1Gb Switch (49Y4294, A0TF / 3598). Rows list the transceiver or cable part number, feature codes(a), and description, followed by support (Yes/No) for each module in that order.
SFP transceivers - 1 Gbps
81Y1622 (3269 / EB2A) - IBM SFP SX Transceiver (1000Base-SX): Yes, Yes, Yes, Yes, Yes
81Y1618 (3268 / EB29) - IBM SFP RJ45 Transceiver (1000Base-T): Yes, Yes, Yes, Yes, Yes
90Y9424 (A1PN / ECB8) - IBM SFP LX Transceiver (1000Base-LX): Yes, Yes, Yes, Yes, Yes
SFP+ transceivers - 10 Gbps
44W4408 (4942 / 3282) - 10 GBase-SR SFP+ (MMFiber): Yes, Yes, Yes, Yes, Yes
46C3447 (5053 / EB28) - IBM SFP+ SR Transceiver (10GBase-SR): Yes, Yes, Yes, Yes, Yes
90Y9412 (A1PM / ECB9) - IBM SFP+ LR Transceiver (10GBase-LR): Yes, Yes, Yes, Yes, Yes
QSFP+ transceivers - 40 Gbps
49Y7884 (A1DR / EB27) - IBM QSFP+ SR Transceiver (40Gb): Yes, Yes, Yes, No, No
8 Gb Fibre Channel SFP+ transceivers
44X1964 (5075 / 3286) - IBM 8 Gb SFP+ SW Optical Transceiver: Yes, No, No, No, No
SFP+ direct-attach copper (DAC) cables
90Y9427 (A1PH / None) - 1m IBM Passive DAC SFP+: Yes, Yes, Yes, No, Yes
90Y9430 (A1PJ / None) - 3m IBM Passive DAC SFP+: Yes, Yes, Yes, No, Yes
90Y9433 (A1PK / ECB6) - 5m IBM Passive DAC SFP+: Yes, Yes, Yes, No, Yes
(Table 1-5, continued; same columns as above)
49Y7886 (A1DL / EB24) - 1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable: Yes, Yes, Yes, No, No
49Y7887 (A1DM / EB25) - 3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable: Yes, Yes, Yes, No, No
49Y7888 (A1DN / EB26) - 5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable: Yes, Yes, Yes, No, No
95Y0323 (A25A / None) - IBM 1m 10 GBase Copper SFP+ Twinax (Active): No, No, No, Yes, No
95Y0326 (A25B / None) - IBM 3m 10 GBase Copper SFP+ Twinax (Active): No, No, No, Yes, No
95Y0329 (A25C / None) - IBM 5m 10 GBase Copper SFP+ Twinax (Active): No, No, No, Yes, No
81Y8295 (A18M / None) - 1m 10 GbE Twinax Act Copper SFP+ DAC (active): No, No, No, Yes, No
81Y8296 (A18N / None) - 3m 10 GE Twinax Act Copper SFP+ DAC (active): No, No, No, Yes, No
81Y8297 (A18P / None) - 5m 10 GE Twinax Act Copper SFP+ DAC (active): No, No, No, Yes, No
QSFP cables
49Y7890 (A1DP / EB2B) - 1m IBM QSFP+ to QSFP+ Cable: Yes, Yes, Yes, No, No
49Y7891 (A1DQ / EB2H) - 3m IBM QSFP+ to QSFP+ Cable: Yes, Yes, Yes, No, No
Fiber optic cables
90Y3519 (A1MM / EB2J) - 10m IBM MTP Fiber Optical Cable: Yes, Yes, Yes, No, No
90Y3521 (A1MN / EC2K) - 30m IBM MTP Fiber Optical Cable: Yes, Yes, Yes, No, No
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
1.3.2 Fibre Channel switches
Support for transceivers and cables for Fibre Channel switch modules is shown in Table 1-6.
Table 1-6 Modules and cables supported in Fibre Channel I/O modules
Columns (I/O modules), with part number and feature codes(a): FC5022 16Gb 12-port (88Y6374, A1EH / 3770); FC5022 16Gb 24-port (00Y3324, A3DP / ESW5); FC5022 16Gb 24-port ESB (90Y9356, A2RQ / 3771); FC3171 8Gb switch (69Y1930, A0TD / 3595); FC3171 8Gb Pass-thru (69Y1934, A0TJ / 3591). Rows list the transceiver part number, feature codes(a), and description, followed by support (Yes/No) for each module in that order.
16 Gb transceivers
88Y6393 (A22R / 5371) - Brocade 16 Gb SFP+ Optical Transceiver: Yes, Yes, Yes, No, No
8 Gb transceivers
88Y6416 (A2B9 / 5370) - Brocade 8 Gb SFP+ SW Optical Transceiver: Yes, Yes, Yes, No, No
44X1964 (5075 / 3286) - IBM 8 Gb SFP+ SW Optical Transceiver: No, No, No, Yes, Yes
4 Gb transceivers
39R6475 (4804 / 3238) - 4 Gb SFP Transceiver Option: No, No, No, Yes, Yes
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
1.3.3 InfiniBand switches
Support for transceivers and cables for InfiniBand switch modules is shown in Table 1-7.
Compliant cables: The IB6131 switch supports all cables compliant to the InfiniBand
Architecture specification.
Table 1-7 Modules and cables supported in InfiniBand I/O modules
Column (I/O module), with part number and feature codes(a): IB6131 InfiniBand Switch (90Y3450, A1EK / 3699). Rows list the cable part number, feature codes(a), and description, followed by support (Yes/No).
49Y9980 3866 / 3249 IB QDR 3m QSFP Cable Option (passive) Yes
90Y3470 A227 / ECB1 3m FDR InfiniBand Cable (passive) Yes
a. The first feature code listed is for configurations ordered through System x sales channels (x-config). The second
feature code is for configurations ordered through the IBM Power Systems channel (e-config)
1.4 Switch upgrades
Various IBM Flex System switches can be upgraded via software licenses to enable
additional ports or features.
Switches covered in this section:
1.4.1, “IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch” on page 9
1.4.2, “IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch” on page 10
1.4.3, “IBM Flex System EN2092 1Gb Ethernet Scalable Switch” on page 11
1.4.4, “IBM Flex System IB6131 InfiniBand Switch” on page 11
1.4.5, “IBM Flex System FC5022 16Gb SAN Scalable Switch” on page 12
1.4.1 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
The CN4093 switch is initially licensed for fourteen 10 GbE internal ports, two external 10
GbE SFP+ ports, and six external Omni Ports enabled.
Further ports can be enabled: Upgrade 1 (00D5845) adds 14 internal ports and two external 40 GbE QSFP+ uplink ports, and Upgrade 2 (00D5847) adds 14 internal ports and six additional external Omni Ports. Upgrade 1 and Upgrade 2 can be applied on the switch independently of each other, or in combination for full feature capability.
Table 1-8 shows the part numbers for ordering the switches and the upgrades.
Table 1-8 CN4093 10Gb Converged Scalable Switch part numbers and port upgrades
Columns: part number; feature code(a); description; total ports enabled (internal 10Gb, external 10Gb SFP+, external 10Gb Omni, external 40Gb QSFP+).
00D5823 (A3HH / ESW2) - Base switch (no upgrades): 14, 2, 6, 0
00D5845 (A3HL / ESU1) - Add Upgrade 1: 28, 2, 6, 2
00D5847 (A3HM / ESU2) - Add Upgrade 2: 28, 2, 12, 0
00D5845 + 00D5847 - Add both Upgrade 1 and Upgrade 2: 42, 2, 12, 2
a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
code is for configurations ordered through the IBM Power Systems channel.
Each upgrade license enables additional internal ports. To take full advantage of those ports,
each compute node needs the appropriate I/O adapter installed:
The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
each of two switches)
Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the
adapter to each switch) to use all internal ports
Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to
each switch) to use all internal ports
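The port arithmetic behind Table 1-8 and the adapter guidance above can be captured in a short sketch. This is a minimal Python example for illustration only (not an IBM tool): the port counts come straight from Table 1-8, and the adapter calculation assumes 14 compute nodes and a pair of switches, as described in the list above.

# Illustrative sketch: CN4093 ports enabled by Features on Demand upgrades (Table 1-8).
BASE = {"internal": 14, "ext_10gb_sfp": 2, "ext_omni": 6, "ext_40gb_qsfp": 0}
UPGRADE_1 = {"internal": 14, "ext_40gb_qsfp": 2}   # 00D5845
UPGRADE_2 = {"internal": 14, "ext_omni": 6}        # 00D5847

def enabled_ports(*upgrades):
    # Total enabled ports after applying the given upgrade licenses.
    totals = dict(BASE)
    for upgrade in upgrades:
        for port_type, count in upgrade.items():
            totals[port_type] += count
    return totals

def adapter_ports_needed(totals, nodes=14, switches=2):
    # Adapter ports each compute node needs to use every internal switch port.
    return (totals["internal"] // nodes) * switches

both = enabled_ports(UPGRADE_1, UPGRADE_2)
print(both)                         # 42 internal, 2 SFP+, 12 Omni, 2 QSFP+
print(adapter_ports_needed(both))   # 6, that is, a six-port adapter per compute node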
1.4.2 IBM Flex System Fabric EN4093 & EN4093R 10Gb Scalable Switch
The EN4093 and EN4093R are initially licensed with fourteen 10 Gb internal ports enabled
and ten 10 Gb external uplink ports enabled. Further ports can be enabled, including the two
40 Gb external uplink ports with the Upgrade 1 and four additional SFP+ 10Gb ports with
Upgrade 2 license options.
Upgrade 1 must be applied before Upgrade 2 can be applied. These are IBM Features on
Demand license upgrades.
Table 1-9 lists the available parts and upgrades.
Table 1-9 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades
Columns: part number; feature code(a); product description; total ports enabled (internal, 10 Gb uplink, 40 Gb uplink).
49Y4270 (A0TB / 3593) - IBM Flex System Fabric EN4093 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports): 14, 10, 0
95Y3309 (A3J6 / ESW7) - IBM Flex System Fabric EN4093R 10Gb Scalable Switch (10x external 10 Gb uplinks, 14x internal 10 Gb ports): 14, 10, 0
49Y4798 (A1EL / 3596) - IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 1; adds 2x external 40 Gb uplinks and 14x internal 10 Gb ports): 28, 10, 2
88Y6037 (A1EM / 3597) - IBM Flex System Fabric EN4093 10Gb Scalable Switch (Upgrade 2, requires Upgrade 1; adds 4x external 10 Gb uplinks and 14x internal 10 Gb ports): 42, 14, 2
a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
code is for configurations ordered through the IBM Power Systems channel.
Each upgrade license enables additional internal ports. To take full advantage of those ports,
each compute node needs the appropriate I/O adapter installed:
The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
each of two switches)
Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch)
to use all internal ports
Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch) to use all
internal ports
Consideration: Adding Upgrade 2 enables an additional 14 internal ports. This allows a
total of 42 internal ports -- three ports connected to each of the 14 compute nodes. To take
full advantage of all 42 internal ports, a 6-port or 8-port adapter is required, such as the
CN4058 8-port 10Gb Converged Adapter.
Upgrade 2 still provides a benefit with a 4-port adapter because this upgrade enables an
extra four external 10 Gb uplinks as well.
1.4.3 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
The EN2092 comes standard with 14 internal and 10 external Gigabit Ethernet ports enabled.
Further ports can be enabled, including the four external 10 Gb uplink ports with IBM
Features on Demand license upgrades. Upgrade 1 and the 10 Gb Uplinks upgrade can be
applied in either order.
Table 1-10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades
Part number Feature codea Product description
49Y4294 A0TF / 3598 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
14 internal 1 Gb ports
10 external 1 Gb ports
90Y3562 A1QW / 3594 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
(Upgrade 1)
Adds 14 internal 1 Gb ports
Adds 10 external 1 Gb ports
49Y4298 A1EN / 3599 IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb
Uplinks)
Adds 4 external 10 Gb uplinks
a. The first feature code listed is for configurations ordered through System x sales channels. The
second feature code is for configurations ordered through the IBM Power Systems channel.
The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 additional
internal ports. To take full advantage of those ports, each compute node needs the
appropriate I/O adapter installed:
The base switch requires a two-port Ethernet adapter installed in each compute node (one
port of the adapter goes to each of two switches)
Upgrade 1 requires a four-port Ethernet adapter installed in each compute node (two ports
of the adapter to each switch)
1.4.4 IBM Flex System IB6131 InfiniBand Switch
The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18 FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for connections to nodes.
This switch ships standard with quad data rate (QDR) and can be upgraded to fourteen data
rate (FDR) with an IBM Features on Demand license upgrade. Ordering information is listed
in Table 1-11.
Table 1-11 IBM Flex System IB6131 InfiniBand Switch Part Number and upgrade option
Part number Feature codesa Product Name
90Y3450 A1EK / 3699 IBM Flex System IB6131 InfiniBand Switch
18 external QDR ports
14 QDR internal ports
90Y3462 A1QX / ESW1 IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade)
Upgrades all ports to FDR speeds
a. The first feature code listed is for configurations ordered through System x sales channels. The
second feature code is for configurations ordered through the IBM Power Systems channel.
1.4.5 IBM Flex System FC5022 16Gb SAN Scalable Switch
Table 1-12 lists the available port and feature upgrades for the FC5022 16Gb SAN Scalable
Switches. These upgrades are all IBM Features on Demand license upgrades.
Table 1-12 FC5022 switch upgrades
Columns: part number; feature codes(a); description; then support (Yes/No) for the 24-port 16 Gb ESB switch (90Y9356), the 24-port 16 Gb SAN switch (00Y3324), and the 16 Gb SAN switch (88Y6374), in that order.
88Y6382 A1EP / 3772 FC5022 16Gb SAN Scalable Switch (Upgrade 1) No No Yes
88Y6386 A1EQ / 3773 FC5022 16Gb SAN Scalable Switch (Upgrade 2) Yes Yes Yes
00Y3320 A3HN / ESW3 FC5022 16Gb Fabric Watch Upgrade No Yes Yes
00Y3322 A3HP / ESW4 FC5022 16Gb ISL/Trunking Upgrade No Yes Yes
a. The first feature code listed is for configurations ordered through System x sales channels. The second feature code is
for configurations ordered through the IBM Power Systems channel.
Table 1-13 shows the total number of active ports on the switch after applying compatible port
upgrades.
Table 1-13 Total port counts after applying upgrades
Columns: Ports on Demand upgrade; total number of active ports on the 24-port 16 Gb ESB switch (90Y9356), the 24-port 16 Gb SAN switch (00Y3324), and the 16 Gb SAN switch (88Y6374).
Included with base switch 24 24 12
Upgrade 1, 88Y6382 (adds 12 ports) Not supported Not supported 24
Upgrade 2, 88Y6386 (adds 24 ports) 48 48 48
1.5 vNIC and UFP support
Table 1-14 lists vNIC (virtual NIC) and UFP (Universal Fabric Port) support by combinations
of switch, adapter, and operating system.
In the table, we use the following abbreviations for the vNIC modes:
vNIC1 = IBM Virtual Fabric Mode
vNIC2 = Switch Independent Mode
10 GbE adapters only: Only 10 Gb Ethernet adapters support vNIC and UFP. 1 GbE
adapters do not support these features.
Table 1-14 Supported vNIC modes
Flex System I/O module (first column group): EN4093 10Gb Scalable Switch, EN4093R 10Gb Scalable Switch, or CN4093 10Gb Converged Switch; top-of-rack switch: none. Second column group: EN4091 10Gb Ethernet Pass-thru; top-of-rack switch: IBM RackSwitch™ G8124E or IBM RackSwitch G8264. Within each group, the columns are the operating systems Windows, Linux(a)(b), and VMware(c).
10Gb onboard LOM (x240 and x440): vNIC1, vNIC2, and UFP(d) supported in every column.
CN4054 10Gb Virtual Fabric Adapter, 90Y3554 (e-config #1759): vNIC1, vNIC2, and UFP(d) supported in every column.
EN4054 4-port 10Gb Ethernet Adapter (e-config #1762): does not support vNIC or UFP.
EN4132 2-port 10 Gb Ethernet Adapter, 90Y3466 (e-config #EC2D): does not support vNIC or UFP.
CN4058 8-port 10Gb Converged Adapter (e-config #EC24): does not support vNIC or UFP.
EN4132 2-port 10Gb RoCE Adapter (e-config #EC26): does not support vNIC or UFP.
a. Linux kernels with Xen are not supported with either vNIC1 or vNIC2. For support information, see IBM RETAIN®
Tip H205800 at http://ibm.com/support/entry/portal/docdisplay?lndocid=migr-5090480.
b. The combination of vNIC2 and iBoot is not supported for legacy booting with Linux.
c. The combination of vNIC2 with VMware ESX 4.1 and storage protocols (FCoE and iSCSI) is not supported.
d. The CN4093 10Gb Converged Switch is planned to support Universal Fabric Port (UFP) in 2Q/2013.
1.6 Chassis power supplies
Power supplies are available in either 2500W or 2100W capacities. The standard chassis
ships with two 2500W power supplies. A maximum of six power supplies can be installed.
The 2100W power supplies are only available via CTO and through the System x ordering
channel.
Table 1-15 shows the ordering information for the Enterprise Chassis power supplies. Power
supplies cannot be mixed in the same chassis.
Table 1-15 Power supply module option part numbers
Columns: part number; feature codes(a); description; chassis models where standard.
43W9049 (A0UC / 3590) - IBM Flex System Enterprise Chassis 2500W Power Module: standard in 8721-A1x (x-config) and 7893-92X (e-config)
47C7633 (A3JH / None) - IBM Flex System Enterprise Chassis 2100W Power Module: standard in none
a. The first feature code listed is for configurations ordered through System x sales channels. The second feature
code is for configurations ordered through the IBM Power Systems channel.
A chassis powered by the 2100W power supplies cannot provide N+N redundant power
unless all the compute nodes are configured with 95W or lower Intel processors. N+1
redundancy is possible with any processors.
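As a quick illustration of this rule, the following Python sketch (an assumed helper, not an IBM utility) checks whether a given power supply rating and set of compute node processor TDPs can offer N+N redundancy; exact node-count limits still come from Table 1-17 below.

# Illustrative sketch of the redundancy rule above; node-count limits are in Table 1-17.
def supports_n_plus_n(psu_watts, node_cpu_tdps):
    # 2100W supplies allow N+N only if every installed node uses 95W or lower processors.
    if psu_watts == 2500:
        return True            # still subject to the node-count limits in Table 1-17
    if psu_watts == 2100:
        return all(tdp <= 95 for tdp in node_cpu_tdps)
    raise ValueError("unknown power supply rating")

print(supports_n_plus_n(2100, [95, 80, 70]))   # True
print(supports_n_plus_n(2100, [115, 95]))      # False; N+1 remains possible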
Table 1-16 shows the nodes that are supported in chassis when powered by either the
2100W or 2500W modules.
Table 1-16 Compute nodes supported by the power supplies
Columns: node; supported by 2100W power supply; supported by 2500W power supply.
IBM Flex System Manager management node Yes Yes
x220 (with or without Storage Expansion Node or PCIe Expansion Node) Yes Yes
x240 (with or without Storage Expansion Node or PCIe Expansion Node) Yesa Yesa
x440 Yesa Yesa
p24L No Yesa
p260 No Yesa
p460 No Yesa
V7000 Storage Node (either primary or expansion node) Yes Yes
a. Some restrictions based on the TDP power of the processors installed or the power policy enabled. See Table 1-17
on page 15.
Table 1-17 on page 15 lists the number of compute nodes supported, based on the type and number of power supplies installed in the chassis and the power policy enabled (N+N or N+1).
In this table, the cell values have the following meaning: a value equal to the chassis maximum means the configuration is supported with no restrictions on the number of compute nodes that can be installed; a lower value means the configuration is supported but with restrictions on the number of compute nodes that can be installed.
Table 1-17 Specific number of compute nodes supported based on installed power supplies
Compute CPU 2100W power supplies 2500W power supplies
node TDP
rating N+1, N=5 N+1, N=4 N+1, N=3 N+N, N=3 N+1, N=5 N+1, N=4 N+1, N=3 N+N, N=3
6 total 5 total 4 total 6 total 6 total 5 total 4 total 6 total
x240 60W 14 14 14 14 14 14 14 14
70W 14 14 13 14 14 14 14 14
80W 14 14 13 14 14 14 14 14
95W 14 14 12 13 14 14 14 14
115W 14 14 11 12 14 14 14 14
130W 14 14 11 11 14 14 14 14
135W 14 14 11 11 14 14 13 14
x440 95W 7 7 6 6 7 7 7 7
115W 7 7 5 6 7 7 7 7
130W 7 7 5 5 7 7 6 7
p24L All Not supported 14 14 12 13
p260 All Not supported 14 14 12 13
p460 All Not supported 7 7 6 6
x220 50W 14 14 14 14 14 14 14 14
60W 14 14 14 14 14 14 14 14
70W 14 14 14 14 14 14 14 14
80W 14 14 14 14 14 14 14 14
95W 14 14 14 14 14 14 14 14
FSM 95W 2 2 2 2 2 2 2 2
V7000 N/A 3 3 3 3 3 3 3 3
Assumptions:
All compute nodes are fully configured
Throttling and oversubscription are enabled
Tip: Consult the Power configurator for exact configuration support:
http://ibm.com/systems/bladecenter/resources/powerconfig.html
1.7 Rack to chassis
IBM offers an extensive range of industry-standard and EIA-compatible rack enclosures and
expansion units. The flexible rack solutions help you consolidate servers and save space,
while allowing easy access to crucial components and cable management.
Table 1-18 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.
Table 1-18 The chassis supported in each rack cabinet
Columns: part number; rack cabinet; supports the Enterprise Chassis.
93634CX IBM PureFlex System 42U Rack Yes (recommended)
93634DX IBM PureFlex System 42U Expansion Rack Yes (recommended)
93634PX IBM 42U 1100 mm Deep Dynamic rack Yes (recommended)
201886X IBM 11U Office Enablement Kit Yes
93072PX IBM S2 25U Static standard rack Yes
93072RX IBM S2 25U Dynamic standard rack Yes
93074RX IBM S2 42U standard rack Yes
99564RX IBM S2 42U Dynamic standard rack Yes
93084PX IBM 42U Enterprise rack Yes
93604PX IBM 42U 1200 mm Deep Dynamic Rack Yes
93614PX IBM 42U 1200 mm Deep Static rack Yes
93624PX IBM 47U 1200 mm Deep Static rack Yes
9306-900 IBM Netfinity® 42U Rack No
9306-910 IBM Netfinity 42U Rack No
9308-42P IBM Netfinity Enterprise Rack No
9308-42X IBM Netfinity Enterprise Rack No
Varies IBM NetBay 22U No
2.1 Compute node-to-card interoperability
Table 2-1 lists the available I/O adapters and their compatibility with compute nodes.
Power Systems compute nodes: Some I/O adapters supported by Power Systems
compute nodes are restricted to only some of the available slots. See Table 2-2 on page 19
for specifics.
Table 2-1 I/O adapter compatibility matrix - compute nodes
Columns: System x part number; x-config feature code; e-config feature code(a); I/O adapter; then support (Y/N) for each server in this order: x220, x240, x440(b), p24L, p260 22X, p260 23X, p460.
Ethernet adapters
49Y7900 A1BR 1763 / A10Y EN2024 4-port 1Gb Ethernet Adapter Y Y Y Y Y Y Y
90Y3466 A1QY EC2D / A1QY EN4132 2-port 10 Gb Ethernet Adapter Y Y Y N N N N
None None 1762 / None EN4054 4-port 10Gb Ethernet Adapter N N N Y Y Y Y
90Y3554 A1R1 1759 / A1R1 CN4054 10Gb Virtual Fabric Adapter Y Y Y N N N N
90Y3558 A1R0 1760 / A1R0 CN4054 Virtual Fabric Adapter Upgradec Y Y Y N N N N
None None EC24 / None CN4058 8-port 10Gb Converged Adapter N N N Y Y Y Y
None None EC26 / None EN4132 2-port 10Gb RoCE Adapter N N N Y Y Y Y
Fibre Channel adapters
69Y1938 A1BM 1764 / A1BM FC3172 2-port 8Gb FC Adapter Y Y Y Y Y Y Y
95Y2375 A2N5 EC25 / A2N5 FC3052 2-port 8Gb FC Adapter Y Y Y N N N N
88Y6370 A1BP EC2B / A1BP FC5022 2-port 16Gb FC Adapter Y Y Y N N N N
InfiniBand adapters
90Y3454 A1QZ EC2C / A1QZ IB6132 2-port FDR InfiniBand Adapter Y Y Y N N N N
None None 1761 / None IB6132 2-port QDR InfiniBand Adapter N N N Y Y Y Y
SAS
90Y4390 A2XW None / A2XW ServeRAID M5115 SAS/SATA Controllerd Y Y Yb N N N N
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
supported). The second is for the x220 and x440.
b. For compatibility as listed here, ensure the x440 is running IMM2 firmware Build 40a or later.
c. Features on Demand (software) upgrade to enable FCoE and iSCSI on the CN4054. One upgrade needed per
adapter.
d. Various enablement kits and Features on Demand upgrades are available for the ServeRAID M5115. See the
ServeRAID M5115 Product Guide, http://www.redbooks.ibm.com/abstracts/tips0884.html?Open
For Power Systems compute nodes, Table 2-2 shows which specific I/O expansion slots each of the supported adapters can be installed in. Yes in the table means the adapter is supported in that I/O expansion slot.
Tip: Table 2-2 applies to Power Systems compute nodes only.
Table 2-2 Slot locations supported by I/O expansion cards in Power Systems compute nodes
Columns: feature code; description; Slot 1; Slot 2; Slot 3 (p460); Slot 4 (p460).
10 Gb Ethernet
EC24 IBM Flex System CN4058 8-port 10Gb Converged Adapter Yes Yes Yes Yes
EC26 IBM Flex System EN4132 2-port 10Gb RoCE Adapter No Yes Yes Yes
1762 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter Yes Yes Yes Yes
1 Gb Ethernet
1763 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter Yes Yes Yes Yes
InfiniBand
1761 IBM Flex System IB6132 2-port QDR InfiniBand Adapter No Yes No Yes
Fibre Channel
1764 IBM Flex System FC3172 2-port 8Gb FC Adapter No Yes No Yes
(Table 2-3, x86 memory options, continued) Columns: part number; x-config feature; e-config feature(a)(b); description; x220; x240; x440.
Load-reduced DIMMs (LRDIMMs)
49Y1567 (A290, EEM6 / A290) - 16GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM: No, Yes, Yes
90Y3105 (A291, EEM8 / A291) - 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM: Yes, Yes, Yes
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
supported). The second is for the x220 and x440.
b. For memory DIMMs, the first feature code listed will result in two DIMMs each, whereas the second feature code
listed contains only one DIMM each.
2.2.2 Power Systems compute nodes
Table 2-4 lists the supported memory DIMMs for Power Systems compute nodes.
Table 2-4 Supported memory DIMMs - Power Systems compute nodes
Columns: part number; e-config feature; description; p24L; p260 22X; p260 23X; p460.
78P1011 EM04 2x 2 GB DDR3 RDIMM 1066 MHz Yes Yes No Yes
78P0501 8196 2x 4 GB DDR3 RDIMM 1066 MHz Yes Yes Yes Yes
78P0502 8199 2x 8 GB DDR3 RDIMM 1066 MHz Yes Yes No Yes
78P1917 EEMD 2x 8 GB DDR3 RDIMM 1066 MHz Yes Yes Yes Yes
78P0639 8145 2x 16 GB DDR3 RDIMM 1066 MHz Yes Yes No Yes
78P1915 EEME 2x 16 GB DDR3 RDIMM 1066 MHz Yes Yes Yes Yes
78P1539 EEMF 2x 32 GB DDR3 RDIMM 1066 MHz Yes Yes Yes Yes
2.3 Internal storage compatibility
This section covers supported internal storage for both compute node families. It covers the
following topics:
2.3.1, “x86 compute nodes: 2.5-inch drives” on page 22
2.3.2, “x86 compute nodes: 1.8-inch drives” on page 23
2.3.3, “Power Systems compute nodes” on page 24
2.3.1 x86 compute nodes: 2.5-inch drives
Table 2-5 lists the 2.5-inch drives for x86 compute nodes.
Table 2-5 Supported 2.5-inch SAS and SATA drives
Columns: part number; x-config feature; e-config feature(a); description; x220; x240; x440.
10K SAS hard disk drives
90Y8877 A2XC None / A2XC IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS HDD N N Y
42D0637 5599 3743 / 5599 IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD Y Y N
44W2264 5413 None / 5599 IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS SED N N Y
90Y8872 A2XD None / A2XD IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS HDD N N Y
49Y2003 5433 3766 / 5433 IBM 600GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD Y Y N
81Y9650 A282 EHD4 / A282 IBM 900GB 10K 6Gbps SAS 2.5" SFF HS HDD Y Y Y
15K SAS hard disk drives
90Y8926 A2XB None / A2XB IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD N N Y
42D0677 5536 EHD1 / 5536 IBM 146GB 15K 6Gbps SAS 2.5" SFF Slim-HS HDD Y Y N
81Y9670 A283 EHD5 / A283 IBM 300GB 15K 6Gbps SAS 2.5" SFF HS HDD Y Y Y
NL SAS hard disk drives
81Y9690 A1P3 EHD6 / A1P3 IBM 1TB 7.2K 6Gbps NL SAS 2.5" SFF HS HDD Y Y Y
90Y8953 A2XE None / A2XE IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD N N Y
42D0707 5409 EHD2 / 5409 IBM 500GB 7200 6Gbps NL SAS 2.5" SFF HS HDD Y Y N
NL SATA hard disk drives
81Y9730 A1AV EHD9 / A1AV IBM 1TB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD Y Y Y
81Y9722 A1NX EHD7 / A1NX IBM 250GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD Y Y Y
81Y9726 A1NZ EHD8 / A1NZ IBM 500GB 7.2K 6Gbps NL SATA 2.5" SFF HS HDD Y Y Y
Solid-state drives - Enterprise
00W1125 A3HR None / A3HR IBM 100GB SATA 2.5" MLC HS Enterprise SSD Y Y Y
43W7746 5420 None / 5420 IBM 200GB SATA 1.8" MLC SSD Y Y Y
43W7718 A2FN EHD3 / A2FN IBM 200GB SATA 2.5" MLC HS SSD Y Y Y
43W7726 5428 None / 5428 IBM 50GB SATA 1.8" MLC SSD Y Y Y
(Table 2-5, continued) Columns: part number; x-config feature; e-config feature(a); description; x220; x240; x440.
Solid-state drives - Enterprise value
49Y5839 A3AS None / A3AS IBM 64GB SATA 2.5" MLC HS Enterprise Value SSD Y Y N
90Y8648 A2U4 EHDD / A2U4 IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD Y Y Y
90Y8643 A2U3 EHDC / A2U3 IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD Y Y Y
49Y5844 A3AU None / A3AU IBM 512GB SATA 2.5" MLC HS Enterprise Value SSD Y Y N
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
supported). The second is for the x220 and x440.
2.3.2 x86 compute nodes: 1.8-inch drives
The x86 compute nodes support 1.8-inch solid-state drives with the addition of the
ServeRAID M5115 RAID controller plus the appropriate enablement kits. For details about
configurations, see ServeRAID M5115 SAS/SATA Controller for IBM Flex System, TIPS0884.
Tip: The ServeRAID M5115 RAID controller is installed in I/O expansion slot 1, but it can be installed along with the Compute Node Fabric Connector (also known as the periscope connector), which is used to connect the onboard Ethernet controller to the chassis midplane.
Table 2-6 lists the supported enablement kits and Features on Demand activation upgrades
available for use with the ServeRAID M5115.
Table 2-6 ServeRAID M5115 compatibility
Columns: part number; feature code(a); description; x220; x240; x440.
90Y4390 A2XW ServeRAID M5115 SAS/SATA Controller for IBM Flex System Yes Yes Yes
Hardware enablement kits - IBM Flex System x220 Compute Node
90Y4424 A35L ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 Yes No No
90Y4425 A35M ServeRAID M5100 Series IBM Flex System Flash Kit for x220 Yes No No
90Y4426 A35N ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 Yes No No
Hardware enablement kits - IBM Flex System x240 Compute Node
90Y4342 A2XX ServeRAID M5100 Series Enablement Kit for IBM Flex System x240 No Yes No
90Y4341 A2XY ServeRAID M5100 Series IBM Flex System Flash Kit for x240 No Yes No
90Y4391 A2XZ ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x240 No Yesb No
Hardware enablement kits - IBM Flex System x440 Compute Node
46C9030 A3DS ServeRAID M5100 Series Enablement Kit for IBM Flex System x440 No No Yes
46C9031 A3DT ServeRAID M5100 Series IBM Flex System Flash Kit for x440 No No Yes
46C9032 A3DU ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x440 No No Yes
(Table 2-6, continued) Columns: part number; feature code(a); description; x220; x240; x440.
Feature on-demand licenses (for all three compute nodes)
90Y4410 A2Y1 ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System Yes Yes Yes
90Y4412 A2Y2 ServeRAID M5100 Series Performance Upgrade for IBM Flex System Yes Yes Yes
90Y4447 A36G ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System Yes Yes Yes
a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the
x240 which are for HVEC only.
b. If the ServeRAID M5100 Series SSD Expansion Kit (90Y4391) is installed, the x240 USB Enablement Kit
(49Y8119) cannot also be installed. Both kits include special air baffles that cannot be installed at the same time.
Table 2-7 lists the drives supported in conjunction with the ServeRAID M5115 RAID
controller.
Table 2-7 Supported 1.8-inch solid-state drives
Columns: part number; feature code(a); description; x220; x240; x440.
43W7746 5420 IBM 200GB SATA 1.8" MLC SSD Yes Yes Yes
43W7726 5428 IBM 50GB SATA 1.8" MLC SSD Yes Yes Yes
49Y5993 A3AR IBM 512GB SATA 1.8" MLC Enterprise Value SSD No No No
49Y5834 A3AQ IBM 64GB SATA 1.8" MLC Enterprise Value SSD No No No
a. The feature codes listed here are for both x-config (HVEC) and e-config (AAS), with the exception of those for the
x240 which are for HVEC only.
2.3.3 Power Systems compute nodes
Local storage options for Power Systems compute nodes are shown in Table 2-8. None of the available drives are hot-swappable. The local drives (HDD or SSD) are mounted to the top cover of the system. If you use local drives, you must order the appropriate cover with connections for your chosen drive type. The maximum number of drives that can be installed in any Power Systems compute node is two. SSD and HDD drives cannot be mixed.
Table 2-8 Local storage options for Power Systems compute nodes
Columns: e-config feature; description; p24L; p260; p460.
2.5 inch SAS HDDs
8274 300 GB 10K RPM non-hot-swap 6 Gbps SAS Yes Yes Yes
8276 600 GB 10K RPM non-hot-swap 6 Gbps SAS Yes Yes Yes
8311 900 GB 10K RPM non-hot-swap 6 Gbps SAS Yes Yes Yes
7069 Top cover with HDD connectors for the p260 and p24L Yes Yes No
7066 Top cover with HDD connectors for the p460 No No Yes
1.8 inch SSDs
8207 177 GB SATA non-hot-swap SSD Yes Yes Yes
(Table 2-8, continued) Columns: e-config feature; description; p24L; p260; p460.
7068 Top cover with SSD connectors for the p260 and p24L Yes Yes No
7065 Top Cover with SSD connectors for p460 No No Yes
No drives
7067 Top cover for no drives on the p260 and p24L Yes Yes No
7005 Top cover for no drives on the p460 No No Yes
2.4 Embedded virtualization
The x86 compute nodes support an IBM standard USB flash drive (USB Memory Key) option
preinstalled with VMware ESXi or VMware vSphere. It is fully contained on the flash drive,
without requiring any disk space.
On the x240 the USB memory keys plug into the USB ports on the optional x240 USB
Enablement Kit. On the x220 and x440, the USB memory keys plug directly into USB ports on
the system board.
Table 2-9 lists the ordering information for the VMware hypervisor options.
Table 2-9 IBM USB Memory Key for VMware hypervisors
Columns: part number; x-config feature; e-config feature(a); description; x220; x240; x440.
49Y8119 A33M None / None x240 USB Enablement Kit No Yesb No
41Y8300 A2VC EBK3 / A2VC IBM USB Memory Key for VMware ESXi 5.0 Yes Yes Yes
41Y8307 A383 None / A383 IBM USB Memory Key for VMware ESXi 5.0 Update1 Yes Yes Yes
41Y8298 A2G0 None / A2G0 IBM Blank USB Memory Key for VMware ESXi Downloads Yes Yes Yes
a. There are two e-config (AAS) feature codes for some options. The first is for the x240, p24L, p260 and p460 (when
supported). The second is for the x220 and x440.
b. If the x240 USB Enablement Kit (49Y8119) is installed, the ServeRAID M5100 Series SSD Expansion Kit
(90Y4391) cannot also be installed. Both kits include special air baffles that cannot be installed at the same time.
You can use the Blank USB Memory Key, 41Y8298, to load any available IBM customized
version of the VMware hypervisor. The VMware vSphere hypervisor with IBM customizations
can be downloaded from the following website:
http://ibm.com/systems/x/os/vmware/esxi
Power Systems compute nodes do not support VMware ESXi installed on a USB Memory
Key. Power Systems compute nodes support IBM PowerVM® as standard.
These servers do support virtual servers, also known as logical partitions or LPARs. The
maximum number of virtual servers is 10 times the number of cores in the compute node:
p24L: Up to 160 virtual servers (10 x 16 cores)
p260: Up to 160 virtual servers (10 x 16 cores)
p460: Up to 320 virtual servers (10 x 32 cores)
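A one-line calculation captures this rule; the following minimal Python sketch (illustration only) reproduces the figures above.

# Maximum virtual servers (LPARs) = 10 x the number of processor cores in the node.
def max_virtual_servers(cores):
    return 10 * cores

for node, cores in (("p24L", 16), ("p260", 16), ("p460", 32)):
    print(node, max_virtual_servers(cores))   # p24L 160, p260 160, p460 320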