• Share
  • Email
  • Embed
  • Like
  • Save
  • Private Content
IBM eX5 Technical Overview
Now in its fifth generation, IBM X-Architecture continues to build on its deep partnership with Intel and a decade of x86 innovations to provide unparalleled configuration choice, mainframe-inspired reliability, comprehensive systems management and an energy-smart design. With the ability to help you maximize memory, minimize costs and simplify deployment and ownership, eX5 can help you get the greatest value for your organization today and in the future.

  • Now let’s take a closer look at how HX5 delivers value to our customers. HX5 with MAX5 delivers 25% more memory capacity than any server in its class, resulting in optimal server utilization for memory-constrained applications. More memory also allows customers to run more, larger virtual machines per server and delivers greater database throughput. For customers looking for maximum performance, HX5 offers compute capacity of up to 32 CPU cores with the 4-socket blade; this not only provides blazing-fast performance, it also lets customers save on system costs by getting the most out of licensing where software is priced per server or per CPU. For memory-limited apps, adding more applications does not have to cost another server, because smaller DIMMs can be used. HX5 also delivers faster time to value: since it is built from the same 2-socket base blade, customers qualify a single platform for 2-socket to 4-socket server needs, which allows them to standardize on a single system, and the image developed for a 2S HX5 system will essentially work on all other scaled HX5 systems.
    Memory price (HS22/HS22V vs. HX5): 2GB $125 / $125; 4GB $235 / $275; 8GB $990 / $989. HX5 memory prices used below: 2GB $125, 4GB $275, 8GB $989.
    Blade price (base model with all CPUs populated): 4S HX5 $17,000; 4S HX5+MAX5 $25,600.
    256GB memory configuration price (memory only): 4S HX5, 32 x 8GB = $31,648; 4S HX5+MAX5, 32 x 2GB + 48 x 4GB = $17,200.
    Total system price: 4S HX5 $48,648; 4S HX5+MAX5 $42,800. Delta: $5,848 (about $6K memory savings).
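    A quick arithmetic check of the memory-cost comparison in that note (a minimal sketch; the DIMM and blade prices are the list prices quoted above):

```python
# Sketch: verify the 256 GB memory-cost comparison from the note above.
price = {2: 125, 4: 275, 8: 989}                # GB -> USD list price (HX5)

# 4S HX5: 32 DIMM slots, so 256 GB requires 32 x 8 GB DIMMs.
hx5_memory = 32 * price[8]                      # 31,648

# 4S HX5 + MAX5: more slots, so smaller DIMMs reach 256 GB.
max5_memory = 32 * price[2] + 48 * price[4]     # 17,200

hx5_total = 17_000 + hx5_memory                 # base blade + memory
max5_total = 25_600 + max5_memory

print(hx5_total, max5_total, hx5_total - max5_total)   # 48648 42800 5848
```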
  • 5x Memory (Jan 23, 2010) HP DL380 G6 has 18 DIMMs, but only supports up to 128GB (12 x 16) due to rank limitations in Nehalem EP
  • QPI transfers memory at 6.4 GT/s
  • after this “switch gears”
  • Memory population notes: each CPU needs at least one memory card; each memory card needs at least 2 DIMMs; DIMMs must be installed in matching pairs. Optimal memory performance requires 2 memory cards per CPU, with 4 DIMMs on each memory card and equal amounts of memory per card. If memory is mirrored, DIMMs must match in sets of 4. eX5 memory expansion works best with DIMMs in sets of 4 and with a 2:1 ratio of memory in host/expander. Best not to mix ranks in a channel.
    DIMM part numbers (memory size / DIMM / FRU / option, where the option contains only 1 DIMM): 1GB (1Gb, x8, SR) 43X5044 / 44T1490 / 44T1480; 2GB (1Gb, x8, DR) 43X5045 / 44T1491 / 44T1481; 4GB (1Gb, x8, QR) 43X5055 / 46C7452 / 46C7448; 8GB (2Gb, x8, QR) 43X5070 / 46C7488 / 46C7482; 16GB (2Gb, x4, QR) 43X5071 / 46C7489 / 46C7483.
    Memory installation order (memory card: DIMM numbers): cards 1, 7, 3, 5, 2, 8, 4, 6 each take DIMMs 1 & 8 first; then the same card sequence takes DIMMs 3 & 6; then 2 & 7; then 4 & 5.
  • A PCIe Gen 2 lane is 500 MB/s: an x16 PCIe slot is ~8 GB/s, an x8 PCIe slot is ~4 GB/s, and an x4 PCIe slot is ~2 GB/s.
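    A minimal sketch of that arithmetic (one Gen 2 lane at 500 MB/s, slot bandwidth scaling with lane count):

```python
# Sketch: PCIe Gen 2 slot bandwidth = lanes x 500 MB/s per lane.
GEN2_LANE_MBPS = 500

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{lanes * GEN2_LANE_MBPS / 1000:g} GB/s")
# x16: ~8 GB/s, x8: ~4 GB/s, x4: ~2 GB/s
```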
  • SAP Business All-in-One fast-start. Fast-Start is SAP’s primary ‘out-of-the-box’ midmarket solution, which includes an optional, pre-installed, fixed-price infrastructure offering. Fast-Start solutions are sold exclusively by SAP Value Added Resellers; Fast-Start infrastructure stacks are co-marketed with a HW reseller and fulfilled by the HW reseller. The SAP® Business All-in-One fast-start program enables you to configure an SAP industry-specific solution online, obtain an estimate of costs, and implement the solution quickly and cost-effectively while delivering rapid time to value.
    SAP All-in-One value proposition: industry-specific functionality at a low cost of ownership; bounded project scope, predictable costs, and short implementation times; minimal IT staff dedicated to supporting on-premise business solutions; flexibility to grow and adapt to meet your company’s changing business requirements.
    IBM All-in-One fast-start value proposition: preinstalled solution with minimal IT support requirements; flexibility and growth with investment protection using IBM BladeCenter S or System x3850 with IBM System Storage; local support from IBM and SAP business partners; predictable IT costs through IBM leasing offerings; reduced risk with a preinstalled Linux All-in-One Fast-Start solution.
    Which customers are good candidates for SAP Business All-in-One fast-start: small and midmarket customers who need industry-specific business functionality; customers looking to replace legacy ERP solutions; customers who have complex business requirements; customers who have plans for growth and expansion.
  • Have to borrow a QPI link to connect to MK.
  • Ent / nebs
  • It is technically possible for a platform to have different QPI links running at different frequencies (e.g. slower between node controllers and faster from CPUs to IOHs), but this is an OEM design decision and does add complexity to the design, so different QPI link speeds in the same system are not probable.
  • Best DIMM choices: more ranks are better (QR better than DR, better than SR). Remember: more ranks do NOT reduce frequency.
  • Have to borrow a QPI link to connect to MK.
  • Important points: the “uncore” provides segment differentiation, and separate voltage/frequency domains decouple the core and uncore designs. Core: consistent logic across the processor family, the same microarchitecture across all segments, which is good for software optimization. Uncore: the things that glue all the cores together, i.e. the number of cores, size of the LLC, type of memory (native vs. buffered), number of sockets, and integrated graphics (available on some but not all); these are the different knobs that can be adjusted. Important point to note: the uncore is what differentiates Nehalem EP from EX. Q: is the common core the same core as Nehalem EP? A: yes.
  • Nehalem-EP (NHM-EP): 4 cores / 8 threads per socket, Nehalem core, 45nm process, 8MB shared LLC, SMT (~Hyper-Threading), QPI links (only 1 is a coherent link) per socket, one integrated memory controller (IMC), three DDR channels per socket, scales to 2 sockets max. Nehalem-EX (NHM-EX): up to 8 cores / 16 threads per socket, Nehalem core, 45nm process, 24MB shared L3, SMT (~Hyper-Threading), Turbo Boost, four QPI links (3 coherent), two integrated memory controllers (IMC), four buffered memory channels per socket, scales to up to 16 sockets.
  • The physical connectivity of each interconnect link is made up of twenty differential signal pairs plus a differential forwarded clock. Each port supports a link pair consisting of two uni-directional links to complete the connection between two components. This supports traffic in both directions simultaneously. The Intel® QuickPath Interconnect is a double-pumped data bus, meaning data is captured at the rate of one data transfer per edge of the forwarded clock. So every clock period captures two chunks of data. The maximum amount of data sent across a full-width Intel® QuickPath Interconnect is 16 bits, or 2 bytes. Note that 16 bits are used for this calculation, not 20 bits. Although the link has up to 20 1-bit lanes, no more than 16 bits of real ‘data’ payload are ever transmitted at a time, so the more accurate calculation would be to use 16 bits of data at a time. The maximum frequency of the initial Intel® QuickPath Interconnect implementation is 3.2 GHz. This yields a double-pumped data rate of 6.4 GT/s with 2 bytes per transition, or 12.8 GB/s. An Intel® QuickPath Interconnect link pair operates two uni-directional links simultaneously, which gives a final theoretical raw bandwidth of 25.6 GB/s.
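    A worked version of that bandwidth calculation (a minimal sketch following the figures in the note):

```python
# Sketch: Intel QuickPath Interconnect raw bandwidth, per the note above.
forwarded_clock_ghz = 3.2      # initial QPI implementation
transfers_per_clock = 2        # double-pumped: one transfer per clock edge
payload_bytes = 2              # 16 data bits per transfer (not the full 20 lanes)

gt_per_s = forwarded_clock_ghz * transfers_per_clock       # 6.4 GT/s
one_direction_gb_s = gt_per_s * payload_bytes              # 12.8 GB/s
link_pair_gb_s = one_direction_gb_s * 2                    # 25.6 GB/s, both directions

print(gt_per_s, one_direction_gb_s, link_pair_gb_s)
```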

IBM eX5 Technical Overview Presentation Transcript

  • Enterprise X-Architecture 5th Generation: Systems for a Smarter Planet. Technical Presentation, March 30, 2010
  • Table of Contents
    • Client Challenges and eX5 Opportunity
    • eX5 Systems Overview
    • eX5 System Details
    • x3850 X5
    • x3690 X5
    • MAX5
    • HX5
    • Processor Information
    • Technical Highlights: Reliability
    • Technical Highlights: Performance
  • Client Challenges and eX5 Opportunity
  • Memory Capacity
    • Buy what they need when they need it
    • License Fees
    • Operational Expense
    • Energy and mgmt expenses
    • Fit more into the datacenter they have today
    • Reduce cost to qualify systems
    Do More With Less
    • Speed time from deployment to production
    • Optimized performance for their workload needs
    • Get more out of the people, IT, and spending they have
    • Flexibility to get the IT they need, the way they need it
    Simplify
    Difficult challenges create an opportunity for innovation. Client challenges with enterprise workloads: database, virtualization, transaction processing.
    • More virtual machines
    • Larger virtual machines
    • Bigger databases
    • Faster database performance
    • Greater server utilization
  • Turning “opportunity” into results Building on industry standards with unique innovation
    • The broadest portfolio of flexible enterprise rack and blade systems
    • Lower entry points to enterprise technology
    • The most memory with unique expansion capabilities
    • The fastest integrated data access
    • Maximum flexibility with node partitioning
    • Optimized configurations for target workloads
    Innovation that uniquely meets client requirements
  • Opportunity for Innovation Leadership through Innovation
    • 5th Generation of the industry-leading enterprise x86-architected servers
      • Leveraging IBM’s deep history of innovation
      • Over $800M invested in innovation over 10 years of collaboration
      • #1 marketshare for enterprise workload x86 servers
    • 2x the memory capability of competitors’ offerings
      • Delivering up to 50% savings in virtualization licensing
      • Delivering up to 50% savings in database licensing
    • Lowering cost of acquisition and simplifying deployment
      • Decoupling processors/memory
      • Delivering up to 34% savings in management
      • Unprecedented flexibility in deployment
    Introducing IBM System x eX5 Family of Servers
  • Maximize Memory Minimize Cost Simplify Deployment The broadest portfolio of systems optimized for your most demanding workloads
    • Maximize Memory
    • Over 5x more memory in 2 sockets than current x86 (Intel® Xeon® 5500 Series) systems
    • Nearly 8x more memory in 4 sockets than current x86 (Intel Xeon 5500 Series) systems
    • More memory delivers 60% more virtual machines for the same license cost
    • Minimize Cost
    • 50% less VMware license cost on eX5 for same number of virtual machines
    • Save over $1M USD in external database storage costs
    • Simplify Deployment
    • Leverage IBM Lab Services experts to configure and install hardware and software
    • Workload Optimized solutions reduce deployment from months to days
    • IBM Systems Director provides automated image deployment and pre-configured server identity
    The next generation of x86 is here!!
  • eX5 Systems Overview
  • eX5 leadership for an evolving marketplace with increasing demands. 1st Gen (2001): first x86 server with scalable 16-processor design. 2nd Gen (2003): first x86 server with 100 #1 benchmarks. 3rd Gen (2005): first x86 server with hot-swap memory. 4th Gen (2007): first x86 server to break 1 million tpmC. 5th Gen (2010): breakthrough performance, ultimate flexibility, simpler management.
  • eX5 Portfolio: Systems for a Smarter Planet. System x3850 X5 (4U / 4-way scalable): broad coverage for most enterprise applications, server consolidation, and virtualized workload enablement; consolidation, virtualization, and database workloads being migrated off proprietary hardware demand more addressability. System x3690 X5: powerful and scalable system that lets some workloads migrate onto a 2-socket design delivering enterprise computing in a dense package. BladeCenter HX5: demand for minimum footprint as well as integrated networking infrastructure has increased the growth of the blade form factor.
  • IBM System x3850 X5 Flagship System x platform for leadership scalable performance and capacity Versatile 4-socket, 4U rack-optimized scalable enterprise server provides a flexible platform for maximum utilization, reliability and performance of compute- and memory-intensive workloads.
    • Maximize Memory
      • 64 threads and 1TB capacity for 3.3x database and 3.6x the virtualization performance over industry 2-socket x86 (Intel Xeon 5500 Series) systems
      • MAX5 memory expansion for 50% more virtual machines and leadership database performance
      • Minimize Cost
      • Lower cost, high performance configurations reaching desired memory capacity using less expensive DIMMs
      • eXFlash 480k internal IOPs for 40x local database performance and $1.3M savings in equal IOPs storage
      • Simplify Deployment
      • Pre-defined database and virtualization workload optimized systems for faster deployment and faster time to value
    System Specifications
    • 4x Intel Xeon 7500-series CPUs
    • 64 DDR3 DIMMs, up to 96 with MAX5
    • 6 open PCIe slots (+ 2 additional)
    • Up to 8x 2.5” HDDs or 16x 1.8” SSDs
    • RAID 0/1 Std, Optional RAID 5/6
    • 2x 1GB Ethernet LOM
    • 2x 10GB Ethernet SFP+ Virtual Fabric / FCoEE
    • Scalable to 8S, 128 DIMM
    • Internal USB for embedded hypervisor
    • IMM, uEFI & IBM Systems Director
  • x3850 X5: 4-socket 4U x86 (Nehalem EX) platform. Callouts: (2x) 1975W rear-access hot-swap redundant power supplies; (4x) Intel Xeon CPUs; (8x) memory cards for 64 DIMMs, 8x 1066MHz DDR3 DIMMs per card; 6x PCIe Gen2 slots (+2 additional); 2x 60mm hot-swap fans; (8x) Gen2 2.5” drives or 2 eXFlash packs; (2x) 120mm hot-swap fans; dual USB; Light Path Diagnostics; RAID (0/1) standard, RAID 5/6 optional; DVD drive; 2x 10Gb Ethernet adapter.
  • IBM BladeCenter HX5 Scalable high end blade for high density compute and memory capacity Scalable blade server enables standardization on same platform for two- and four-socket server needs for faster time to value, while delivering peak performance and productivity in high-density environments.
    • Maximize Memory
      • 1.7x greater performance over 2-socket x86 (Intel Xeon 5500 Series) systems while using same two processor SW license
      • MAX5 memory expansion to 320GB in 60mm for over 25% more VMs per processor compared to competition
      • Minimize Cost
      • Upgrade to 80 DIMM for max memory performance or to save up to $6K by using smaller, less expensive DIMMs
      • Memory bound VMWare customers can save over $7K in licensing with memory rich two socket configurations
      • Simplify Deployment
      • Get up and running up to 2X faster by qualifying a single platform for 2 and 4 socket server needs
      • Partitioning of a 4-socket into two 2-sockets without any physical system reconfiguration, and automatic failover for maximum uptime
    System Specifications
    • 2x next-generation Intel Xeon (Nehalem EX) CPUs
    • 16x DDR3 VLP DIMMs
    • MAX5 memory expansion to 2 socket 40 DIMM
    • Scalable to 4S, 32 DIMM
    • Up to 8 I/O ports per node
    • Up to 2x SSDs per node
    • Optional 10GB Virtual Fabric Adapter / FCoEE
    • Internal USB for embedded hypervisor
    • IMM, uEFI, and IBM Systems Director
  • HX5 4-Socket Blade Top view of 4-socket system shown 2x Intel Xeon 7500 Series CPUs 4x IO Expansion Slots (2x CIOv + 2x CFFh) 32x VLP DDR3 Memory (16 per node) 4x SSD drives (1.8”) (2 per node) 2x 30mm nodes
    • HX5 Configurations
      • 2S, 16D, 8 I/O ports, 30mm
      • 4S, 32D, 16 I/O ports, 60mm
    • Additional Features
    • Internal USB for embedded hypervisor
    • Dual & redundant I/O and Power
    • IMM & UEFI
    Scale Connector 16x memory buffers (8 per node)
  • IBM System x3690 X5 Industry’s first entry enterprise 2-socket for maximum memory and performance High-end 2-socket, 2U server offers up to four times the memory capacity of today’s 2-socket servers with double the processing cores for unmatched performance and memory capacity .
    • Maximize Memory
      • 33% more cores and 5x more memory capacity for 1.7x more transactions per minute and 2x more virtual machines than 2-socket x86 (Intel Xeon 5500 Series) systems
      • MAX5 memory expansion for additional 46% more virtual machines and leadership database performance
      • Minimize Cost
      • Achieve 4-socket memory capacity with 2-socket software license costs and cheaper “2-socket only” processors
      • eXFlash 720k internal IOPs for 40x local database performance and $2M savings in equal IOPs storage
      • Simplify Deployment
      • Pre-defined database and virtualization workload engines for faster deployment and faster time to value
    System Specifications
    • 2x Intel Xeon 7500-series CPUs
    • 32 DDR3 DIMMs, up to 64 with MAX5
    • 2 x8 PCIe slots, 2 x8 Low Profile slots
    • Up to 16x 2.5” HDDs or 24x 1.8” SSDs
    • RAID 0/1 Std, Opt RAID 5
    • 2x 1GB Ethernet
    • Optional 2x 10GB SFP+ Virtual Fabric / FCoEE
    • Scalable to 4S, 64 DIMM or 128 DIMM
    • Internal USB for embedded hypervisor
    • IMM, uEFI, and IBM Systems Director
  • x3690 X5: 2-socket 2U (Nehalem EX) platform. Callouts: (1x) x16 (full height, full length) or (2x) x8 (1 full size, 1 full height / half length); (2x) PCIe x8 low profile; (1x) PCIe x4 low profile for RAID; (4x) N+N 675W rear-access hot-swap redundant power supplies; (16x) Gen2 2.5” drives or 3 eXFlash packs; scaling ports; 32x DDR3 memory DIMMs (16 in the upper mezzanine as pictured, 16 below); 8x memory buffers; (4x) 60mm hot-swap fans; dual USB; Light Path Diagnostics; DVD drive.
  • MAX5: Memory Access for eX5
    • Take your system to the MAX with MAX5
    • MAX memory capacity
    • An additional 32 DIMM slots for x3850 X5 and x3690 X5
    • An additional 24 DIMM slots for HX5
    • MAX virtual density
    • - Increase the size and number of VMs
    • MAX flexibility
    • - Expand memory capacity, scale servers, or both
    • MAX productivity
    • - Increase server utilization and performance
    • MAX license optimization
    • - Get more done with fewer systems
    Greater productivity and utilization through memory expansion and flexibility
  • MAX5 for x3690 X5 and x3850 X5: memory buffers, QPI ports, EXA ports, Firehawk chipset, 32 memory DIMMs
    • QPI attaches to systems
    • EXA Scalability to other memory drawers
    Lightpath Diagnostics Hot swap fans System removes from chassis for easy access
  • MAX5 for HX5 24x VLP DDR3 memory 6x memory buffers
    • HX5+MAX5 Configurations
      • 2S, 40D, 8 I/O ports, 60mm
    MAX5 HX5
    • FireHawk
    • 6 SMI lanes
    • 4 QPI ports
    • 3 scalability ports
    • Never before seen levels of scaling…
    • 2-socket, 30mm building block
    • 2-socket to 4-socket w/ logical partitioning
    • Max compute density!
    • Up to 32 cores in a 1¼ U equivalent space
    • Modular scalability in 2-socket increments to get to 4-socket
    • Targeted for database and compute-intensive simulations
    • Bringing the goodness of eX5 to blades…
    • Snaps onto base blade (sold as a bundle w/ base HX5)
    • Enables more memory than any other blades
    • Blade leadership!
    • Up to 30% more VMs than max competition blade
    • Flexible configurations & unmatched memory capacity
    • Targeted for Virtualization & DB for customers that need a blade form factor
    IBM BladeCenter scalable blades: maximum performance and flexibility for database and virtualization in a blade. Common building block: HX5 blade (2-socket, 16 DIMM, 8 I/O ports, 30mm); scaled HX5 (4-socket, 32 DIMM, 16 I/O ports, 60mm); HX5 blade with MAX5 (2-socket, 40 DIMM, 8 I/O ports, 60mm).
  • eX5 System Details
  • Intel Generic Boxboro-EX 4 Socket
    • Connectivity
      • Fully-connected (4 Intel ® QuickPath interconnects per socket)
      • 6.4, 5.86, or 4.8 GT/s on all links
      • Socket-LS
      • With 2 IOHs: 82 PCIe lanes (72 Gen2 Boxboro lanes + 4 Gen1 lanes on unused ESI port + 6 Gen1 ICH10 lanes)
    • Memory
      • CPU-integrated memory controller
      • Registered DDR3-1066 DIMMs running at speeds of 800, 978 or 1066 MHz via on-board memory buffer
      • 64 DIMM support (4:1 DIMM to buffer ratio)
    • Technologies & Enabling
      • Intel ® Intelligent Power Node Manager
      • Virtualization: VT-x, VT-d, & VT-c
      • Security: TPM 1.2, Measured Boot, UEFI
      • RAS features
    [Block diagram: four Nehalem-EX sockets fully connected by Intel® QuickPath interconnects, two Boxboro IOHs with PCIe lanes, ICH10 attached via x4 ESI, and 16 memory buffers (MB).]
  • Intel Generic Boxboro-EX 2 Socket Platform
    • Modified Boxboro-EX ingredients to offer a new high-memory 2S platform
    • Nehalem-EX 2S will only be validated with up to 2 IOHs
    • Nehalem-EX 2S SKUs cannot scale natively (w/ QPI) beyond 2 sockets
    • Nehalem-EX 2S SKUs can scale beyond 2 sockets with a node controller
    • Comparison w/ Boxboro-EX & Tylersburg-EP:
    Validated configuration comparison, Boxboro-EX 2S (2 IOH) vs. Tylersburg-EP (2 IOH): cores per CPU, up to 8 vs. up to 4; QuickPath Interconnects, 4^ vs. 2; DIMMs, 32 vs. 18; PCIe lanes, 72+10 vs. 72+6.
    [Block diagram: two Nehalem-EX sockets connected by Intel® QuickPath interconnects to two Boxboro IOHs, with ICH10 attached via x4 ESI.]
  • Intel ® Xeon ® 7500/6500 Platform Memory
    • 4 Socket platform capability (64 DIMMs):
      • Up to 16 DDR3 DIMMs per socket via up to four Scalable Memory Buffers
      • Support for up to 16GB DDR3 DIMMs
      • 1TB with 16GB DIMMs (see the sketch after this list)
    • Memory types supported:
      • 1066MHz DDR3
      • Registered (RDIMM)
      • Single-rank (SR), dual-rank (DR), quad-rank (QR)
    • Actual system memory speed depends on specific processor capabilities. (see NHM-EX SKU stack for max SMI link speeds per SKU):
      • 6.4 GT/s SMI link speed capable of running memory speeds up to 1066 MHz
      • 5.86 GT/s SMI link speed capable of running memory speeds up to 978 MHz**
      • 4.8 GT/s SMI link speed capable of running memory speeds up to 800 MHz
    [Diagram: Nehalem-EX 7500/6500 socket with 4 SMI links, each to a Scalable Memory Buffer (SMB) driving 2 DDR3 channels of RDIMMs, for up to 16 DIMMs per socket.]
    ^ SMI/SMB = Intel Scalable Memory Interconnect / Scalable Memory Buffer
    ** Memory speed set by UEFI; all channels in a system will run at the fastest common frequency
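    A quick worked check of the 1TB figure above (a minimal sketch using only the numbers from this slide):

```python
# Sketch: 4-socket Xeon 7500/6500 platform memory capacity.
sockets = 4
dimms_per_socket = 16          # via four scalable memory buffers per socket
dimm_gb = 16                   # largest supported DDR3 DIMM

total_gb = sockets * dimms_per_socket * dimm_gb
print(total_gb)                # 1024 GB, i.e. 1 TB with 16 GB DIMMs
```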
  • x3850 X5
  • x3850 X5: 4-socket 4U x86 (Nehalem EX) platform 6 Fans 2x – 1975 W P/S (2x) 1975W Rear Access Hot Swap, Redundant P/S (4x) Intel Xeon EX CPU’s (8x) Memory Cards – 8 DDR3 DIMMs per card 7x - PCIe Gen2 Slots 2x 60mm Hot Swap Fans (8x) Gen2 2.5” Drives or (16x) 1.8” SSD Drives (2x) 120mm Hot Swap Fans Dual USB DVD Drive Light Path Diagnostics RAID (0/1) Card, RAID 5/6 Optional 4U Rack Mechanical Chassis Front view
  • x3850 X5: 4-socket 4U x86 (Nehalem EX) platform QPI cables (2x) 1975W Rear Access Hot Swap, Redundant P/S w/120mm fan (4x) PCIe full length (3x) PCIe half length x8 Dual Backplane – SAS & SSD Redundant w/220VAC only 4U Rack Mechanical Chassis Rear view
  • System Images – Interior: PS1, PS2, Boxboro 1, Boxboro 2, M5015 RAID, HS fan pack, PCIe slots 1-7 (one x16, one x4, five x8), USB, system recovery jumpers, CPU1-CPU4, memory cards 1-8, SAS backplanes + HDD cage.
  • x3850 X5 Memory Card. Current: x3850/x3950 M2 memory card. New: x3850/x3950 X5 memory card. Specifications: 64 DDR3 DIMMs; 32 buses @ 1066 MHz, 2 DIMMs per bus; SMI (FBD2), 16 buses, each @ 6.4 GT/s (2B read / 1B write). Supported DIMMs: PC3-10600R LP DIMMs (1GB, x8, 1R, 1Gb; 2GB, x8, 2R, 1Gb) and PC3-8500R LP DIMMs (4GB, x8, 4R, 1Gb; 8GB, x8, 4R, 2Gb; 16GB, x4, 4R, 2Gb).
  • System Images – Chassis View: front-access 120mm fans; rear I/O shuttle; rear-access hot-swap PSUs (lift up on handle, then pull out); QPI scalability ports / cables
  • System Images – CPU and Memory Memory Cards CPU and Heatsink installation
  • System Images – Options M1015 RAID card and Installation bracket 8 HDD SAS Backplane
  • x3850 X5 – Hardware; Speeds and Feeds
    • Processors
    • 4 Socket Intel Nehalem EX 7500 series
    • (4) QPI Ports/Processor
    • (4) SMI Ports/Processor
    • Memory
    • (8) Memory Cards, 2 per CPU
    • (16) Intel Memory Buffers total
      • SMI Connected
    • DDR3 DIMMs
    • (64) Total DIMM Slots
    • 1066, 978 and 800 MHz DDR3 Speeds
      • Processor QPI Speed Dependent
    • 1, 2, 4, 8 and 16GB Support
      • Installed In Matched Pairs
    • Memory Sparing and Mirroring Support
      • Installed in Matched Quads
    • Chipset
    • (2) Intel Boxboro IOH (QPI-to-PCIe Bridge)
      • (36) PCIe Gen2 Lanes
      • (2) QPI Ports
      • (4) ESI Lanes To ICH10
    • Intel ICH10 Southbridge
      • (8) USB 2.0 Ports
      • 3Gb/s SATA DVD Connection
    • Networking
    • Broadcom BCM5709C
      • Dual 1Gb connection
      • x4 PCIe Gen2 Connection
    • Emulex 10Gb dual port custom
      • IBM Specific Adapter Option
      • Installs in PCIe Slot 7, x8 PCIe Gen2
      • V-NIC Capable
    • PCIe Slots
    • Slot 1 PCIE Gen2 x16 Full Length
    • Slot 2 PCIE Gen2 x4 Full Length (x8 mech)
    • Slot 3 PCIE Gen2 x8 Full Length
    • Slot 4 PCIE Gen2 x8 Full Length
    • Slot 5 PCIE Gen2 x8 Half Length
    • Slot 6 PCIE Gen2 x8 Half Length
    • Slot 7 PCIE Gen2 x8 Half Length (10GbE)
    • All slots 5Gb/s, full height
  • x3850 X5 – Hardware; Speeds and Feeds Cont.
    • 2.5” Storage
    • Up To (8) 2.5” HDD Bays
      • Support For SAS / SATA and SSD
    • SAS Drives
      • 146GB / 10K / 6Gbps
      • 300GB / 10K / 6Gbps
      • 73GB / 15K / 6Gbps
      • 146GB / 15K / 6Gbps
    • SATA Drive
      • 500GB / 7200rpm
    • SSD Drive
      • 50GB
    • Configured with one or two 4-Drive backplanes
    • UEFI BIOS
    • Next-generation replacement for BIOS-based firmware which provides a richer management experience
    • Removes limit on number of adapter cards—important in virtualized environments
    • Ability to remotely configure machines completely via command scripts with Advance Settings Utility
    • IMM
    • Common IMM Across Rack Portfolio
      • x3550M2
      • x3650M2
      • x3690 X5
      • x3850 X5
      • x3950 X5
    • 300MHz, 32-bit MIPS Processor
    • Matrox G200 Video Core
    • 128MB Dedicated DDR2 Memory
    • Avocent Based Digital Video Compression
    • Dedicated 10/100Mb Ethernet
    • 9-Pin Serial Port
    • 128-bit AES Hardware Encryption Engine
    • IPMI v2.0
    • Fan Speed Control
    • Serial Over LAN
    • Active Energy Manager/xSEC
    • LightPath
  • x3850 M2 to x3850 X5 transition, by subsystem:
    • PSU. x3850 M2: 1440W hot swap, full redundancy high line, 720W low line, rear access, no PS option, 2 standard. x3850 X5: 1975W hot swap, full redundancy high line, 875W low line, rear access, no PS option, 2 standard, configuration restrictions at 110V.
    • CPU card. x3850 M2: no VRDs, 4 VRMs, top access to CPU/VRM and CPU card. x3850 X5: no VRMs, 4 VRDs, top access to CPUs and CPU card.
    • Memory. x3850 M2: 4 memory cards, DDR2 PC2-5300 running at 533 Mbps, 8 DIMMs per memory card (32 DIMMs per chassis max). x3850 X5: 8 memory cards, DDR3 PC3-10600 running at 1066 Mbps, 8 DIMMs per memory card (64 DIMMs per chassis max).
    • PCIe board. x3850 M2: CalIOC2 2.0, 7 slots @ 2.5Gb PCIe x8, slots 6-7 hot plug. x3850 X5: Boxboro EX, 7 slots @ 5GHz, slot 1 x16, slot 2 x4, slots 3-7 x8, no hot swap.
    • SAS controller. x3850 M2: LSI Logic 1078 w/RAID 1 (no key), Elliott key for RAID 5, SAS 4x external port. x3850 X5: Ruskin IR or Fareham MR adapter, battery option for Fareham.
    • Ethernet controller. x3850 M2: BCM5709 dual Gigabit Ethernet, PCIe x4 attach to Southbridge. x3850 X5: BCM5709 dual Gigabit Ethernet, PCIe x4 Gen2 attach to Boxboro; dual 10Gb Emulex adapter in PCIe slot 7 on some models.
    • Video controller. x3850 M2: ATI RN50 on RSA2, 16MB. x3850 X5: Matrox G200 in IMM, 16MB.
    • Service processor. x3850 M2: RSA2 standard. x3850 X5: Maxim VSC452 integrated BMC (IMM).
    • SAS design. x3850 M2: four 2.5-inch internal + 4x external. x3850 X5: eight 2.5-inch internal, support for SATA and SSD.
    • USB, SuperIO design. x3850 M2: ICH7, 5 external ports, 1 internal, no SuperIO, embedded 512M key, no PS2 kb/ms, no FDD controller. x3850 X5: ICH10, 6 external ports, 2 internal, no SuperIO, embedded 512M key, no PS2 kb/ms, no FDD controller.
    • Fans. x3850 M2: 4x 120mm, 2x 92mm, 8 total (2x 80mm in power supplies). x3850 X5: 2x 120mm, 2x 60mm, 6 total, 2x 120mm in power supplies.
  • x3850 X5 configuration flexibility Base System Shipping today Memory Expansion with MAX5 via QPI Shipping today Native Scaling via QPI Shipping today Memory Expansion and Scaling with MAX5 via QPI and EXA Available 2011
  • Xeon 8-socket interconnect block diagram: four QPI cables link the QPI buses of the two chassis; one cable cross-connects CPUs 2 and 3.
  • x3850 X5 with native QPI scaling: connects directly via the QPI bus using QPI cables. [Block diagram: two 4-socket x3850 X5 nodes, each with four Intel Xeon sockets, memory buffers, and two Intel Boxboro PCI-E IOHs, joined by QPI cables.]
  • x3850 8-socket QPI scaling: rear view of chassis showing QPI ports 1-2 and 3-4 on x3850 #1 cabled to QPI ports 1-2 and 3-4 on x3850 #2 (connectors labeled A, B, C, D).
  • x3850 X5 with MAX5 using QPI: the MAX5 drawer connects directly to each CPU via the QPI bus at 6.4 GT/s. [Block diagram: four Intel Xeon sockets with memory buffers and two Intel Boxboro PCI-E IOHs; MAX5 with eight memory buffers, DDR3 DIMMs on SMI links to Firehawk, and an EXA port.]
  • x3850 X5 to MAX5 QPI cabling eX5 Memory Drawer End x3850 X5 End
  • 8-socket x3850 X5 with MAX5 using QPI and EXA Connects directly to each CPU via QPI bus 6.4 GT/s Connects via EXA
  • eX5 rack memory expansion module configurations:
    • 1G (one x3850 X5, 4U): 4 CPUs, 64 DIMMs, no scaling, no partitioning, 8 processor SKUs supported, (1) x16 + (5) x8 + (1) x4 PCI-E slots.
    • 1G + 1D (one x3950 X5 4U plus a 1U memory drawer via QPI, 5U): 4 CPUs, 96 DIMMs, QPI scaling, no partitioning, 8 processor SKUs supported, (1) x16 + (5) x8 + (1) x4 PCI-E slots.
    • 2G (two x3850 X5 joined via QPI, 8U): 8 CPUs, 128 DIMMs, QPI scaling, no partitioning, 5 processor SKUs supported, (2) x16 + (10) x8 + (2) x4 PCI-E slots.
    • 2G + 2D (two x3950 X5, each with a 1U memory drawer via QPI, joined via EXA, 10U): 8 CPUs, 192 DIMMs, EXA scaling, 4S partitioning, 8 processor SKUs supported, (2) x16 + (10) x8 + (2) x4 PCI-E slots.
  • x3850 X5 System Population Guidelines
    • Processors
    • Base systems contain 2 CPUs
    • 8-socket QPI scaling is supported with 4+4 CPUs, all matching.
    • eX5 memory expansion requires 4 CPUs in the host.
    • Memory
    • Each CPU needs at least one memory card
    • Each memory card needs at least 2 DIMMs
    • DIMMs must be installed in matching pairs
    • Optimal memory performance requires 2 memory cards per CPU, with 4 DIMMs on each memory card and equal amounts of memory per card.
    • If memory is mirrored, then DIMMs must match in sets of 4
    • eX5 memory expansion works best with DIMMs in sets of 4
    • eX5 memory expansion works best with a 2:1 ratio of memory in host/expander (see the sketch after this list)
    • Drives
    • Base Systems contain backplane for 4 drives
    • Maximum of 8 SFF SAS drives
    • I/O Adapters
    • Alternate adapter population between IOH chipset devices; alternate between slots (1-4) and (5-7)
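    A minimal sketch of how the memory guidelines above could be checked programmatically (a hypothetical helper, not IBM tooling; it encodes only the pairing, per-card minimum, and 2:1 host/expander rules):

```python
# Hypothetical checker for the x3850 X5 memory population guidelines above.
# cards_per_cpu: {cpu: {card_number: [dimm_size_gb, ...]}}
def check_memory(cards_per_cpu, expander_gb=0):
    problems = []
    for cpu, cards in cards_per_cpu.items():
        if not cards:
            problems.append(f"CPU {cpu}: needs at least one memory card")
        for card, dimms in cards.items():
            if len(dimms) < 2:
                problems.append(f"CPU {cpu} card {card}: needs at least 2 DIMMs")
            if len(dimms) % 2:
                problems.append(f"CPU {cpu} card {card}: DIMMs must be installed in pairs")
    host_gb = sum(sum(d) for cards in cards_per_cpu.values() for d in cards.values())
    if expander_gb and host_gb < 2 * expander_gb:
        problems.append("expansion works best with a 2:1 host/expander memory ratio")
    return problems

# Example: two CPUs, two cards each, four 4 GB DIMMs per card, 32 GB in the expander.
config = {1: {1: [4, 4, 4, 4], 2: [4, 4, 4, 4]},
          2: {1: [4, 4, 4, 4], 2: [4, 4, 4, 4]}}
print(check_memory(config, expander_gb=32))   # [] -> no rule violations
```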
  • x3850 X5 DIMM Population Rules
    • General Memory Population Rules
    • DIMMs must be installed in matching pairs
    • Each memory card requires at least 2 DIMMs
    • Memory Population Best Practices
    • Populate 1 DIMM per memory buffer on each SMI lane first
    • Populate 1 DIMM per memory buffer across all cards before moving to populate the next channel on the memory buffer
    • Populate the DIMMs furthest away from the memory buffer first (i.e., 1, 8, 3, 6) before populating the 2nd DIMM in a channel
    • Memory DIMMs should be plugged in order of DIMM size
        • Plug largest DIMMs first, followed by next largest size
      • Each CPU & memory card should have identical amounts of RAM
    [Diagram: one CPU with two memory cards (MC1, MC2); each card has two memory buffers on separate SMI lanes, each buffer driving two channels with DIMM slots 1-8.] The optimally expandable memory configuration is 2 memory cards per CPU, 4 DIMMs per memory card, and equal amounts of memory per card; a population-order sketch follows below.
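    A small sketch of the resulting population order (the card and DIMM-slot sequence follows the installation-order table in the speaker notes; illustrative only, not a configurator):

```python
# Sketch: x3850 X5 DIMM population order (pairs), spreading DIMMs across all
# memory cards before filling the second slot of any channel.
CARD_ORDER = [1, 7, 3, 5, 2, 8, 4, 6]          # alternate cards across CPUs
DIMM_PAIRS = [(1, 8), (3, 6), (2, 7), (4, 5)]  # slots furthest from the buffer first

def population_order():
    steps = []
    for pair in DIMM_PAIRS:                    # one pair per pass over all cards
        for card in CARD_ORDER:
            steps.append((card, pair))
    return steps

for card, (a, b) in population_order()[:4]:
    print(f"card {card}: DIMMs {a} & {b}")
# card 1: DIMMs 1 & 8, card 7: DIMMs 1 & 8, card 3: DIMMs 1 & 8, card 5: DIMMs 1 & 8
```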
  • x3850 X5 PCIe slot population: plug one PCIe card per Boxboro before moving to the next set of slots. Population order (card # : PCIe slot #): 1:1, 2:5, 3:3, 4:6, 5:4, 6:7, 7:2.
  • ServeRAID M5015, M1015, BR10i and upgrades. IBM’s new portfolio beats HP in performance and data protection, and is now upgradeable! Options (Option / SBB-MFI / Description): 46M0829 / 46M0850 / ServeRAID M5015 SAS/SATA Controller; 46M0831 / 46M0862 / ServeRAID M1015 SAS/SATA Controller; 44E8689 / 44E8692 / ServeRAID-BR10i SAS/SATA Controller; 46M0930 / 46M0932 / ServeRAID M5000 Series Advance Feature Key.
    • Coulomb Counting Battery Gas Gauge provides extremely accurate capacity monitoring versus open circuit voltage measuring used by competition
    • Learn cycles allow battery to be tested regularly avoiding complete data loss during an unexpected power loss
    Intelligent Battery Backup Unit (iBBU) for ultimate data protection
    • M1000 Series Advance Feature Key offers clients a value RAID 5,50 offering with easy-to-use plug-in key
    • Improved reliability with RAID 6,60 via M5000 Series Advance Feature Key
    Flexible offerings with numerous optional upgrades can span low cost to enterprise customer needs
    • 5-20% higher performance than HP’s PMC solution translates to reduced computational times, especially in apps with high sequential writes, e.g. video surveillance (see the white paper)
    Higher performance than HP
    • Single screen control reduces configuration time by 45% versus predecessor
    • Advanced SED management including secure drive erase and local keys
    • Common 6Gb software stack reduces driver qualification time in heterogeneous environments by half versus predecessor
    Improved usability in MegaRAID Storage Manager. [Slide layout: Customer Benefits / Key Features; x3850 X5 uses BR10i, M5015, and the M5000 key; x3690 X5 uses M1015, M5015, and the M5000 key; RAID 0/1/10 standard, RAID 5/50 optional, RAID 6/60 and encryption optional.]
  • Emulex 10GbE Virtual Fabric Adapter for IBM System x. Simplified networking, trusted SAN interoperability, and increased business agility. Option / SBB-MFI / Description: 49Y4250 / 49Y4253 / Emulex 10GbE Virtual Fabric Adapter for IBM System x. (* FCoE and vNIC support will be available post-announce.)
    • Up to 40% savings in power and cooling due to consolidation of adapters and switches
    • Up to 40% reduction in Interconnect costs due to convergence of LAN and SAN traffic
    • Higher utilization of datacenter space: Fewer switches & cables per rack
    I/O Convergence
    • Top-to-bottom driver commonality and shared hardware strategy with 10GbE LOM solutions
    • Emulex OneCommand™ Unified Management centralizes and consolidates resource management
    Common Solution
    • Line-rate performance and nearly 30% performance increase over HP solution based on Netxen P3 (now QLogic)
    • Up to 8 virtual NICs or mix of vNICs and vCNA gives customers the choice to run management and data traffic on the same adapter *
    Scalability & Performance for Virtualization
    • One Card, Every Network (10Gbps and FCoE *)
    • Copper or Fibre fabric offers customers multiple choices to fit their datacenter needs
    • Auto-negotiates to fit 1GbE environment offering customers investment protection with their existing hardware
    Flexibility
  • 8Gb Fibre Channel HBAs for IBM System x: enhancing SAN connectivity with 8Gb Fibre Channel from industry-leading partners. Options (Option / SBB-MFI / Description, all for IBM System x): 42D0485 / 42D0486 / Emulex 8Gb FC Single-port HBA; 42D0494 / 42D0495 / Emulex 8Gb FC Dual-port HBA; 42D0501 / 42D0502 / QLogic 8Gb FC Single-port HBA; 42D0510 / 42D0511 / QLogic 8Gb FC Dual-port HBA; 46M6049 / 46M6051 / Brocade 8Gb FC Single-port HBA; 46M6050 / 46M6052 / Brocade 8Gb FC Dual-port HBA. Did you know? In-flight data encryption provides end-to-end security while solving the business challenges posed by security and regulatory compliance. Unified SAN management tools can easily manage and optimize resources by extending fabric intelligence and services to servers, saving you valuable management dollars. The true low-profile MD2 form factor is ideal for space-limited server environments. 8Gb Fibre Channel HBAs can auto-negotiate with 4Gb and 2Gb environments, providing investment protection.
    • Advanced management suite allows administrators to efficiently configure, monitor and manage HBA and fabric
    Lower CAPEX with hardware consolidation and simplified cabling infrastructure
    • 1.6Gb/sec throughput per port provides the superior performance required by clients with mission critical storage applications
    Lower OPEX due to reduced power and cooling cost
    • Fibre Channel HBAs for IBM System x are optimized for virtualization, active power management and high performance computing needs
    Investment protection
    • These HBAs are part of a solution of switches, fabric and management tools forming a comprehensive storage offering
    End-to-end management
  • 10Gb CNAs for IBM System x. Converged networking technologies significantly reduce TCO with a smaller storage footprint, lower cooling, lower power, and simplified management. Options (Option / SBB-MFI / Description): 42C1800 / 42C1803 / QLogic 10Gb CNA for IBM System x; 49Y4218 / 49Y4219 / QLogic 10Gb SFP+ SR Optical Transceiver; 42C1820 / 42C1823 / Brocade 10Gb CNA for IBM System x; 49Y4216 / 49Y4217 / Brocade 10Gb SFP+ SR Optical Transceiver.
    • Up to 40% reduction in Interconnect costs due to convergence of LAN and SAN traffic
    • Savings from cabling and optical components alone can pay for additional servers
    • Higher utilization of datacenter space: Fewer switches & cables per rack
    Lower CAPEX with hardware consolidation and simplified cabling infrastructure
    • Up to 40% savings in power and cooling due to consolidation of adapters and switches
    Lower OPEX due to reduced power and cooling cost
    • Seamlessly integrates with existing LAN and SAN infrastructure
    Investment protection
    • Unified management tools provide a single point of manageability from adapter to switch across LAN and SAN
    End-to-end management
    • Based on fast growing solid state device (SSD) technology
    • Leadership technology backed by IBM service and support
    • Standard PCI-E form factor supports a broad set of IBM platforms
    • Available in 160GB and 320GB capacities
    • Lowers CAPEX and OPEX
    IBM’s High IOPS SSD PCIe Adapters: the performance of a SAN in the palm of your hand. A superior solution to enterprise disk drives for applications sustaining high operations per second, e.g. database, data warehouse, and stream analytics. Options (Option / SBB-MFI / Description): 46M0877 / 46M0888 / IBM 160GB High IOPS SS Class SSD PCIe Adapter; 46M0898 / 46M0900 / IBM 320GB High IOPS MS Class SSD PCIe Adapter.
  • x3690 X5
  • x3690 X5: 2-socket 2U platform (2U rack mechanical chassis, top view). Maximum performance, maximum memory for virtualization and database applications. Callouts: (1x) x16 (full height, full length) or (2x) x8 (1 full size, 1 full height / half length); (2x) PCIe x8 low profile; (4x) N+N 675W rear-access hot-swap redundant power supplies; (16x) Gen2 2.5” drives or (24x) 1.8” SSD drives; 2 scalability cables for eX5 memory expansion; (16x) DDR3 memory DIMMs on the upper mezzanine; 8x Intel memory buffers (BoBs); Intel Nehalem EX processor details, stair-stepped heat sinks.
  • QPI Cable cage (2x) 1Gb Ethernet (2 std, 2 optional) x3690 X5: 2-socket 2U platform 2U Rack Mechanical Chassis Rear view Maximum performance, maximum memory for virtualization and database applications
  • x3690 X5 – Hardware; Speeds and Feeds
    • Processors
    • 2 Socket Design
    • 11 Supported SKUs
    • (4) QPI Ports/Processor
    • (4) SMI Ports/Processor
    • Memory
    • (8) Intel Memory Buffers In Base System
      • SMI Connected
    • DDR3 DIMMs
    • (32) Total DIMM Slots
    • 1066, 978 and 800 MHz DDR3 Speeds
      • Processor QPI Speed Dependent
    • 1, 2, 4, 8 and 16 GB Support
      • Installed In Matched Pairs
    • Memory Sparing and Mirroring Support
    • Chipset
    • Intel Boxboro IOH (QPI-to-PCIe Bridge)
      • (36) PCIe Gen2 Lanes
      • (2) QPI Ports
      • (4) ESI Lanes To ICH10
    • Intel ICH10 Southbridge
      • (5) PCIe Gen1 Lanes
      • (8) USB 2.0 Ports
      • 3Gb/s SATA DVD Connection
    • Networking
    • Broadcom BCM5709C
      • Dual 1Gb connection
      • x4 PCIe Gen1 Connection to ICH10
    • Emulex 10Gb
      • IBM Specific Adapter Option
      • Installs in PCIe Slot 5, x8 PCIe Gen2
      • V-NIC Capable
    • PCIe Slots
    • Slot 1 – x8 or x16 PCIe Gen2
      • Full Height / x8 / ¾ Length Standard
      • Full Height / x16 / Full Length Optional
        • Requires removal of memory mezzanine
    • Slot 2 – x8 PCIe Gen2 Full Height / Half Length
      • N/A with Full Length x16 Card Installed
    • Slot 3 – x8 PCIe Gen2 Low Profile
    • Slot 4 – x4 PCIe Gen2 Low Profile
      • x8 Connector
      • Standard Slot For Storage Adapter
    • Slot 5 – x8 PCIe Gen2 Low Profile
      • Custom Connector Also Holds Emulex 10Gb Card
      • Completely Compatible with all x1, x4 and x8 cards
  • x3690 X5 – Hardware; Speeds and Feeds Cont.
    • 2.5” Storage
    • Broad Range Of Adapters Supported
      • 3 and 6Gb/s Cards Available
    • Port Expander Card Available
      • Allows For (16) Drive RAID Array
    • Up To(16) 2.5” HDD Bays
      • Support For SAS / SATA and SSD
    • SAS Drives
      • 146GB / 10K / 6Gbps
      • 300GB / 10K / 6Gbps
      • 73GB / 15K / 6Gbps
      • 146GB / 15K / 6Gbps
    • SATA Drive
      • 500GB / 7200rpm
    • SSD Drive
      • 50GB
    • Configured In Increments of (4) Drives
    • 1.8” Storage
    • SSD Performance Optimized Adapters
    • Up To 24 1.8” HDD Bays
      • SATA SSD Only
    • 50GB SLC Initially Available, 50GB and 200GB MLC available 1Q2010
    • Configured In Increments of (8) Drives
    • Limited To (3) Power Supplies In (32) Drive Configuration
      • TBD
    • IMM
    • Common IMM Across Rack Portfolio
      • x3550M2
      • x3650M2
      • x3690 X5
      • x3850 X5
      • x3950 X5
    • 300MHz, 32-bit MIPS Processor
    • Matrox G200 Video Core
    • 128MB DDR2 Memory
      • 16MB Allocated for Video
    • Front and Rear Video Ports
    • Avocent Based Digital Video Compression
    • Dedicated 10/100Mb Ethernet
    • 9-Pin Serial Port
    • 128-bit AES Hardware Encryption Engine
    • IPMI v2.0
    • Fan Speed Control
    • Altitude Sensing
    • Serial Over LAN
    • Active Energy Manager
    • LightPath
  • x3690 X5 – Inside, No Memory Mezzanine (2x) Intel Xeon EX CPU’s Memory Mezzanine Connectors (16x) DDR3 Memory DIMMs Base Planar (4x) Intel Memory Buffers (BoB’s) Base Planar (2x) USB (front) Light Path Diagnostics UltraSlim SATA DVD Drive Front video
  • x3690 X5 – Inside, Memory Mezzanine Installed (1) PCIe2 x16 FHFL or (2) PCIe2 x8 FHHL (2) PCIe2 x8 Low Profile & (1) PCIe2 x4 LP (x8 connector) (4x) N+N Redundant 675W Hot Swap Power Supplies (16x) Gen2 2.5” SAS/SATA Drives (16x) DDR3 Memory DIMMs Upper Memory Mezzanine (4x) Intel Memory Buffers (BoB’s) Upper Memory Mezzanine (5x) Hot Swap Redundant Cooling Fans Redundant Power Interposer
  • x3690 X5 – Front / Rear Views
  • x3690 X5 – System Planar (2) QPI Scalability Connections Shown with Cable Guide Installed Intel Boxboro IOH Intel ICH10 Southbridge Processor Socket 1 Processor Socket 2 Memory Mezzanine Connectors
  • x3690 X5 – Memory Mezzanine: (4) Intel Memory Buffers, (16) DDR3 DIMM Sockets
  • x3690 X5 – PCIe Slots 3-5 LP PCIe2 x8 Slot 3 LP PCIe2 x4 Slot 4 (x8 Connector) Internal USB Hypervisor Spare Internal USB LP PCIe2 x8 Slot 5 (Emulex10Gb enabled)
  • x3690 X5 – Redundant Power Interposer Power Supply 3 & 4 Connectors System Planar Connections
  • eX5 QPI cables for MAX5 memory expansion eX5 Memory Drawer Side x3690 X5 Side
  • Possible x3690 X5 configurations (Configuration / Size / CPUs / DIMMs / PCI-E slots / Scaling / Partitioning / Processor SKUs supported):
    • 1M: 2U, 2 CPUs, 32 DIMMs, (1) x16 or (2) x8 and (2) x8 LP, no scaling, software (OS) partitioning, 11 SKUs.
    • 1M + 1D: 3U, 2 CPUs, 64 DIMMs, (1) x16 or (2) x8 and (2) x8 LP, QPI scaling, software (OS) partitioning, 10 SKUs.
    • 2M: 4U, 4 CPUs, 64 DIMMs, (2) x16 or (4) x8 and (4) x8 LP, QPI scaling, software (OS) partitioning, 8 SKUs.
    • 2M + 2D: 6U, 4 CPUs, 128 DIMMs, (2) x16 or (4) x8 and (4) x8 LP, QPI/EXA scaling, physical (EXA) partitioning, 10 SKUs.
    [Diagram: 1M = one x3690 X5 (2U); 1M + 1D = x3690 X5 (2U) plus memory drawer (1U) via QPI; 2M = two x3690 X5 (2U each) via QPI; 2M + 2D = two x3690 X5, each with a 1U memory drawer via QPI, linked via EXA (diagram label: Ghidorah 4U).]
  • MAX5
    • QPI attaches to systems
    • EXA Scalability to other memory drawers
    • Allows Memory expansion and Processor expansion (16 sockets)
    MAX5 for x3690 X5 and x3850 X5 (1U rack mechanical chassis, top view): eX5 memory expansion and scalability for leadership performance and increased utilization. Callouts: buffer-on-board memory buffers; N+N 650W power supplies; QPI link ports; EXA scalability ports; Firehawk memory and node controller; 32 memory DIMMs; Lightpath Diagnostics.
  • MAX5 for System x front and rear views Redundant 675W Power Supplies Lightpath Diagnostics QPI Ports EXA Ports Hot swap fans Front Rear System removes from chassis for easy access
  • MAX5 for System x top view: memory buffers, QPI ports, EXA ports, Firehawk chipset, 32 memory DIMMs
    • QPI attaches to systems
    • EXA Scalability to other memory drawers
  • MAX5 for rack systems front and rear view
  • x3850 X5 with MAX5 using QPI: the MAX5 drawer connects directly to each CPU via the QPI bus at 6.4 GT/s. [Block diagram: four Intel Xeon sockets with memory buffers and two Intel Boxboro PCI-E IOHs; MAX5 with eight memory buffers, DDR3 DIMMs on SMI links to Firehawk, and an EXA port.]
  • x3850 X5 to MAX5 QPI cabling eX5 Memory Drawer End x3850 X5 End
  • 8-socket x3850 X5 with MAX5 using QPI and EXA Connects directly to each CPU via QPI bus 6.4 GT/s Connects via EXA
  • x3690 X5 and MAX5
    • 2-socket 3U package
      • MAX5 Is Mechanically Attached To Base System
    • Dual QPI Links Between x3690 X5 and MAX5
    • Supports 64 DIMMs Total
      • 32 DIMMs In Base x3690 X5
      • 32 DIMMs In MAX5
    • Supports up to 1TB Of Total Memory
      • Using 16GB DIMMs
    • MAX5 Runs On IBM Firehawk Chip
      • Control Of (32) DIMMs
      • (4) QPI Ports For Connection To Base System
        • (2) Used In x3690 X5 Configurations
      • (3) EXA Ports
        • Support Scalability To Another x3690 X5 / MAX5
    [Block diagram: Intel NHM-EX sockets with Boxboro PCI-E IOHs attached to MAX5; MAX5 provides (3) EXA scale-up ports to another x3690 X5 / MAX5.]
  • MAX5 connectivity for rack systems
    • MAX5 Features
    • Firehawk memory controller
    • 32 DIMMs
    • 1U form factor
    • Connects to systems via QPI
    [Block diagram: a 1U MAX5 attached to a 4U x3850 X5 (four Nehalem sockets) or a 2U x3690 X5 (two Nehalem sockets), connecting directly to each CPU via the QPI bus at 6.4 GT/s; EXA scaling ports link MAX5 units; DDR3 DIMMs hang off memory expander/buffers on SMI links.]
  • MAX5 Technical Specs
    • Processors
    • None!
    • Memory
    • (32) Total DIMM Slots
    • DDR3 DIMMs
    • (8) Intel Memory Buffers total
      • SMI Connected
    • 1066, 978 and 800MHz DDR3 Speeds
      • Processor QPI Speed Dependent
    • 1, 2, 4, 8 and 16GB Support
      • Installed In Matched Pairs
    • Memory Sparing and Mirroring Support
      • Installed in Matched Quads
    • Chipset
    • IBM Firehawk
      • (8) SMI Lanes
      • (4) QPI Ports
      • (3) Scalability Ports
    • Usability
    • Power sequences from host platform
    • Memory configuration from host platform
    • Multi-node configuration from host platform
    • Lightpath enabled
    • DSA supported
  • HX5
  • HX5 – 2 to 4 Socket Blade
    • HX5: 2-socket, 16 DIMM 1-wide blade form factor
      • Scalable to 4-socket, 32 DIMM natively via QPI links
      • Supports MAX5 memory expansion
      • Supports 95W & 105W NHM-EX processors
      • Max DIMM speed: 978 MHz
      • 16 DIMM slots per 2-socket blade (2GB, 4GB and 8GB VLP)
      • Support 2 SATA SSD drives per 2-socket blade
    • HX5+MAX5: 2-socket 2-wide blade form factor
      • Supports NHM-EX 130W processors
      • Max DIMM speed: 1066MHz
      • 40 DIMM slots per 2-socket double-wide blade (2GB, 4GB and 8GB VLP)
      • Support 2 SATA SSD drives per 2-socket blade
    [Diagrams: 2S/16 DIMM 30mm HX5; 4S/32 DIMM 60mm scaled HX5; 2S/40 DIMM 60mm HX5 + MAX5; each shows Intel NHM-EX sockets, memory buffers (MB), and Boxboro PCI-E IOHs.]
  • HX5 – 2- & 4-Socket Blade Top view of 4-socket system shown 2x Intel Xeon EX CPUs 2x IO Expansion Slots (1x CIOv + 1x CFFh) 16x VLP DDR3 Memory 2x SSD drives (1.8”) 2x 30mm nodes
    • HX5 Configurations
      • 2S, 16D, 8 I/O ports, 30mm
      • 4S, 32D, 16 I/O ports, 60mm
    • Additional Features
    • Internal USB for embedded hypervisor
    • Dual & redundant I/O and Power
    • IMM & UEFI
    Scale Connector 8x memory buffers
  • MAX5 for HX5 24x VLP DDR3 memory 6x memory buffers
    • HX5+MAX5 Configurations
      • 2S, 40D, 8 I/O ports, 60mm
    MAX5 HX5
    • FireHawk
    • 6 SMI lanes
    • 4 QPI ports
    • 3 scalability ports
  • HX5 Configurations:
    • HX5 (2S, 16D, 30mm): 2 CPUs (95W, 105W), 16 DIMMs, up to 2 SSDs, 8 I/O ports (2x 1GbE LOM, 1x CIOv, 1x CFFh), QPI scaling; additional required option: IBM HX5 1-node Speed Burst card (59Y5889)*.
    • HX5 + HX5 (4S, 32D, 60mm): 4 CPUs (95W, 105W), 32 DIMMs, up to 4 SSDs, 16 I/O ports (4x 1GbE LOM, 2x CIOv, 2x CFFh), QPI scaling; additional required option: IBM HX5 2-node scalability kit (46M6975).
    • HX5 + MAX5 (2S, 40D, 60mm): 2 CPUs (95W, 105W, 130W), 40 DIMMs, up to 2 SSDs, 8 I/O ports (2x 1GbE LOM, 1x CIOv, 1x CFFh); additional required option: IBM HX5 MAX5 1-node scalability kit (59Y5877).
    * Not required for operation, but strongly recommended for optimal performance.
  • HX5 Operating System Support. All NOS apply to both HX5 and HX5+MAX5 models; 1- and 2-node configurations are supported. Note: Solaris 10 support is WIP.
    • Windows Server 2008 R2 (64-bit): priority P1.
    • Windows Server 2008, 64-bit (Std, Ent, Web, DC): SP2, priority P1.
    • Windows HPC Server 2008: SP2, priority P3.
    • SUSE Linux Enterprise Server 11, 64-bit: priority P1.
    • SUSE Linux Enterprise Server 11 with Xen, 64-bit: N, priority P2.
    • SUSE Linux Enterprise Server 10, 64-bit: SP3, N-1, priority P2.
    • SUSE Linux Enterprise Server 10 with Xen, 64-bit: SP3, N-1, priority P2.
    • Red Hat Enterprise Linux 5 Server, 64-bit: U5 (includes KVM support), N-1, priority P2 (Hammerhead) / P1 (Thresher).
    • Red Hat Enterprise Linux 5 Server with Xen, 64-bit: U5 (includes KVM support), N-1, priority P2.
    • Red Hat Enterprise Linux 6, 64-bit: priority P3 for Hammerhead, to be moved to P2 immediately following initial SIT exit (test completion outlook is 4-6 weeks after initial SIT exit); P2 for Thresher.
    • VMware ESX 4.0: U1, priority P1.
    • VMware ESXi 4.0: U1, N, priority P1. Emulex Tiger Shark: no drivers. Driver for Intel 10G NIC will be async, not in kernel. No CIM provider for SAS-2; waiting on code from LSI; no way to manage SAS-2.
  • HX5 & Westmere support in BladeCenter H chassis
    • BCH 4SX mid-plane refresh (1Q10)
      • Replaces EOL midplane connector
      • HW VPD discernable through AMM
      • No new MTMs
    • BCH 2980W PSUs deliver 95% efficiency…best in class
    • - BCH New Blowers provide higher CFMs and enable 130W single wide Blades
    Shipping today 7/13 ANN – 8/23 GA New blowers & PSUs New MTM Source: BCH Power Matrix 25JAN10_elb.xls Paper analysis, tests pending BC-H 4TX BCH 4SX* 2980W High Efficiency No support 14 7* 14/7* Legacy blowers 2900W 14* 14 7* 14/7* Enhanced blowers No support 14 7* 14 / 7* Legacy blowers 2980W 14* 14 7* 14 / 7* Enhanced blowers Enhanced blowers 14 14 7 14 /7 HS22 WSM 130W Westmere up to 95W (HS22 & HS22V) HX5 w/ MAX5 130W HX5 / HX5+MAX5 95W & 105W
  • HX5 & Westmere support in BCE, BCS, BCHT & BCT Source: BCH Power Matrix 25JAN10_elb.xls ^^ Throttling / over-subscription enabled No support Up to 95W TBD 12 Enterprise 6 6 3 6 BCS No support Up to 80W CPU No support No support NEBS BCHT No support Same as NEH-EP HS22 No support No support 2000W No support 14^^ No support No support 2320W BC-E HS22 WSM 130W Westmere (HS22 & HS22V) HX5 w/ MAX5 130W HX5 95W / 105W
  • Nehalem EX Memory Frequencies
    • Possible DDR3 frequencies: 1066, 978 or 800MHz
    • DDR3 Frequencies based on processor SKU, DIMMs used, or population location
      • Memory in base HX5 limited to 978 MHz max
      • Memory in MAX5 operates up to 1066 MHz
    • Max QPI frequency = max SMI frequency
      • SMI and DDR3 must operate with fixed ratios
    • QPI** SMI DDR3
      • 6.4 GT/s 6.4 GT/s <-Fixed-> 1066 MHz
      • 5.86 GT/s 5.86 GT/s <-Fixed-> 978 MHz
      • 4.8 GT/s 4.8 GT/s <-Fixed-> 800 MHz
      • ** In some cases QPI and SMI may be programmed to run at different speeds due to technical requirements and customer’s desire for power savings.
    • 1333MHz DIMM operation is not supported
      • 1333MHz DIMMS may be used at 1066, 978 and 800MHz
    • 800MHz DIMMs will force SMI to run at 4.8GT/s
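    A small illustrative sketch of those fixed relationships (not IBM tooling; it simply encodes the ratio table and the 800 MHz DIMM rule above):

```python
# Sketch: effective DDR3 speed from the fixed QPI/SMI-to-DDR3 ratios above,
# clamped by the slowest DIMM installed (800 MHz DIMMs force SMI to 4.8 GT/s).
SMI_TO_DDR3 = {6.4: 1066, 5.86: 978, 4.8: 800}   # GT/s -> MHz (fixed ratios)

def memory_speed(cpu_max_smi_gts, slowest_dimm_mhz):
    # Pick the fastest fixed ratio that neither the CPU's SMI link
    # nor the installed DIMMs can exceed.
    for smi, ddr3 in sorted(SMI_TO_DDR3.items(), reverse=True):
        if smi <= cpu_max_smi_gts and ddr3 <= slowest_dimm_mhz:
            return smi, ddr3
    return None

print(memory_speed(6.4, 1066))   # (6.4, 1066)
print(memory_speed(5.86, 1066))  # (5.86, 978)
print(memory_speed(6.4, 800))    # (4.8, 800) -> 800 MHz DIMMs force SMI to 4.8 GT/s
```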
  • DIMM Population Rules. [Diagram: two Nehalem-EX sockets (each with 8 cores and 24MB L3 cache), each with two memory controllers and four Millbrook buffers on SMI lanes 1-8, each buffer driving a DIMM pair; sockets linked by QPI (x4).]
    • Recommended for best performance:
      • Each CPU should have identical amounts of RAM
        • Same size and quantity of DIMMs
      • 1 DPC, all channels populated
      • Populate DIMMs in identical groups of 4 per CPU
      • Populate 1 DIMM per Millbrook on each SMI lane first
      • Populate 1 DIMM per Millbrook across all Millbrooks before populating the second channel on any Millbrook
      • Memory DIMMs should be plugged in order of DIMM size
        • Plug largest DIMMs first, followed by next largest size
    • Must:
      • DIMMs must be installed in matching pairs
      • Each CPU requires at least 2 DIMMs
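    • The population-order sketch below is illustrative Python only (not an IBM configuration tool; the function and slot numbering are assumptions). It applies the rules above for one CPU: matched pairs, largest DIMMs first, one pair per Millbrook across all Millbrooks before any Millbrook's second channel is used.
      def population_order(pair_sizes_gb, millbrooks_per_cpu=4):
          """Return (pair size, Millbrook, channel) in recommended plug order for one CPU."""
          pairs = sorted(pair_sizes_gb, reverse=True)        # plug largest DIMMs first
          slots = [(mb, ch) for ch in range(2)               # channel 1 on every Millbrook,
                   for mb in range(millbrooks_per_cpu)]      # then channel 2
          return [(size, mb + 1, ch + 1) for size, (mb, ch) in zip(pairs, slots)]

      # Example: four 8GB pairs and two 4GB pairs on one CPU
      for size, mb, ch in population_order([8, 8, 8, 8, 4, 4]):
          print(f"{size}GB pair -> Millbrook {mb}, channel {ch}")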
  • IBM MAX5 for BladeCenter Technical Specs
    • BladeCenter MAX5 Features
    • Firehawk memory controller
    • 24 DIMMs
    • 6 SMI lanes, 4 QPI ports
    • 3 scalability ports
    • 30mm (single-wide) form factor
    EXA Scaling
    • Processors
    • None!
    • Memory
    • (24) Total DDR3 DIMM Slots
    • (6) Intel Millbrook Memory Buffers total
      • SMI Connected
    • 1066, 978 and 800MHz DDR3 Speeds
      • Processor QPI Speed Dependent
    • 2GB, 4GB and 8GB Support
      • Installed In Matched Pairs
    • Memory Sparing and Mirroring Support
      • Installed in Matched Quads
    • Connectivity
    • Mechanically attached to base system
    • Dual QPI links between 2S HX5 and MAX5
    Diagram: 2S HX5+MAX5 (2 sockets, 40 DIMMs): two Intel NHM-EX processors, each with Millbrook memory expander/buffers on SMI links and DDR3 memory DIMMs, an Intel Boxboro I/O hub (PCI-E), and the FireHawk-based MAX5 connected over QPI
  • Why use MAX5
    • 1) Applications require lots of memory per core
      • 25% more memory for 2P or 4P server than competition
      • Over 2X system memory compared to DP processors
      • Enables up to 32 DIMMs per processor!
    • 2) Maximum application performance
      • Allows HX5 Blade to use top CPU SKUs (130W)
      • 12 additional memory channels for max bandwidth
    • 3) Reduce system cost
      • Increased system memory without additional CPUs or SW licenses
      • Reduce system cost by using smaller DIMMs to reach target memory capacity (a cost arithmetic sketch follows this list)
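    • The arithmetic behind point 3 can be sketched as below (illustrative Python; every price and slot count here is a made-up placeholder, not an IBM or market price): extra slots let smaller DIMMs, which usually cost less per GB, reach the same capacity for less.
      def memory_cost(target_gb, dimm_gb, price_per_dimm, slots):
          """Hypothetical cost of reaching target_gb with one DIMM size (placeholder prices)."""
          dimms = target_gb // dimm_gb
          if dimms > slots:
              raise ValueError("target capacity does not fit in the available slots")
          return dimms * price_per_dimm

      # Hypothetical 256GB target: large DIMMs in few slots vs. smaller DIMMs across more slots
      print(memory_cost(256, dimm_gb=8, price_per_dimm=1000, slots=32))  # fewer, pricier DIMMs
      print(memory_cost(256, dimm_gb=4, price_per_dimm=300, slots=64))   # more, cheaper DIMMs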
  • Technical items you need to know about MAX5
    • 1) MAX5 memory latency is similar to the CPU's local memory
    • QPI 1.0 uses a Source snoop protocol, which means snoops come from the requestor with snoop results sent to the Home agent.
    • The local CPU must wait for snoop results. MAX5’s snoop filter allows it to return memory reads immediately . . . it already knows the snoop result.
    • MAX5 has less bandwidth than local memory, but is better than peer CPUs.
    • 2) Avoid using 4.8 GT/s rated CPUs with MAX5
    • CPUs are rated at 6.4, 5.86, or 4.8 GT/s. MAX5's internal clock is tied to the QPI speed, so with a 4.8 GT/s CPU it runs at 75% of full speed. A 4.8 GT/s link also reduces the memory bandwidth from each CPU to MAX5, so performance is reduced.
    • 3) For best performance, fully populate all CPU DIMMs prior to adding MAX5
    • While MAX5 can be added to base HX5 system at any time, for optimal performance, fully populate all CPU DIMMs on base HX5 prior to adding MAX5.
  • HX5 I/O Daughter Card Support (diagram: upper CFFh card outline and lower CIOv (1Xe) card)
    • 8 I/O ports per single-wide HX5
      • 2 ports of 1Gig Ethernet
      • 2 ports on 1x CIOv
      • 4 ports on 1x CFFH
  • HX5 Blade, CFFh Support (diagram: blade server with on-board 1GbE to switch modules 1 & 2, CIOv card on PCIe x8 to switch modules 3 & 4, and CFFh card on PCIe x8 + x8 in the upper 4Xe slot to high-speed switch bays 7-10 through the mid-plane)
    • 1x I/O Expansion Cards (CIOv):
    • Dual Gb Ethernet
    • Dual 4Gb Fiber Channel
    • Dual SAS (3Gb)
    • Starfish SSD Option (in Upper 4Xe w/CFFh)
    • No connectivity to midplane
    • 4x I/O Expansion Cards (CFFh):
    • Dual 4x DDR InfiniBand (5Gb)
    • Dual/Quad 10Gb Ethernet
    • Single Myrinet10G [ecosystem partner]
    • Quad 1Gb Ethernet (with 1Gb/10Gb HSSM - 2009)
    • Dual Gb Ethernet + Dual 4Gb FC (via MSIM)
    • Quad Gb Ethernet (via MSIM)
  • SSD Card with CFFh (diagram: SSD expansion card beneath the CFFh card, with housing for 2x 1.8" SSDs and a Gigarray connector)
    • Solid State Drive Expansion Card (46M6908) required for SSD support
      • SAS Card w/ RAID 0/1
      • Supports up to 2x SSDs
      • No chassis/midplane connectivity
      • Location: Upper high-speed I/O slot
  • Processor Information
  • Intel® Xeon® Roadmap, 2008-2010 (chart; Copyright © 2008 Intel Corporation; all products, dates and figures are preliminary, for planning purposes only and subject to change without notice; other names and brands may be claimed as the property of others)
    • Expandable (7000): Caneland platform (Intel 7300 chipset / OEM chipsets; quad-core Xeon 7300 series, then Dunnington 7400 series; IBM x3850/x3950 M2) moving to Nehalem-EX on the Boxboro-EX platform with QPI and PCIe 2.0
    • Efficient Performance (5000): Bensley and Bensley-VS platforms (Intel 5000P/5400 and 5000V chipsets; Xeon 5200/5400 series; HS21, HS21 XM, x3650) moving to Nehalem-EP 5500 series and then Westmere-EP on the Tylersburg-EP (36D) and Tylersburg-EN (24D) platforms
    • Entry: Garlow (Intel 3200 chipset; Xeon 3100/3300 series; IBM x3350) and Cranberry Lake (HS12) platforms moving to the Foxhollow platform with Lynnfield and Havendale (Nehalem based) and the Ibex Peak PCH
    • Workstation/HPC: Stoakley platform (5400 chipset; IBM x3450) moving to the Tylersburg-WS platform (dual Tylersburg-36D) with Nehalem-EP 5500 series and then Westmere-EP
    • Sandy Bridge transition in the future
  • Intel Nehalem-EX: Designed for Modularity (diagram: the core versus the "uncore"; differentiation is in the uncore: number of QPI links, number of memory channels, size of cache, number of cores, power management, type of memory, integrated graphics. QPI: Intel® QuickPath Interconnect)
  • Intel Nehalem-EX Overview
    • Key Features:
      • Up to 8 cores per socket
      • Intel® Hyper-Threading (2 threads/core)
      • 24MB shared last level cache
      • Intel® Turbo Boost Technology
      • 45nm process technology
      • Integrated memory controller
        • Supports speeds of DDR3-800, 978 and 1066 MHz via a memory buffer (Mill Brook)
      • Four full-width, bidirectional Intel® QuickPath interconnects
        • 4.8, 5.86, or 6.4 GT/s
      • CPU TDP: ~130W to ~95W
      • Socket–LS (LGA 1567)
      • 44 bits physical address and 48 bits virtual address
    Diagram: 8 cores sharing a 24MB last-level cache, an integrated memory controller and an interconnect controller, with 4 full-width Intel® QuickPath Interconnects and 4 Intel® Scalable Memory Interconnects; driving performance through multi-threaded, multi-core technology in addition to platform enhancements
    Latest and Next Generation Intel Xeon Processors
    • Nehalem-EP (NHM-EP)
    • 4 cores / 8 threads per socket, Nehalem core, 45nm process, 8MB shared LLC
    • SMT (~Hyper-Threading)
    • Two QPI links (only 1 is a coherent link) per socket
    • One integrated memory controller (IMC)
    • Three DDR3 channels per socket
    • Nehalem-EX (NHM-EX)
    • Up to 8 cores/ 16 threads per socket, Nehalem core, 45 nm process, 24MB shared L3
    • SMT (~Hyper-Threading)
    • Turbo Boost
    • Four QPI Links (3 coherent)
    • Two integrated memory controllers (IMC)
    • Four buffered memory channels per socket
  • Quick QPI Overview
    • The Intel® QuickPath Interconnect is a high-speed, packetized, point-to-point interconnect.
    • It has a snoop protocol optimized for low latency and high scalability, as well as packet and lane structures enabling quick completions of transactions.
    • Reliability, availability, and serviceability features (RAS) are built into the architecture.
  • Nehalem-EX SKU Line-up
    • Advanced (8 cores, Hyper-Threading, Turbo 1/2/3/3, 130W): X7560 2.26GHz / 24M / 6.4GT/s, 8S/Scalable*; X7550 2GHz / 18M / 6.4GT/s, 8S/Scalable*; X6550 2GHz / 18M / 6.4GT/s, 2S/Scalable*
    • Standard (6 cores, Hyper-Threading, Turbo 0/1/1/2, 105W): E7540 2GHz / 18M / 6.4GT/s, 8S/Scalable*; E6540 2GHz / 18M / 6.4GT/s, 2S/Scalable*; E7530 1.86GHz / 12M / 5.86GT/s, 4S/Scalable*
    • Basic (4 cores, Hyper-Threading, 95W-105W): E7520 1.86GHz / 18M / 4.8GT/s, 4S/Scalable*, 95W; E6510 1.73GHz / 12M / 4.8GT/s, 2S only, not scalable, 105W
    • LV (6-8 cores, Hyper-Threading, 95W): L7555 1.86GHz / 24M / 5.86GT/s, 8S/Scalable*, Turbo 1/2/4/5; L7545 1.86GHz / 18M / 5.86GT/s, 8S/Scalable*, Turbo 0/1/1/2
    • HPC (6 cores, no Hyper-Threading, 130W): X7542 2.66GHz / 18M / 5.86GT/s, 8S/Scalable*, Turbo 0/1/1/1
    • Approximate relative TPC-C performance is charted per SKU, with the X7560 as the 1.0 baseline
    • *Scaling capability refers to maximum supported number of CPUs in a “glueless” Boxboro-EX platform (e.g. 8S means that this SKU can be used to populate up to 8 sockets in a single system)
    • All 8S, 4S & most 2S capable SKUs may be used in even larger systems through the use of node controllers (not available from Intel). The entry 2S SKU is not scalable even with a node controller.
    • Legend: processor number; core frequency / last-level cache / QPI-SMI max link speed; .XX = approximate relative TPC-C performance (single socket in a 1S, 2S or 4S config); Turbo bin upside in 133MHz increments for 7-8C / 5-6C / 3-4C / 1-2C active cores
  • x3850 X5 – Supported Processor SKUs: X7560 (Advanced), X7550 (Advanced), E7540 (Standard), E7530 (Standard), E7520 (Basic), L7555 (LV), L7545 (LV) and X7542 (HPC), with the same clocks, caches, link speeds, scalability and TDPs as in the SKU line-up above; the 2S-only X6550, E6540 and E6510 are not offered
  • x3690 X5 – Supported Processor SKUs: the full Nehalem-EX line-up, X7560, X7550, X6550, E7540, E6540, E7530, E7520, E6510, L7555, L7545 and X7542, with the same clocks, caches, link speeds, scalability and TDPs as in the SKU line-up above
  • Intel Nehalem-EX Processor Stack for HX5: delivering a full range of CPUs to compete at every cost point
    • Basic: 4 cores, Hyper-Threading
    • Standard: 6 cores, Hyper-Threading, Turbo 0/1/1/2
    • Advanced: 8 cores, Hyper-Threading, Turbo 1/2/3/3
    • Supported SKUs:
      • X7560 2.26GHz/24M/6.4GT/s, 2S/4S/8S scalable, 130W
      • X7550 2.00GHz/18M/6.4GT/s, 2S/4S/8S scalable, 130W
      • X6550 2.00GHz/18M/6.4GT/s, 2S capable only, 130W
      • X7542 2.66GHz/18M/5.86GT/s, no HT, 2S/4S/8S scalable, Turbo 0/1/1/1, 130W
      • L7555 1.86GHz/24M/5.86GT/s, 2S/4S/8S scalable, Turbo 1/2/4/5, 95W
      • L7545 1.86GHz/18M/5.86GT/s, 2S/4S/8S scalable, Turbo 0/1/1/2, 95W
      • E7540 2.00GHz/18M/6.4GT/s, 2S/4S/8S scalable, 105W
      • E6540 2.00GHz/18M/6.4GT/s, 2S capable only, 105W
      • E7530 1.86GHz/12M/5.86GT/s, 2S/4S scalable, 105W
      • E7520 1.86GHz/18M/4.8GT/s, 2S/4S scalable, 95W
      • E6510 1.73GHz/12M/4.8GT/s, 2S capable only, not scalable, 105W
    • Availability: each SKU is offered for HX5 and/or HX5+MAX5 either as standard models / CTO / special bids or as CTO / special bids only (no standard models), with or without the memory sidecar; some SKUs have delayed availability with 2S or 4S HX5+MAX5 configurations
  • Processor SKUs: HX5 with MAX5
    • Basic: 4 cores, Hyper-Threading
    • Standard: 6 cores, Hyper-Threading, Turbo 0/1/1/2
    • Advanced: 8 cores, Hyper-Threading, Turbo 1/2/3/3
    • Supported SKUs (same clocks, caches, link speeds, scalability and TDPs as the HX5 stack above): X7560, X7550, X6550, X7542, L7555, L7545, E7540, E6540, E7530 and E7520
    • Availability: each SKU is offered with the memory sidecar either as standard models / CTO / special bids or as CTO / special bids only (no standard models); MAX5-optimized SKUs are called out, and some SKUs have delayed availability with 4S HX5+MAX5
    • 4S scalable w/ MAX5
    • 30% cheaper than 7500 series CPUs
  • Technical Highlights: Reliability
  • Trust Your IT
    • Business resilience from IBM gives your customers the ability to rapidly adapt and respond to both risk and opportunity, in order to maintain continuous business operations, reduce operational costs, enable growth and be a more trusted partner
    • eX5 systems include Predictive Failure Analysis and Light Path Diagnostics for advance warning on power supplies, fans, VRMs, disks, processors, and memory and redundant, hot-swap components so clients can replace failures without taking their system down
    • Automatic Node Failover and QPI Faildown for greater system uptime than monolithic server and blade designs
    • Only eX5 offers 3 levels of Memory ProteXion™ for maximum memory integrity
      • IBM Chipkill™, Redundant Bit Steering, and Memory Mirroring
    • Predictive Failure Analysis to minimize uptime interruptions
      • Processors, Memory, Drives, Fans, Power Supplies, QPI Cables
  • IBM BladeCenter has no single point of failure
    • When multiple components are consolidated into a single chassis, resiliency is critical. While other vendors merely talk about this concept, IBM has embraced it. BladeCenter was made to keep your IT up and running with comprehensive redundancy, including hot-swap components and no single point of failure.
      • Dual power connections to each blade
      • Dual I/O connections to each blade
      • Dual paths through the midplane to I/O, power and KVM
      • Automated failover from one blade to another
      • Redundant N+N power bus
      • Hot-swap, redundant power supplies
      • True N+N thermal solutions (including hot-swap, redundant blower modules)
      • Hot-swap, redundant I/O modules (switches and bridges)
      • IBM First Failure Data Capture helps you make decisions based on accurate data for quick problem diagnosis. It simplifies problem identification by creating detailed event logs via the Advanced Management Module, making it easy to spot issues quickly.
      • Solid-state drives deliver superior uptime, with three times the availability of mechanical drives when configured with RAID-1.
  • Redundant I/O Links
    • CPUs (2 to 4 sockets)
      • Direct attach to Memory, two Integrated Memory Controllers (IMCs)
      • Each IMC runs two SMI buses concurrently (DIMMs added in pairs)
      • 4 QPI Links, 3 for CPUs, 1 for I/O Hub
    • I/O Hub (1 per 2 CPUs)
      • Two QPI links to two different CPUs
      • 36 PCIe Gen2 lanes + ESI link for ICH10 Southbridge
    • Memory Buffer (4 per CPU)
      • SMI Link to CPUs
      • Two DDR3 buses
    • Part failure strategy is to reboot and disable failed component
      • DIMMs are disabled in pairs. Automatically re-enabled when replaced.
      • CPU fail would also remove CPU attached memory.
      • All servers require CPU1 or CPU2 to be functional to run.
      • Ghidorah requires CPU3 or CPU4 to be functional to run PCIe Slots 1-4.
    Diagram: four Intel Xeon processors, each with Millbrook memory buffers, cross-connected by QPI to two I/O hubs (PCI-E)
  • QPI & SMI Link Fail Strategy
    • If a link fails completely, the server will retry it at reboot
    • If the link still fails at reboot, an alternate path will be setup, or the link elements taken offline
    • If the link restarts but is intermittent (e.g., fails every hour), the bad link will not be used after the second restart
    • Once the bad link has been taken out of use, or after 24 hours have passed, the error must be re-learned on subsequent reboots
    • The strategy does not have a concept of permanently disabling intermittent or bad links, and therefore does not require any customer action to re-enable links
    • The idea is to keep the algorithm simple while avoiding an infinite cycle of reboots on intermittent links (a sketch of this logic follows)
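    • The sketch below is illustrative Python only (not IBM firmware; class and field names are assumptions), showing the retry/relearn idea: a link is dropped on its second restart within the window, and the learned state expires after 24 hours so no customer action is needed.
      import time

      class QpiSmiLink:
          """Sketch of the simple retry / relearn strategy described above."""
          def __init__(self):
              self.restarts = 0           # link restarts seen in the current learning window
              self.window_start = None    # when the current learning window began
              self.in_use = True

          def on_link_restart(self, now=None):
              now = time.time() if now is None else now
              if self.window_start is None or now - self.window_start > 24 * 3600:
                  # after 24 hours the error must be re-learned from scratch
                  self.restarts, self.window_start, self.in_use = 0, now, True
              self.restarts += 1
              if self.restarts >= 2:      # second restart of an intermittent link
                  self.in_use = False     # route around it / take the link elements offline
              return self.in_use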
  • QPI & SMI Link Fault Handling
    • SMI Lane Failover
    • No LEDs, informational message only.
    • Strategy is to not drive a repair action. No performance impact.
    • SMI Failed Link
    • If SMI restart fails (did not link at rated speed), disable its twin link, disable that IMC.
    • QPI Faildown (or failed link)
    • Lightpath (Error LED, Link LED for external QPI), Error message, reduced performance.
    • Note: Firehawk does not support QPI route through.
      • If a QPI link to Firehawk fails the corresponding CPU is also disabled.
  • QPI Link Failure Service Actions
    • Example: 5 types of Ghidorah QPI Links
    • CPU to CPU (internal)
      • Replace CPUs. Replace CPU planar.
    • CPU to CPU (QPI wrap card = CPU1 to CPU2, CPU3 to CPU4)
      • Replace CPUs. Replace wrap card. Replace CPU planar.
    • CPU to I/O Hub (internal)
      • Replace CPU. Replace IO card. Replace CPU planar.
    • CPU to CPU (external)
      • Replace QPI cable. Replace CPUs. Replace CPU planar.
    • CPU to Firehawk (external)
      • Replace QPI cable. Replace CPU. Replace Memory drawer planar. Replace CPU planar.
    • Considering non-replacement procedures to isolate failure (part swapping or configuration).
  • Differentiation with Memory RAS: maximum reliability for maximum benefits. IBM mainframe-inspired technology provides superior reliability that keeps your business going.
    • Quantifiable Benefits
      • Chipkill™ & ECC Memory
        • Better memory reliability to support In-Memory Databases
        • Chipkill Memory enables increased availability by detecting and correcting multiple-bit memory DIMM errors
      • Memory ProteXion™ - Additional bit protection in addition to Chipkill
        • Maximum server availability
        • With multi-core processors, greater memory capacity and memory reliability are key to system performance
        • Automatically routes data around failed memory DIMMs and keeps servers and applications up and running
      • Memory Mirroring
        • A highly available, memory-mirrored configuration gives far better price/performance and performance per watt than the competition
  • Xeon & eX5 Memory Fault Handling
    • Patrol Scrubbing
    • Frequency of scrub (24 hours)
    • Recoverable Errors
    • Threshold setting per rank (100h)
    • Kick off scrub after threshold on specific IMC (up to one extra scrub per IMC per 24 hours)
    • Frequency to clear error counters (24 hours, prior to scrub kickoff)
    • First Error Action (threshold hit after second IMC scrub)
    • Not reported unless in manufacturing mode
    • Additional SBE is still recoverable
    • Second Recoverable Error on same rank (Additional SBE threshold reached)
    • Request OS Page Offline (new feature in later OS and Hypervisor, likely post-GA)
    • OS Page Offline Threshold per rank
    • On 4th request or if no Page Offline capability, log error message, Lightpath PFA
    • Need to keep LED on during reboot
    • DIMM is not disabled due to a PFA, it is a scheduled maintenance
    • Disable checking on a rank once a PFA has been reported, to avoid invoking the SMI (System Management Interrupt) handler too often (see the threshold-handling sketch below)
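    • The threshold handling above can be sketched as below (illustrative Python, not IBM firmware; the class, method and return strings are assumptions): errors are counted per rank, the first threshold hit kicks off an extra scrub, later hits request OS page offline, and the fourth request (or a missing page-offline capability) raises a Lightpath PFA, after which checking on the rank is disabled.
      RANK_THRESHOLD = 0x100            # per-rank recoverable-error threshold (10h in mfg mode)

      class RankErrorMonitor:
          def __init__(self, os_page_offline=True):
              self.count = 0
              self.threshold_hits = 0
              self.page_offline_requests = 0
              self.pfa_raised = False
              self.os_page_offline = os_page_offline

          def on_recoverable_error(self):
              if self.pfa_raised:
                  return "checking disabled on this rank"   # avoid invoking the handler too often
              self.count += 1
              if self.count < RANK_THRESHOLD:
                  return "counted"
              self.count = 0
              self.threshold_hits += 1
              if self.threshold_hits == 1:
                  return "kick off extra patrol scrub on this IMC"
              if not self.os_page_offline or self.page_offline_requests >= 3:
                  self.pfa_raised = True                    # scheduled maintenance, DIMM stays enabled
                  return "log error message, Lightpath PFA"
              self.page_offline_requests += 1
              return "request OS page offline"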
  • Memory Fault Handling (a combined sketch of the behaviours below follows this list)
    • Unrecoverable Error
    • Log error message, Lightpath
    • Reboot and Disable DIMM pair
    • If on Patrol Scrub, report to OS for Page Offline, keep running
    • DIMM installation
    • Detect swap by SPD data (DDR3 DIMMs have serial numbers)
    • Automatic re-enable if DIMM pair was disabled
    • Mirrored Error (Recoverable, was Unrecoverable for non-mirrored)
    • When setup option is selected, mirror all IMCs intra-socket and expansion memory
    • Log error message, Lightpath PFA
    • On reboot, if error encountered, mirror with less memory (smaller amount in the IMC)
    • Rank Sparing
    • When selected, allocate one spare rank per IMC
    • Initiate rank sparing copy if a rank hits OS Page Offline Threshold
    • Manufacturing Mode
    • Card and Box test should take repair actions for all memory errors
    • If in manufacturing mode, set threshold setting per rank to 10h
    • If in manufacturing mode, treat all recoverable errors (threshold reached) as PFA, report error
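    • A combined sketch of the unrecoverable-error and DIMM-replacement behaviour above (illustrative Python, not IBM firmware; names are assumptions): an unrecoverable error outside patrol scrub disables the DIMM pair at reboot, and a change in SPD serial numbers at boot re-enables it automatically.
      class DimmPair:
          def __init__(self, spd_serials):
              self.spd_serials = tuple(spd_serials)   # DDR3 DIMMs carry serial numbers in SPD
              self.disabled = False

          def on_unrecoverable_error(self, during_patrol_scrub):
              if during_patrol_scrub:
                  return "report to OS for page offline, keep running"
              self.disabled = True                    # takes effect at the reboot
              return "log error message, Lightpath; reboot and disable DIMM pair"

          def on_boot_inventory(self, spd_serials_now):
              # A serial-number change means the DIMMs were swapped: re-enable the pair.
              if self.disabled and tuple(spd_serials_now) != self.spd_serials:
                  self.spd_serials, self.disabled = tuple(spd_serials_now), False
                  return "DIMM pair automatically re-enabled"
              return "no change"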
  • Changes to reliability features
    • Platform RAS Feature Differences (x3850 M2  x3850 X5)
    • QPI Scaling cables are not Hot Pluggable
    • No Hot Swap Memory (Removed by PDT)
    • No Hot Swap PCIe Adapters (Removed by PDT)
    • RSA II replaced by IMM
    • CPU RAS Feature Differences (Nehalem EP  Nehalem EX)
    • QPI Faildown (link reduces to half width)
    • SMI Lane Failover (spare bit in both directions)
    • Chipkill support on x8 DIMMs
    • Recovery on Additional Single Bit Error after Chipkill
    • Recovery on uncorrectable memory scrub errors
    • Recovery on uncorrectable L3 cache explicit writeback
    • Data poisoning to prevent corrupt data propagation
  • Technical Highlights: Performance
    • 1) MAX5 memory latency is similar to the local CPU socket's memory
    • QPI 1.0 uses a Source snoop protocol, which means snoops come from the requestor with snoop results sent to the Home agent.
    • The local CPU must wait for snoop results. MAX5’s snoop filter allows it to return memory reads immediately . . . it already knows the snoop result.
    • MAX5 has less bandwidth than local memory, but is better than peer CPUs.
    • 2) Avoid using 4.8 GT/s rated CPUs with MAX5
    • CPUs are rated at 6.4, 5.86, or 4.8 GT/s. MAX5's internal clock is tied to the QPI speed, so with a 4.8 GT/s CPU it runs at 75% of full speed. A 4.8 GT/s link also reduces the memory bandwidth from each CPU to MAX5, so performance is reduced.
    • 3) Install half the DIMM capacity in the rack server prior to adding MAX5
    • For example, it is less expensive and better performance to add x3850 X5 memory cards and 4 DIMMs per memory card prior to buying MAX5.
    • Similar to previous servers, balancing the memory amount per memory card and CPU will achieve the best performance.
    What technical items do I need to know about eX5?
  • IBM x3690 X5 + MAX5 memory map setup options
    • Example: 64GB total; one x3690 X5 with 2 CPUs (8 x 2GB DIMMs each) plus one MAX5 (16 x 2GB DIMMs); 1GB MMIO hole below 4GB
    • Partitioned to CPUs (default; Windows, VMware): 0-3GB CPU1, 4-17GB CPU1, 17-33GB CPU2, 33-49GB MAX5 memory assigned to CPU1, 49-65GB MAX5 memory assigned to CPU2
    • Pooled (setup option; RHEL, SLES): 0-3GB CPU1, 4-17GB CPU1, 17-33GB CPU2, 33-65GB MAX5 memory unassigned (pooled)
  • IBM x3850 X5 + MAX5 memory map setup options
    • Example: 96GB total; one x3850 X5 with 4 CPUs and 8 memory cards (4 x 2GB DIMMs each) plus one MAX5 (16 x 2GB DIMMs); 1GB MMIO hole below 4GB
    • Partitioned to CPUs (default; Windows, VMware): 0-3GB CPU1, 4-17GB CPU1, 17-33GB CPU2, 33-49GB CPU3, 49-65GB CPU4, then 65-73 / 73-81 / 81-89 / 89-97GB MAX5 memory assigned to CPU1-CPU4
    • Pooled (setup option; RHEL, SLES): same CPU layout, with 65-97GB MAX5 memory unassigned (pooled)
  • IBM x3850 X5 + MAX5 memory map, 2-node example (a layout sketch follows this list)
    • Example: 192GB total; two x3850 X5 with 8 CPUs and 16 memory cards (4 x 2GB DIMMs each) plus two MAX5 (16 x 2GB DIMMs each); 2GB MMIO hole below 4GB
    • Primary node, partitioned to CPUs: 0-2GB CPU1, 4-18GB CPU1, 18-34GB CPU2, 34-50GB CPU3, 50-66GB CPU4, then 66-74 / 74-82 / 82-90 / 90-98GB MAX5 memory assigned to CPU1-CPU4
    • Secondary node, partitioned to CPUs: 98-114GB CPU5, 114-130GB CPU6, 130-146GB CPU7, 146-162GB CPU8, then 162-170 / 170-178 / 178-186 / 186-194GB MAX5 memory assigned to CPU5-CPU8
    • The 256MB L4 cache creates a small memory gap to the next node; OS-visible memory is 191.5GB
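    • The layout sketch referenced above is illustrative Python only (not IBM firmware; the function and parameter names are assumptions): CPU memory is placed around the MMIO hole below 4GB, and MAX5 memory is either split across the CPUs (partitioned) or left as one unassigned block (pooled).
      def build_memory_map(cpu_mem_gb, num_cpus, max5_gb, mmio_gb=1, pooled=False):
          """Return (start GB, end GB, owner) regions for one node, as in the examples above."""
          regions, base = [], 0
          for cpu in range(1, num_cpus + 1):
              size = cpu_mem_gb
              if cpu == 1:                                  # CPU1 is split around the MMIO hole
                  regions.append((base, 4 - mmio_gb, "CPU1"))
                  base, size = 4, size - (4 - mmio_gb)
              regions.append((base, base + size, f"CPU{cpu}"))
              base += size
          if pooled:
              regions.append((base, base + max5_gb, "MAX5 (unassigned / pooled)"))
          else:
              share = max5_gb // num_cpus
              for cpu in range(1, num_cpus + 1):
                  regions.append((base, base + share, f"MAX5 assigned to CPU{cpu}"))
                  base += share
          return regions

      # x3690 X5 example above: 2 CPUs x 16GB plus 32GB in MAX5, 1GB MMIO hole
      for start, end, owner in build_memory_map(16, 2, 32):
          print(f"{start}G-{end}G  {owner}")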
  • Trademarks IBM, the IBM logo, the e-business logo, Active Memory, Predictive Failure Analysis, ServeRAID, System i, System Storage, System x, Xcelerated Memory Technology, and XArchitecture are trademarks of IBM Corporation in the United States and/or other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://ibm.com/legal/copytrade.shtml .  Intel, the Intel Logo, Itanium, ServerWorks, and Xeon are registered trademarks of Intel Corporation in the United States, other countries, or both. Dell is a trademark of Dell, Inc. in the United States, other countries, or both. HP is a trademark of Hewlett-Packard Development Company, L.P. in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Hyper-V, SQL Server, and Windows are trademarks or registered trademarks of Microsoft Corporation in the United States, other countries, or both. Red Hat is a trademark of Red Hat, Inc. SAP and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. TPC, TPC-C, tpmC, TPC-E and tpsE are trademarks of the Transaction Processing Performance Council. UNIX is a registered trademark of The Open Group in the United States, other countries, or both. VMware, VMworld, VMmark, and ESX are registered trademark of VMware, Inc. in the United States and/or other jurisdictions. All other company/product names and service marks may be trademarks or registered trademarks of their respective companies. IBM reserves the right to change specifications or other product information without notice. References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates. IBM PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.  This publication may contain links to third party sites that are not under the control of or maintained by IBM. Access to any such third party site is at the user's own risk and IBM is not responsible for the accuracy or reliability of any information, data, opinions, advice or statements made on these sites. IBM provides these links merely as a convenience and the inclusion of such links does not imply an endorsement. Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly available sources. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. 
  • © IBM Corporation 2008. IBM Corporation Route 100 Somers, NY 10589 U.S.A. Produced in the United States of America, October 2008. All rights reserved.