Xtw01t2v012011 sys tech
  • {DESCRIPTION} This is the title slide. There is an image of some System x servers in the bottom right corner. {TRANSCRIPT} Welcome to Systems Technologies. This is the second module in the IBM Technical Principles course XTW01.
  • {DESCRIPTION} This slide presents a bulleted list of this course's objectives. {TRANSCRIPT} Upon completion of this module you will be familiar with the buses found in, the memory technologies used in, and the processors employed in System x servers. We will also discuss the disk technologies and network technologies used in System x servers.
  • {DESCRIPTION} This slide presents a bulleted list of the technologies discussed in this topic. {TRANSCRIPT} In this section we will discuss system buses.
  • {DESCRIPTION} This slide presents a bulleted list of the topics that will be discussed in this section. There is an image of a transit bus in the bottom right corner. {TRANSCRIPT} The term bus draws to mind an image of local metro transportation. With regard to computer buses the analogy is quite accurate. Just as a metro bus moves people between points in a city, a bus in a computer system moves signals of a like kind between points within the system. In this section we will review the history of bus architecture and discuss the various buses within a computer system.
  • {DESCRIPTION} This slide presents a bulleted list of the technologies discussed in this topic. {TRANSCRIPT} In this section we will discuss memory.
  • {DESCRIPTION} A TEST {TRANSCRIPT} Two types of memory are utilized in a microcomputer today: volatile, whose content can be changed quickly but is lost when power is removed, and non-volatile, which is slower but, once programmed, retains its content even if power is removed. Generally volatile memory is used as main system memory, whereas non-volatile memory is used for the storage of system and adapter initialization code, i.e. BIOS and firmware. Volatile memory employs two design methodologies: static, which is fast, more complex, and more expensive; and dynamic, which is slower, fabricated more easily, and less costly, but incurs a performance overhead because it requires additional read operations to refresh its contents. Main system memory is typically dynamic, with static memory used where performance is of prime concern. Non-volatile memory used in microcomputers today also employs two design methodologies, termed NAND and NOR. NAND has better speed characteristics, such that it has begun to be utilized as a replacement for mass storage, as it does not suffer the mechanical issues imposed on hard disk drives. The NOR variety is typically used for system firmware. In this section we will review basic memory operation, its technologies, and its usage in System x servers.
  • {DESCRIPTION} This slide presents a bulleted list of the technologies discussed in this topic. {TRANSCRIPT} In this section we will discuss processors.
  • {DESCRIPTION} A TEST {TRANSCRIPT} Processor architecture has changed radically in recent years, providing greater power, increased memory capacities, and lower energy utilization. In this section we will review the processor families currently utilized in IBM System x servers.
  • {DESCRIPTION} This slide presents a bulleted list of the Celeron G1100 features. There is an image of its functional block diagram on the right and a Celeron Inside logo at the bottom left. {TRANSCRIPT} The Celeron G series employs the Nehalem architecture, which was introduced with the Core i7 desktop processor. This architecture integrates most of the functionality of the now-defunct Northbridge chip, particularly the memory controller, into the processor itself. It employs multiple processor cores, each having its own L2 cache, with an L3 cache shared amongst them. Nehalem introduced the QuickPath Interconnect (QPI) and several features to reduce power consumption. The Celeron G1100 utilizes two dies interconnected via QPI on one substrate. The core die employs two processor cores, each with 256KB of L2 cache, and a single shared 2MB L3 cache. The uncore die hosts a 2-channel DDR3 memory controller, which supports 2 UDIMMs at up to 1066 MT/s per channel. It does not support ECC by default: ECC functionality is present in the processor, but is disabled. When paired with a 3400 series Peripheral Controller Hub (PCH), ECC functionality is enabled during boot by utilizing dynamic fusing. During boot the PCH sends a fuse override request to the processor via the DMI, a proprietary x4 PCI-E interface. The processor executes internal p-code to change the state of the fuse, which enables ECC. The Celeron G presents 16 lanes of PCI-E at 5.0 GT/s per lane. This interface may be configured as one x16 or two x8 interfaces. The DMI interface is utilized to interface with the system chipset or PCH. Additionally the processor hosts a graphics processing unit.
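The memory-controller figures above translate directly into a theoretical peak bandwidth: channels × transfers per second × 8 bytes per 64-bit DDR3 channel. A minimal sketch of the arithmetic (the function name and GB/s convention are our own, not from the course):

```python
def peak_bandwidth_gbs(channels, mt_per_s, bytes_per_transfer=8):
    """Theoretical peak DRAM bandwidth in GB/s (1 GB = 1e9 bytes).

    Each 64-bit DDR3 channel moves 8 bytes per transfer, so peak
    bandwidth is channels * transfers/s * 8 bytes.
    """
    return channels * mt_per_s * 1_000_000 * bytes_per_transfer / 1e9

# Celeron G1100: two DDR3 channels at up to 1066 MT/s each
print(peak_bandwidth_gbs(2, 1066))  # 17.056 GB/s theoretical peak
```

Real-world throughput is lower, since refresh cycles, bank conflicts, and command overhead all consume bus time.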
  • {DESCRIPTION} This slide presents a bulleted list of the key technologies employed in the Celeron G processor. There is an image of a Celeron processor in the upper right corner and a Celeron Inside logo in the bottom left. {TRANSCRIPT} Intel processors employ techniques to enhance throughput and energy conservation. In the case of the Celeron G processor, these include: Intel® Virtualization Technology (VT-x) - creates a new processor execution mode (root) in which the virtual machine manager (VMM) runs. It allows the VMM to execute privileged instructions disallowed by processors without VT-x. Additionally it enhances the isolation that must be maintained between guest OSes. Intel® 64 - delivers 64-bit computing and improves performance by allowing systems to address more than 4GB of both virtual and physical memory. Idle States - allow the processor to disable specific functionality based on utilization, thereby reducing the energy consumed. Enhanced Intel SpeedStep® Technology - the core's operating frequency and voltage can be lowered to reduce power consumption. Execute Disable Bit - a hardware-based security feature that can reduce exposure to viruses and malicious-code attacks and prevent harmful software from executing and propagating.
  • {DESCRIPTION} This slide presents a bulleted list of the Pentium G6950 features. There is an image of its functional block diagram on the right and a Pentium Inside logo at the bottom left. {TRANSCRIPT} The Pentium G series also employs the Nehalem architecture. The core die employs two processor cores, each with 256KB of L2 cache, and a single 3MB L3 cache shared amongst them. The “uncore” die hosts a 2-channel DDR3 memory controller, which supports 2 UDIMMs at up to 1066 MT/s per channel. It does not support ECC by default; ECC functionality is present in the processor and is enabled when the processor is paired with a 3400 series PCH, as with the Celeron G. The Pentium G presents 16 lanes of PCI-E at 5.0 GT/s per lane. This interface may be configured as one x16 or two x8 interfaces. DMI is utilized to interface with the Peripheral Controller Hub (PCH). Additionally it hosts a graphics processing unit.
  • {DESCRIPTION} This slide presents a bulleted list of the key technologies employed in the Pentium G processor. There is an image of a Pentium processor in the upper right corner and a Pentium Inside logo in the bottom left. {TRANSCRIPT} These are the technologies supported with the Pentium G processor.
  • {DESCRIPTION} This slide presents a bulleted list of the Core i3 500 series features. There is an image of its functional block diagram on the right and a Core i3 Inside logo at the bottom left. {TRANSCRIPT} The Core i3 series also employs the Nehalem architecture. The core die employs two processor cores, each with 256KB of L2 cache, and a single 4MB L3 cache shared amongst them. The “uncore” die hosts a 2-channel DDR3 memory controller, which supports 2 UDIMMs at up to 1333 MT/s per channel. It does not support ECC by default; ECC functionality is implemented as with the Celeron G and Pentium G. The Core i3 presents 16 lanes of PCI-E at 5.0 GT/s per lane. This interface may be configured as one x16 or two x8 interfaces. DMI is utilized to interface with the system chipset or Peripheral Controller Hub (PCH). Additionally it hosts a graphics processing unit.
  • {DESCRIPTION} This slide presents a bulleted list of the key technologies employed in the Core i3 processor. There is an image of a Core i3 processor in the upper right corner and a Core i3 Inside logo in the bottom left. {TRANSCRIPT} These are the technologies supported with the Core i3 processor. Added to the Core i3 is: Intel® Hyper-Threading Technology - delivers thread-level parallelism on each core, resulting in more efficient use of core resources, higher processing throughput, and improved performance on multi-threaded software.
  • {DESCRIPTION} This slide presents a bulleted list of the Xeon 3400 features. There is an image of its functional block diagram on the right and a Xeon Inside logo at the bottom left. {TRANSCRIPT} The Xeon 3400 series employs the Nehalem architecture. It is a single-die device, employing four processor cores, each with 256KB of L2 cache, and a single 8MB L3 cache shared amongst them. The die hosts a 2-channel DDR3 memory controller, which supports 3 RDIMMs or 2 UDIMMs per channel. It supports ECC. It presents 16 lanes of PCI-E at 5.0 GT/s per lane. This interface may be configured as one x16, two x8, or four x4 interfaces. DMI is utilized to interface with the PCH.
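The single-bit correction that ECC memory performs can be illustrated with the classic Hamming(7,4) code. Real ECC DIMMs use a wider SECDED code (8 check bits protecting a 64-bit word), so this toy sketch is illustrative only; the function names are our own:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword.

    Parity bits sit at positions 1, 2, 4 (1-based); data at 3, 5, 6, 7.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 = clean; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                          # inject a single-bit error
print(hamming74_correct(code))        # [1, 0, 1, 1] - error corrected
```

The syndrome directly encodes the position of the flipped bit, which is why correction needs no retransmission; server SECDED adds one more parity bit so double-bit errors are at least detected.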
  • {DESCRIPTION} This slide presents a bulleted list of the key technologies employed in the Xeon 3400 processor. There is an image of a Xeon processor in the upper right corner and a Xeon Inside logo in the bottom left. {TRANSCRIPT} The Xeon 3400 series employs these technologies. Those we haven’t discussed include: Intel® Virtualization Technology for Directed I/O (VT-d) - extends Intel's Virtualization Technology support for IA-32 (VT-x) by adding new support for I/O-device virtualization. Intel® Trusted Execution Technology - provides hardware-based mechanisms that help protect against software-based attacks and protect the confidentiality and integrity of data. It does this by enabling an environment where applications can run within their own space, protected from all other software on the system. These capabilities provide the protection mechanisms, rooted in hardware, that are necessary to provide trust in the application's execution environment. In turn, this can help protect vital data and processes from being compromised by malicious software running on the platform. Intel® Demand Based Switching - keeps the applied voltage and clock speed of a processor core at the minimum levels necessary for optimal performance of required operations. The core operates at a reduced voltage and clock speed until more processing power is required. This is achieved by monitoring the core’s use by application-level workloads, reducing the CPU speed when it is running idle and increasing it as the load increases.
  • {DESCRIPTION} This slide presents images of the Intel Xeon 5500 and 5600 series processors. It bullets the technologies employed in the Intel Xeon 5500 and 5600 series processors. {TRANSCRIPT} The Xeon 5500 series employs the Nehalem-EP architecture. It is a single-die device, employing four processor cores, each with 256KB of L2 cache, and a shared 4 or 8MB L3 cache, dependent on the processor. The die hosts a 3-channel DDR3 memory controller, which supports 3 RDIMMs or 2 UDIMMs per channel. It supports ECC. QPI is implemented as an external interface to the PCH and to interconnect processors in an SMP environment. Two QPI ports are available. Transfer rates are 4.8, 5.86, or 6.4 GT/s, dependent upon the processor. The Xeon 5600 series processor implements four or six independent processor cores on one silicon die utilizing a 32nm process. This step in lithography bears the code name Westmere-EP. It employs all the technologies introduced by the Xeon 5500 processor. In addition to the increased core count, the L3 cache has been increased to 12MB, it supports both 1.5V and 1.35V RDIMMs, and it includes technologies to further reduce power consumption. New with the 5000 series is Intelligent Power Technology. When not in use, an entire processor core can be idled and its power consumption reduced to near zero, independent of other operating cores, thereby reducing idle-state power consumption by up to 50 percent. This feature can be engaged automatically by the processor or controlled by the operating system or systems management. Westmere extends this capability to the “uncore” regions of the processor.
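The QPI transfer rates quoted above convert to bandwidth straightforwardly: each transfer carries 16 data bits (2 bytes) in each direction, and the link is bidirectional. A quick sketch of the arithmetic (the function name is our own):

```python
def qpi_bandwidth_gbs(gt_per_s):
    """Unidirectional QPI data bandwidth in GB/s.

    Each transfer carries 16 data bits (2 bytes) per direction.
    """
    return gt_per_s * 2

for rate in (4.8, 5.86, 6.4):
    uni = qpi_bandwidth_gbs(rate)
    print(f"{rate} GT/s -> {uni:.2f} GB/s per direction, "
          f"{2 * uni:.2f} GB/s total")
```

At the top 6.4 GT/s rate this works out to 12.8 GB/s per direction, 25.6 GB/s total per link, which is the figure commonly quoted for Nehalem-generation QPI.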
  • {DESCRIPTION} This slide presents a bulleted list of processor features. There is an image of an Opteron die. {TRANSCRIPT} The AMD Opteron 6100 series processors employ two quad-core or six-core 45nm-process dies on one substrate, to provide 8- or 12-core functionality at speeds of up to 2.8GHz. The 6100 series employs two 64KB L1 caches per processor core, one parity protected for instructions, the other ECC protected for data, and a 512KB ECC-protected L2 cache per core. Two 6MB caches, one on each die, are combined to implement a 12MB L3 cache that is shared between cores. The processor's integrated memory controller provides 4 DDR3 low-voltage (1.35V - 1.5V) memory channels, operating at up to 1333 MT/s, supporting three registered DIMMs or 2 unbuffered DIMMs per channel. The processor employs HyperTransport 3 as a processor-to-processor / processor-to-I/O interconnect. HyperTransport 3 supports: PCI Express mapping - hardware translation of PCI-E data formats to HyperTransport data formats increases throughput between I/O, processors, and memory. AC operating mode - allows the HyperTransport bus to achieve longer distances. In addition to chip-to-chip interconnection, HyperTransport may be used to directly access adapters, ancillary PC boards, backplanes, and other servers. Link splitting - the 16-bit link path may be utilized as two separate 8-bit links. This affords the ability to interconnect a larger number of devices without the use of additional hardware. Hot plugging - HyperTransport devices (excluding CPUs) may be installed and removed with the bus running. Dynamic Link Clock/Width Adjustment - allows the CPU to change the clock and the number of bits that are transmitted per clock cycle dynamically, thereby reducing power consumption. HT Assist, when enabled, utilizes 1MB in each of the two L3 caches as a probe filter, which creates a directory to track cache lines utilized by each CPU core.
Should a CPU require data held in the L2 cache of another core, the requester queries a CPU, which in turn queries the L3 cache to identify the CPU that currently maintains the cached data. It then queries the appropriate CPU, which passes the data to the requesting CPU. This methodology greatly reduces the number of query transactions that would otherwise need to occur, e.g. querying each CPU core to determine whether it hosted the data, validating it, then transferring it to the requester. Measurements have shown the utilization of HT Assist improves memory bandwidth by nearly 60%. The Opteron 6100 processor also supports AMD Virtualization™ (AMD-V™), which utilizes processor instruction extensions to facilitate the development of more efficient, secure, and robust software for system virtualization. These extensions remove the overheads associated with software-only virtualization solutions and attempt to reduce the performance gap between virtualized and non-virtualized systems. The hypervisor uses these processor extensions to intercept and emulate privileged operations in the virtualized or guest OS. To maximize power utilization and cooling efficiencies, the Opteron 6100 processor also supports: C1E Power State - a sleep state, invoked when all processor cores are idle, which turns off memory controllers and HyperTransport 3 links. AMD Cool Speed technology - reduces the frequency and voltage operating point (p-state) when a temperature limit is reached, thereby reducing peak thermal load and power utilization. This results in energy cost savings by reducing the power used to run the server and to cool the server environment. The Advanced Platform Management Link (APML) - an SMBus v2.0 compatible 2-wire interface. APML is also referred to as the sideband interface (SBI). APML is used to communicate with the Remote Management and Temperature Sensor Interfaces.
The processor is an SMBus slave; platform Baseboard Management Controllers may master the APML interface to read and write limited p-states to perform power management and RAS operations. AMD CoolCore™ Technology - reduces energy consumption by turning off unused parts of the processor. AMD Smart Fetch Technology - reduces power consumption by allowing idle cores to enter a "halt" state, causing them to draw less power during processing idle times, without compromising system performance. Independent Dynamic Core Technology - enables a variable clock frequency for each core, depending on the specific performance requirement of the applications it is supporting, which reduces power consumption. Dual Dynamic Power Management™ (DDPM™) Technology - provides an independent power supply to the cores and to the memory controller, allowing the cores and memory controller to operate at different voltages, depending on their usage. AMD PowerCap Manager - places a cap on the p-state level of a core via the BIOS, which delivers consistent, predictable power consumption by the system.
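The probe-filter directory described above can be sketched in a few lines: instead of broadcasting a snoop to every core, one directory lookup identifies the single core to query. This is a loose conceptual model, not AMD's implementation; the class and method names are hypothetical:

```python
class ProbeFilter:
    """Toy directory tracking which core's cache holds each line."""

    def __init__(self):
        self.directory = {}  # cache-line address -> owning core id
        self.queries = 0

    def record_fill(self, line, core):
        """Note that `core` just pulled `line` into its cache."""
        self.directory[line] = core

    def find_owner(self, line):
        """One directory lookup replaces a snoop broadcast to every core."""
        self.queries += 1
        return self.directory.get(line)


def snoop_broadcast_cost(num_cores):
    """Without a probe filter, every core must be queried."""
    return num_cores


pf = ProbeFilter()
pf.record_fill(0x1F80, core=5)     # core 5 caches line 0x1F80
print(pf.find_owner(0x1F80))       # 5  (one targeted query)
print(snoop_broadcast_cost(12))    # 12 queries without the filter
```

Carving the directory out of the L3 trades a little cache capacity (1MB per die here) for far fewer coherence probes, which is where the quoted bandwidth improvement comes from.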
  • {DESCRIPTION} This screen displays a topology of the processor core and uncore, illustrating the connection of the 8 cores, 24MB shared cache, and two integrated memory controllers, with bi-directional arrows pointing to the SMI links (blue) and the QPI links (yellow). {TRANSCRIPT} The Xeon 6500 / 7500 series employs the Nehalem-EX architecture. It is a single-die 45nm device, employing four, six, or eight processor cores, each with 256KB of L2 cache, and a shared 12, 18, or 24MB L3 cache, dependent on the processor. The die hosts 2 memory controllers, each employing Symmetrical Memory Interface (SMI) ports between the processor and Scalable Memory Buffers (SMBs). The ports associated with each memory controller operate in lock-step; each memory operation involves 2 data words and 2 ECC bytes. The SMB provides 2 DDR3 channels supporting 2 DIMMs each. QPI is implemented as an external interface to the PCH and to interconnect processors in an SMP environment. 4 QPI links are available. Transfer rates are 4.8, 5.86, or 6.4 GT/s, dependent upon the processor. The 6500 supports 2-way SMP, the 7500 4- or 8-way. It implements those technologies present in the Nehalem-EP.
  • {DESCRIPTION} This slide presents a bulleted list of the technologies discussed in this topic. {TRANSCRIPT} In this section we will discuss disk subsystems.
  • {DESCRIPTION} A TEST {TRANSCRIPT} The Small Computer Systems Interface (SCSI) had been the mainstay of server disk I/O interfaces for over three decades. Technological advances and methodologies made over its lifetime, in an attempt to minimize the I/O bottleneck inherent in a hard disk's mechanical nature, have been implemented throughout computer architecture. Methodologies such as caching, command queuing, LVDS, DDR, redundancy, and hot-pluggability were first implemented in disk I/O based upon SCSI. As both an internal and external parallel interface bus, SCSI designers found themselves impeded by the very constraints we discussed in the section on buses: in essence, how to ensure signaling integrity over the length of a group of wires at ever-increasing clock speeds. The solution in the external space ultimately became Fibre Channel, though SCSI maintained its presence in the internal mass storage space. In the desktop arena, SCSI was utilized only in unique applications; there seemed no real need for its hard disks' ruggedized mechanicals, its caching methodologies, and the added costs. But desktop designers, in the pursuit of faster transfer rates, were beginning to face the very same obstacles with their disk I/O interface of choice, the Advanced Technology Attachment (ATA). Their solution was a serial interface with a smaller number of wires and an increased clock rate: the Serial Advanced Technology Attachment interface (SATA). Having the opportunity of defining a new standard, features implemented by SCSI such as command queuing and hot-pluggability were included in SATA's design. Until recently, with the advent of the SATA 3.0 standard, the performance of SCSI in terms of I/O bandwidth was much greater than that of SATA, so there was resistance to the adoption of the interface in server designs.
The erroneous perception that SATA hard drives were less reliable than SCSI also contributed to the latency in its acceptance, but eventually SATA replaced SCSI as the interface/drive of choice in the server arena. SATA did not implement all of the features present with SCSI, in particular its command set, which impacted systems performance, so in 2003 Serial Attached SCSI (SAS) was introduced. It fully implements the features which were present with SCSI, adds some new ones, and provides the advantages of the serial interface. In this topic we will examine SATA and SAS and their utilization in System x servers.
  • {DESCRIPTION} This slide presents a bulleted list of the technologies discussed in this topic. {TRANSCRIPT} In this section we will discuss networking.
  • {DESCRIPTION} A TEST {TRANSCRIPT} The networking content of this course is still under development.
  • {DESCRIPTION} This is the summary slide. It contains a bullet list outlining the goals of this course module. {TRANSCRIPT} Having completed this course module you should: Have a firm understanding of the various system buses and interconnects, and the role they play in server architecture. Have a clear understanding of memory architecture, the reasoning for and need to follow certain population rules when adding memory. Be familiar with the various processors offered with IBM System x Servers. Understand the disk subsystems used in servers today.
  • {DESCRIPTION} This slide presents a bulleted list of terms used in this course module. {TRANSCRIPT} Listed here and on the following two pages are the acronyms used in this course module.
  • {DESCRIPTION} This slide presents a bulleted list of terms used in this course module. {TRANSCRIPT} This is page two of three containing the acronyms used in this course module.
  • {DESCRIPTION} This slide presents a bulleted list of terms used in this course module. {TRANSCRIPT} This is page three of three containing the acronyms used in this course module.
  • {DESCRIPTION} Displays the statement “End of Presentation” in the center of the slide. {TRANSCRIPT} Thank you. This concludes Topic 2 of IBM System x Technical Principles - Systems Technologies.

Presentation Transcript

  • IBM Systems & Technology Group Education & Sales Enablement © 2010 IBM Corporation Systems Technologies XTW01 Topic 2
  • Course Objectives Upon completion of this module you will be familiar with: >The buses found in System x servers >The memory technologies used in System x servers >The processors employed in System x servers >The disk technologies used in System x servers >The network technologies used in System x servers.
  • Agenda >* System Buses * >Memory >Processors >Disk Subsystems >Networking
  • System Buses - Introduction >What is a Bus? >A little history >Constraints on bus design >PCI >PCI Express >Interconnects >Scalable Memory Interconnects
  • Agenda >System Buses >* Memory * >Processors >Disk Subsystems >Networking
  • Memory - Introduction >How it all works >DIMM Architecture >Processor point of view >RAS >Population “Rules” >Flash
  • Agenda >System Buses >Memory >* Processors * >Disk Subsystems >Networking
  • Processors - Introduction >What’s in the box?
  • Intel® Celeron® G1100 Series Processor Processors - Intel Celeron G (1 of 2) >Two Dies One Substrate  CPU Die 32nm lithography - Two Cores - 256K L2 Cache per Core - Shared 2MB L3 Cache - QuickPath Interconnect to GPU Die  GPU Die 45nm lithography - 2 Channel DDR3 Memory Controller  2 UDIMMs per channel  NO ECC Support * - Video - PCI Express x16 lanes - DMI x4 to ChipSet * ECC functionality when paired with 3400 PCH
  • Processors - Intel Celeron G (2 of 2) Intel® Celeron® G1100 Series Processor >Intel® Virtualization Technology (VT-x) >Intel® 64 >Idle States >Enhanced Intel SpeedStep® Technology >Execute Disable Bit
  • Intel® Pentium® G6900 Series Processor Processors - Intel Pentium G (1 of 2) >Two Dies One Substrate  CPU Die 32nm lithography - Two Cores - 256K L2 Cache per Core - Shared 3MB L3 Cache - QuickPath Interconnect to GPU Die  GPU Die 45nm lithography - 2 Channel DDR3 Memory Controller 1066 MT/s  2 UDIMMs per channel  NO ECC support * - Video - PCI Express x16 lanes - DMI x4 to ChipSet * ECC functionality when paired with 3400 PCH
  • Processors - Intel Pentium G (2 of 2) Intel® Pentium® G6900 Series Processor >Intel® Virtualization Technology (VT-x) >Intel® 64 >Idle States >Enhanced Intel SpeedStep® Technology >Execute Disable Bit
  • Intel® Core i3 500 Series Processor Processors - Intel Core i3 (1 of 2) >Two Dies One Substrate  CPU Die 32nm lithography - Two Cores - 256K L2 Cache per Core - Shared 4MB L3 Cache - QuickPath Interconnect to GPU Die  GPU Die 45nm lithography - 2 Channel DDR3 Memory Controller 1333 MT/s  2 UDIMMs per channel  No ECC support* - Video - PCI Express x16 lanes - DMI x4 to ChipSet * ECC functionality when paired with 3400 PCH
  • Processors - Intel Core i3 (2 of 2) Intel® Core i3 500 Series Processor >Intel® Hyper-Threading Technology >Intel® Virtualization Technology (VT-x) >Intel® 64 >Idle States >Enhanced Intel SpeedStep® Technology >Execute Disable Bit
  • Intel® Xeon® 3400 Series Processor Processors - Intel Xeon 3000 Series (1 of 2) >45nm lithography >Four Cores >256K L2 Cache per Core >Shared 8MB L3 Cache >2 Channel DDR3 Memory Controller  3 RDIMMs or 2 UDIMMs per channel  ECC support >PCI Express x16 lanes >DMI x4 to ChipSet
  • Processors - Intel Xeon 3000 Series (2 of 2) Intel® Xeon® 3400 Series Processor >Intel® Turbo Boost Technology >Intel® Hyper-Threading Technology >Intel® Virtualization Technology (VT-x) >Intel® Virtualization Technology for Directed I/O (VT-d) >Intel® Trusted Execution Technology >Intel® 64 >Idle States >Enhanced Intel SpeedStep® Technology >Intel® Demand Based Switching >Execute Disable Bit
  • Processors - Intel Xeon 5000 Series >5500 Nehalem-EP Microarchitecture 45nm >5500 Dual or Quad Core Processor on die >5600 Westmere-EP Microarchitecture 32nm >5600 Quad or Six Core Processor on die >Integrated three channel DDR3 memory controller  800 / 1066 / 1333 MT/s >Intel QuickPath Technology >Intel Turbo Boost Technology >Intel Hyper-Threading Technology >Intel Intelligent Power Technology >Three cache levels:  32 KB of L1 data cache per core  32 KB of L1 instruction cache per core  256 KB L2 cache per core  5500 Shared 4 or 8MB L3 cache  5600 Shared 12MB L3 cache
  • Processors - AMD Opteron 6100 Series AMD Opteron 6100 Series Key Features: >Eight or twelve cores / 2 - 45nm dies / 1 substrate >Speeds to 2.8GHz >New Socket (G34) 1974 pins / lands >Balanced SmartCache  64KB ECC protected L1 data cache per core  64KB parity protected L1 Instruction cache per core  512KB ECC protected L2 cache per core  2 - 6MB ECC protected L3 cache shared between cores >Integrated memory controller  Four memory channels  Supports DDR3 ECC SDRAM at speeds up to 1333 MT/s (667 MHz)  Supports up to 12 RDIMMS / 8 UDIMMS  Supports Memory Sparing >Four 6.4 GT/s HyperTransport 3.1 links  HT Assist >AMD Virtualization™ (AMD-V™) >AMD-P Suite >C1E Power State >AMD Cool Speed technology >Advanced Platform Management Link (APML) >AMD CoolCore™ >AMD Smart Fetch >Independent Dynamic Core >Dual Dynamic Power Management™ (DDPM™) >AMD PowerCap Manager
  • Processors - Intel Xeon 6500 / 7500 Series >Nehalem EX 45nm architecture >Four, Six or Eight Core Processors on die >Two Integrated memory controllers  Two SMI ports  up to 1066 MT/s >Intel QuickPath Technology  4 links  4.8, 5.86, or 6.4GT/s >Intel Turbo Boost Technology >Intel Hyper-Threading Technology >Intel Intelligent Power Technology >Three cache levels:  32 KB of L1 data cache per core  32 KB of L1 instruction cache per core  256 KB L2 cache per core  Shared 12, 18, or 24MB L3 cache
  • Agenda >System Buses >Memory >Processors >* Disk Subsystems * >Networking
  • Disk Subsystems - Introduction >SATA >SAS >RAID
  • Agenda >System Buses >Memory >Processors >Disk Subsystems >* Networking *
  • Networking - Introduction >Content under development
  • Summary >Having completed this course module you should:  Have a firm understanding of the various system buses and interconnects and the role they play in server architecture.  Have a clear understanding of memory architecture, the reasoning for and need to follow certain population rules when adding memory.  Be familiar with the various processors offered with IBM System x Servers.  Understand the disk subsystems used in servers today.
  • Glossary (1 of 3) >AMD Advanced Micro Devices >ASIC Application Specific Integrated Circuit >AT Advanced Technology >ATA Advanced Technology Attachment >BA Bank Address >BIOS Basic Input Output System >BL Bit Line >CAD Command Address Data >CAS Column Address Strobe >CKE Clock Enable >CL CAS Latency >CPU Central Processing Unit >CS Chip Select >DDR Double Data Rate >DIMM Dual Inline Memory Module >DMA Direct Memory Access >DQ Data Q >DQM Data Q Mask >DRAM Dynamic Random Access Memory >ECC Error Correcting Code >eX5 Enhanced System X Architecture 5 >EXA Enhanced System X Architecture >HBA Host Bus Adapter >HI-Z High Impedance
  • Glossary (2 of 3) > I/O Input / Output > I2C Inter Integrated Circuit bus > IBM International Business Machines > LPC Low Pin Count > LUN Logical Unit Number > MAX5 Memory > MCA Micro Channel Architecture > MITS Micro Instrumentation and Telemetry Systems > MTBF Mean Time Between Failure > NAND Not AND > NEAT New Enhanced AT > NOR Not OR > NOT > PC Personal Computer > PCB Printed Circuit Board > PCI Peripheral Component Interconnect > PCI-X Peripheral Component Interconnect Extended > PHY Physical > PS/2 Personal System 2 > QPI QuickPath Interconnect > RAID Redundant Array of Inexpensive Disks > RAM Random Access Memory > RAS Row Address Strobe > RDIMM Registered Dual Inline Memory Module
  • Glossary (3 of 3) > RPM Revolutions Per Minute > SAS Serial Attached SCSI > SATA Serial Advanced Technology Attachment > SCSI Small Computer Systems Interface > SDRAM Synchronous Dynamic Random Access Memory > SEMP Storage Enclosure Management Processor > SEP Storage Enclosure Processor > SMB Scalable Memory Buffer > SMBus Systems Management Bus > SMI Symmetrical Memory Interface > SMP Symmetrical Multi Processor > SMP Serial Management Protocol > SPD Serial Presence Detect > SSP Serial SCSI Protocol > STP Serial ATA Tunneling Protocol > UDIMM Unbuffered Dual Inline Memory Module > UEFI Unified Extensible Firmware Interface > VLSI Very Large Scale Integration > WL Word Line > XOR Exclusive OR
  • End of Presentation