Intel Harpertown ATCA Blade PRD Rev 1.4
A document I was authoring in 2007 covering my system design of an ATCA server blade. It was not quite complete when a company merger resulted in the project's cancellation.

Document Transcript

Intel Dual Harpertown Quad-Core
ATCA Server Node Board
Product Requirements Document
Revision 1.4
September 18, 2007

Sample systems engineering document authored by Jerry Viviano. I was given permission by Solectron to distribute this document after the project was cancelled. The cancellation occurred when Flextronics acquired Solectron; Flextronics had only recently cancelled its own ATCA blade and chassis developments, having decided that it did not want to continue in that line of work. I was the chief architect of this blade and the primary author of this document, though parts of it were contributed by a team of overseas Solectron engineers to whom I was providing technical guidance. This document can be freely distributed within the company to which I sent it for the purpose of assessing my skills in systems engineering. It is not to be distributed outside of that company.

A few things should be kept in mind when reviewing this document:
1) The document was in the midst of its genesis when its writing was discontinued, so some pieces are not quite complete or polished.
2) While I authored almost all of the document, a few parts were written by off-shore engineers I was working with, so some of the grammar is not quite up to snuff. Most of this content is in section 5.
3) Since the document is now disembodied from the Solectron server farm, the external links to files will naturally not work.

Jerry Viviano
REQUIRED APPROVALS
Michael Cowan - Hardware Design Manager
Sundra Raj - BDC Site Head
Barry Hutt - Senior Director, Compute and Storage Segment
PradipKumar Mandal - Program Manager
Chandrashekar DR - Engineering Manager, Hardware
Shrishail Halbhavi - Project Lead, Software
Vedantem Prasanth - Program Lead

Raleigh/Durham Team Members
Department: Hardware Design Engineering
• Michael Cowan, michaelcowan@solectron.com
• Frank Han, frankhan@solectron.com
• Jon McChristian, jonmcchristian@solectron.com
• Jerry Viviano, jerryviviano@solectron.com
• David Warren, davidwarren@solectron.com
Table of Contents
Intel Dual Harpertown Quad-Core ..... 1
ATCA Server Node Board ..... 1
Product Requirements Document ..... 1
Revision 1.4 ..... 1
1 Introduction ..... 10
1.1 Protection of Vendor Information ..... 10
1.2 Scope ..... 10
1.3 Document Conventions/Navigating ..... 11
1.3.1 Hyperlinks ..... 11
1.3.2 Section references ..... 11
1.3.3 Requirements Notation ..... 11
1.4 Terms and Abbreviations ..... 12
1.5 Authors ..... 14
1.6 Document History ..... 15
1.7 References ..... 18
1.8 Known Shortcomings ..... 18
2 Architectural Overview ..... 19
2.1 Overview ..... 19
2.2 General ATCA Conformance ..... 19
2.3 Summary Hardware Feature List ..... 20
2.4 Block Diagram ..... 21
3 Main Hardware Functional Elements ..... 22
3.1 Main Processor Subsystem ..... 22
3.2 North Bridge - Memory Controller Hub ..... 23
3.3 Memory Subsystem ..... 23
3.4 South Bridge - ICH9/ICH9R ..... 25
3.5 Super I/O Device ..... 26
3.6 Real Time Clock ..... 26
3.7 BIOS Flash Configuration ..... 26
3.8 Base Channel Interface ..... 26
3.9 Fabric Interface ..... 27
3.10 Processor Thermal/Power Management ..... 28
3.11 Power Supply Systems ..... 28
3.11.1 Input Voltage Range ..... 29
3.11.2 Power Circuitry Topology ..... 29
3.11.3 Power Sequencing ..... 31
3.11.4 Voltage Rail Current Requirements ..... 32
3.11.5 Power Consumption ..... 33
3.12 Board Health Monitoring Systems ..... 34
3.12.1 Thermal Sensors ..... 34
3.12.2 Power Supply Sensors and ADC Channel Assignments ..... 34
3.12.3 Platform Environmental Control Interface (PECI) ..... 35
3.12.4 Payload Watch Dog Timer ..... 35
3.12.5 IPMC Watch Dog Timer ..... 35
3.13 Intelligent Platform Management Controller Requirements ..... 35
3.14 Debug Capabilities/Support ..... 36
3.14.1 Payload Processor Physical Serial Ports ..... 36
3.14.2 IPMC serial port ..... 36
3.14.3 JTAG interface ..... 36
3.14.4 Intel XDP ..... 37
3.14.5 Payload Reset Button ..... 37
3.14.6 IPMC Reset Button ..... 37
3.14.7 Chassisless Debug/Development Support ..... 38
3.14.8 Port 80 Support ..... 38
3.15 Component Placement ..... 38
3.16 Front Panel ..... 39
3.16.1 Front Panel Connectors ..... 39
3.16.2 Front Panel LEDs ..... 39
3.16.3 Barcode Product ID Label ..... 40
3.17 SATA Hard Drive Option ..... 41
3.18 Cross Interrupt Lines ..... 42
3.19 Back plane Connections ..... 43
3.19.1 Zone 1 ..... 43
3.19.2 Zone 2 ..... 43
3.19.3 Zone 3 ..... 43
4 Main Software Functional Elements ..... 44
4.1 Operating Systems ..... 44
4.2 IPMI ..... 44
4.3 Open IPMI ..... 44
4.4 Watch Dog Timer ..... 45
4.5 Payload Remote/Local Boot ..... 45
4.6 Firmware Upgradeability ..... 45
4.6.1 IPMC Firmware Upgrade ..... 45
4.6.2 Payload BIOS Firmware Upgrade ..... 45
4.7 OS-Controlled Hardware Power Management ..... 46
4.8 Serial Over LAN (SOL) ..... 46
4.9 Virtualization Support ..... 46
4.10 BIOS ..... 47
5 Detailed Software Requirements ..... 48
5.1 General Description ..... 48
5.1.1 System Perspective ..... 48
5.1.2 Assumptions on Availability of Various Development Tools/Components ..... 50
5.1.3 Dependencies and Risks ..... 50
5.1.4 Required Development Environment and Tools ..... 51
5.2 External Interface Specifications ..... 51
5.2.1 User Interfaces ..... 51
5.3 Functional Requirements ..... 52
5.3.1 BIOS Functional Requirements ..... 52
5.3.2 Linux Functional Requirements ..... 57
5.3.3 General Linux Requirements ..... 57
5.4 IPMI Firmware Requirements ..... 59
5.5 General ATCA Specification Software Conformance Requirements ..... 63
5.6 Deliverables ..... 64
6 Performance Requirements ..... 65
7 System Address Map ..... 66
8 Hardware Device Addresses ..... 68
9 Reliability Requirements ..... 69
9.1 Blade Insertions ..... 69
9.2 CPU Insertions ..... 69
9.3 DIMM Insertions ..... 69
9.4 MTBF ..... 69
10 Optional ATCA Subsystems ..... 70
10.1 Advanced Mezzanine Cards (AMC) ..... 70
10.2 Rear Transition Modules (RTM) ..... 70
11 Mechanical Requirements ..... 71
11.1 General ..... 71
11.2 Front Panel ..... 71
11.3 Improved Alignment Keying ..... 71
11.4 Front Board Cover ..... 71
12 DFx Requirements ..... 72
13 Regulatory Compliance Requirements ..... 73
13.1 RoHS Requirements ..... 74
14 Risks ..... 75
14.1 Scarcity of Marketing Input ..... 75
14.2 San Clemente Schedule ..... 75
14.3 Thermal Issues ..... 75
14.4 Lack of Intel Support ..... 75
14.5 Lack Of A Budget ..... 75
14.6 Late Engagement with BIOS Vendor ..... 76
15 Bill Of Materials ..... 77
16 Requirements Conformance Matrix ..... 78

Tables
Table 1: Summary Hardware Feature List ..... 20
Table 2: Supported Processors ..... 22
Table 3: Differences Between ICH9 and ICH9R ..... 25
Table 4: Power Sequencing Timing Parameters ..... 31
Table 5: Voltage Rail Current Requirements ..... 32
Table 6: Power Consumption ..... 33
Table 7: ADC-Monitored Power Supply Signals ..... 35
Table 8: Front Panel LED Definitions ..... 40
Table 9: IPMI Action Handler Actions Per Event ..... 58
Table 10: Required Regulatory Standard Compliances ..... 74
Table 11: Intel Blade Draft Bill Of Material (BOM) ..... 77

Figures
Figure 1: Navigation Toolbar ..... 11
Figure 2: System Hardware Block Diagram ..... 21
Figure 3: Memory/Memory Controller Interface Topology ..... 24
Figure 4: Allowable Memory Configurations ..... 25
Figure 5: 8-Node Dual Star Fabric Topology ..... 28
Figure 6: Power Circuitry Topology ..... 30
Figure 7: Power Circuitry Sequencing ..... 31
Figure 8: Flexible JTAG Chaining Topology ..... 37
Figure 9: Port 80 Debug Support ..... 38
Figure 10: Initial Component Placement Proposal ..... 39
Figure 11: Front Panel Elements Placement ..... 41
Figure 12: SATA Connectors ..... 42
Figure 13: Software Architecture Block Diagram ..... 49
Figure 14: System Initialization Block Diagram ..... 53
Figure 15: San Clemente System Address Map ..... 67

[Footer repeated on each page: Document Type: Product Requirement Document; Document Identification: TBD; Date: Sept 18, 2007; Project: Intel Harpertown Blade; Author: Jerry Viviano]
Intel Harpertown ATCA Blade Product Requirements Document

1 Introduction
The purpose of this document is to define the general features and requirements to be implemented in the Intel Dual Harpertown quad-core ATCA Processing Server Node blade project, hereafter referred to as the "Intel Harpertown blade"; which critical hardware and software elements will be used to implement the requirements; how they will be interconnected; and, to a degree, what functions they will perform and how. In the interest of brevity, the term 'ATCA' will generally be used throughout this document to represent the more complete "AdvancedTCA" or "PICMG ATCA".

The Intel Harpertown blade will function as a compute server node blade in an AdvancedTCA industry-standard chassis, per the specifications referenced in § 1.7. There are numerous backplane protocols to choose from while still remaining within the bounds of compliance with the ATCA definition. In particular, the Intel Harpertown blade will utilize the Ethernet protocol across the backplane fabric, as defined in § 1.7, as opposed to Serial RapidIO, PCI Express, or any other ATCA-defined backplane protocol.

It should be noted that the goal of the Intel Harpertown blade project is not specifically to produce a salable item. Rather, it is to produce a reference design which can be easily tailored to Solectron's potential customers' needs. As such, in some respects the design could be considered a bit of 'overkill'. For example, the current plan is for the design to support up to 48 GBytes of DDR2 memory. If a customer would prefer to save costs by limiting their specific design variant to only 16 or 32 GBytes, the design can easily be modified to that reduced requirement.

1.1 Protection of Vendor Information
Under NDA guidelines agreed to with Intel, Solectron is bound to protect the usage of certain code names used within this document.
The following terms:
• Harpertown
• Bensley
• Cranberry Lake
• San Clemente
are, by our agreements with Intel, strictly forbidden from being discussed or disseminated outside of Solectron, or even within Solectron with individuals who do not have a legitimate 'need to know'. Intel considers unauthorized dissemination of these terms to be dangerous to its competitive advantage. Inappropriate use of these terms by Solectron could result in our being cut off from critical design documents and support pertaining to the advance information associated with the devices on which the Intel Harpertown blade is based. It is to Solectron's competitive advantage to continue receiving this type of support from Intel.

1.2 Scope
This product requirements document (PRD) defines the electrical, mechanical, environmental, and functional specification for a high-performance dual Intel Harpertown processor based single board computer built according to the Advanced Telecom Computing Architecture (ATCA) specification. Architecturally, this document fits between the marketing requirements document (MRD), which calls out the general large-scale functional requirements of the product, and the hardware design specification and software design specification (HDS & SDS), each of which explains the detailed implementation plan and requirements for its respective part of the blade. The PRD fits between the MRD and HDS/SDS levels of planning chronologically as well. That is, the PRD takes the initial general hardware and software requirements from the finished MRD and breaks them down into more specific approaches to be implemented in hardware and software. After the PRD is completed, it is handed to the software and hardware teams as input into their respective detailed design documents, the SDS and HDS.

1.3 Document Conventions/Navigating
Navigation of this document may be made easier by viewing it in the Document Map mode, which yields an outline of the section headers along the left-hand edge of the viewing window. The reader can click on a section header to immediately jump to that section. The control for enabling/disabling Document Map mode is at the first level of the View menu.

1.3.1 Hyperlinks
Items highlighted in blue are hyperlinks, either to other parts of this document or to actual outside documents. Due to the limitations of the Solectron networking architecture, hyperlinks to external documents may not work from locations outside of the RTP facility. When a hyperlink is used to jump to another part of the document, a toolbar as shown in Figure 1 will appear. This can be used to immediately jump back to the hyperlink source location. This can be tried by simply clicking on the above reference to Figure 1.

Figure 1: Navigation Toolbar

1.3.2 Section references
The § symbol denotes a section. It is used to refer to sections in this document as well as sections in other documents.

1.3.3 Requirements Notation
Definite requirements specified in this document will be prefaced with a short category prefix, followed by an underscore, followed by a number, and then a colon. Both integer and decimal numbers are allowed, similar to the Dewey Decimal system. Two example requirements are:

ARC_1: The Intel Harpertown blade will be an intelligent compute server node board utilizing dual Intel quad-core Harpertown 45 nm processors for a total of 8 processing cores.

MEM_3: The system will support from 1 to 6 DIMMs, with a maximum of 3 DIMMs per channel, subject to the San Clemente limitation of 6 ranks per channel.
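The notation above is mechanical enough to be checked automatically. The sketch below (a hypothetical helper, not part of the PRD's actual toolchain) validates requirement IDs of the form PREFIX_N or PREFIX_N.M, and also checks a candidate DIMM population against the MEM_3 limits; the default of two ranks per DIMM is an illustrative assumption.

```python
import re

# Requirement IDs: uppercase category prefix, underscore, then an integer
# or Dewey-style decimal number (e.g. ARC_1, MEM_3, HLTH_2.5).
REQ_ID = re.compile(r"^[A-Z]+_[0-9]+(\.[0-9]+)?$")

def is_valid_req_id(req_id: str) -> bool:
    """Return True if req_id follows the PRD's requirements notation."""
    return REQ_ID.fullmatch(req_id) is not None

def mem3_compliant(dimms_per_channel, ranks_per_dimm=2):
    """Check a DIMM population against MEM_3: 1 to 6 DIMMs in total,
    at most 3 DIMMs per channel, and (the San Clemente limitation)
    at most 6 ranks per channel."""
    total = sum(dimms_per_channel)
    if not 1 <= total <= 6:
        return False
    for n in dimms_per_channel:
        if n > 3 or n * ranks_per_dimm > 6:
            return False
    return True

# Example: three dual-rank DIMMs on each of two channels is the maximum
# allowed configuration (6 DIMMs, 6 ranks per channel).
```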
1.4 Terms and Abbreviations

§ - Section (in this or another document), as in § 3.1.2
[N] - External document reference 'N'
ACPI - Advanced Configuration and Power Interface
API - Application Programmer's Interface
APIC - Advanced Programmable Interrupt Controller
AdvancedTCA - Advanced Telecom Computing Architecture
AMC - Advanced Mezzanine Card
AMI - American Megatrends, Inc.
ATCA - Advanced Telecom Computing Architecture
BIOS - Basic Input Output System
BOOTP - Bootstrap Protocol
BSP - Boot Strap Processor
CRB - Configuration Reference Board
DDR - Double Data Rate
DHCP - Dynamic Host Configuration Protocol
DIMM - Dual In-line Memory Module
DMTF - Distributed Management Task Force
FSB - Front Side Bus
Gb - Gigabit(s)
GB - Gigabyte(s)
GbE - Gigabit Ethernet
Gbps - Gigabit(s) Per Second
GBps - Gigabyte(s) Per Second
HDS - Hardware Design Specification
HPM - Hardware Platform Management
I2C - Inter-Integrated Circuit (bus)
ICH - I/O Controller Hub, AKA south bridge
IPI - Inter-processor Interrupt
IPMB - Intelligent Platform Management Bus
IPMC - Intelligent Platform Management Controller
IPMI - Intelligent Platform Management Interface
KCS - Keyboard Controller Style. KCS runs over the LPC bus between the IPMC and the payload; it is the basic protocol used to communicate back and forth between the IPMC and the payload.
LFM - Linear Feet per Minute
LPC - Low Pin Count
LSP - Linux Support Package
MCH - Memory Controller Hub, AKA north bridge
MP - Multi Processor
MRD - Marketing Requirements Document
MTBF - Mean Time Between Failures
MTD - Memory Technology Device
NBP - Network Bootstrap Program
NEBS - Network Equipment Building System
OS - Operating System
Payload - In compute server domains, the board's main compute system. In this case, the payload is the combination of the Intel Harpertown processors, the MCH, and the ICH.
PCI - Peripheral Component Interconnect
PCI-E - PCI Express
PCISIG - PCI Special Interest Group
PECI - Platform Environment Control Interface. A proprietary one-wire bus interface that provides a communication channel between the Intel processor and external thermal monitoring devices, for use in fan speed control. PECI communicates readings from the processor's digital thermometer and replaces the thermal diode available in previous processors.
PICMG - PCI Industrial Computer Manufacturers Group
PMC - PCI Mezzanine Card
POST - Power-On Self-Test
PPS - Pigeon Point Systems
PXE - Preboot eXecution Environment
RDIMM - Registered Dual In-line Memory Module
RPM - Red Hat Package Manager
RTC - Real Time Clock
RTM - Rear Transition Module
SDR - Sensor Data Record
SDS - Software Design Specification
ShMC - Shelf Management Controller
SIPI - Startup IPI
SMI - System Management Interrupt
SMP - Symmetric Multi Processor
SPD - Serial Presence Detect
TBD - To Be Decided
TDP - Thermal Design Power (also Total Dissipated Power)
TFTP - Trivial File Transfer Protocol
VID - Voltage ID
VLP - Very Low Profile
VPD - Vital Product Data
VRM - Voltage Regulator Module
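As the PECI entry notes, the values delivered over PECI come from the processor's digital thermometer rather than a thermal diode. On processors of this generation, those readings are conventionally reported as a negative offset below the maximum junction temperature (Tjmax), so a fan-control or board-health loop must add the offset to a per-SKU Tjmax to estimate degrees Celsius. A minimal sketch of that conversion follows; the Tjmax value in the example is an illustrative assumption, not a figure from this PRD.

```python
def peci_to_celsius(peci_offset: float, tjmax_c: float) -> float:
    """Convert a PECI reading (a zero-or-negative offset below Tjmax,
    in degrees C) to an estimated absolute die temperature."""
    if peci_offset > 0:
        raise ValueError("PECI readings are zero or negative offsets from Tjmax")
    return tjmax_c + peci_offset

def thermal_margin(peci_offset: float) -> float:
    """Remaining headroom below the thermal control point, in degrees C."""
    return -peci_offset

# Example: with an assumed Tjmax of 85 C, a PECI reading of -20.5
# corresponds to roughly 64.5 C on the die, with 20.5 C of margin.
```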
1.5 Authors

Department: Design and Engineering
• Jerry Viviano (Prime Editor), Tel. 919-998-4801, jerryviviano@solectron.com
• Shreekanth Hiremath, Tel. 0091 80 41151798, shrekanthHiremath@solectron.com
• Jayanta Nath (Contributor), Tel. 0091 80 41151798, JayantaKumar@solectron.com
• Gopal Jahagirdar (Contributor), Tel. 0091 80 41151798, GopalJahagirdar@solectton.com
• Shrishail Halbhavi (Contributor), Tel. 0091 80 41151798, shrishailhalbhavi@solectron.com
1.6 Document History

Rev 1.0 (July 25, 2007, J. Viviano): First official revision based on AA.1 and group feedback.
• Numbered requirements.
• Added DFX section.
• Expanded the Debug ports section.
• Specified exact processor model numbers.
• Updated block diagram.
• Resolved issues on redundant boot flashes.
• Changed from 8 to 6 DIMMs.
• Maximum DIMM size changed from 8 to 4 GBytes.
• Added Super I/O info.
• Cleared up issues with Watch Dog Timers.
• Added SATA hard drive requirement.
• Removed 1 Gbps requirement.
• Specified Red Hat kernel revision.

Rev 1.1 (Aug 9, 2007, J. Viviano):
• Filled in sections on IPMC and BIOS firmware upgrade.
• Corrected Base interface Zone specifier to Zone 2.
• Added Port 80 debug support.
• Additional illustration on allowable memory configurations.
• Added Serial Over LAN content.
• Clarified maximum memory as 48 GBytes, when/if 8 GByte DIMMs arrive.
• Added 'Known Compliancy Issues' column to regulatory requirements document list.
• Additional OS-Controlled HW power management requirement: ACPI.
• Power supply requirements added.
• SOL requirements updated.
• Added RoHS reference & requirement.
• Removed reference to using PCI-E as update channel.
• Updated approvers & authors lists.
• Updated scope to reference HDS & SRS.
• Added PRD Requirements Conformance Matrix.
• Added Risks section.
• Added Virtualization support section.
• Component placement map added.
• BOM updated.
• Tehuti references removed; changed to Broadcom 57710 and Intel 82598.

Rev 1.2 (Aug 24, 2007, J. Viviano):
• Renamed approvers' titles to those in the Roles and Responsibilities document.
• Updated block diagram.
• Minor editorial changes.
• Added risks: Lack Of A Budget & Late Engagement with BIOS Vendor.
• Detailed PWR_4 requirement to specify upper limit at 230 Watts.
• Significantly reduced voltage and current monitoring requirements.
• Added BIOS requirements.
• Detailed BX, BX4, KX, KX4 requirements.
• Consolidated SATA option requirement into SATA_1, removing DBG_9.
• Clarified wording.
• Reduced footer size.
• Added reasons for not requiring RTM inclusion.
• Added document conventions section.
• Clarified 50 Watt / 80 Watt TDP text.
• Added JTAG chaining topology.
• Added dual-star requirement.
• Added DIMMs/sockets gold surfaces.
• Changed from VMware Infrastructure 3 to VMware Server virtualization software.
• More details on the front panel LEDs.
• Added table of all known possible processors.
• Discussed San Clemente and ICH9R internal temperature sensors.
• Added text on front panel LED functionality and colors.
• Added mechanical drawing of front panel element placements.
• Enhanced BOM format to segregate costs of base board, memory, processor section, IPMC, and debug sections.
• Removed Logic Analyzer interface requirement.
• Separated JTAG chain into separate payload and IPMC sections.

Rev 1.3 (Sept 14, 2007, J. Viviano):
• Changed block diagram to Intel 82598 instead of Tehuti.
• Changed block diagram to have only one optional SATA drive instead of two.
• Changed block diagram to have RTC in ICH9 instead of Super I/O.
• Updated block diagram to include new 1600 MT/s FSB Harpertown variants.
• Cleaned up wording on HLTH_9 payload WDT.
• Cleaned up wording of IPMC_2, eliminating bootloader redundancy.
• Cleaned up FWUG_ requirements, eliminating bootloader failsafe statement.
• Upgraded block diagram to Intel 82598, RTC in ICH9R.
• Noted that the cross-interrupt lines will be separate from the IPMC-to-Payload SMI.
• Added SATA_2 & SATA_3 requirements.
• Added SATA connector type specifications.
• Changed firmware upgrade to waive bootloader upgrade failsafe requirement.
• Changed card insertion/extraction cycles to 250 minimum.
• Added justification of 24 GByte memory load, Equation 1.
• Added PTPM_4: dynamically adjusted processor speeds.
• Added suggestion that all serial ports be accessible through modified faceplate.
• Removed references to BCM57710.
• Added significant content to the performance requirements section.
• Changed MontaVista Linux to Red Hat Linux as OS.
• Clarification added on Front Side Bus speed control.
• Super I/O changed from 87427 to 83627.
• Merged in Software Requirements Document.
• Added requirement HLTH_2.5, which specifies that the ICH9/ICH9R internal temperature sensor will be monitored.
• Added description of PECI information flow.
• Added a section describing the ICH9/ICH9R south bridge.
• Added requirement SB_1, which specifies that the system will accommodate both the ICH9 and ICH9R south bridge devices.
• Added power circuitry topology map and power sequencing timing info.
• Rewrote section on voltage and current monitoring.
• Consolidated all power system information and requirements into a single section.
• Added table of voltage supply rails with voltage tolerances and max currents.
• Updated virtualization requirements to one OS instance per core minimum.
• Removed references to VMware Server in detailed software requirements.
• Changed thermal sensor connection to IPMC to exclusively I2C.
• Updated the bill of materials.

Rev 1.4 (Sept 18, 2007, J. Viviano):
• Fixed the accidental omission of the SRS section 4.3, IPMI Functional Requirements.
1.7 References

[1] PICMG 3.0 Revision 2.0, AdvancedTCA Base Specification
[2] PICMG 3.1 Revision 1.0, Specification for Ethernet/Fibre Channel for AdvancedTCA Systems
[3] PICMG 3.0 Short Form Specification
[4] Voltage Regulator Module (VRM) and Enterprise Voltage Regulator-Down (EVRD) 11.0
[5] San Clemente Memory Controller Hub Chipset EDS, Rev 1.2
[6] Intelligent Platform Management Interface Specification, Second Generation, v2.0, Feb 12, 2004
[7] Intelligent Platform Management Interface Second Generation Specification v2.0, revision 1.0, and IPMI Specification v1.5, revision 1.1, Addendum Document Revision 3, Feb 15, 2006
[8] HPM.1 Hardware Platform Management IPM Controller Firmware Upgrade Specification
[9] Debug Port Design Guide for UP/DP Systems
[10] Solectron CORSDC-10-100059, DFX Design Guidelines for Rigid Printed Boards and Assemblies
[11] Solectron CORSDC-10-100110, Solectron Global Testability Design Guidelines
[12] Intel-based ATCA Compute Node Marketing Requirements Document (MRD)
[13] Advanced Configuration and Power Interface Specification
[14] Intel ATCA MRD, 16Jul07, Rev 1 (2).xls
[15] Intel Multiprocessor Specification
[16] Intel Software Developer's Manual (Volumes 3A & 3B)
[17] Intelligent Platform Management Interface Specification v1.5
[18] Intelligent Platform Management Bus Communications Protocol Specification v1.0
[19] IPMB Address Allocation Specification v1
[20] Platform Management FRU Information Storage Definition v1.0
[21] VMware "Virtual Machine Guide" (VMware Server 1.0)
[22] Harpertown Processor Electrical, Mechanical, and Thermal Specification (EMTS), Rev 1.25
[23] San Clemente MCH Chipset External Design Specification (EDS) Addendum, Rev 1.2
[24] Intel I/O Controller Hub 9 (ICH9) Family Datasheet, Intel Document Number 316972-001
[25] Directive 2002/95/EC of the European Parliament and of the Council of 27 January 2003 on the restriction of the use of certain hazardous substances in electrical and electronic equipment (RoHS)

1.8 Known Shortcomings

Section: 13
Topic: Reliability requirements document unresolved.
Plan: Not critical at this time. An MTBF is specified in this document in § 9.4.

Topic: Virtualization requirements still not defined sufficiently.
Plan: Team will research this further and will come to a better understanding as to what the requirement really is.
2 Architectural Overview

2.1 Overview

ARC_1: The Intel Harpertown blade will be an intelligent compute server node board utilizing dual Intel quad-core Harpertown 45 nm processors for a total of 8 processing cores.

ARC_2: The processor and chipset complement will be based on the Intel Cranberry Lake architecture, tailored to high-density compute-server attributes.

ARC_3: The board will be compliant with the PICMG 3.0 ATCA (Advanced Telecom Computing Architecture) Rev 2.0 standard [1], as well as PICMG 3.1 Rev 1.0, Ethernet/Fibre Channel for AdvancedTCA Systems [2].

The Intel Harpertown blade will be a single-slot, single-board compute server that offers a powerful control plane processing complex, dual Gigabit Ethernet ports for the base interface, and dual 10 or 1 Gigabit Ethernet ports for the fabric interface. Other critical hardware components comprising the heart of the system include the Intel San Clemente Memory Controller Hub (MCH), a.k.a. north bridge; the ICH9/ICH9R I/O Controller Hub (ICH), a.k.a. south bridge; the Intel 82598 dual-XAUI fabric interface; and the Renesas 2166 serving as the Intelligent Platform Management Controller (IPMC). The initial release of the board will support Red Hat Linux, but is expected to eventually offer several operating systems, including variants of Windows, UNIX, and Linux.

ARC_4: The Intel Harpertown blade will provide system management capabilities and will be hot-swap capable per the ATCA specification.

The power and flexibility of the design make it ideally suited for the telecom and datacom markets.

2.2 General ATCA Conformance

ARC_5: The Intel Harpertown blade will be conformant in all respects to the PICMG 3.0 ATCA base specification [1] and the PICMG 3.1 10 Gbps Ethernet backplane specification [2]. These documents specify the minimum electrical, mechanical, thermal, and functional requirements of any system claiming to be ATCA compliant, specifically one utilizing Ethernet as the backplane fabric interface.
Since the PICMG ATCA specs cover these items in great detail, this document will generally only repeat requirements from those documents for clarity when necessary.

The ATCA spec allows for a variety of backplane topologies, ranging from the simplest allowable form, the dual star, to the most comprehensive form, the full mesh.

ARC_6: The Intel Harpertown blade will be tailored to be a compute node in a dual-star configuration, requiring only two backplane fabric interconnections, one to each of the two hubs. As such, it will not be expected to take full advantage of a dual-dual-star configuration or a full-mesh configuration.
2.3 Summary Hardware Feature List

Host CPUs – Two quad-core Intel Harpertown Xeon processors:
- Quad-core, 64-bit Xeon processors; 12 MByte L2 cache per processor.
- Intel PECI thermal monitoring system.
- Enhanced Intel SpeedStep Technology.
- Front side bus clock of 333/400 MHz, quad-pumped to 1333/1600 MTps (10.6 GBps at 1333 MTps).
- 80 Watt TDP board design. See processor list in § 3.1.

Chipset – Intel 5100 'San Clemente' Memory Controller Hub (MCH), north bridge:
- Dual independent processor interfaces with speeds of up to 1333 MTps, 10.6 GBps per processor interface.
- Four x4 PCI Express interfaces configurable into various combinations of x4, x8, and x16, 250 MBps per lane.
- Memory interface: 144-bit, 266/333 MHz (533/667 MTps), synchronous registered DDR2 controller interface with ECC.
- One Serial Peripheral Interconnect (SPI) port.
- Three SMBuses: Serial Presence Detect (master), Config (slave), and GPIO (master).

South bridge – Intel ICH9 or ICH9R I/O Controller Hub:
- 2 GBps Enterprise South Bridge Interface (ESI) to the 5100.
- Six x1 PCI-E interfaces for general-purpose use.
- One internal 10/100/1000BASE-T MAC.
- One Low Pin Count (LPC) interface used for communication with the IPMC.

Memory:
- Up to 48 GBytes of 533/667 MTps DDR2 ECC memory. Up to 6 DIMMs split into 2 channels.

Boot flash:
- Two 2 MB flash memories implemented as primary and secondary boot.

Base & Fabric Interface:
- Two 1000BASE-T connections to the ATCA base interface via magnetics.
- Two 10 Gbps Ethernet XAUI ports connected to the ATCA backplane fabric interface. These ports are also 1 Gbps capable for compatibility with a vast array of legacy products.

Update Channel:
- The Intel Harpertown blade will not require an update channel, as update channels are optional according to the PICMG specification [1].

Board Management Controller (IPMC):
- Renesas H8S2166 16-bit microcontroller: 512 KByte internal flash, redundant boot sectors, six I2C interfaces, one LPC port, 8 internal 10-bit ADC channels.

RTC:
- Battery-backed real time clock.

Form Factor:
- Single-slot ATCA form factor (280 mm x 322 mm).

RTM:
- The initial offering of the Intel Harpertown blade will not support an RTM. Reasons for this are detailed in § 10.2.

Table 1: Summary Hardware Feature List
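The 10.6 GBps front side bus figure quoted in the feature list above follows directly from an 8-byte-wide, quad-pumped bus. A quick check (a sketch; the function name is invented for illustration):

```python
# Peak FSB bandwidth for a 64-bit (8-byte) wide front side bus.
# A 333 MHz base clock, quad-pumped, yields 1333 MT/s; each transfer
# moves 8 bytes.

def fsb_bandwidth_gbps(mt_per_s: float) -> float:
    """Peak front side bus bandwidth in GBytes/s."""
    return mt_per_s * 8 / 1000  # MT/s x 8 bytes/transfer -> MB/s -> GB/s

print(fsb_bandwidth_gbps(1333))  # 10.664, quoted as 10.6 GBps in Table 1
print(fsb_bandwidth_gbps(1600))  # 12.8
```

The document's 10.6 GBps figure is simply this value truncated to one decimal place.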
2.4 Block Diagram

[Block diagram not reproducible in text form. Figure 2 shows: two Harpertown processors (4 cores each) on independent 333/400 MHz (1333/1600 MTps quad-pumped) front side buses into the San Clemente 5100 MCH (B0 stepping); six registered DDR2 72-bit ECC 240-pin VLP DIMM sockets on two 533/667 MTps channels (24 GB max with 4 GB dual-rank DIMMs, 48 GB max with 8 GB dual-rank DIMMs); an x8 PCI-E (16 Gbps FD) link from the MCH to the Intel 82598 dual-XAUI fabric interface feeding the backplane fabric; a 2 GBps ESI link to the ICH9/ICH9R south bridge, which hosts the primary and secondary LPC flash BIOS, an optional SATA drive, the RTC, USB 2.0, PECI, a 7-segment Port 80 decoder, and the Winbond 83627 Super I/O debug serial ports; a x4 PCI-E link to the Intel 82571 dual 10/100/1000BASE-T base interface with Serial Over LAN; the Renesas H8S2166 IPMC with its IPMB and low-speed management I2C buses, watchdog/CPLD reset logic, RS-232 debug port, and XDP debug ports (red dashed outlines imply debug-only load); and the dual –48 V backplane feeds driving a 10 Watt maximum IPMC management DC-DC supply and a 190 Watt maximum main DC-DC supply. Signal integrity analysis is to be performed on all high-speed buses (DDR2, CPU/MCH, PCI-E, XAUI, 1 GbE).]

Figure 2 System Hardware Block Diagram
3 Main Hardware Functional Elements

3.1 Main Processor Subsystem

As of the writing of this document (summer 2007), the Harpertown processors are the next-generation quad-core Xeon processors designed for dual-processor architectures. Initial release of these processors is scheduled for Q4 '07 or Q1 '08. Each processor will contain 4 individual processing cores. With both processor sockets populated, a total of 8 individual processing cores will operate per blade in a symmetric multiprocessing fashion. The processors will be socketed in standard LGA771 zero insertion force sockets.

PROC_1: The Intel Harpertown blade will utilize two Intel Xeon Harpertown 64-bit processors tailored for ATCA applications. The system will be able to support any of the Intel Harpertown processors listed in Table 2.

PROC_2: If so desired, the board will operate with only a single processor populated.

PROC_3: While the default design targets the 50 Watt Harpertowns, it should accommodate 80 Watt Harpertown processors as well.

Note that power budget calculations predict that a pair of 50 Watt TDP processors, running at peak power, will push the board power consumption to, or near to, the 200 Watt limit. A pair of the 80 Watt TDP processors from Table 2 would likely cause the total board power consumption to breach the ATCA 200 Watt limit. However, it is possible that some customers would be interested in systems that are generally compliant with the ATCA specification except for the 200 Watt limit. Such boards could be paired with chassis and power supply systems designed to handle higher-power boards. Additionally, it may be advantageous in some applications to utilize a single high-speed 80 Watt processor running at 3.0 GHz supporting a reduced number of application threads.
It may also be possible to cut back on memory such that a pair of 80 Watt Harpertowns can run without violating the board's 200 Watt limitation.

Family                  Processor   Clock Speed   TDP       Cache   FSB Speed
                        Number      (GHz)         (Watts)   (MB)    (MTps)
Harpertown              E5472       3.00          80        12      1600
Harpertown              E5462       2.80          80        12      1600
Harpertown              E5450       3.00          80        12      1333
Harpertown              E5440       2.83          80        12      1333
Harpertown              E5430       2.66          80        12      1333
Harpertown              E5420       2.50          80        12      1333
Harpertown              E5410       2.33          80        12      1333
Harpertown              E5405       2.00          80        12      1333
Harpertown LV           L5430       2.66          50        12      1333
Harpertown LV           L5410       2.33          50        12      1333
Harpertown ATCA 40 W    L5408       See Note 1    40        12      See Note 1

Table 2 Supported Processors

Also note that the TDP ratings of both the 50 Watt and 80 Watt processors represent peak, not average, power. As such, the use of some of the 80 Watt processors may be more viable than first appearance would suggest. Another approach is to free up power for the 80 Watt devices by reducing memory.

Note 1: Intel was unable to supply processor clock or front side bus speeds at the time of the writing of this document.
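The 200 Watt headroom argument above can be made concrete with a rough roll-up. This is only a sketch: the ~100 W non-CPU load is an assumption inferred from the document's statement that two 50 W TDP parts bring the board to roughly the 200 W limit, not a figure from the actual power budget.

```python
# Rough power-headroom check against the ATCA 200 W slot limit.
# ASSUMPTION (illustrative only): non-CPU board load of ~100 W, inferred
# from the text's claim that two 50 W TDP CPUs reach ~200 W total.

NON_CPU_LOAD_W = 100.0      # hypothetical, not a measured budget figure
ATCA_SLOT_LIMIT_W = 200.0

def board_peak_power(cpu_tdp_w: float, n_cpus: int = 2) -> float:
    """Estimated peak board power for n_cpus processors of the given TDP."""
    return NON_CPU_LOAD_W + n_cpus * cpu_tdp_w

for tdp in (40, 50, 80):
    peak = board_peak_power(tdp)
    status = "OK" if peak <= ATCA_SLOT_LIMIT_W else "over limit"
    print(f"2 x {tdp} W TDP -> ~{peak:.0f} W peak ({status})")
```

Under this assumption, two 80 W parts land around 260 W, which is why the text suggests either over-200 W shelves, a single 80 W processor, or a reduced memory load.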
It is not possible to safely adjust the front side bus speed. This question was posed to Intel technical support; their response was as follows:

"The value driven on the bus select output pins BSEL[2:0] is fixed and cannot be changed via software or power on configuration strap. The motherboard could override, or ignore these outputs and set it's own BSEL value for input to the clock drivers and MCH. Note that the CPU is only tested to run at it's marked FSB speed, so it may not boot correctly at a different FSB speed than intended."

3.2 North Bridge – Memory Controller Hub

MCH_1: In accordance with the Intel Cranberry Lake server architecture, the Intel 5100 'San Clemente' will be used as the memory controller hub (MCH)/north bridge.

MCH_2: We will use only the 'B0' variant of the 5100 silicon, as the 'A0' revision is only a development version of the chip, is less capable, and will only be available for a short period of time.

The San Clemente offers the following characteristics:
• Support for 2 Intel Harpertown quad-core Xeon processors.
• Dual independent front side bus processor interfaces.
• Front side buses operating at 266/333 MHz quad-pumped, yielding a transfer rate of 1066/1333 MT/s.
• A standard single SMBus port on the San Clemente queries the loaded DIMMs to read Serial Presence Detect (SPD) data at boot-up, allowing the MCH to automatically configure itself and the memory for optimal operation.
• The MCH offers six x4 PCI-Express buses for general-purpose use. These x4 groups can be aggregated into various combinations of x4, x8, and x16 multi-lane buses.

3.3 Memory Subsystem

The memory subsystem will have the following properties:

MEM_1: Two independent channels of registered (not fully buffered) DDR2 ECC 72-bit memory interface.

MEM_2: The memory bus interface will be provisionable for either 533 or 667 MTps.
The corresponding memory clocking signals will be 266/333 MHz.

MEM_3: The system will support from 1 to 6 DIMMs, maximum of 3 DIMMs per channel, subject to the San Clemente limitation of 6 ranks per channel.

MEM_4: DIMMs will be Very Low Profile (VLP) to allow for vertical mounting. This affords much closer packing of the memory sockets than beveled socketing, which saves board space, shortens the high-speed bus trace lengths, and increases memory bus signal integrity.

MEM_5: The memory controller system will allow for a minimum of 256 MBytes and a maximum of 48 GBytes of total memory through the use of standard non-custom RDIMMs.

The board will be theoretically capable of 48 GBytes when/if dual-rank 8 GByte VLP DIMMs become available. As of the time of this writing, the densest commercially viable VLP dual-rank DIMMs are 4 GByte; 24 GBytes would be achievable with six 4 GB dual-rank DIMMs. It should be noted that 256 MBytes would likely be far too small to be of practical utility.

MEM_6: If commercially viable dual-rank VLP 8 GB DIMMs become available, then it shall be possible to load three dual-rank 8 GB DIMMs per channel. This would consume all 6 ranks and all 6 DIMM sockets at 8 GB each, for a total of 48 GB of memory. Of course, the feasibility of such a memory load would additionally be subject to the 200 Watt power budget.

MEM_7: Various combinations of DIMM speed and size will be allowed. For more detail, see Figure 4.

* Note * The maximum memory requirements listed above assume no board power limitations. They refer only to the maximum addressable memory space, assuming the DIMMs are available and do not cause card power budget violations.
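The per-channel population rules in MEM_3 and MEM_6 above (at most 3 DIMMs and 6 ranks per channel) can be sketched as a simple validity check. The helper below is illustrative only; the function name and tuple layout are invented for the sketch.

```python
# Sketch: validate one channel's DIMM population against the San Clemente
# limits stated in MEM_3/MEM_6: max 3 DIMMs per channel, max 6 ranks per channel.

MAX_DIMMS_PER_CHANNEL = 3
MAX_RANKS_PER_CHANNEL = 6

def channel_ok(dimms) -> bool:
    """dimms: list of (size_gb, ranks) tuples loaded on one memory channel."""
    if len(dimms) > MAX_DIMMS_PER_CHANNEL:
        return False
    return sum(ranks for _, ranks in dimms) <= MAX_RANKS_PER_CHANNEL

# Three dual-rank 8 GB DIMMs: 3 DIMMs, 6 ranks -> allowed (the MEM_6 case).
print(channel_ok([(8, 2), (8, 2), (8, 2)]))   # True
# Three quad-rank DIMMs would total 12 ranks -> not allowed.
print(channel_ok([(4, 4), (4, 4), (4, 4)]))   # False
```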
One rule of thumb cited for servers is "2 GBytes of RAM per 1 GHz of processor speed". Equation 1 below applies this rule to a fully loaded blade and suggests 48 GBytes of memory for a balanced system. Of course, the ideal amount would be highly dependent on the applications for which the server blade is deployed. The 48 GByte load could be achieved only when 8 GByte dual-rank DIMMs become available; unfortunately, Solectron does not have control over this.

8 cores x 3.0 GHz x 2 GBytes/core/GHz = 48 GBytes

Equation 1 Rule of Thumb Memory Sizing

Therefore, a generally good default load configuration would be 24 GBytes, which provides 3 GBytes per core. The 24 GByte load will be comprised of six 4 GByte DIMMs. However, this will be easily customizable to whatever the customer requests within the above-described memory bounds.

A datasheet for an example DIMM applicable to this system is supplied as an embedded PDF attachment. The datasheet covers both 2 and 4 GByte devices, both appropriate for this design. At the time of this writing, suitable dual-rank 8 GB DIMMs have not been located.

It should be remembered that a primary goal of this reference design is maximum straightforward customizability in response to customers' interests. In this case, that means designing for the maximum number of DIMMs allowed by the memory controller, which is six. With such a design, customers will be able to request a reduced number of sockets populated, or a full complement of sockets loaded with any of a large variety of DIMM complements. Further, in designing for the maximum number of DIMMs, we will have a design which allows for an optimal combination of whatever size and type of DIMM is available at the time of board production.

Figure 3 below shows the DIMM socket topology to be supported in the Intel Harpertown blade.
Figure 4 below lists all of the allowable memory configurations, as supplied to us by Intel.

Figure 3 Memory/Memory Controller Interface Topology

See the San Clemente EDS [5] and/or Figure 4 for general allowable memory bus and DIMM topologies.
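The rule-of-thumb sizing in Equation 1 above can be reproduced programmatically (a sketch; the 2 GB/core/GHz factor is the cited rule of thumb, and the function name is invented):

```python
# Rule-of-thumb memory sizing per Equation 1:
# memory (GB) = cores x clock (GHz) x 2 GB per core per GHz.

GB_PER_CORE_PER_GHZ = 2.0   # rule of thumb cited in the text

def balanced_memory_gb(cores: int, clock_ghz: float) -> float:
    """Suggested memory load for a 'balanced' system, in GBytes."""
    return cores * clock_ghz * GB_PER_CORE_PER_GHZ

print(balanced_memory_gb(8, 3.0))    # 48.0 -> matches Equation 1
print(balanced_memory_gb(8, 2.33))   # ~37.3 for a 2.33 GHz L5410 load
```

This is why the 24 GByte default load (3 GB/core) sits below the rule-of-thumb ideal: the 48 GB figure is unreachable until 8 GB dual-rank VLP DIMMs exist.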
Figure 4 Allowable Memory Configurations

3.4 South Bridge – ICH9/ICH9R

The south bridge function on the Intel Harpertown blade will be provided by either the Intel ICH9 or ICH9R I/O Controller Hub (ICH), as appropriate for the specific customer design. Table 3 below lists the basic differences between the two devices.

Device   # SATA Ports   AHCI   RAID   Intel Viiv Platform Driver Support
ICH9     4              No     No     No
ICH9R    6              Yes    Yes    Yes

Table 3 Differences Between ICH9 and ICH9R

As we plan on having at most 2 SATA drives on board, either device will handle the needed SATA port count. AHCI would be useful for SATA command queuing, which speeds up hard drive accesses. RAID would naturally only be needed if more than one local hard drive were used. Intel Viiv is used primarily for multimedia support, so it is of no importance to the Intel Harpertown blade. Which device is selected for a particular build will therefore be driven by local hard drive requirements on a customer-by-customer basis. The ICH9 is approximately $3 US less expensive than the ICH9R, so it will be used whenever the additional features of the ICH9R are not needed.

SB_1: The Intel Harpertown blade will be able to support either the ICH9 or ICH9R south bridge.

Additional functions/ports of the ICH9/ICH9R are:
• Hosting the system LPC bus.
• Hosting the PECI system.
• An I2C port interfaced to the IPMC for ICH9/ICH9R configuration.
• Hosting SATA drives, if included in the design.
3.5 Super I/O Device

The Winbond 83627 Super I/O controller will be utilized on the Intel Harpertown blade. It will interface to the ICH9/ICH9R through the system LPC bus. The 83627 will be used primarily to support the payload debug serial ports during debug and development. As such, design of the entire software and hardware system should proceed on the basis that this part will be no-loaded on production boards.

SIO_1: In order to minimize BOM cost, the design should proceed so as not to depend, if possible, on the presence of a Super I/O device in the production version of the board.

3.6 Real Time Clock

RTC_1: The Intel Harpertown blade will have a battery-backed, industry-standard real-time clock (RTC).

RTC_2: The RTC battery and battery holder, like all other components of the system, will be compliant with NEBS vibration, shock, and fire retardance requirements.

RTC_3: The accuracy of the RTC will be +/- 20 ppm or better.

RTC_4: The backup time of the RTC battery will be at least 10,000 hours.

RTC_5: The battery holder will not require special tools to change the battery.

RTC_6: In order to maintain time during battery swap, an additional storage capacitor will provide a secondary layer of RTC backup, allowing the RTC to maintain time for a period of at least 60 seconds without the battery. This assumes, of course, that there was enough life left in the battery to run the clock before it was swapped out.

3.7 BIOS Flash Configuration

The Intel Harpertown blade will be a telco-grade product. As such, it will be required to offer failsafe bootloader fault recovery. Using a single large boot flash with both a primary and secondary boot area is one approach to accomplishing said redundancy. However, with such an architecture, some failure modes can corrupt access to, or content in, both sectors, rendering the board useless and requiring the flash to be desoldered and replaced.
Having two physically separate boot flash packages will drastically reduce the chance that any single fault will prevent the board from successfully booting out of at least one of the devices.

BIOS_FLSH_1: The Intel Harpertown blade will have two physically separate flash BIOS devices.

BIOS_FLSH_2: The flash BIOS devices will be socketed on the prototype boards.

BIOS_FLSH_3: The devices will be of SMT type for production boards.

The final design of the flash interfaces should accommodate efficient programming of the flash devices during manufacturing. This is typically done while the board is held in, and controlled by, the In-Circuit Test (ICT) jig. Design-for-test mandates call for either a JTAG interface to allow the programming, or the ability to put the other devices on the flash programming interfaces into high-impedance mode. This allows the ICT device to control the programming interface uninterrupted.

BIOS_FLSH_4: The PCB will be designed to allow for ICT programming of the flash BIOS devices.

3.8 Base Channel Interface

BASE_1: The Intel Harpertown blade will provide a pair of standard redundant (two total) 10/100/1000BASE-T Ethernet connections to the ATCA backplane through the Zone 2 Base channel connector pins of J23, as specified in [1].

BASE_2: The Base channel links will be compliant with the 10/100/1000BASE-T IEEE 802.3 Ethernet standard.

BASE_3: The Base channel communication topology will be dual star.
3.9 Fabric Interface

One of the defining features of the Intel Harpertown blade will be its flexibility in the backplane fabric interface. The options detailed in the FAB_x requirements below comprise two existing PICMG 3.1 options, 1 and 9, and two additional anticipated future PICMG options. PICMG 3.1 Option 1 is 1000BASE-BX running at 1 Gbps full duplex. Option 9 is 10GBASE-BX4 running at 10 Gbps full duplex. Standardization is currently (summer 2007) taking place on 1000BASE-KX and 10GBASE-KX4, running at 1 Gbps and 10 Gbps, respectively. Final details of the KX protocols are not yet known, but support for these modes is guaranteed by the Intel 82598 chip which we will be using for this interface.

Control of which protocol the cards utilize will be managed through electronic e-keying under control of the IPMC/ShMC, not through a peer-to-peer autonegotiation process.

FAB_1: The Intel Harpertown blade two-channel fabric interface will support two channels of PICMG 3.1 Option 1, 1000BASE-BX.

FAB_2: The Intel Harpertown blade two-channel fabric interface will support two channels of PICMG 3.1 Option 9, a.k.a. 10GBASE-BX4. 10GBASE-BX4 is also known in the industry as XAUI.

FAB_3: The Intel Harpertown blade two-channel fabric interface will support two channels of 1000BASE-KX in anticipation of PICMG's likely adoption of this protocol.

FAB_4: The Intel Harpertown blade two-channel fabric interface will support two channels of 10GBASE-KX4, an alternate variant of XAUI, in anticipation of PICMG's likely adoption of this protocol.

FAB_5: The Intel Harpertown blade will be designed to interface as a node to the standard PICMG ATCA dual-star backplane fabric topology.

Figure 5 below illustrates an eight-node dual-star configuration. Note that it would be more typical for the shelf to have provisions for 12 or 14 nodes in the star, but this is not mandatory.
A shelf could have as few as 2 nodes in the star, but must always have exactly two hubs to be considered a dual-star compliant system. Other systems exist, such as dual-dual stars, which would have 4 hubs. It is not mandatory that the Intel Harpertown blade be compliant with these higher-order star configurations; however, if the backplanes associated with these systems are designed appropriately, they should accommodate any dual-star node.
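The four fabric protocol options in FAB_1 through FAB_4 above can be summarized in a small lookup table, e.g. as a starting point for e-keying decision logic. This is an illustrative sketch only; the dictionary keys and structure are invented, and the KX entries carry no PICMG 3.1 option number because the protocols were not yet standardized.

```python
# Fabric interface options supported per FAB_1..FAB_4. The IPMC/ShMC
# selects one via ATCA electronic e-keying; this table is an illustrative
# summary, not firmware code.

FABRIC_OPTIONS = {
    "1000BASE-BX":  {"picmg_3_1_option": 1,    "gbps": 1,  "lanes": 1},
    "10GBASE-BX4":  {"picmg_3_1_option": 9,    "gbps": 10, "lanes": 4},  # XAUI
    "1000BASE-KX":  {"picmg_3_1_option": None, "gbps": 1,  "lanes": 1},  # anticipated
    "10GBASE-KX4":  {"picmg_3_1_option": None, "gbps": 10, "lanes": 4},  # anticipated
}

def options_at(gbps: int):
    """Names of supported fabric protocols at a given link speed."""
    return sorted(n for n, o in FABRIC_OPTIONS.items() if o["gbps"] == gbps)

print(options_at(10))   # ['10GBASE-BX4', '10GBASE-KX4']
print(options_at(1))    # ['1000BASE-BX', '1000BASE-KX']
```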
[Figure omitted: eight node boards, each linked to both of two hub boards.]

Figure 5 8-Node Dual Star Fabric Topology

3.10 Processor Thermal/Power Management

PTPM_1: The Intel Harpertown blade will incorporate the Intel Platform Environmental Control Interface (PECI) to monitor processor core temperatures.

PTPM_2: In response to over-temperature conditions, the Intel Harpertown blade will be able to notify the ShMC through the IPMC, reduce its processor power consumption on a processor-by-processor basis, or both.

PTPM_3: The Intel Harpertown blade will support Enhanced Intel SpeedStep Technology, which can manage and coordinate changes in processor core speed and VID to alter power consumption. SpeedStep Technology dynamically configures the processors and power supplies to move through P-states, defined as processor speed and VID core voltage combinations. These P-state pairs are designed to provide optimal performance at reduced power consumption settings.

PTPM_4: The Intel Harpertown blade will implement a software-driven system that controls the processor speeds dynamically to optimally match blade total power consumption to the desired power consumption set point. The typical set point will be 200 Watts, but it will be adjustable by some means by the system administrator.

3.11 Power Supply Systems

PWR_1: The board processor power supply systems should be capable of supporting 80 Watt TDP processors maximum.

PWR_2: The main –48 volt input DC-DC power supply will have at least 200 Watt capacity.
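The software power-capping loop described in PTPM_4 above might look like the following sketch. Everything here is illustrative: the P-state list, the 10 W hysteresis band, and the function names are invented for the example, and a real implementation would write P-states through the OS/ACPI interface rather than manipulate an index.

```python
# Illustrative sketch of the PTPM_4 dynamic power-capping loop: step
# processor P-states down when board power exceeds the set point, and
# back up when there is comfortable headroom.

P_STATES = [2.00, 2.33, 2.66, 3.00]   # GHz, slowest to fastest (example values)

def next_p_state(index: int, board_power_w: float,
                 set_point_w: float = 200.0, hysteresis_w: float = 10.0) -> int:
    """Return the new P-state index for one control-loop iteration."""
    if board_power_w > set_point_w and index > 0:
        return index - 1                   # over budget: slow down one step
    if board_power_w < set_point_w - hysteresis_w and index < len(P_STATES) - 1:
        return index + 1                   # ample headroom: speed up one step
    return index                           # within the band: hold

i = 3                                      # start at 3.00 GHz
for power in (210.0, 205.0, 185.0, 195.0): # simulated board power readings
    i = next_p_state(i, power)
    print(f"{power:.0f} W -> run at {P_STATES[i]} GHz")
```

The hysteresis band keeps the loop from oscillating between two P-states when the board power hovers near the set point.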
PWR_3: The main –48 volt input DC-DC power supply will be of a type which allows modification from a minimum of 200 Watts up to 230 Watts without requiring major board redesign. This will allow the board to take advantage of shelves which are intentionally designed to operate above the normal 200 Watt per-slot limit.

3.11.1 Input Voltage Range

In accordance with the PICMG 3.0 specification [1] § 4.1.2.2, the Intel board will conform to the following specifications:

PWR_4: The Intel Harpertown blade shall be fully operational over a supply voltage range of –39.5 VDC to –72 VDC.

PWR_5: The Intel Harpertown blade shall not be damaged by supply voltages in the range of 0 VDC to –75 VDC.

3.11.2 Power Circuitry Topology

Figure 6 below is an illustration of the Intel Harpertown power circuitry topology. The first power module, the main power module, takes its input from the –48 volt dual backplane supply rails. It then produces the +3.3 volt supply used by the IPMC, and a +12 volt supply from which all other voltage rails for the board processing elements are produced.
Figure 6 Power Circuitry Topology

PWR_6: The CPU power supplies will be multi-phase, such that no-loading a phase will yield a less expensive system, better optimized for 50 Watt TDP processors.
3.11.3 Power Sequencing

Figure 7 below illustrates the sequencing of some of the status, control, and supply voltages associated with the bring-up of the CPU. Table 4 below that lists the allowable ranges of the timing parameters associated with Figure 7.

Figure 7 Power Circuitry Sequencing

T#   Parameter                               Min    Max   Unit
Ta   PWM Vcc & Vtt to OUTEN delay time       0      5     ms
Tb   Vboot rise time                         0.05   10    ms
Tc   Vboot to VID valid delay time           0.05   3.0   ms
Td   VccCPU rise time to final VID           0      2.5   ms
Te   VccCPU to VR_Ready assertion time       0.05   3     ms
Tf   VTT rise time                           0.05   10    ms

Table 4 Power Sequencing Timing Parameters
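A bring-up trace measured in the lab can be checked mechanically against the Table 4 windows. A minimal sketch, assuming measured times are supplied in milliseconds:

```python
# Min/max windows (ms) taken directly from Table 4.
SEQ_LIMITS_MS = {
    "Ta": (0.0, 5.0),    # PWM Vcc & Vtt to OUTEN delay time
    "Tb": (0.05, 10.0),  # Vboot rise time
    "Tc": (0.05, 3.0),   # Vboot to VID valid delay time
    "Td": (0.0, 2.5),    # VccCPU rise time to final VID
    "Te": (0.05, 3.0),   # VccCPU to VR_Ready assertion time
    "Tf": (0.05, 10.0),  # VTT rise time
}

def check_sequencing(measured_ms):
    """Return the parameters whose measured times fall outside the
    Table 4 windows; an empty list means the trace passes."""
    failures = []
    for name, value in measured_ms.items():
        lo, hi = SEQ_LIMITS_MS[name]
        if not (lo <= value <= hi):
            failures.append(name)
    return failures
```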
3.11.4 Voltage Rail Current Requirements

Table 5 below lists the current requirements of each of the main elements powered by the supply rails shown in the power circuitry topology of Figure 6. See the Harpertown EMTS 1.7 load line for more information on CPU core voltage tolerances. See San Clemente EDS Addendum 1.7 for more information.

Device                              Rail / Voltage   Min (V)     Max (V)     Current Max (A)  Current Typ (A)
CPU0 (80W TDP)                      VCC_CPU0         See Note 2  See Note 2  102.000
                                    VTT (1.1V)       1.045       1.155       8.000
                                    VCCPLL (1.5V)    1.455       1.605       0.260
CPU1 (80W TDP)                      VCC_CPU1         See Note 2  See Note 2  102.000
                                    VTT (1.1V)       1.045       1.155       8.000
                                    VCCPLL (1.5V)    1.455       1.605       0.260
MCH                                 1.5V             1.455       1.605       12.000           10
                                    1.1V             1.067       1.133       4.800            2.6
                                    1.8V             1.755       1.845       2.000            1.7
                                    3.3V             3.218       3.382       0.01
                                    5V               4.750       5.250       0.004
ICH9R                               3.3V             3.135       3.465       0.600
                                    1.5V             1.425       1.575       2.400
                                    1.05V            0.998       1.102       1.700
82571                               3.3V             3.000       3.600       0.034            0.026
                                    1.8V             1.710       1.890       0.913            0.893
                                    1.1V             1.045       1.155       1.520            1.022
10G Device 82598                    1.2V             1.140       1.260       4.925            3.502
                                    1.8V             1.710       1.890       0.289            0.289
                                    3.3V             3.000       3.600       0.021            0.021
DDR2 Memory (8 DIMMs of 4GB each)   1.8V             1.700       1.900       26
                                    0.9V             ??          ??          ?? 3
Others                              3.3V             3.218       3.382       2.000

Table 5 Voltage Rail Current Requirements

Note 2: See Harpertown EMTS 1.7 load line charts. Voltage tolerances are dependent on core current consumption.
3.11.5 Power Consumption

Table 6 below, an embedded spreadsheet, illustrates the current projected power consumption budget for the Intel blade. Note that the table can be double-clicked to allow editing of the cells. This enables 'what-if' tests within this document.

Power Estimation for Opera    12-Jul-07    Akhilesh Jaiswal

Component                                    Power (W)  Comment
CPU0 (Harpertown)                            50         No documentation available supporting this data. 50W was entered based on Jeff's input.
CPU1 (Harpertown)                            50         No documentation available supporting this data. 50W was entered based on Jeff's input.
San Clemente (two active memory channels)    22         TDP (this is not the max power)
32 GB DDR2 Memory (8 DIMMs of
4GB each) @ 533MHz                           53.2       Calculated using the tool available from Micron. Power requirements for different modules: 2GB module @ 533MHz = 5.2W; 2GB module @ 667MHz = 6.7W; 4GB module @ 533MHz = 6.65W; 4GB module @ 667MHz = 7.9W.
ICH9                                         3.4        TDP (this is not the max power)
Tehuti XAUI Switch                           3          Per Jeff, Tehuti has confirmed that the ASIC version will consume 3W. No documentation available.
Gigabit Ethernet Controller (82571)          3.76
Super IO                                     1          Estimate
IPMC                                         3          Estimate
Miscellaneous                                5
Post DC-DC converter total                   194.36
Total board power (assuming 85%
regulation efficiency)                       228.66

Table 6: Power Consumption

Note that the resultant calculated power of Table 6 is currently above the 200 Watt per card limit. No derating has been applied to the total power. Derating accounts for the fact that typically, not all board components are operating at or near max power. At this point in time, it is not possible to arrive at a reliable derating factor. It will remain to be seen from heuristic testing what derating factor can be applied to the max power calculation. If power does still breach the 200 Watt limit, we can reduce maximum processor speed, or reduce memory, to bring the board back into conformance.
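The Table 6 arithmetic (sum the component budget, then divide by the assumed 85% regulation efficiency) can be reproduced programmatically for 'what-if' tests outside the embedded spreadsheet. A sketch using the Table 6 numbers:

```python
# Component power budget (W) copied from Table 6; several entries are
# estimates, as noted there.
BUDGET_W = {
    "CPU0 (Harpertown)": 50.0,
    "CPU1 (Harpertown)": 50.0,
    "San Clemente MCH": 22.0,
    "32 GB DDR2 (8 x 4GB @ 533MHz)": 53.2,
    "ICH9": 3.4,
    "Tehuti XAUI switch": 3.0,
    "82571 GbE controller": 3.76,
    "Super IO": 1.0,
    "IPMC": 3.0,
    "Miscellaneous": 5.0,
}

def board_power(budget, regulation_efficiency=0.85):
    """Return (post-DC-DC-converter total, total board input power)."""
    post_converter = sum(budget.values())
    return post_converter, post_converter / regulation_efficiency

post, total = board_power(BUDGET_W)
# post -> 194.36 W, total -> ~228.66 W, above the 200 W per-slot limit
```

Editing an entry in BUDGET_W (e.g. dropping memory to 4 DIMMs) immediately shows whether the board comes back under the 200 W limit.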
3.12 Board Health Monitoring Systems

3.12.1 Thermal Sensors

HLTH_1: Two pairs of air flow temperature sensors will be deployed on the board. The first pair will be placed near the input air flow ports of the board along the bottom edge; one will be toward the front of the board, and the other toward the backplane. The second pair will be similar to the first pair, except that they will be situated at the top of the board, monitoring exit air flow. The exit air flow sensors should be placed strategically such that they will sense the temperature of the air downstream of the payload processors and memory.

HLTH_2: The San Clemente MCH chip internal temperature will be monitored by sensing the MCH internal thermal diode voltage.

HLTH_3: The air flow temperature sensors will be tied into the IPMC through an I2C bus. This will leave the IPMC ADC channels available for voltage monitoring.

HLTH_4: The ICH9R south bridge device also contains two on-die temperature sensors. These can be read directly by the payload system. As such, they will be monitored and used, at a minimum, as a means to trigger a shutdown of the system in the event of an overheating situation.

Note that the thermal sensors described in this section are independent and separate from the Harpertown processor on-die temperature sensors monitored by the PECI system as described in § 3.12.2.1.

3.12.2 Power Supply Sensors and ADC Channel Assignments

The Intel Harpertown blade will monitor critical voltage rails and a supply current utilized by the board. The board will also monitor the real-time-clock backup battery. The most critical supply rails will be monitored through ADC channels supplied by the IPMC. The Pgood signals on the remaining supply rails from DC switching supplies will be monitored as well.
In order to calculate board power consumption, the Intel Harpertown blade will include a current sensor on the 12 volt intermediate rail. The current sensor, in conjunction with the 12 volt voltage monitor, will allow calculation of board power consumption. This, in turn, will provide feedback into a closed-loop processor speed auto-throttle servo. The closed-loop servo will yield simple, accurate, automated control of power consumption under a variety of board configurations. A typical use would be to automatically adjust processor speed until the board power consumption matches the specified set point.

HLTH_5: The presence of the battery voltage monitoring circuitry may theoretically increase the quiescent battery current draw during board power-down mode. The increase in quiescent current, if any, shall not reduce the RTC backup battery life to less than the minimum specified in § 3.6.
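One simple realization of the auto-throttle servo described above is a stepper with a deadband, moving one P-state at a time toward the set point. This sketches only the control policy; the actual servo design, gains, and sampling interval are left to detailed design:

```python
def throttle_step(measured_watts, setpoint_watts, pstate_index, n_pstates,
                  deadband_watts=5.0):
    """One iteration of the auto-throttle: step toward a slower P-state
    when over the set point, back toward a faster one when comfortably
    under it. Index 0 is the fastest P-state; the deadband prevents
    oscillation around the set point."""
    if measured_watts > setpoint_watts and pstate_index < n_pstates - 1:
        return pstate_index + 1   # over budget: slow down
    if measured_watts < setpoint_watts - deadband_watts and pstate_index > 0:
        return pstate_index - 1   # well under budget: speed back up
    return pstate_index           # within the deadband: hold
```

The measured power input would come from the 12 volt rail current sensor multiplied by the monitored 12 volt rail voltage.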
3.12.2.1 Health Monitoring and ADCs

The Renesas 2166 IPMC incorporates 8 channels of 10-bit analog-to-digital conversion (ADC). These will all be used for monitoring various board supply voltages and a board supply current. Table 7 lists the board supply signals which will be monitored. It may be helpful to refer to § 3.11.5 to better understand these assignments. Note that the actual ADC channel numbers are not assigned here; this is left to the detailed designer.

Item
12 Volt Rail Voltage
12 Volt Rail Current
RTC Battery
VCORE_CPU0
VCORE_CPU1
V3P3
V1P8
V1P5

Table 7 ADC-Monitored Power Supply Signals

HLTH_6: The supply voltages and current of Table 7 will be monitored through the 8 IPMC ADC channels for board health monitoring purposes.

3.12.3 Platform Environmental Control Interface (PECI)

The Intel PECI system monitors the temperature of each processor core in the system. PECI automatically converts the voltages of all of its temperature sensor diodes to a digital format. The digital PECI information is then sent from the processors to a PECI hosting device through a special, dedicated one-wire PECI bus. In the Intel Harpertown blade architecture, this host is the ICH9/ICH9R. The PECI channel will be monitored with a Maxim 6618 or 6621 PECI-to-I2C translator. The I2C output is then linked to an IPMC channel for IPMI monitoring of processor core temperatures. The path of the PECI information can be seen in Figure 2, System Hardware Block Diagram.

HLTH_7: A standard Intel PECI system will be employed as part of the design to monitor all processor dies. There will be one temperature sensor per die, which translates to two per processor, which then translates to as many as four per board.
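For the ADC assignments above, rails higher than the ADC reference must be divided down by a resistor network before sampling and scaled back up in firmware. A sketch of that scaling; the 3.3 V reference and the 5:1 divider in the example are assumptions for illustration, not design values:

```python
def adc_to_rail_voltage(adc_code, vref=3.3, divider_ratio=1.0, bits=10):
    """Convert a raw 10-bit ADC code back to the monitored rail
    voltage. Rails above vref are scaled down by a resistor divider
    before the ADC pin, so the reading is multiplied back up by the
    divider ratio."""
    full_scale = (1 << bits) - 1
    return adc_code / full_scale * vref * divider_ratio

# e.g. a 12 V rail scaled 5:1 into a 3.3 V reference ADC:
# adc_to_rail_voltage(745, divider_ratio=5.0) -> roughly 12.0 V
```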
3.12.4 Payload Watch Dog Timer

HLTH_8: The Intel Harpertown blade payload will incorporate a standard payload watchdog timer (WDT) to monitor and detect any payload processor system corruption.

HLTH_9: The IPMC will host the payload watchdog timer internally by using an IPMI-standard BMC watchdog timer, which allows the IPMC to inherently know when/if the payload WDT has tripped.

3.12.5 IPMC Watch Dog Timer

HLTH_10: The Intel Harpertown blade will incorporate a watchdog timer (WDT) for monitoring the status of the IPMC. This WDT will be external to the IPMC itself, likely in the Intel Harpertown blade CPLD. This WDT will operate in accordance with at least the minimum watchdog requirements as specified in 1.7, 1.7 and 1.7.

3.13 Intelligent Platform Management Controller Requirements

IPMC_1: The Intel Harpertown blade will incorporate a Renesas HD64F2166VTE33V 16-bit microcontroller as the Intelligent Platform Management Controller (IPMC). The Renesas will fulfill, at a minimum, the requirements below:
IPMC_2: The IPMC will contain enough internal flash memory to enable dual copies of the IPMC runtime code. This will allow for a backup copy of the necessary code in the event that an IPMC firmware upgrade fails or the IPMC firmware gets corrupted by any other means.

IPMC_3: The IPMC will have enough I2C/SMBus ports to support, manage, and monitor the necessary major elements on the Intel Harpertown blade.

IPMC_5: The IPMC will incorporate 8 channels of 10-bit analog-to-digital conversion for sensor monitoring.

IPMC_6: The IPMC will provide a standard Low Pin Count (LPC) bus to interface with the payload processor system, allowing easy high-speed communications between the IPMC and payload system.

IPMC_7: The IPMC will consume less than 3 Watts of power when operating at a sustained processing rate, reserving as much power as possible for the payload processor system.

IPMC_9: The IPMC will be capable of hosting Pigeon Point's standard IPMI code set and associated peripheral items.

IPMC_10: The IPMC will be capable of interfacing with external SRAM (in conjunction with a board spin) in the event that this becomes necessary. Such a scenario could result, for instance, when an AMC interface is added to future revisions of the blade design.

Figure 2 clearly illustrates the major systems and subsystems accessed, controlled, and monitored by the IPMC. It should be noted, though, that Figure 2 should NOT in itself be construed as a requirement. The major elements of the block diagram are not likely to change; however, some of the lower level devices may be modified during the detailed design phase of the project.

3.14 Debug Capabilities/Support

The Intel Harpertown blade will incorporate a variety of ports and/or other devices to be used primarily for debug and development purposes. By default, none of these will be loaded on production boards. 
However, some of these may be converted to standard system deployment, if a customer so chooses. All debug ports shall be keyed in some manner to prevent accidentally connecting a cable incorrectly.

3.14.1 Payload Processor Physical Serial Ports

DBG_1: Two payload processor physical RS232 serial ports will be supplied, one for each of the payload CPUs. These will be located at convenient spots on the board to allow developers to monitor and configure the main processors through standard dumb terminal and/or VT100 interface techniques. Not required, but if possible, the payload serial ports should be situated such that they could be accessed through a modified faceplate while the blade is installed and running in a chassis.

3.14.2 IPMC Serial Port

DBG_2: One RS232-compliant serial port will be supplied, again primarily for system bring-up, debug, and development. If a customer so chooses, the serial port could be brought out to the front panel. Not required, but if possible, the IPMC serial port should be situated such that it could be accessed through a modified faceplate while the blade is installed and running in a chassis.

3.14.3 JTAG Interface

There will be multiple devices on the blade which will support JTAG interfaces. It would be desirable to chain all of these into a single chain. However, it will be necessary to have two separate, independent chains. This is necessitated by the fact that not all of the JTAG devices will be powered up at the same time. Those devices in the IPMC section of the board will always be powered when any power is applied to the board, while those in the payload section will only be powered some of the time.
DBG_3: A single IEEE 1149.1 compliant JTAG interface will be supplied to the IPMC section, which can be used for accessing the chained devices in that section. These would include, at a minimum, the IPMC and the CPLD reset controller. Of course, this JTAG port could be used for boundary scan tests in production.

DBG_3.5: A second IEEE 1149.1 compliant JTAG interface will be supplied to the payload section, which can be used for accessing the chained devices in that section. These would include, at a minimum, the payload processors, the San Clemente northbridge, and the Intel 82571 and 82598 Ethernet controller base channel and fabric interface chips. Of course, this JTAG port could be used for boundary scan tests in production. If detailed design engineers determine that it is best to restrict the processor and San Clemente JTAG ports to their respective XDP connectors, that will be acceptable as well. See § 3.14.4.

DBG_4: In order to provide maximum flexibility, the chaining topology depicted in Figure 8 should be used. Note that the # symbol implies no-loading of the associated part. This method allows all devices to be chained for optimal production process compatibility. Through selective manual 0-ohm resistor loading, specific sections and/or subsections of the chain can be isolated for detailed lab debugging.

Figure 8 Flexible JTAG Chaining Topology

3.14.4 Intel XDP

DBG_5: Two standard Intel 60-pin XDP debug ports will be available for debugging the system. The first connector will interface with the processor complex, and the second with the San Clemente memory controller hub (MCH). Interface designs will conform to Intel XDP design guidelines as specified in 1.7.

3.14.5 Payload Reset Button

DBG_6: A single manual reset button will be supplied for the payload processors. When this button is pushed, the entire payload processor system will be reset. 
Details of the reset process are beyond the scope of this document.

3.14.6 IPMC Reset Button

DBG_7: A manual reset button will be supplied for resetting the IPMC. As it is an ATCA requirement that the IPMC can reset without causing the rest of the board to reset, the activation of this manual reset will not force an entire board reset. Further details of the reset process invoked by the button are beyond the scope of this document.
3.14.7 Chassisless Debug/Development Support

Software and hardware development and debug would be greatly facilitated if these tasks could take place outside of a chassis. Additionally, providing the capability of chassisless debugging will greatly simplify provisioning third party developers such as BIOS and OS suppliers. The following provision shall be made in order to eliminate the chassis requirement. It should be noted that additional modifications to IPMC and/or payload software may be required in order to run outside of a chassis, that is, without a ShMC. The use of the local SATA drive as specified in § 3.17 will also be of great utility in running the board outside of a chassis.

DBG_8: Power connector. A connector will be supplied allowing connection to an external –48 volt DC power supply. It will also be acceptable to supply power simply through a cabled Zone 1 mating connector.

3.14.8 Port 80 Support

DBG_9: The Intel Harpertown blade will provide port 80 support as shown in Figure 9. This is a dual 7-segment display mechanism primarily used for visual monitoring of BIOS progress. Codes are sent out to the dual 7-segment displays indicating BIOS state.

Figure 9 Port 80 Debug Support (diagram: primary and secondary flash BIOS on the LPC bus from the ICH9/ICH9R south bridge, with a port 80 decoder driving the dual 7-segment displays)

3.15 Component Placement

Figure 10 below is a proposed starting point for major component placement on the Intel Harpertown board. It should be understood that the layout proposed in Figure 10 is not considered final. For instance, research is still ongoing into the predominance of chassis bottom-mounted fans versus top-mounted fans. The processors should ideally be placed near the fans in order to effect maximal air flow through the processor heat sinks. Without such considerations, the air-flow impedance of the heat sinks will naturally tend to cause the air streams to shunt around the heat sinks. 
Situating the processors/heat sinks right next to the fans will reduce this problem significantly. If it is found that chassis with top-mounted fans exist as well as chassis with bottom-mounted fans, it may be more difficult, if possible at all, to design the board to optimally accommodate both types of chassis.

It can be seen from Figure 10 that the main board power supply section is situated very close to the 10 Gbps backplane interfaces. In order to reduce the possibility of power supply switching noise influx into the backplane fabric interface, work needs to be done to separate these two functional subsystems.
Figure 10 Initial Component Placement Proposal

3.16 Front Panel

3.16.1 Front Panel Connectors

FP_1: No front panel connectors will be supplied with the initial implementation of the Intel Harpertown blade. Specifically, none of the following types of connectors will be supplied, either through the front panel or through the RTM, zone 3 area.

• RS-232 Connectors
• Ethernet Connectors
• USB Connectors
• FireWire Connectors
• Keyboard or Mouse Connectors
• Video Connectors
• Other connectors of similar nature

3.16.2 Front Panel LEDs

FP_2: The Intel Harpertown blade shall support 4 front panel LEDs. These would be the two mandatory LEDs (LED 1 and Blue LED) as specified in 1.7, as well as the two optional LEDs (LED 2 and LED 3) also specified in
1.7. The definitions of the four LEDs shall conform to Table 8. Refer to Figure 11 for more detailed placement information.

Blue LED: Briefly,
• As the name implies, the illumination color shall be blue.
• 100% on indicates the blade is ready for extraction.
• 0% on indicates the blade is not ready for extraction.
A full description of this LED's requirements is quite extensive and is contained in 1.7.

LED 1: As specified in § 2.2.8.1.2 of 1.7: "It is common practice in many telco and data center equipment designs to provide at least one status LED used to indicate operational failure of Payload resources. LED 1 is intended to serve this function. The color and details of meaning of this LED vary from industry to industry. In North American telco applications, this LED is required by GR-2914 to be RED and to be illuminated when the Front Board is to be removed from the Shelf. In European telco applications, this LED is typically AMBER and illuminated when the Front Board is in a failed state. This LED may be multi-color to meet different geographic requirements. The legend for LED 1, when used as an out of service indicator, should read "OOS"." As such, LED 1 shall be red for US applications and amber for European applications. The default Intel Harpertown blade configuration will be a bi-color red/amber LED under software control. Customer-specific configuration of this LED may result in it being either a red LED or an amber LED.

LED 2: As specified in § 2.2.8.1.3 of 1.7: "LED 2 shall ... ... should be GREEN. The significance and operational control of this LED, as well as its legend, shall be defined by the system implementer." The significance and operation of LED 2 is not yet defined by Solectron and will be specified as its utility becomes clear to the team, or to specific customers. 
LED 3: As specified in § 2.2.8.1.4 of 1.7: "LED 3 shall ... ... should be AMBER. The significance and operational control of this LED, as well as its legend, shall be defined by the system implementer." The significance and operation of LED 3 is not yet defined by Solectron and will be specified as its utility becomes clear to the team, or to specific customers.

Table 8 Front Panel LED Definitions

FP_3: If the two optional LEDs are unused, they may either be later eliminated from the design or left in the design, but always kept in the electrically off state.

3.16.3 Barcode Product ID Label

FP_4: The default placement of the end customer barcode product ID label will be on the faceplate in the location recommended in 1.7. However, as 1.7 only specifies a recommended, and not mandatory, location, the label placement will ultimately be as requested by the SLR customer. Refer to Figure 11 for more detailed placement information.
Figure 11 Front Panel Elements Placement

3.17 SATA Hard Drive Option

SATA_1: The Intel Harpertown blade will supply at least one SATA connector footprint. The ICH9/ICH9R will supply the electrical SATA interface to the footprint. Adding a connector to the footprint, and the associated SATA cable, will allow the inclusion of an on-board SATA drive. This will eliminate the need for a remote boot server and provide local hard drive storage for customers who wish to deploy a local SATA hard drive. This will also be necessary for chassisless debug/development as described in § 3.14.7.

SATA_2: The Intel Harpertown blade PCB will be designed to accommodate one mounting arrangement for a standard 2.5 inch SATA hard drive.

SATA_3: SATA signal and power connector types will be of the types shown in Figure 12. That is, specifically, the SATA drive power will not be of the old 4-pin Molex variant, but of the 15-pin SATA native type.
Figure 12 SATA Connectors

If possible, it would be preferable to have solid board mount connectors rather than cabled connectors. This will reduce the possibility of additional air-flow impedance on the card, and will also reduce the possibility of poor connector/drive connections. It will not be possible to determine if solid mount connectors will be usable until further component placement analysis is complete. If cables must be used, the board cable connector should be situated as closely as possible to the hard drive SATA interface so as to minimize cable length.

Note that it may be impossible to pass some vibrational and/or shock tests, such as NEBS compliance, if an on-board hard drive is integrated into a particular customer's design. As such, if a customer wishes to include on-board hard drive storage, he/she will have to be made aware of this concern. The customer will then have to choose whether to include the hard drive or not. Engineers will search to find NEBS-compliant hard drives to resolve this conflict.

3.18 Cross Interrupt Lines

XIRQ_1: The Intel Harpertown blade will incorporate at least one general-purpose interrupt line from the IPMC to the payload, and at least one interrupt line from the payload to the IPMC. There is currently no known purpose for these lines. They are added as a precaution in the event that it is determined later that there is a need for one device to get the attention of the other. This requirement will be waived in the unlikely event that it is determined that there is a shortage of interrupt or GPIO lines on either end. Note that the IPMC-to-payload cross interrupt line is separate from the IPMC to payload System Management Interrupt (SMI) line, which has already defined purposes.
3.19 Backplane Connections

3.19.1 Zone 1

BPC_1: The Intel Harpertown blade will utilize one standard ATCA-compliant zone 1 connector, such as the Tyco 1766501-1. This connector will be used for power, logic ground, interfacing the card's IPMB bus to the backplane, and hardware address specification.

BPC_2: The ring voltage pins and metallic test pins of the zone 1 connector will be left unterminated on the Intel Harpertown blade. In fact, if less expensive connectors are available without these pins loaded, they should be selected over fully loaded connectors.

3.19.2 Zone 2

BPC_3: The Intel Harpertown blade will utilize one standard ATCA-compliant zone 2 connector, such as the Tyco 6469001-1. The ATCA specification defines five identical connectors for the Zone 2 Data Transport interface. These connectors are referred to as "Free Board" connectors and are assigned reference designators J20 through J24 per the ATCA specification. The default Intel Harpertown blade will utilize only one of these connectors, J23, which will support both the base channels and the fabric channels. In the event that an update channel interface were added in future revisions of the card, J20 would be added for this purpose. It may be possible to utilize the GLAN interface of the ICH9/ICH9R as an update channel interface. In this configuration, the GLAN interfaces of adjacent Intel Harpertown blade ICH9/ICH9Rs would communicate peer-to-peer with each other through the backplane. This possibility is currently under investigation.

3.19.3 Zone 3

BPC_4: Since the initial Intel Harpertown blade will not offer an RTM interface, it will not deploy any zone 3 connectors.
4 Main Software Functional Elements

The purpose of this section is to highlight the main functional elements of the software components which will be running on the Intel Harpertown blade. More detailed descriptions of these and other software elements can be found in § 5.

4.1 Operating Systems

OS_1: The initial offering of the Intel Harpertown blade will be Red Hat Linux Advanced Server 5, 2.6.18 kernel. It should be understood that the intent is for the Intel Harpertown blade to support a variety of operating systems later on, as customer needs become more clear. As such, the board should be designed to be flexible enough to support additional operating systems. Without specific knowledge of which operating systems will be supported in the future, it is not possible to guarantee complete hardware compatibility. However, attempts should be made where possible in the design of the hardware to prevent obstacles to supporting other popular server-grade operating systems. Operating systems which will likely be targeted for distribution with the Intel Harpertown blade are:

• Red Hat Linux
• SuSe Linux

4.2 IPMI

The Intelligent Platform Management Interface (IPMI) firmware will be responsible for coordinating with the ATCA-compliant Shelf Management Controller (ShMC) through the Intelligent Platform Management Bus (IPMB), monitoring the health of the payload, and reporting to the ShMC any anomalies that could occur during the operation of the blade.

IPMI_1: The IPMI firmware will coordinate activities such as event logging, board power up, board shut down, and status reporting.

IPMI_2: The IPMI firmware will support, at a minimum, all of the required IPMI features as specified in PICMG 3.0 Rev 2.0 1.7.

IPMI_3: The IPMI firmware will be supplied by Pigeon Point Systems (PPS) in source form through their BMR-H8S-ATCA board management solution software offering. 
IPMI_4: The IPMI firmware will run on the Renesas IPMC as outlined in § 3.13.

IPMI_5: In addition to its IPMI responsibilities, the IPMI firmware will operate a continuous background loop. This loop will allow calls to periodic non-IPMI related routines written and managed by Solectron.

4.3 Open IPMI

OpenIPMI is a set of libraries and IPMI drivers that provides a higher-level abstraction of the IPMI functionality of the underlying IPMC. The latest source code of these open libraries and drivers (covered under the GNU Public License), which run on the operating system on the payload, will be downloaded from the official OpenIPMI web site.

OPMI_1: OpenIPMI will be ported onto the Intel Harpertown blade. These libraries and drivers will interact with the IPMI firmware that is running on the IPMC. OpenIPMI is used to access information such as E-Keying and sensor events, and to perform activities such as firmware upgrade.
OPMI_2: OpenIPMI will poll for IPMI events and publish them to the next software layer.

4.4 Watchdog Timer

The Intel Harpertown blade will incorporate a standard watchdog timer, managed by the IPMC, that will operate in accordance with at least the minimum watchdog requirements as specified in the references in § 1.7.

4.5 Payload Remote/Local Boot

BOOT_1: The Intel Harpertown blade will, at a minimum, be capable of booting remotely through the base channel system. The Preboot Execution Environment (PXE) will be utilized as the framework for managing the remote boot process.

BOOT_2: If a customer chooses to include the option of on-board SATA hard drive(s), the system will also be capable of booting locally.

BOOT_3: The boot server search order will be controllable through BIOS configuration.

4.6 Firmware Upgradeability

4.6.1 IPMC Firmware Upgrade

FWUG_1: The Intel Harpertown blade will support IPMC firmware upgradeability per HPM.1 (§ 1.7).

FWUG_2: The IPMC firmware will be contained within the Renesas internal flash memory.

FWUG_3: IPMC firmware upgrades shall be fail-safe; i.e., in the case of a failed or aborted firmware upgrade, the IPMC shall be capable of restoring the previous version of the IPMC firmware. Note that this does not apply to bootloader upgrades: since the architecture supports only a single copy of the bootloader in the Renesas boot sector, the bootloader is not redundant and hence cannot be made fail-safe.

FWUG_4: The firmware recovery mechanism shall not require any personnel to be on site.

FWUG_5: The IPMC firmware shall be upgradeable from the Shelf Manager via either of the IPMB busses.

FWUG_6: Upgrading the IPMC firmware from the payload shall be supported.

FWUG_7: In the event that both flash images become corrupted, an alternate means shall be supplied to allow for manual reflashing of the Renesas through its associated JTAG port.
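The fail-safe rule of FWUG_3 can be sketched as a dual-slot image selector. This is an illustrative model only, not the PPS BMR-H8S-ATCA implementation; all type and function names are hypothetical:

```c
#include <stdint.h>

/* Sketch of FWUG_3: the IPMC holds two firmware images, an upgrade writes
 * the inactive slot, and the active slot switches only after the new image
 * validates, so a failed or aborted upgrade leaves the previous firmware
 * bootable. */

typedef struct {
    uint8_t  valid;     /* image passed its integrity check */
    uint32_t version;
} fw_slot;

typedef struct {
    fw_slot slot[2];
    int     active;     /* slot the IPMC currently boots from */
} fw_bank;

/* 'ok' models whether the transfer and integrity check succeeded. */
void fw_upgrade(fw_bank *b, uint32_t new_version, int ok)
{
    int target = 1 - b->active;          /* never touch the running image */
    b->slot[target].version = new_version;
    b->slot[target].valid   = ok ? 1 : 0;
    if (ok)
        b->active = target;              /* switch only after validation */
    /* on failure the active slot is untouched: rollback is implicit */
}

uint32_t fw_running_version(const fw_bank *b)
{
    return b->slot[b->active].version;
}
```

Note how the bootloader exception in FWUG_3 falls out of this structure: with only one slot there is no inactive copy to write, so the scheme cannot apply.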
4.6.2 Payload BIOS Firmware Upgrade

FWUG_8: The Intel Harpertown blade will maintain redundant bootable images in the two payload boot flashes, to support failover when one copy of the boot image is corrupted.

FWUG_9: A utility for upgrading the payload BIOS shall be provided.

FWUG_10: Upgrading the bank other than the one which was used for booting shall be possible.

FWUG_11: Upgrading the bank used for booting shall not be possible. Note: this requirement is applicable only when the upgrade is performed using the utility.

FWUG_12: A provision to select the bank for the next boot shall be provided.

FWUG_13: The payload BIOS shall be upgradeable from the Linux OS running on the payload.
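The bank rules of FWUG_10 through FWUG_12 amount to two simple guards. The following is an illustrative sketch, not the AMI BIOS implementation; names are hypothetical:

```c
#include <stdint.h>

/* Sketch of the payload boot-bank rules: FWUG_10/FWUG_11 allow the upgrade
 * utility to reflash only the bank NOT used for the current boot, and
 * FWUG_12 lets the operator pick the bank for the next boot. */

typedef struct {
    int boot_bank;        /* bank the payload booted from (0 or 1) */
    int next_boot;        /* bank selected for the next boot */
    uint8_t image_ok[2];  /* per-bank image validity */
} bios_banks;

/* FWUG_11: refuse to touch the bank we booted from. */
int bios_bank_upgrade(bios_banks *b, int bank)
{
    if (bank == b->boot_bank)
        return -1;                /* upgrade of the boot bank is forbidden */
    b->image_ok[bank] = 1;        /* model a successful reflash */
    return 0;
}

/* FWUG_12: select which bank the next boot uses. */
int bios_select_next_boot(bios_banks *b, int bank)
{
    if (!b->image_ok[bank])
        return -1;                /* never point the next boot at a bad image */
    b->next_boot = bank;
    return 0;
}
```

Refusing to reflash the booted bank is what preserves the FWUG_8 redundancy guarantee: at least one known-good image always survives an upgrade attempt.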
4.7 OS-Controlled Hardware Power Management

The Intel Harpertown blade will conform to Advanced Configuration and Power Interface (ACPI) revision 3.0b (§ 1.7). This specification describes an industry-standard method which allows the operating system to control the processor energy consumption states.

4.8 Serial Over LAN (SOL)

SOL_1: The Serial Over LAN protocol as specified in IPMI 2.0 will be supported by the Intel Harpertown blade. The hardware implementation is not required to be as specified in IPMI 2.0.

SOL_2: In support of this requirement, the base channel Ethernet MAC/PHY chip will be the Intel 82571.

SOL_3: The SOL system will allow redirection of payload serial port data, including debug payload data, to a logical serial port in the 82571 through proper selection of the payload serial port address.

SOL_4: The default production destination for payload serial data will be the SOL port.

The DMTF (Distributed Management Task Force) NC-SI bus structure is not currently under consideration as an addendum to the design, as it would require significant modification of an RMII path from the payload to the IPMC, and the PPS IPMI firmware is not yet written to support it. Additionally, the DMTF committee that is authoring the specification has not yet sanctioned it in its final form.

It is not a requirement that the IPMC serial data be redirectable through the SOL system. However, it should be a design goal to make this happen if possible.

4.9 Virtualization Support

Note that at the time of this writing (Sept ’07), discussions were still ongoing pertaining to the virtualization requirements. As such, the information in this section was written with the most up-to-date understanding at that time.

VS_1: The Intel Harpertown blade will support, at a minimum, one virtual OS instance per processor core. This type of virtualization is referred to as OS virtualization.
It should be noted that use of the Harpertown processor, by definition, provides a basis for efficient and effective virtualization techniques. Optimizations designed by Intel into their recent XEON processors are categorized under the umbrella term “Virtualization Technology” (VT). Intel has further expanded these capabilities for the Harpertown, and refers to these as “Enhanced Virtualization Technology.” These Harpertown features are as follows:

• Memory-mapped TPR virtualization
• NMI-window exiting
• Advanced VM-exit information for INS and OUTS instructions
• WBINVD exiting, which addresses virtualization support for special bus agents

Designers will still seek guidance from virtualization software vendors and Intel to ensure that any design techniques external to the payload which need attention are taken care of appropriately.
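As background on how system software discovers the VT capabilities mentioned above: on Intel processors, CPUID leaf 1 reports VMX support in ECX bit 5. The sketch below shows only the bit decoding against a caller-supplied ECX value; a real check would execute the CPUID instruction itself and also examine the IA32_FEATURE_CONTROL MSR to confirm that the BIOS left VMX enabled:

```c
#include <stdint.h>

/* Decode the VMX capability bit from the ECX register value returned by
 * CPUID leaf 1. Hypervisors perform this check before attempting VMXON. */
static int cpu_has_vmx(uint32_t cpuid_leaf1_ecx)
{
    return (int)((cpuid_leaf1_ecx >> 5) & 1u);   /* ECX bit 5 = VMX */
}
```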
4.10 BIOS

The Intel Harpertown blade payload system will utilize a Basic Input Output System (BIOS) in the typical manner of any standard Intel-based computing system. The details of the BIOS requirements are listed in § 5. However, in general terms, the BIOS will be responsible for at least the following types of functions. Note that, unless otherwise specified, these functions and responsibilities refer to the payload processing system, not the IPMC system.

• First firmware to run at boot-up.
• Power-On Self-Test (POST).
• Boot firmware serial port console driver.
• Boot process interruption support.
• BIOS configuration through console.
• Configuration and management of the payload/IPMC LPC communication link.
• Boot progress monitoring and reporting to the IPMC.
• Establishment of the Serial Over LAN link for monitoring and/or control of the BIOS and boot-up process.
• Boot device searching and selection (remote boot, choice of local hard drive, possibly others).
• Management of the Preboot Execution Environment (PXE) remote boot process.
• Payload sensor monitoring and status reporting to the IPMC.

The Intel Harpertown blade will utilize a BIOS from American Megatrends, Inc. (AMI), customized and verified for the blade.
5 Detailed Software Requirements

5.1 General Description

This section describes the general factors affecting the software and its requirements.

5.1.1 System Perspective

Figure 13 below illustrates the main functional blocks of the system software and how it interfaces to other systems on the board and to the backplane.

The Intel Harpertown blade will function as a server blade in an AdvancedTCA industry standard chassis. In particular, the Intel Harpertown blade will utilize the Ethernet protocol across the backplane fabric. The ATCA specification allows for a variety of backplane topologies, ranging from the dual star and dual-dual star to the most comprehensive form, the complete mesh. The Intel Harpertown blade will be tailored to be a node in a dual star configuration, requiring only two backplane fabric interconnections, one to each of the two hubs. As such, it is not expected to work properly in a dual-dual star configuration or a full mesh configuration. It operates in either the fabric option-1 or the fabric option-9 channel configuration.
Figure 13 Software Architecture Block Diagram

5.1.1.1 General BIOS Functions

o Boot flash
o Field upgradeability
o DRAM auto-sizing
o Memory test
o ECC support
o Serial interface
o Boot options
o Network boot
o Serial console support
o IPMI support
o Watchdog support
o LED behavior
5.1.1.2 General Linux Functions

• Physical serial console support
• 82571EB Serial Over LAN support
• Intel 82571 Ethernet driver
• Intel 82598 XAUI Ethernet controller driver
• IPMI driver
• IPMI action handler
• HPM.1 IPMI firmware upgrade support
• BIOS upgrade
• Watchdog driver
• Driver for boot flash (ST Microelectronics M50FW016)
• Virtualization support

5.1.1.3 General IPMC Functions

o All firmware requirements as specified in § 1.7
o IPMI v1.5
o ATCA hot swap interfaces
o Front panel LED controls
o Payload power supply controls (multiple voltage levels)
o Control of E-Keying-governed fabric and base interfaces
o Persistence of the above controls across IPM Controller resets
o Dual redundant IPMB-0
o Thermal sensors (DS75S digital)
o Payload voltage monitoring
o All mandatory IPMI/PICMG 3.0 commands
o Payload alert notifications
o LPC payload interface (system interface)
o HPM.1 firmware upgrade through the IPMB, payload and debug serial interfaces
o Enhanced firmware configuration (firmware recovery)
o FRU inventory management
o Sensor Data Record (SDR) management
o Event generation for the various sensors

5.1.2 Assumptions on Availability of Various Development Tools/Components

• The Red Hat kernel and file system for the Harpertown XEON will be available with the required generic driver support.
• A Linux 2.6.x based driver for the Intel 82598 Ethernet controller.
• A working AMI BIOS which supports the Harpertown, ICH9/ICH9R and 5100 San Clemente chipset.
• IPMI firmware version 1.5.
• Intel Harpertown development kit.
• Pigeon Point development kit.
• Shelf manager, chassis and hub boards are available.

5.1.3 Dependencies and Risks

5.1.3.1 BIOS

• This project depends completely on the availability and capabilities of the AMI BIOS for the Harpertown, ICH9/ICH9R and San Clemente MCH chipsets.
5.1.3.2 Linux

• This project depends completely on the availability and capabilities of Red Hat Linux for the Harpertown, ICH9/ICH9R and San Clemente MCH chipsets.

5.1.3.3 IPMI

• Pigeon Point Systems (PPS) IPMI firmware version 1.5 is available for the Renesas H8S IPMC, with support for the KCS system interface and Serial Over LAN support for the 82571EB.

5.1.3.4 Required Development and Test Resources

• Intel development kit
• Pigeon Point IPMI development kit
• Chassis
• Shelf manager and 2 hub boards
• Network analyzer and test equipment (e.g., SmartBits)

5.1.4 Required Development Environment and Tools

5.1.4.1 For BIOS Development

• Windows host with the tool chain from AMI for BIOS development
• American Arium debugger

5.1.4.2 For Linux Development

• Red Hat 9.0 as the host development environment
• Tool chain and Linux kernel from Red Hat

5.1.4.3 For IPMI Firmware Development

• JTAG emulator/programmer
• Pigeon Point’s cross compiler for the H8S processor
• I2C debugger

5.2 External Interface Specifications

The user, hardware, software and communication interfaces are specified in this section.

5.2.1 User Interfaces

5.2.1.1 BIOS

The Intel Harpertown blade BIOS provides the standard user console interface for setting configuration parameters, and displays the initialization messages on the console. Both a physical serial port and Serial Over LAN will be provided.
5.2.1.2 Linux

Linux will provide a login prompt on the console, either the physical serial port or Serial Over LAN; this will be set during Linux boot. The login shell will provide a command line interface to run the user utilities existing in the root file system.

5.2.1.3 IPMI

IPMI provides a KCS system interface to the payload processor through the LPC bus, and a physical serial interface for debug. The final product will not have the physical serial port loaded on the board.

5.3 Functional Requirements

5.3.1 BIOS Functional Requirements

5.3.1.1 System Initialization Process

The MP (Multi-Processor) initialization protocol defines two classes of processors: the bootstrap processor (BSP) and the application processors (APs). Following a power-up or RESET of an MP system, the Intel Harpertown blade dynamically selects one of the processor cores on the system bus as the BSP. The remaining processor cores are designated as APs. For example, as shown in Figure 14 (System Initialization Block Diagram), core 0 of CPU0 will act as the BSP and the remaining 7 cores (logical processors) will act as APs.

The BSP executes the BIOS’s boot-strap code to configure the APIC environment and set up system-wide data structures. At the end of the boot-strap procedure, the BSP sets a processor counter to 1 and then broadcasts a SIPI (Startup IPI) message to all the APs in the system. After receiving the SIPI message, the APs start executing initialization code. The first action of the AP initialization code is to set up a race (among the APs) to a BIOS initialization semaphore. After each of the APs has gained access to the semaphore and executed the AP initialization code, the BSP establishes a count of the number of processors connected to the system bus, completes executing the BIOS boot-strap code, and then begins executing operating-system boot-strap and start-up code.
Figure 14 System Initialization Block Diagram
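The BSP/AP handshake described above can be modeled in ordinary user-space code. The sketch below uses pthreads purely as a stand-in for the real APIC/SIPI mechanics: a mutex plays the role of the BIOS initialization semaphore the APs race for, and the counter plays the role of the BSP's processor count. All names are illustrative:

```c
#include <pthread.h>

/* Pthreads model of the MP initialization handshake: the "SIPI broadcast"
 * releases the APs, each AP races to the single init semaphore, runs its
 * init section alone, and bumps the processor count the BSP reads after
 * all APs finish. */

#define NUM_APS 7   /* dual quad-core: 1 BSP + 7 APs */

static pthread_mutex_t init_sem = PTHREAD_MUTEX_INITIALIZER;
static int cpu_count;

static void *ap_entry(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&init_sem);   /* race to the BIOS init semaphore */
    cpu_count++;                     /* AP init section runs one at a time */
    pthread_mutex_unlock(&init_sem);
    return NULL;
}

/* BSP side: set the counter to 1, release the APs, wait, read the total. */
int run_mp_init(void)
{
    pthread_t aps[NUM_APS];
    cpu_count = 1;                                     /* BSP counts itself */
    for (int i = 0; i < NUM_APS; i++)
        pthread_create(&aps[i], NULL, ap_entry, NULL); /* the "SIPI" */
    for (int i = 0; i < NUM_APS; i++)
        pthread_join(aps[i], NULL);
    return cpu_count;                                  /* 8 in this model */
}
```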
5.3.1.2 BIOS Initialization of Processor and Chipset

The Intel Harpertown blade is an AdvancedTCA board based on the Intel XEON processor with the 5100 (San Clemente) Northbridge and ICH9/ICH9R Southbridge chipset. Most BIOS features supported by the standard AMI BIOS will be supported. The BIOS shall perform the low-level initialization required to load programs from the bootstrap device:

• Reset and ROM trap handler vectors
• CPU initialization
• L1 and L2 cache initialization
• Standard BIOS support for the chipset
• Memory controller initialization to support up to 48GB of DIMM memory on two channels with ECC data protection
• PCI Express bus configuration
• Console device initialization
• Serial Over LAN (SOL) feature initialization
• Support for PXE boot

5.3.1.3 BIOS Detailed Requirements

5.3.1.4 Overview

The Basic I/O System (BIOS) on the Intel Harpertown blade payload processor will be started as the first software after power-up. The BIOS will initialize the board and perform some basic functionality tests during POST. After the POST has completed without fatal errors, the BIOS will transfer control to the operating system. The following is a list of the BIOS functional requirements.

BIOS_R1 Field Upgradeability via Network
It shall be possible to upgrade the BIOS in the field via the network, using the integrated PXE option ROM.

BIOS_R2 Memory Detection
The San Clemente reads the SPD from the DIMMs. The BIOS may be used to direct the San Clemente in doing so. The BIOS automatically configures the chipset accordingly. No setup items shall be provided for configuring the DRAM interface.

BIOS_R3 Memory Test During POST
The BIOS shall perform simple memory tests during POST. In order to increase the boot-up speed, a setup item shall be provided to limit the amount of tested memory. The possible settings will be “64MB” and “All”. The default value will be “64MB”.
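The kind of simple pattern test BIOS_R3 calls for can be sketched as follows. A real POST memory test runs from ROM before the OS exists and walks physical addresses with caches disabled; here an ordinary buffer stands in, and the function name is illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Write a pattern and its complement through the region and verify the
 * read-back, catching stuck-at bits. Returns 0 on pass, -1 on failure. */
int post_memtest(volatile uint32_t *base, size_t words)
{
    const uint32_t pat = 0xAAAA5555u;
    for (size_t i = 0; i < words; i++) {
        base[i] = pat;
        if (base[i] != pat)
            return -1;          /* stuck bit: fail the region */
        base[i] = ~pat;
        if (base[i] != ~pat)
            return -1;
    }
    return 0;                   /* region passed */
}
```

The “64MB”/“All” setup item of BIOS_R3 would simply bound the `words` argument passed to a routine like this.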
BIOS_R4 ECC Support
The BIOS shall always enable ECC support in the chipset and configure it to generate an NMI only for non-correctable ECC errors. No setup item will be provided for disabling this feature.

BIOS_R5 Plug and Play
The BIOS shall support the following specifications:

• Plug and Play BIOS Specification, Version 1.0A
• PCI BIOS Specification, Revision 2.1
• Plug and Play ISA Specification, Version 1.0A

During POST, the BIOS shall identify all PCI Express devices with a header type 1 in the system and initialize them according to their resource requirements in a conflict-free manner.
BIOS_R6 Intel 82571EB Serial Over LAN (SOL) Support
The BIOS shall support the Serial Over LAN feature of the 82571EB. SOL is a mechanism by which the BIOS sends text-based console redirection to a virtual serial port on the 82571 GbE controller. The GbE controller takes this ASCII text data, packages it up in an RMCP+ packet as per the IPMI 2.0 specification, and sends it to a remote console. The console in turn decodes this data and displays the text. The console can also send keystrokes back to the GbE controller, which are in turn sent to the BIOS. SOL is generally used by a remote console to view BIOS Power-On Self-Test (POST) information and to change the BIOS setup if necessary. The GbE controller is capable of supporting SOL with the optional authentication and encryption defined within the IPMI 2.0 specification.

BIOS_R7 Boot Firmware Console Driver
The firmware shall support an RS232 physical serial port and the 82571EB’s logical serial port as a system console. Serial Over LAN (SOL) for the 82571EB shall be supported by the BIOS. The Intel Harpertown blade features two serial ports, one a physical serial port and the other a logical serial port provided by the 82571EB Ethernet controller; these can be configured via setup. The following describes the possible settings and default values for this port:

• Onboard serial port 1
  Possible settings: the standard baud rates 9600, 19200, 38400 and 115200 shall be supported.
  Default value: 9600 baud, 8 bits, no parity.

BIOS_R8 Default Boot Options
• PXE boot: The PXE protocol is an industry standard used for network-enabled booting of Intel-based boards. It operates as follows: the client initiates the protocol by broadcasting a DHCPDISCOVER containing an extension that identifies the request as coming from a client that implements the PXE protocol.
Assuming that a DHCP server or a proxy DHCP server implementing this extended protocol is available, after several intermediate steps the server sends the client a list of appropriate boot servers. The client then discovers a boot server of the type selected and receives the name of an executable file on the chosen boot server. The client uses TFTP to download the executable (the Linux image) from the boot server. Finally, the client initiates execution of the downloaded image. At this point, the client’s state must meet certain requirements that provide a predictable execution environment for the image. Important aspects of this environment include the availability of certain areas of the client’s main memory, and the availability of basic network I/O services.

The BIOS shall search for a boot image via DHCP, BOOTP and TFTP, accessed via the Ethernet base interface.

• The BIOS shall be able to send the PXE client messages for initiation, discovery, boot service request and NBP download.
• If no boot image is found, an error shall be reported and the BIOS shall keep trying indefinitely for the boot image, using the configured boot option sequence.

BIOS_R9 Boot Firmware Progress Codes
The BIOS of the Intel Harpertown blade shall indicate the boot progress to the IPMC through an OEM command, allowing the Firmware Progress sensor to record the state. The Firmware Progress sensor is a software sensor implemented in the IPMC. Note: the BIOS shall report to the IPMC any failures that occur during the boot progress.

BIOS_R10 Boot Bank Selection
The payload processor shall boot from the flash bank selected by the IPMC.

BIOS_R11 IPMI 1.5 Watchdog Command Support
The BIOS shall support the watchdog commands as specified in § 1.7.

BIOS_R12 Linux OS Image Source
The BIOS shall be able to load the Linux OS image from:
• Ethernet Link 0 (Base-0)
• Ethernet Link 1 (Base-1)

Note: in an ATCA environment, the default will be Ethernet link 0, which shall be base interface 0.

BIOS_R13 BIOS DHCP Support
The Intel Harpertown blade shall be able to boot using DHCP for address assignment, accessed via either of the base channel Ethernet links.

BIOS_R14 BIOS DHCP Retries
o The BIOS shall inform the IPMC about a DHCP failure only after every set of maximum retries is completed. This is to avoid filling the IPMC event log with too many messages.
o The BIOS shall learn the DHCP retry count from the IPMC using an OEM command.
o A DHCP retry count of 10 shall be taken as the default by the BIOS if it cannot establish communication with the IPMC.

BIOS_R15 System Management Interface Driver
The BIOS shall implement the IPMI-specification-defined KCS style of system management interface driver to establish communication with the IPMC. The KCS system runs over the LPC bus between the IPMC and the payload.

BIOS_R16 IPMC Command Retries
The BIOS shall retry sending commands to the IPMC in case it cannot establish communication with the IPMC. At most 10 retries are allowed.

BIOS_R17 Failure of IPMC Communication
If the BIOS cannot establish communication with the IPMC, or if the BIOS receives no response from the IPMC after all retries, the BIOS shall display a relevant error on the console and shall keep trying to establish IPMC communication.

BIOS_R18 Boot Process Interruption Support
The firmware shall provide a method to halt the startup process from either the physical or the SOL console. Upon interruption, the user will be presented with a standard BIOS configuration set of menus/screens.

BIOS_R19 BIOS Configuration
The boot firmware shall provide a user interface on the system console to modify and save configurable firmware parameters in the BIOS flash.

BIOS_R20 Storage of BIOS Configuration Data
BIOS configuration data shall be stored in both the primary and secondary boot flash.
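The throttling rule of BIOS_R14 (report to the IPMC once per completed retry set, never per attempt) can be sketched as a small counter. The structure and names below are illustrative, not AMI BIOS code:

```c
/* Count DHCP failures and signal an IPMC event only when a full retry set
 * completes, so the event log is not flooded (BIOS_R14). */
typedef struct {
    int failures;      /* consecutive DHCP failures so far */
    int max_retries;   /* learned from the IPMC; default 10 per BIOS_R14 */
    int events_sent;   /* IPMC notifications issued */
} dhcp_retry_state;

/* Record one failed DHCP attempt; returns 1 when the BIOS should send a
 * failure event to the IPMC. */
int dhcp_attempt_failed(dhcp_retry_state *s)
{
    s->failures++;
    if (s->failures % s->max_retries == 0) {
        s->events_sent++;      /* one event per completed retry set */
        return 1;
    }
    return 0;
}
```

The same pattern serves BIOS_R22, where boot device detection events are likewise reported only after a configurable number of retries.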
BIOS_R21 Detection of Boot Device
In the event of a boot device detection failure, the BIOS shall loop in the POST until it finds a boot device.

BIOS_R22 Boot Failure/Success Event Message
The BIOS shall send a boot device detection failure or success event message to the IPMC only once, to avoid filling the event log with too many messages. The event messages shall be sent after 10 retries; this number of retries shall be configurable at build time.

BIOS_R23 Boot Redundancy and Failover
The Intel Harpertown blade shall have both primary and secondary bootable banks, to support failover when one copy of the boot image is corrupted.

BIOS_R24 Banks Individually Programmable
Each bootable flash bank shall be individually programmable, without risk to the other bank.

BIOS_R25 The temperature at which Thermal Monitor (TM1) activates the thermal control circuit shall be enabled by the BIOS during initialization.
5.3.2 Linux Functional Requirements

This section describes the functional requirements for Linux.

5.3.3 General Linux Requirements

The Linux-kernel-specific general requirements are as follows.

LINUX_R1 Linux Bring-up
Linux will be brought up, which will include the following activities:

o CPU initialization
o Cache initialization
o MMU initialization
o Interrupt handling
o PCI scanning
o Early debug message support

LINUX_R2 Console Support
Support shall be provided for the physical serial port and for the 82571EB Serial Over LAN (SOL) logical serial port, compliant with 16450/16550 controllers. The default baud rate for the physical serial port will be 9600 bps. The following console ports are supported:

• Physical serial port: the serial port of the Super I/O is used as the console port; this shall be configured during Linux booting by passing boot parameters.
• Serial Over LAN: achieved using the pass-through capability of the Intel 82571EB Ethernet controller. Pass-through mode is a set of instructions that enables an external Baseboard Management Controller (BMC) to communicate with the GbE controller using a TCO port. The TCO port supports both SMBus and I2C commands to pass traffic to and from an external BMC. Linux sends text-based console redirection to a virtual serial port (16550 compatible) on the GbE controller. In pass-through mode, session establishment is handled by the IPMC, while the rest of the SOL protocol is handled by the NIC. The Intel Harpertown blade only supports pass-through mode, i.e. the entire session flow is handled by the NIC. In this mode, the role of the IPMC is to provide the Intel 82571 GbE with the necessary configuration parameters over the Total Cost of Ownership (TCO) interface of the Intel 82571 GbE.

LINUX_R3 48GB DDR SDRAM Support
Support for up to 48GB of DDR SDRAM will be provided through HIGHMEM support under Linux.
LINUX_R4 Dual Base Channel Ethernet (Intel 82571EB) Support
Driver support for the on-board 10/100/1000 Mbps Ethernet controller shall be provided. The interface shall work at 10, 100 or 1000 Mbps according to the E-Keying events from the IPMC.

LINUX_R5 Fabric Ethernet (Intel 82598) Support
Driver support for the onboard Intel 82598 Ethernet controller shall be fully functional. The interface shall work at 1 Gbps or 10 Gbps according to the E-Keying event: it shall be able to work at 10 Gbps in the option-9 and at 1 Gbps in the option-1 configuration of the chassis.

LINUX_R6 RTC Driver Support
The ICH9/ICH9R RTC driver shall allow the user to get and set the time. The time shall be synchronized with the time set by the BIOS. Time modified by the OS shall be reflected throughout the system immediately, as well as in the BIOS on the next boot.

LINUX_R7 IPMI SMI (System Management Interface) Driver Support
The IPMI driver support shall provide access to the IPMI firmware routines. The user shall be able to configure the required board-specific data. The user shall also be able to monitor the events generated by devices such as the temperature and voltage sensors, using the IPMI KCS interface.

LINUX_R8 OpenIPMI Support
OpenIPMI shall be supported, to communicate with IPMI devices.

LINUX_R9 BIOS Upgrade Support from Linux
Linux shall provide support for upgrading the BIOS, using the MTD (Memory Technology Device) driver for the boot flash.

LINUX_R10 Watchdog Driver
A driver for the on-chip watchdog timer of the Intel XEON will be provided. However, the watchdog facility shall be disabled by default.

LINUX_R11 Support for Payload Boot Flash
Flash driver support for the boot flash will be provided. The driver will support read, write and erase operations on blocks of the flash.

LINUX_R12 IPMI Firmware Upgrade Support
A command line API shall be provided to upgrade the IPMC firmware (boot code and IPMI firmware) from the Linux shell prompt.

LINUX_R13 IPMI 1.5 Compliance Command Support from the Linux Shell Prompt
Command line access shall be given to issue IPMI commands to the IPMC from Linux. A minimal set of IPMI compliance commands shall be supported.

LINUX_R14 IPMI Action Handler
The IPMI action handler shall be supported. It shall take action based on the IPMI events/messages posted by the IPMC through the system management interface.

LINUX_R15 IPMI Action Handler Sensor Update
The IPMI action handler shall periodically update to the IPMC the payload sensor values that are not directly accessible to the IPMC.
The IPMI action handler is a piece of software, hooked to the OpenIPMI library, that runs on the payload. This handler uses the OpenIPMI libraries to communicate with the underlying IPMC. The function of the IPMI action handler is to take action on the various IPMI events learned through the IPMC. Chiefly, this action handler realizes the “software E-Keying” implemented by the Harpertown ATCA blade. Table 9 details the events and actions handled by the IPMI action handler.

Sl. No  Event                            Action
1       Fabric Interface E-Keying Event  Enable/disable the given channel
2       Base Interface E-Keying Event    Enable/disable the given channel
3       Graceful reboot                  Shut down the OS and reboot
4       FRU Deactivation request         Shut down

Table 9 IPMI Action Handler Actions Per Event

LINUX_R16 Payload Soft Reset or Halt Information to IPMC
Linux shall inform the IPMC, using an OEM command, whenever a payload reboot, shutdown or halt is initiated.

LINUX_R17 Linux ACPI Support
Linux shall support ACPI (Advanced Configuration and Power Interface) power management.
ACPI puts the OS in control of system configuration and power management. Further, it acts as a hardware abstraction layer between the OS and the platform BIOS, allowing the OS and the platform to evolve independently. See the kernel documentation for further information (http://lxr.linux.no/source/Documentation/pm.txt).

5.4 IPMI Firmware Requirements

IPMI_R1 PICMG 3.0 Compliance
The IPMI firmware design shall conform to the functional requirements of the PICMG® 3.0 Revision 2.0 AdvancedTCA Base Specification, including ECN-001 and ECN-002 (§ 1.7).

IPMI_R2 IPMI Mandatory Command Set Compliance
The Intel Harpertown blade shall be compliant with the mandatory IPMI command set as required by § 1.7.

IPMI_R3 IPMI Optional Command Set Compliance
The Intel Harpertown blade shall be compliant with the following optional IPMI command set as described in § 1.7:

o Cold Reset
o All watchdog commands
o Master Write-Read
o Get Sensor Reading Factors
o Get Sensor Threshold
o Get Sensor Event Enable
o Get Sensor Type
o Get Shelf Address Info

IPMI_R4 PICMG OEM Command Compliance
The Intel Harpertown blade shall be compliant with the mandatory PICMG OEM command set as detailed in § 1.7.

IPMI_R5 IPMC Command Processing
The IPMC firmware on the Intel Harpertown blade shall process only one command at any given time. The IPMC shall be responsible for serializing the command requests sent to it through either the system interface or the IPMB channel.

IPMI_R6 Events for Threshold-Based Sensors
The IPMI firmware shall enable the following events for all threshold-based sensors:

o Assertion event for upper non-recoverable going high
o Assertion event for upper critical going high
o Assertion event for upper critical going low
o Assertion event for upper non-critical going high

The IPMI firmware shall provide a mechanism to alter the masking.
Masking is a field in the SDR (Sensor Data Record). This field reports the assertion event generation or threshold event generation capabilities for a discrete or threshold-based sensor, respectively.

IPMI_R7 Threshold Value Accessibility
The IPMI-Firmware shall return the following threshold comparisons whenever requested:
 Upper non-recoverable threshold comparison
 Upper critical threshold comparison
 Upper non-critical threshold comparison

IPMI_R8 Control of Payload Power
The IPMI-Firmware on the Intel Harpertown blade shall control (enable/disable) the payload power in accordance with the 1.7 power requirements.

IPMI_R9 IPMI Payload Power Sensor
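The threshold-event behavior required by IPMI_R6 and IPMI_R7 amounts to comparing a sensor reading against its upper thresholds. A minimal Python sketch with hypothetical names (real firmware would also honor the SDR event masks described above):

```python
# Illustrative sketch of the threshold-event logic behind IPMI_R6:
# given a sensor reading and its upper thresholds, report which
# "going high" assertion events would fire. Names and structure are
# hypothetical; real firmware also applies the SDR event masks.
UPPER_THRESHOLDS = ("non_critical", "critical", "non_recoverable")

def assertions(reading, thresholds):
    """Return the set of upper 'going high' events asserted by `reading`."""
    fired = set()
    for name in UPPER_THRESHOLDS:
        limit = thresholds.get(name)
        if limit is not None and reading >= limit:
            fired.add(f"upper_{name}_going_high")
    return fired
```

For example, a temperature reading between the upper critical and upper non-recoverable thresholds asserts the non-critical and critical events but not the non-recoverable one.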
The IPMI-Firmware shall be able to detect the status of the following critical payload power supplies:
• VCCP_PGOOD
• V1P5_PGOOD
• V1P8_PGOOD

IPMI_R10 Control of Payload Reset
The IPMI-Firmware shall be able to control the reset logic of the payload.

IPMI_R11 IPMI Payload Reset Sensor
The IPMI-Firmware on the Intel Harpertown blade shall be able to sense and monitor when the payload resets.

IPMI_R12 IPMI Air Temperature Sensors
The IPMI-Firmware on the Intel Harpertown blade shall sense and monitor inlet and outlet air temperature sensors.

IPMI_R13 IPMI Critical Device Temperature Sensors
The IPMI-Firmware on the Intel Harpertown blade shall sense and monitor temperature sensors that are coupled to devices requiring thermal protection.

IPMI_R14 IPMI Voltage and Current Sensors
The IPMI-Firmware on the Intel Harpertown blade shall sense and monitor voltage and current sensors.

IPMI_R15 Boot Bank Selection and OEM Command Support
The IPMI-Firmware on the Intel Harpertown blade shall select one of two copies of boot firmware to use for initial execution of the payload processor. The boot bank selection shall be controllable by OEM command. It is also controlled automatically: if boot fails out of one boot bank, the IPMC will switch over to the other boot bank.

IPMI_R16 System Management Interface
The IPMI-Firmware shall have a driver to support the KCS SMI interface.

IPMI_R17 E-Keying Support
The IPMI-Firmware for the Intel Harpertown blade shall provide E-Keying support for the following interfaces:
o Fabric Interface
o Option-1 mode
o Option-9 mode
o Base Interface

IPMI_R18 E-Keying Information for Payload
The IPMI-Firmware for the Intel Harpertown blade shall provide E-Keying information for the supported interfaces to the payload.

IPMI_R19 E-Keying Link Types
The IPMI-Firmware shall support the following link types only and provide the required E-Keying as defined by 1.7:
 Base Interface 10/100/1000BASE-T
 PICMG 3.1 Ethernet Fabric Interface

IPMI_R20 Ethernet Fabric Interface Link Type Extensions
The IPMI-Firmware shall support the following Link Type Extensions for the Ethernet fabric interface and provide the required E-Keying as defined in PICMG 3.1 vD1.0:
 Fixed 1000BASE-BX
 Fixed 10GBASE-BX4 [XAUI]

IPMI_R21 Default State of the Payload Control Signals
The IPMC-Firmware shall configure the payload control signals that are connected to and driven by the IPMC to the default states given below when the IPMC is powered on:
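The E-Keying records behind IPMI_R19 and IPMI_R20 are carried as PICMG 3.0 Link Descriptors. As a rough illustration of how such a descriptor packs into its 32-bit form, here is a Python sketch; the bit positions and link-type codes follow our reading of the PICMG 3.0 Board Point-to-Point Connectivity record and should be verified against the specification before use.

```python
# Sketch of packing a PICMG 3.0 E-Keying Link Descriptor into its
# 32-bit form. Bit layout per our reading of the spec (verify before
# relying on this): [31:24] link grouping ID, [23:20] link type
# extension, [19:12] link type, [11:8] port flags, [7:6] interface,
# [5:0] channel number.
LINK_TYPE_BASE    = 0x01  # PICMG 3.0 Base Interface 10/100/1000BASE-T
LINK_TYPE_PICMG31 = 0x02  # PICMG 3.1 Ethernet fabric interface

def pack_link_descriptor(channel, interface, ports, link_type, ext=0, group=0):
    """Assemble the 32-bit link descriptor word from its fields."""
    assert 0 <= channel < 64 and 0 <= interface < 4 and 0 <= ports < 16
    return ((group << 24) | (ext << 20) | (link_type << 12)
            | (ports << 8) | (interface << 6) | channel)
```

For example, channel 1 on the base interface with only port 0 enabled packs to 0x00001101.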
 PLOAD_RST#
 PLOAD_PEN
 PLOAD_PGOOD
 BOOT_SEL
 LATCH_EN

IPMI_R22 Signal Status Persistence Through IPMC Cold Reset
The Cold Reset command is intended to reset an IPM Controller and cause it to load default settings for interrupt enables, event message generation, sensor scanning, threshold values, and other default states. The following control signals that are connected from the IPMC to the payload shall be persistent through an IPMC Cold Reset:
 PLOAD_RST#
 PLOAD_PEN
 PLOAD_PGOOD
 BOOT_SEL
 BLUE LED, LED1, LED2 and LED3

IPMI_R23 Signal Status Persistence Through IPMC External Reset
An external reset of the IPMC occurs if its RES# pin is grounded. The IPMC firmware detects the external reset condition by reading the XRST bit of the SYSCR register. This source causes a hard reset of the IPMC. The following control signals that are connected from the IPMC to the payload shall be persistent through an external reset of the IPMC:
 PLOAD_RST#
 PLOAD_PEN
 PLOAD_PGOOD
 BOOT_SEL

IPMI_R24 Data Persistence Through IPMC Cold Reset
The configuration parameters for the BMC watchdog, the configuration parameters for the payload, sensor threshold values, and sensor attributes shall be persistent through an IPMC Cold Reset.

IPMI_R25 Data Persistence Through IPMC External Reset
The IPMI-Firmware shall make the SDRs and the configuration parameters for the payload persistent through an external reset.

IPMI_R26 Payload Operation During IPMC Reset
Payload operation shall not be affected by a reset of the IPMC initiated by the external reset signal or an internal soft reset. A failure of the IPMC from any cause other than management power failure shall not affect the operation of the payload.
IPMI_R27 KCS Interface
The IPMI-Firmware shall be in compliance with the IPMI Specification v1.5 for the KCS interface, including the Additional Specifications for the KCS Interface in section 9.14 of the IPMI 1.5 specification.

IPMI_R28 IPMC Watchdog
The IPMC shall strobe the external watchdog periodically to avoid an IPMC reset.

IPMI_R29 BMC Watchdog
The IPMI-Firmware shall implement the BMC watchdog as specified in the IPMI 1.5 specification.

IPMI_R30 Payload Firmware Recovery
The IPMC-Firmware shall switch the boot bank in case of a payload firmware progress failure. Note: During the boot bank switch, the IPMC shall hold the payload in reset and shall release it once the switch is complete.

IPMI_R31 IPMI Firmware Progress Sensor
The IPMC-Firmware shall implement a Firmware Progress Sensor to indicate the progress of the Payload Boot Firmware (BIOS).
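The BMC watchdog of IPMI_R29 is configured with the IPMI Set Watchdog Timer command (NetFn App 0x06, command 24h). Below is a sketch of building the six-byte request per our reading of the IPMI 1.5 specification; the timer-use and action encodings should be re-verified against the spec before use.

```python
# Sketch of building an IPMI v1.5 Set Watchdog Timer request body
# (NetFn App 0x06, command 24h). Field layout follows our reading of
# the IPMI 1.5 spec; treat the exact encodings as assumptions to
# re-verify there.
TIMER_USE_OS_LOAD = 0x03   # "OS Load" timer use
ACTION_HARD_RESET = 0x01   # timeout action: hard reset

def set_watchdog_request(timeout_s, use=TIMER_USE_OS_LOAD,
                         action=ACTION_HARD_RESET, pretimeout_s=0):
    ticks = timeout_s * 10          # initial countdown is in 100 ms units
    return bytes([
        use,                        # timer use
        action,                     # timer actions (timeout action in bits [2:0])
        pretimeout_s,               # pre-timeout interval, seconds
        0x00,                       # timer-use expiration flags to clear
        ticks & 0xFF,               # initial countdown, LSB
        (ticks >> 8) & 0xFF,        # initial countdown, MSB
    ])
```

A 30-second OS-load watchdog with a hard-reset action would thus encode a countdown of 300 ticks (0x012C).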
IPMI_R32 IPMI OS Boot Sensor
The IPMI-Firmware shall implement an OS Boot Sensor to indicate the start of the Operating System boot.

IPMI_R33 IPMC SDR Support
The IPMI-Firmware on the Intel Harpertown blade shall provide Sensor Data Records (SDR) of Type 01h (Full Sensor Record), as defined in the IPMI 1.5 specification, for all sensors it manages.

IPMI_R34 Payload Boot Configuration Parameter Storage
The IPMC on the Intel Harpertown blade shall provide storage and retrieval of an OEM FRU record containing boot configuration parameters, viz. boot bank selection, DHCP retry count, and boot retries. An OEM command shall be supported to access these parameters.

IPMI_R35 IPMI PICMG 3.0 OEM FRU Records
The IPMI subsystem shall contain FRU records for E-Keying and Power Budget to support inventory. FRU records shall be in compliance with the Platform Management FRU Information Storage Definition v1.0 specification and 1.7.

IPMI_R36 IPMI Standard FRU Records
The IPMI subsystem shall contain FRU records for the Common Header, Board Info Area, Product Info Area, and MultiRecord Info Area. FRU records shall be in compliance with the Platform Management FRU Information Storage Definition v1.0 specification.

IPMI_R37 HPM.1 IPMC Firmware Upgrade
The IPMI-Firmware on the Intel Harpertown blade shall be upgradeable over the System Management Interface and IPMB as defined in the PICMG HPM.1 R1.0 specification. A backup of the current image shall be made when a firmware upgrade is initiated.

IPMI_R38 IPMC Recovery from Upgrade
The IPMC-Firmware on the Intel Harpertown blade shall have a recovery mechanism for firmware upgrade failure using two redundant images as defined in the PICMG HPM.1 R1.0 specification. If the upgrade fails, the backup image shall be used to restore IPMC functionality.
IPMI_R39 Firmware Upgrade Error Handling
The IPMC-Firmware shall support the required level of firmware upgrade error handling as defined in the HPM.1 specification.

IPMI_R40 IPMC Sensor Event Reporting
The IPMI-Firmware shall report events to the Event Receiver (Shelf Manager) for all IPMI standard and OEM sensors it implements.

IPMI_R41 IPMI Support for LEDs
The IPMI-Firmware shall support controlling the following LEDs:
 Hot Swap LED – as defined by 1.7
 LED1 – Out-Of-Service LED
 LED2 – TBD
 LED3 – TBD
 Application-specific LEDn – TBD
Number of application-specific LEDs – TBD

IPMI_R42 Lamp Test Support
The IPMI-Firmware shall support Lamp Test Control for all the LEDs it controls.

IPMI_R43 Front Plate LED Colors
The IPMI-Firmware shall maintain the LED attributes as defined by 1.7, and the colors of the hot swap LED, OOS LED, and application-specific LEDs shall be as given below:
 HOT SWAP LED – BLUE
 LED1 (OOS LED) – RED/AMBER
 LED2 – GREEN
 LED3 – AMBER
 APPLICATION LEDn – TBD

IPMI_R44 Autonomous Thermal Protection
The IPMC-Firmware shall thermally protect the blade from damage independent of any external management control.

IPMI_R46 IPMC POST
The IPMC on the Intel Harpertown blade shall execute POST for the following interfaces:
o Components connected on the Private Management Bus. The Private Management Bus is defined by the IPMI specification as the I2C bus to which the sensors and the payload EEPROM are connected. These are completely managed by the IPMC.
o Sensors

IPMI_R47 Sensor Support
The IPMC on the Intel Harpertown blade shall support all sensors specified by the PICMG 3.0 R2.0 standard.

IPMI_R48 Serial Over LAN Support
Serial Over LAN is achieved using the pass-through capability of the Intel 82571EB Ethernet controller. Pass-through mode is a set of instructions that enables an IPMC to communicate with GbE controllers using a TCO port. The TCO port supports both SMBus and I2C commands to pass traffic to and from an IPMC. The Intel Harpertown ATCA blade only supports pass-through mode, i.e., the entire session flow is handled by the NIC. In this mode, the role of the IPMC is to provide the Intel 82571 GbE controller with the necessary configuration parameters over its Total Cost of Ownership (TCO) interface.

5.5 General ATCA Specification Software Conformance Requirements

GEN_ATCA_R1 Governing Specification: PICMG® 3.0 Revision 2.0 AdvancedTCA Base Specification Including ECN-001 and ECN-002 1.7
The Intel Harpertown blade software shall comply with 1.7.

GEN_ATCA_R2 Governing Specification: PICMG 3.1
The Intel Harpertown blade shall comply with 1.7, Options 1 and 9.

GEN_ATCA_R3 Base Interface
The Intel Harpertown blade, containing a Base Interface, shall comply with the requirements in 1.7.
GEN_ATCA_R5 IPMB Bus Clock Speed
The IPMC on the Intel Harpertown blade shall be capable of receiving commands simultaneously from both IPMBs running at a minimum of 100 kHz.

GEN_ATCA_R6 Independent Operation on IPMB_A and IPMB_B
Reception of commands on the system management interface on the Intel Harpertown blade shall also be independent of activity on IPMB_A or IPMB_B (transmitting or receiving).

GEN_ATCA_R7 IPMBs Independent of Local Activity
Receiving commands on the IPMBs on the Intel Harpertown blade shall not be dependent on any local control/status activity. The IPMC shall not restrict reception of IPMB commands from the Shelf Manager, and shall process IPMB requests regardless of any local activity being performed by the IPMC (for example, responding to a payload request or a debug request).

GEN_ATCA_R8 IPMB Bus Timing
Timing of command handling and responses on the Intel Harpertown blade shall be according to the IPMI bus specifications.

GEN_ATCA_R9 IPMB Command Response Latency
The total response latency of IPMI commands on the Intel Harpertown blade shall be less than 50 milliseconds with no other traffic present.

GEN_ATCA_R10 IPMB Command Throughput
The IPMI subsystem on the Intel Harpertown blade shall support 10 commands per second under worst-case operating conditions.

GEN_ATCA_R12 IPMC Security
In order to minimize the vulnerability of the IPMC firmware, all configuration of and communication with this firmware shall be via the IPMB and IPMI protocols. Note: The IPMC shall have a debug serial port through which the user will have access to the IPMC console.

GEN_ATCA_R13 IPMI Action Handler
The IPMI Action Handler shall be supported. It shall take action based on the IPMI events/messages posted by the IPMC through the System Management Interface.

5.6 Deliverables

The software deliverables for the project will be the following:
• RPM containing the source for the Linux kernel and the default config file
• RPM containing the Linux kernel image (zImage) and System.map file
• IPMC Firmware Image
• BIOS Image
• Linux User Manual
• IPMI User Manual
• BIOS User Manual
• Release Notes
6 Performance Requirements

The design intent for the Intel Harpertown server blade is to achieve best-in-class performance/watt and performance/cost. System test must run typical standard suites of performance benchmarking software to validate integer, floating point, database, web-serving, and other typical compute server application performance. At the time of the writing of this document, it was not yet possible to detail exactly which benchmarks would be used to qualify the performance of the blade. However, the following list comprises a group of commonly used benchmarks and testing facilities which would be suitable to select from. Other tests and benchmarks could be utilized as well.

SPEC Standard Performance Evaluation Corporation, www.spec.org. SPEC publishes many of its members' test results each year, so it would be relatively easy to compare our performance to that of the rest of the industry.
• CFP2000 Floating point performance
• CINT2000 Integer performance
• High Performance Computing Enterprise applications
• OpenMP Open Multi-Processing
• MPI Message Passing Interface
• Java Client/Server jAppServer2004, JBB2005, JVM98 (Java Virtual Machine)
• MAIL2001 SMTP/POP3 performance monitoring
• SFS97 Network file serving using NFS
• WEB2005 Web serving performance testing using HTTP and HTTPS. Also WEB99_SSL, which uses the SSL protocol.

TPC Transaction Processing Performance Council, www.tpc.org. Could be quite expensive to run. Top 10 results for each of the TPC benchmarks may be found at www.tpc.org/information/results.asp.
• TPC-App Tests web services. Uses common applications such as Exchange, Domino, SQL Server, and Oracle.
• TPC-C Tests large-scale online transaction processing (OLTP).
• TPC-H Tests database access, queries, and modification submission.
• TPC-W Tests performance of web-based transaction loading, while generating data-driven dynamic web pages.
Mindcraft Spin-off of SPEC. www.mindcraft.com. Independent testing laboratory.
• DirectoryMark Tests performance of Windows Active Directory running the Lightweight Directory Access Protocol (LDAPv3).
• AuthMark Tests a system's performance in authenticating login access to web-based products.
• iLOAD MVP A tool for creating system loading during benchmark testing.

WebStone Tests a system's ability to perform web serving.
7 System Address Map

Figure 15, excerpted from 1.7 § 4.0, shows the detailed system address map mandated by San Clemente for the entire Cranberry Lake Intel platform. Far more detailed information pertaining to the system address map is directly available through 1.7 § 4.0 and its associated subsections. These additional items include the following: System Memory Address Range, 32/64-bit Addressing, Compatibility Area, MS-DOS Area, Legacy VGA Ranges, Expansion Card BIOS Area, Lower System BIOS Area, Upper System BIOS Area, System Memory Area, 15 MB - 16 MB Window (ISA Hole), Extended SRAM Space (TSEG), Memory Mapped Configuration (MMCFG) Region, Low Memory Mapped I/O (MMIO), Chipset Specific Range, Interrupt/SMM Region, I/O APIC Controller Range, High SMM Range, Interrupt Range, Reserved Ranges, Firmware Range, High Extended Memory, System Memory, High MMIO, CB_BAR MMIO, Extended Memory, Main Memory Region, Application of Coherency Protocol, Routing Memory Requests, Memory Address Disposition, Registers Used for Address Routing, Address Disposition for Processor, Access to SMM Space (Processor Only), Inbound Transactions, I/O Address Map, Special I/O Addresses, Outbound I/O Access, Inbound I/O Access, and Configuration Space.
Figure 15 San Clemente System Address Map
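Among the regions enumerated in the address map above is the Memory Mapped Configuration (MMCFG) Region, through which PCI Express configuration registers are addressed. A sketch of the address computation, assuming the standard PCIe enhanced-configuration layout (the MMCFG base address itself is chipset/BIOS-assigned; the value used in the test is purely illustrative):

```python
# Sketch of the PCI Express enhanced-configuration (MMCFG/ECAM)
# address computation behind the MMCFG region listed in the address
# map: one 4 KB configuration page per bus/device/function.
def mmcfg_address(base, bus, device, function, offset):
    """Physical address of a config register within the MMCFG region."""
    assert bus < 256 and device < 32 and function < 8 and offset < 4096
    return base + (bus << 20) + (device << 15) + (function << 12) + offset
```

With this layout the whole 256-bus configuration space occupies 256 MB starting at the MMCFG base.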
8 Hardware Device Addresses

The following description of the San Clemente/ICH9/ICH9R device configuration registers is taken directly from 1.7 § 3.2. This description therefore applies to the device configuration registers of the Intel blade.

The MCH contains 12 PCI devices within a single physical component. The configuration registers for these devices are mapped as devices residing on PCI bus 0.
• Device 0: ESI bridge/PCI Express* Port 0. Logically, this appears as a PCI device that resides on PCI bus 0. Physically, Device 0, Function 0 contains the PCI Express* configuration registers for the ESI port, and other MCH-specific registers. PCI Express* port 0 resides at a DID of 65C0h.
• Device 2: PCI Express* 2. Logically, this appears as a PCI device residing on bus 0. Device 2, Function 0 is routed to the PCI Express* configuration registers for PCI Express* port 2. When PCI Express* ports 2 and 3 are combined into a single x8 port, controlled by port 2 registers, the Device 3, Function 0 (port 3) configuration registers are inactive. PCI Express* port 2 resides at a DID of 65E2h.
• Device 3: PCI Express* 3. Logically, this appears as a PCI device that resides on bus 0. Device 3, Function 0 contains the PCI Express* configuration registers for PCI Express* port 3. When PCI Express* ports 2 and 3 are combined into a single x8 port, controlled by port 2 registers, these configuration registers are inactive. PCI Express* port 3 resides at a DID of 65E3h.
• Device 4: PCI Express* 4. Logically, this appears as a PCI device that resides on bus 0. Device 4, Function 0 contains the PCI Express* configuration registers for PCI Express* port 4. When PCI Express* ports 4 and 5 are combined into a single x8 port, Device 4, Function 0 contains the configuration registers and the Device 5, Function 0 (port 5) configuration registers are inactive.
When PCI Express* ports 4, 5, 6, and 7 are combined into a single x16 graphics port, Device 4, Function 0 contains the configuration registers, and the Device 5, Function 0 (port 5), Device 6, Function 0 (port 6), and Device 7, Function 0 (port 7) configuration registers are inactive. PCI Express* port 4 resides at a DID of 65E4h.
• Device 5: PCI Express* 5. Logically, this appears as a PCI device that resides on bus 0. Device 5, Function 0 contains the PCI Express* configuration registers for PCI Express* port 5. When PCI Express* ports 4 and 5 are combined into a single x8 port, Device 4, Function 0 contains the configuration registers, and these configuration registers are inactive. When PCI Express* ports 4, 5, 6, and 7 are combined into a single x16 graphics port, Device 4, Function 0 contains the configuration registers, and these configuration registers are inactive. PCI Express* port 5 resides at a DID of 65E5h.
• Device 6: PCI Express* 6. Logically, this appears as a PCI device residing on bus 0. Device 6, Function 0 contains the PCI Express* configuration registers for PCI Express* port 6. When PCI Express* ports 6 and 7 are combined into a single x8 port, Device 6, Function 0 contains the configuration registers, and the Device 7, Function 0 (port 7) configuration registers are inactive. When PCI Express* ports 4, 5, 6, and 7 are combined into a single x16 graphics port, Device 4, Function 0 contains the configuration registers, and these configuration registers are inactive. PCI Express* port 6 resides at a DID of 65E6h.
• Device 7: PCI Express* 7. Logically, this appears as a PCI device residing on bus 0. Device 7, Function 0 contains the PCI Express* configuration registers for PCI Express* port 7. When PCI Express* ports 6 and 7 are combined into a single x8 port, Device 6, Function 0 contains the configuration registers, and these configuration registers are inactive.
When PCI Express* ports 4, 5, 6, and 7 are combined into a single x16 graphics port, Device 4, Function 0 contains the configuration registers, and these configuration registers are inactive. PCI Express* port 7 resides at a DID of 65E7h.
• Device 8: DMA Engine Controller. Logically, this appears as a DMA device residing on bus 0. Device 8, Function 0 contains the DMA registers.
• Device 16: Device 16, Function 0 is routed to the Front Side Bus (FSB) Controller, Interrupt, and System Address registers. Function 1 is routed to the Front Side Bus Address Mapping, Memory Control, and Error registers. Function 2 is routed to the FSB Error registers. These devices reside at DID 65F0h.
• Device 19: Device 19, Function 0 is routed to Miscellaneous registers. This device resides at DID 65F3h.
• Device 21: Device 21, Function 0 is routed to the Channel 0 Memory Map, Error Flag/Mask, and Channel 0 Control registers. This device resides at DID 65F5h.
• Device 22: Device 22, Function 0 is routed to the Channel 1 Memory Map, Error Flag/Mask, and Channel 1 Control registers. This device resides at DID 65F6h.
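The device IDs listed above could be confirmed at bring-up by reading each device's DID register over PCI configuration space. A sketch of the legacy 0xCF8/0xCFC mechanism's address-word computation (only the computation is shown; the actual port I/O requires ring-0 code):

```python
# Sketch of the legacy PCI configuration mechanism (I/O ports
# 0xCF8/0xCFC) that firmware could use to read back the DIDs listed
# above, e.g. to confirm that bus 0, Device 0 reports DID 65C0h.
def pci_config_address(bus, device, function, offset):
    """32-bit value written to port 0xCF8 to select a config dword."""
    assert bus < 256 and device < 32 and function < 8 and offset < 256
    return (0x80000000            # enable bit
            | (bus << 16)
            | (device << 11)
            | (function << 8)
            | (offset & 0xFC))    # dword-aligned register offset
```

After writing this value to 0xCF8, a 32-bit read from 0xCFC returns the selected register (the vendor/device ID pair lives at offset 0).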
9 Reliability Requirements

9.1 Blade Insertions
REL_1: The Intel Harpertown blade backplane connectors will be rated at a minimum of 250 insertion/extraction cycles.

9.2 CPU Insertions
REL_2: The Intel Harpertown blade processor sockets will be rated at a minimum of 20 insertion/extraction cycles.

9.3 DIMM Insertions
REL_3: The DIMM sockets will be rated at a minimum of 20 insertion/extraction cycles.
REL_4: The DIMM contacts and DIMM socket contact surfaces shall both be gold plated.

9.4 MTBF
REL_5: The Intel Harpertown blade shall have a minimum Mean Time Between Failures of 45,000 hours. This is a little over 5 years.
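As a quick check of the arithmetic behind REL_5:

```python
# Converting the 45,000-hour MTBF target of REL_5 into years of
# continuous operation (8,760 hours per non-leap year).
MTBF_HOURS = 45_000
HOURS_PER_YEAR = 24 * 365          # 8,760

mtbf_years = MTBF_HOURS / HOURS_PER_YEAR   # a little over 5 years
```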
10 Optional ATCA Subsystems

10.1 Advanced Mezzanine Cards (AMC)
The initial Intel Harpertown blade will not support AMC cards.

10.2 Rear Transition Modules (RTM)
The default Intel Harpertown blade design will not support RTMs. The reason for this is that the RTM interface is completely vendor-specific: it will not be possible to specify the interface until we either design an RTM or select one from a third-party vendor product. Since the RTM is an optional interface, we are free to leave this as follow-on work to customize to customers' specifications at a later date. Further, initial marketing analysis shows that RTM systems are not widely used. As such, efforts will be better focused on completing the known required functionality of the card. However, efforts should be made to facilitate future support of RTMs in the event that a customer should request one as a customer option.
11 Mechanical Requirements

11.1 General
MECH_1: The Intel Harpertown blade will be an ATCA-compliant single-slot blade.
MECH_2: The Intel Harpertown blade will operate properly in both horizontal and vertical orientations, dependent on the chassis supplying the proper cooling environment. While most ATCA chassis engage blades mounted vertically, some smaller chassis orient the blades horizontally.

11.2 Front Panel
See § 3.15 for information on the front panel.

11.3 Improved Alignment Keying
MECH_3: The Intel Harpertown blade shall support improved alignment keying as specified by § 2.4.4 of 1.7.

11.4 Front Board Cover
In order to minimize both air flow impedance and system cost, the intent is to not utilize a metallic cover on the Intel Harpertown blade. However, care should be taken during the design of the blade mechanical components so as to facilitate the inclusion of a cover if it is later deemed necessary for the purposes of EMI containment, or if a customer would prefer to have one.
12 DFx Requirements

DFX_1: The Intel Harpertown blade PCB will be designed in conformance with Solectron standard Design For Manufacturing practices as detailed in 1.7.
DFX_2: The Intel Harpertown blade PCB will be designed in conformance with Solectron standard Design For Testability practices as detailed in 1.7. Additional comments on JTAG access during the ICT phase of manufacturing can be found in § 3.14.3.
13 Regulatory Compliance Requirements

The Intel blade will conform to at least the regulatory requirements listed in Table 10. Note that to sell into markets outside of the EU or USA, other regulatory standards will have to be met. However, it is uncommon for these additional standards to require conformance to regulations more binding than those listed in Table 10.

REQ ID | Area | Standard | Governing Agency | Beyond PICMG 3.0? | Comment
REG_1 | Safety, Electromagnetic Compatibility | GR-1089-CORE, Electromagnetic Compatibility and Electrical Safety Generic Criteria for Network Telecommunication Equipment | Telcordia | | NEBS Document
REG_2 | Safety | UL/CSA 60950, EN 60950, CB Report & Certificate | UL/CSA, IEC | X |
REG_3 | Electromagnetic Compatibility, USA | FCC Part 15 Class A | FCC | X |
REG_4 | Electromagnetic Compatibility | CISPR-22, Information Technology Equipment -- Radio Disturbance Characteristics -- Limits and Methods of Measurement | IEC - Comité international spécial des perturbations radioélectriques | |
REG_5 | Electromagnetic Compatibility | CISPR-24, Information Technology Equipment -- Immunity Characteristics -- Limits and Methods of Measurement | IEC - Comité international spécial des perturbations radioélectriques | |
REG_6 | Electromagnetic Compatibility | EN 300 386, Electromagnetic Compatibility (EMC) Requirements for Public Telecommunication Network Equipment | ETSI | |
REG_7 | Electromagnetic Compatibility | EN55022 | ETSI | X |
REG_8 | Electromagnetic Compatibility | EN55024 | ETSI | X |
REG_9 | Environmental, USA | GR-63-CORE, Network Equipment Building System (NEBS) Requirements - Physical Protection | Telcordia | | NEBS Document
REG_10 | Environmental, Europe | ETSI EN 300 019, Environmental Conditions and Environmental Tests for Telecommunications Equipment | ETSI | |
REG_11 | NEBS Criteria Levels | SR-3580 | Telcordia | | Level III Required
REG_12 | General Physical Design | GR-78, Generic Requirements for the Physical Design and Manufacture of Telecom Products & Equipment | Telcordia | |
REG_13 | Power Supply Interface, Europe | EN 300 132, Equipment Engineering; Power Supply Interface at the Input to Telecommunications Equipment | ETSI | |
REG_14 | Acoustic Noise, Europe | EN 300 753, Acoustic Noise Emitted by Telecommunications Equipment | ETSI | |
REG_15 | Reliability | ?? | TBD | |
REG_16 | ATCA | PICMG® 3.0 Revision 2.0 AdvancedTCA® Base Specification | PICMG | |
REG_17 | Backplane Fabric | PICMG® 3.1 Revision 1.0 Specification, Ethernet/Fibre Channel for AdvancedTCA™ Systems Physical Layer | PICMG | |
REG_18 | Power Management | Advanced Configuration and Power Interface Specification | Industry Consortium | X | OS Control of HW Power Mngt.
REG_19 | RoHS | Directive 2002/95/EC of the European Parliament and of the Council of 27 January 2003 on the restriction of the use of certain hazardous substances in electrical and electronic equipment | The European Parliament and the Council of the European Union | X |

Table 10 Required Regulatory Standard Compliances

13.1 RoHS Requirements
REG_20: The Intel Harpertown blade will be 6/6 RoHS compliant.
14 Risks

The purpose of this section is to highlight known risk areas in the design of the Intel Harpertown blade. Workarounds to the listed issues are also given so as to present a realistic, balanced view of each area.

14.1 Scarcity of Marketing Input
The project certainly could benefit from an increase in marketing input. A number of decisions were made by the design team on a best-effort basis; a good example is the selection of operating systems.
Possible work-around: Have the design engineers collect marketing input.

14.2 San Clemente Schedule
In an email from Bill Ferguson of Intel dated Aug. 7, 2007, it was reported to Dhiren Kumar that the San Clemente B0 engineering samples would not appear until late Q4'07 - early Q1'08.
Possible work-around: Utilize the A0-stepping silicon in the first revision of the board, then switch over to the B0-stepping silicon.

14.3 Thermal Issues
Presentations at industry conferences on ATCA suggest that it is quite difficult to thermally shed the 200 Watts in a standard ATCA slot. It was also reported by the Bangalore team that they had run into problems dissipating 80 Watts of CPU processor power in a previous ATCA design. We have also been unsuccessful in locating a heat sink that is compliant with the ATCA mechanical dimensions; however, we certainly have not exhausted all search avenues.
Possible solution: More in-depth thermal analysis by the Solectron mechanical and thermal teams, and a more in-depth search for ATCA heat sinks.

14.4 Lack of Intel Support
Support from Intel to Solectron has been quite spotty from the start of the project. There have been times when the support has been quite adequate, and others when it has been nearly non-existent. Designing a system like this from the ground up requires support from the processor vendor.
14.5 Lack of a Budget

Due to the prospect of the merger, it has been difficult, if at all possible, to obtain approval for any sizable expenditure on this and other similar projects. As a result, the assets typical of this type of undertaking could not be procured at the normal times. Examples of such needed items include software licenses (e.g., for the IPMI code), test equipment, and compatible ATCA hardware such as chassis, hub boards, and shelf managers. Without these, many of the development activities that would normally proceed concurrently up front are being put on hold.

Possible solution: Teams whose work is put on hold can take advantage of the 'lull' to better plan out their respective systems.
14.6 Late Engagement with BIOS Vendor

Of the items being put on hold due to the budget freeze, of particular concern is the delayed engagement with the chosen BIOS provider, American Megatrends, Inc. To best ensure an architecture amenable to proper interoperation with the BIOS, it is normal to engage the BIOS provider early in the process.

Workaround approach: While maintaining our goal of designing a leading-edge product, avoid any 'exotic' design features which may be difficult to bring up at system boot time, or to control or monitor through standard BIOS management routines.
15 Bill of Materials

Below is a preliminary bill of materials (BOM) for the Intel Harpertown blade, based on the best information available at the time of this writing. The BOM contains only the main, most expensive components of the board. It is included to publicize a 'ballpark' cost for the Intel Harpertown blade; it is not intended to represent an accurate unit cost. When viewing the BOM, please note the following:

• To see the spreadsheet, double-click on the BOM, which will open it in spreadsheet format. This also allows edits to the cells, which will then affect dependent cells such as the total.
• This spreadsheet, for the most part, covers only major components. Many mid- and small-level devices are not included.
• Approximately half of the BOM total is the DDR2 DIMM cost. The board price can be significantly reduced by cutting back the amount of memory.
• Several of the suppliers have provided only list prices. These vendors have openly stated that much more favorable pricing will be realized once final customers can be identified.
• Some vendors have supplied tiered pricing. By specifying the expected annual usage (EAU), the appropriate price tier will be selected for each line item for which tiered pricing has been supplied. Note that very few of the line items are based on tiered pricing.
• If you change any cell contents in spreadsheet mode, simply do not save this document at closure. The cell values will return to their original contents when the document is re-opened.
• For parts from vendors with whom a design win agreement (DWA) has been negotiated, the DWA percentage is listed in the third column, but the discount is not reflected in the line-item price. At this stage of the BOM's development, this inaccuracy is on par with the other inaccuracies mentioned above.
• To expand an individual section, click on the + at the left of the page in the same row as the grouping.
• Some items, such as the processors, have a list of selectable versions, highlighted by quantity cells filled with yellow. Each row in such a list contains the information for a specific variation of the element. To select one variation, specify the quantity for that variation and set the quantities of all other rows in the list to zero. The element price, as well as the overall board price, will reflect the change.

"ATCA Intel Harpertown Server PRD BOM 2007 09 14.xls"

Table 11 Intel Blade Draft Bill of Material (BOM)
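The tiered-pricing and rollup behavior described in the notes above can be sketched in code. This is a hypothetical illustration only: the part names, price breaks, and quantities below are invented and do not come from the actual BOM spreadsheet.

```python
# Hypothetical sketch of the BOM's tiered-pricing lookup: given an
# expected annual usage (EAU), pick the applicable price break for
# each line item and sum the extended per-board costs.

def tier_price(tiers, eau):
    """tiers: list of (min_qty, unit_price) pairs sorted by min_qty.
    Returns the unit price of the highest tier whose minimum
    quantity does not exceed the expected annual usage."""
    price = tiers[0][1]
    for min_qty, unit_price in tiers:
        if eau >= min_qty:
            price = unit_price
    return price

def bom_total(line_items, eau):
    """line_items: list of (description, qty_per_board, tiers)."""
    return sum(qty * tier_price(tiers, eau) for _, qty, tiers in line_items)

# Invented line items, for illustration only -- not actual BOM data.
bom = [
    ("CPU (one selectable variant)", 2, [(1, 850.00), (1000, 790.00)]),
    ("DDR2 DIMM",                    8, [(1, 120.00), (1000, 101.00)]),
    ("MCH",                          1, [(1, 95.00)]),
]

print(bom_total(bom, eau=5000))  # per-board cost at the 1000+ tier: 2483.0
```

The selectable-version mechanism in the spreadsheet corresponds to zeroing the quantities of all but one row of a variant list before the rollup is computed.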
16 Requirements Conformance Matrix

The purpose of this section is to provide a means to track which of the requirements specified in this document have been fully fulfilled, partially fulfilled, not fulfilled, or otherwise. If desired, this matrix can be copied to another document and filled out separately, or this document can be edited by the appropriate authorized individual(s), updating the requirements conformance status as the project progresses.

Req. ID          PRD Section   Fulfilled by HDS   Fulfilled by SDS   Fulfilled by Implementation   Status   Date
ARC_1            2.1
ARC_2            2.1
ARC_3            2.1
ARC_4            2.1
ARC_5            2.2
ARC_6            2.2
PROC_1           3.1
PROC_2           3.1
PROC_3           3.1
MCH_1            3.2
MCH_2            3.2
MEM_1            3.3
MEM_2            3.3
MEM_3            3.3
MEM_4            3.3
MEM_5            3.3
MEM_6            3.3
MEM_7            3.3
SB_1             3.4
SIO_1            3.5
RTC_1            3.6
RTC_2            3.6
RTC_3            3.6
RTC_4            3.6
RTC_5            3.6
RTC_6            3.6
BIOS_FLSH_1      3.7
BIOS_FLSH_2      3.7
BIOS_FLSH_3      3.7
BIOS_FLSH_4      3.7
BASE_1           3.8
BASE_2           3.8
BASE_3           3.8
FAB_1            3.9
FAB_2            3.9
FAB_3            3.9
FAB_4            3.9
FAB_5            3.9
PTPM_1           3.10
PTPM_2           3.10
PTPM_3           3.10
PWR_1            3.11
PWR_2            3.11
PWR_3            3.11
PWR_4            3.11.1
PWR_5            3.11.1
PWR_6            3.11.2
HLTH_1           3.12.1
HLTH_2           3.12.1
HLTH_3           3.12.1
HLTH_4           3.12.1
HLTH_5           3.12.2
HLTH_6           3.12.2.1
HLTH_7           3.12.3
HLTH_8           3.12.4
HLTH_9           3.12.4
HLTH_10          3.12.5
IPMC_1           3.13
IPMC_2           3.13
IPMC_3           3.13
IPMC_4           3.13
IPMC_5           3.13
IPMC_6           3.13
IPMC_7           3.13
IPMC_8           3.13
IPMC_9           3.13
IPMC_10          3.13
DBG_1            3.14.1
DBG_2            3.14.2
DBG_3            3.14.3
DBG_3.5          3.14.3
DBG_4            3.14.3
DBG_5            3.14.4
DBG_6            3.14.5
DBG_7            3.14.6
DBG_8            3.14.7
DBG_9            3.14.8
FP_1             3.16.1
FP_2             3.16.2
FP_3             3.16.2
FP_4             3.16.3
SATA_1           3.17
XIRQ_1           3.18
BPC_1            3.19.1
BPC_2            3.19.1
BPC_3            3.19.2
BPC_4            3.19.3
IPM_1            4.2
IPM_2            4.2
IPM_3            4.2
IPM_4            4.2
IPM_5            4.2
OPMI_1           4.3
OPMI_2           4.3
BOOT_1           4.5
BOOT_2           4.5
BOOT_3           4.5
FWUG_1           4.6.1
FWUG_2           4.6.1
FWUG_3           4.6.1
FWUG_4           4.6.1
FWUG_5           4.6.1
FWUG_6           4.6.1
FWUG_7           4.6.1
FWUG_8           4.6.2
FWUG_9           4.6.2
FWUG_10          4.6.2
FWUG_11          4.6.2
FWUG_12          4.6.2
FWUG_13          4.6.2
SOL_1            4.8
SOL_2            4.8
SOL_3            4.8
SOL_4            4.8
VS_1             4.9
BIOS_1           4.10
BIOS_R1          5.3.1.4
BIOS_R2          5.3.1.4
BIOS_R3          5.3.1.4
BIOS_R4          5.3.1.4
BIOS_R5          5.3.1.4
BIOS_R6          5.3.1.4
BIOS_R7          5.3.1.4
BIOS_R8          5.3.1.4
BIOS_R9          5.3.1.4
BIOS_R10         5.3.1.4
BIOS_R11         5.3.1.4
BIOS_R12         5.3.1.4
BIOS_R13         5.3.1.4
BIOS_R14         5.3.1.4
BIOS_R15         5.3.1.4
BIOS_R16         5.3.1.4
BIOS_R17         5.3.1.4
BIOS_R18         5.3.1.4
BIOS_R19         5.3.1.4
BIOS_R20         5.3.1.4
BIOS_R21         5.3.1.4
BIOS_R22         5.3.1.4
BIOS_R23         5.3.1.4
BIOS_R24         5.3.1.4
BIOS_R25         5.3.1.4
LINUX_R1         5.3.3
LINUX_R2         5.3.3
LINUX_R3         5.3.3
LINUX_R4         5.3.3
LINUX_R5         5.3.3
LINUX_R6         5.3.3
LINUX_R7         5.3.3
LINUX_R8         5.3.3
LINUX_R9         5.3.3
LINUX_R10        5.3.3
LINUX_R11        5.3.3
LINUX_R12        5.3.3
LINUX_R13        5.3.3
LINUX_R14        5.3.3
LINUX_R15        5.3.3
LINUX_R16        5.3.3
LINUX_R17        5.3.3
IPMI_R1          5.4
IPMI_R2          5.4
IPMI_R3          5.4
IPMI_R4          5.4
IPMI_R5          5.4
IPMI_R6          5.4
IPMI_R7          5.4
IPMI_R8          5.4
IPMI_R9          5.4
IPMI_R10         5.4
IPMI_R11         5.4
IPMI_R12         5.4
IPMI_R13         5.4
IPMI_R14         5.4
IPMI_R15         5.4
IPMI_R16         5.4
IPMI_R17         5.4
IPMI_R18         5.4
IPMI_R19         5.4
IPMI_R20         5.4
IPMI_R21         5.4
IPMI_R22         5.4
IPMI_R23         5.4
IPMI_R24         5.4
IPMI_R25         5.4
IPMI_R26         5.4
IPMI_R27         5.4
IPMI_R28         5.4
IPMI_R29         5.4
IPMI_R30         5.4
IPMI_R31         5.4
IPMI_R32         5.4
IPMI_R33         5.4
IPMI_R34         5.4
IPMI_R35         5.4
IPMI_R36         5.4
IPMI_R37         5.4
IPMI_R38         5.4
IPMI_R39         5.4
IPMI_R40         5.4
IPMI_R41         5.4
IPMI_R42         5.4
IPMI_R43         5.4
IPMI_R44         5.4
IPMI_R45         5.4
IPMI_R46         5.4
IPMI_R47         5.4
IPMI_R48         5.4
GEN_ATCA_R1      5.5
GEN_ATCA_R2      5.5
GEN_ATCA_R3      5.5
GEN_ATCA_R4      5.5
GEN_ATCA_R5      5.5
GEN_ATCA_R6      5.5
GEN_ATCA_R7      5.5
GEN_ATCA_R8      5.5
GEN_ATCA_R9      5.5
GEN_ATCA_R10     5.5
GEN_ATCA_R11     5.5
GEN_ATCA_R12     5.5
GEN_ATCA_R13     5.5
REL_1            9.1
REL_2            9.2
REL_3            9.3
REL_4            9.3
REL_5            9.4
MECH_1           11.1
MECH_2           11.1
DFX_1            12
DFX_2            12
REG_1            13
REG_2            13
REG_3            13
REG_4            13
REG_5            13
REG_6            13
REG_7            13
REG_8            13
REG_9            13
REG_10           13
REG_11           13
REG_12           13
REG_13           13
REG_14           13
REG_15           13
REG_16           13
REG_17           13
REG_18           13
REG_19           13
REG_20           13.1
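Tracking a matrix of this size by hand is error-prone, so the status columns lend themselves to simple tooling once the matrix is exported (for example, to CSV). The sketch below is a hypothetical helper, not part of the PRD toolchain; the status values and example rows are invented for illustration.

```python
# Hypothetical conformance-matrix summarizer: given rows of
# (req_id, prd_section, status), count statuses per requirement
# family (ARC, PROC, MEM, IPMI_R, ...).

from collections import Counter

def summarize(rows):
    """rows: iterable of (req_id, prd_section, status) tuples.
    Returns {family_prefix: Counter({status: count})}."""
    summary = {}
    for req_id, _section, status in rows:
        prefix = req_id.rsplit("_", 1)[0]   # "MEM_3" -> "MEM"
        summary.setdefault(prefix, Counter())[status] += 1
    return summary

# Invented sample rows, for illustration only.
rows = [
    ("ARC_1", "2.1", "fulfilled"),
    ("ARC_2", "2.1", "partial"),
    ("MEM_1", "3.3", "fulfilled"),
    ("MEM_2", "3.3", "not fulfilled"),
]

print(summarize(rows))
```

Splitting on the last underscore keeps compound family names such as GEN_ATCA intact while still grouping the numbered requirements under them.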