TIA-942 Data Center Standards

Review of the TIA-942 data center standard and some of the best practices surrounding a data center.

Sri Chalasani (Plante & Moran) is available to provide consulting on data center and infrastructure solutions.


Speaker notes:
  • TIA: Telecommunications Industry Association. Focus on the TIA-942 data center standard and some of the best practices surrounding a data center. If you get a chance to go through this document, you will notice that it is fairly simple and applies a lot of common sense; at the end of this review you will probably say, "Hmm, I knew that." The TIA puts structure around common sense.
  • Wikipedia.org: A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression), and security devices. Businessdictionary.com: a computer facility designed for continuous use by several users, and well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems, etc. So what makes one data center different from another? Levels of redundancy (cooling, electrical, connectivity, etc.); capacity (space, cooling, electrical, network connectivity, etc.); monitoring and notification; and staffing to maintain the facility.
  • The increased demands on enterprise data centers stem from: new business realities; increased energy costs; the need to deploy and manage applications that require higher availability and increased service levels for uptime and responsiveness; regulatory compliance requirements for data retention and security; pressure to implement green computing practices, which reduce costs by lowering data center power consumption; expanding volumes of data; and managing highly complex and wildly heterogeneous environments. Q: By show of hands, how many people actually track the cost of data center operations, including energy costs?
  • The first attempt at providing some level of standardization was a tier system: a system that specifies the availability and reliability of a data center.
  • Q: What is the difference between Uptime and TIA-942? The Uptime Institute: established in 1995; not a standards body; widely referenced in the data center construction industry. Uptime's method includes four tiers but provides only a high-level guideline, without specific design details for each tier. TIA-942: established by a standards body (TIA) and recognized by ANSI; its tier system is based on the Uptime Institute's tier performance standards, although even within the standard the tier system is provided as "informative and not considered to be requirements of this Standard." It provides specific design criteria so designers can build to a specific tier level, and it allows data center owners to evaluate their own designs. Comparing the three: the Uptime and Syska methods do not provide the details needed to articulate the differences between levels, while TIA-942 provides specific details at every tier level and across a wide range of elements, including telecom, architectural, electrical, mechanical, monitoring, and operations. Objective of the standard: provide requirements and guidelines for the design and installation of a data center or computer room; a standard to be used in the data center design and building development process. It provides input during the construction process and cuts across the multidisciplinary design efforts; by addressing the multidisciplinary aspects, it promotes cooperation in design and construction. I do have to mention that it is a little heavier on telecommunications standards than the others, given its origin, but it provides more detail than the other two. For example, TIA-942 specifies that a Tier 2 data center should have two access provider entrance pathways that are at least 20 m (66 ft) apart; Syska also specifies that a Tier 2 data center should have two entrance pathways but adds no other detail. Audience: primarily intended for CIOs, data center operations managers, and infrastructure engineers (servers/network/cabling), and to facilitate communications with architects and facility management.
  • ANSI/BICSI-002, "Data Center Design Standard & Recommended Practices." The 21 design areas can be boiled down to these 8 core areas: sizing and selection (design process, space planning, site selection); cabling infrastructure and administration (cabinets and racks, cabling pathways, cabling systems, cabling field testing); architectural and structural considerations (architectural, structural, commissioning); security and fire protection (fire protection, security, building automation); electrical, grounding, and mechanical systems (electrical, HVAC/mechanical); application distances (redundancy, information technology); access provider coordination and demarcation (access providers, telecom space, telecom administration); and operations (maintenance). The number one data center planning issue is heat mitigation (cooling).
  • The five areas of focus for today will be: data center spaces, data center cabling, electrical, cooling, and the tier system.
  • According to TIA-942, a data center should include the following key functional areas: • One or more Entrance Rooms • Main Distribution Area (MDA) • One or more Horizontal Distribution Areas (HDA) • Equipment Distribution Area (EDA) • An optional Zone Distribution Area (ZDA) • Backbone and Horizontal Cabling
  • Analogies to traditional telecom spaces: Entrance Room ~ "entrance facility"; Main Distribution Area (MDA) ~ "equipment room"; Horizontal Distribution Area (HDA) ~ "telecom room"; Zone Distribution Area (ZDA) ~ "consolidation point"; Equipment Distribution Area (EDA) ~ "work area." Entrance Room (ER): location of the interface with campus and carrier entrance facilities; location for access provider equipment, demarcation points, and the interface with other campus locations; the ER is connected to the data center MDA through backbone cabling. Main Distribution Area (MDA): centralized portion of the backbone cabling, providing connectivity between equipment rooms, entrance facilities, horizontal cross-connects, and intermediate cross-connects; can house core aggregation switches/routers. Horizontal Distribution Area (HDA): main transition point between backbone and horizontal cabling; location of the horizontal cross-connect (HC); houses cross-connects and active equipment (the LAN, SAN, and KVM switches) for connecting to the equipment distribution area or zone distribution area (if present) and the storage area network (SAN). Per the TIA-942 standard, both the MDA and HDA require separate racks for fiber, UTP, and coax cable. Zone Distribution Area (ZDA): optional; acts as a consolidation point within the horizontal cabling run between the HDA and EDA; cannot contain any cross-connects or active equipment. Equipment Distribution Area (EDA): where equipment cabinets and racks house the switches and servers, and where the horizontal cabling from the HDA (or ZDA, if used) is terminated at patch panels.
  • Advantages of a ZDA: reduces pathway congestion; limits data center disruption from the MDA and eases implementation of MACs (moves, adds, and changes); enables a modular solution for a "pay-as-you-grow" approach; simple to deploy and/or redeploy if needed. A ZDA typically does not contain active electronics, but with top-of-rack topologies, I think such a rack would qualify as a ZDA.
  • Location: avoid locations that are restricted by building components that limit expansion, such as elevators, the building core, outside walls, or other fixed building walls; provide accessibility for the delivery of large equipment to the equipment room. EMI: sources are electronic devices that transmit data over a medium; EMI can couple onto data lines and corrupt data packets being transmitted on that medium, and may cause corruption of the data being transmitted and stored. Floor loading: specified in lbf/sq ft (pounds-force per square foot), i.e., the distributed weight the floor must support per square foot of area; a quick check of a cabinet against this rating is sketched below.
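A minimal sketch of that floor-loading check. The cabinet weight and footprint are illustrative assumptions, not figures from the talk:

```python
def floor_load_psf(weight_lb: float, footprint_sqft: float) -> float:
    """Load a cabinet places on the floor, in lbf per square foot."""
    return weight_lb / footprint_sqft

# Illustrative assumption: a 2,000 lb loaded cabinet on a 24" x 42"
# (about 7 sq ft) footprint.
load = floor_load_psf(2000, 7.0)
print(f"{load:.0f} lbf/sq ft vs. TIA-942's 150 minimum / 250 recommended")
# ~286 lbf/sq ft exceeds even the recommended rating, so the load must be
# spread (stronger floor structure or load-spreading plates).
```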
  • Signal reference grid (SRG): the intent of the signal reference grid is to establish an equipotential ground plane where everything connected rises and falls together in the event of an electrical disturbance, from whatever source. Electronic equipment is affected when there is a potential difference between devices; an equipotential grid significantly reduces potential differences, thus reducing current flow and eliminating the adverse effect on logic circuits. An SRG is not required with modern IT equipment: the advent of Ethernet and fiber data interfaces has dramatically reduced the susceptibility of IT equipment to noise and transients, particularly compared with the ground-referenced IT interface technologies of 20 years ago. Installing an SRG is not harmful, other than the associated cost and delay. Recommend placing UPS equipment outside of the main data center; 13-18% of heat is generated from the UPS.
  • Equipment failure rates are higher at the top of the rack. In the EDA, racks and cabinets should be arranged in a hot aisle / cold aisle configuration to encourage airflow and reduce heat.
  • Unstructured (ad hoc) cabling: installing cabling when you need it; it primarily serves as a single-use cable. Structured cabling: an organized, reusable, and flexible cabling system. The document places a very large emphasis on cabling. The structured cabling elements are: horizontal cabling; backbone cabling; cross-connect in the entrance room or main distribution area; main cross-connect (MC) in the main distribution area; horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area, or main distribution area; zone outlet or consolidation point in the zone distribution area; and the outlet in the equipment distribution area. Backbone cabling provides connections between telecommunications rooms, equipment rooms, and entrance facilities; it includes cabling from the MDA to the ER, HDA, and TR (optional cabling between HDAs is allowed); it consists of the transmission media (e.g., optical fiber cable) and can be further classified as interbuilding backbone (cabling between buildings) or intrabuilding backbone (cabling within a building). Horizontal cabling, simply put, runs from patch panel to outlet: it connects a horizontal cross-connect to the outlet in the EDA or ZDA, with a maximum of one consolidation point in a ZDA and a maximum distance of 90 m / 295 ft, reduced where total patch cord length exceeds 10 m (see the sketch below).
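A simplified reading of that length rule, treating the 100 m channel as a 90 m permanent link plus a 10 m patch-cord budget. TIA also derates stranded patch cords by an attenuation factor, so treat this as an approximation:

```python
def allowed_horizontal_m(patch_cords_m: float) -> float:
    """Approximate permitted horizontal run, given total patch cord length.

    Simplification: 100 m channel = 90 m permanent link + 10 m of cords;
    cords beyond the 10 m budget shorten the permanent link one-for-one.
    (TIA applies an extra derating factor for stranded cords.)
    """
    excess = max(0.0, patch_cords_m - 10.0)
    return 90.0 - excess

print(allowed_horizontal_m(8.0))   # 90.0 -> cords within budget
print(allowed_horizontal_m(15.0))  # 85.0 -> 5 m over budget, link shortened
```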
  • Bold items (in the slide table) are the recommendations per TIA-942.
  • Racks and cabinets: a single rack loaded with blade servers can draw 30 kW of power. The 10-30 seconds required for backup generators to start can result in overheated electronics; industry experts therefore recommend a maximum of 15-20 kW per rack, which allows backup generators to start up without electronics overheating. Common bonding network (CBN): the set of metallic components that are intentionally or incidentally interconnected to provide the principal means for effecting bonding and grounding inside a telecommunications building. These components include structural steel or reinforcing rods, metallic plumbing, AC power conduit, cable racks, and bonding conductors. The CBN is connected to the exterior grounding electrode system.
  • Can use nameplate specifications at approximately 60-75% to estimate actual load (see the sketch below). Provide multiple physically separate connections to public power grid substations. Intelligent PDUs are able to give management systems information about power consumption at the rack or even device level, and can provide remote power cycling. Dual A-B cording: in-rack PDUs should make multiple circuits available so that redundant power supplies (designated A and B) can be corded to separate circuits. Some A-B cording strategies call for both circuits to be on UPS, while others call for one power supply to be on house power while the other is on UPS; each is a function of resilience and availability.
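A minimal sketch of that derating rule; the device list, counts, and nameplate ratings are illustrative assumptions:

```python
# Estimate actual IT load by derating nameplate ratings to 60-75%.
NAMEPLATE_DERATE = 0.65  # a point inside the 60-75% band

# Hypothetical inventory: nameplate watts and device counts.
nameplate_w = {"server": 750, "switch": 300, "storage shelf": 450}
counts      = {"server": 20,  "switch": 4,   "storage shelf": 6}

estimated_w = sum(NAMEPLATE_DERATE * nameplate_w[d] * counts[d]
                  for d in nameplate_w)
print(f"Estimated IT load: {estimated_w / 1000:.1f} kW")
# -> roughly 12.3 kW, versus 18.9 kW if nameplate values were summed as-is.
```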
  • Design implications: the vast majority of existing data center designs do not correctly address the above factors, and suffer from unexpected capacity limitations, inadequate redundancy, and poor efficiency.
  • It takes about 160 CFM to remove 1 kW of heat (the slide quotes ~2,500 CFM for 18 kW), and an average perforated floor tile will disperse 250-300 CFM. "Equipment on the upper 2/3 of the rack fails twice as often as equipment on the bottom 1/3 of the rack." The process: 1. Determine the critical load and heat load: equipment plus other loads such as lighting, people, etc.; can use nameplate specifications at approximately 60-75%; as a very general rule of thumb, consider no less than 1 ton (12,000 BTU/hr, or 3,516 watts) of cooling per 400 square feet of IT equipment floor space; factor in x% for growth. 2. Determine power requirements on a per-RLU basis: use the rack or cabinet footprint area, since all manufacturers produce cabinets of generally the same size. A rack location is the specific spot on the data center floor where services that accommodate power, cooling, physical space, network connectivity, functional capacity, and rack weight requirements are delivered; services delivered to the rack location are specified in units of measure, such as watts or BTUs, forming the term "rack location unit" (RLU). In reality, a computer room usually deploys a mix of varying RLU power densities throughout its overall area. RLUs also help with site layout: knowing the RLUs for power and cooling enables the data center manager to adjust the physical design, the power and cooling equipment, and rack configurations within the facility to meet the systems' requirements. 3. Determine CFM, the movement of air: effective cooling requires providing both the proper temperature and an adequate quantity of air to the load (see the sketch below). General problems include improper positioning of equipment (one machine's heat exhaust becoming another's intake) and solid doors on cabinets; be aware of equipment airflow (some use side-to-side, most use front-to-back); blank unused rack positions. A large data center would require computational fluid dynamics (CFD) modeling.
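A sketch of that airflow arithmetic, using the ~160 CFM/kW rule of thumb and a midpoint tile delivery of 275 CFM; the rack loads below are illustrative:

```python
import math

CFM_PER_KW = 160   # rule of thumb from the note above (approximate)
TILE_CFM   = 275   # midpoint of the 250-300 CFM a perforated tile disperses

def tiles_needed(rack_kw: float) -> int:
    """Perforated tiles required to deliver enough cold air to one rack."""
    return math.ceil(rack_kw * CFM_PER_KW / TILE_CFM)

for kw in (5, 10, 18):
    print(f"{kw:>2} kW rack -> {kw * CFM_PER_KW:>4} CFM -> {tiles_needed(kw)} tiles")
# An 18 kW rack needs ~2,900 CFM, i.e. ~11 tiles -- far more cold-aisle area
# than a single rack fronts, which is why high-density racks push designs
# toward row- or rack-level cooling and containment.
```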
  • Because of the significant risk of electrical fires in a data center, installing a comprehensive fire detection and suppression system is mission-critical for protecting life and property, as well as for ensuring quick operational recovery. Fire suppression: Halon 1301 is no longer recommended or in production.
  • Eight primary criteria. MEP: mechanical, electrical, and plumbing. Key questions: what is the initial watts per square foot, and what is the ultimate watts per square foot?
  • Not all data centers have to be Tier 4. Choosing an optimal criticality is a balance between a business's cost of downtime and a data center's total cost of ownership. Choices may be limited depending on whether a new data center is being built or changes are being made to an existing one; for existing data center projects (i.e., retrofits), the choice of criticality is limited by the constraints of the existing structure. Identify the major constraints to see if they can be addressed or represent an acceptable risk to the business; if a constraint cannot be removed, consider alternate strategies such as an alternate location.
  • http://www.webopedia.com/TERM/d/data_center_tiers.htm - A four-tier system that provides a simple and effective means for identifying different data center site infrastructure design topologies. The Uptime Institute's tiered classification system is an industry-standard approach to site infrastructure functionality that addresses common benchmarking needs. The four tiers, as classified by the Uptime Institute, are: Tier I: a single path for power and cooling distribution, without redundant components, providing 99.671% availability. Tier II: a single path for power and cooling distribution, with redundant components, providing 99.741% availability. Tier III: multiple power and cooling distribution paths, but only one path active, with redundant components, concurrently maintainable, providing 99.982% availability. Tier IV: multiple active power and cooling distribution paths, with redundant components, fault tolerant, providing 99.995% availability. These percentages translate directly into annual downtime hours; see the sketch below.
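A quick conversion of those availability figures into annual downtime, which reproduces the hours-per-year numbers usually quoted alongside the tiers:

```python
HOURS_PER_YEAR = 8760

tiers = {"I": 0.99671, "II": 0.99741, "III": 0.99982, "IV": 0.99995}

for tier, availability in tiers.items():
    downtime_h = (1 - availability) * HOURS_PER_YEAR
    print(f"Tier {tier:>3}: {availability:.3%} -> {downtime_h:5.1f} h/yr downtime")
# Tier I ~28.8 h, Tier II ~22.7 h, Tier III ~1.6 h, Tier IV ~0.4 h
# (the Uptime table on slide 31 rounds Tier II to 22.0 h)
```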
  • As stated earlier, some of this may not be earth-shattering information, but the standard functions as a collection point for a lot of the common-sense activities related to the data center. It is not difficult to understand, but implementation can be fairly complex.

    1. TIA-942: Data Center Standards (& Best Practices)
       Sri Chalasani, Merit 2010 Conference, May 25, 2010
    2. Objectives
       • What are the concerns in the data center?
       • Data center standards & best practices
    3. Data Center Definition
       • "Computer facility designed for continuous use by several users, and well equipped with hardware, software, peripherals, power conditioning and backup, communication equipment, security systems, etc." – businessdictionary.com
       • "… It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices." – wikipedia.org
       • Notice the common terminology: power conditioning, cooling, redundancy, security, capacity, monitoring & controls, growth
       • Levels of implementation set them apart
    4. Why should we care?
       • DCs house mission-critical data & equipment. In addition…
       • Challenges: increased demand for:
         - Application/system availability & SLAs
         - Complex & heterogeneous systems
         - Service levels for uptime and responsiveness
         - Amount of data (live and retention)
         - Regulatory compliance and security
         - Changing business demands
         - Green practices & energy costs
       • Need a facility to accommodate these
    5. Data Center Standards
       • Without standards:
         - No methodology for comparing data centers for reliability and availability
         - Variations in data center designs
       • Three commonly known tier systems:
         - Uptime Institute (1995)
         - Syska Hennessy Group
         - ANSI/TIA-942 or TIA-942 (2005, 2008, 2010)
    6. Data Center Standards
       • Uptime and Syska
         - Neither addresses the challenges
         - Both provide a framework for the disciplines in a DC, but not enough detail
       • TIA-942
         - Requirements/guidelines for the design & installation of a data center
         - Multidisciplinary design considerations
         - Intended audience
    7. TIA-942 Multidisciplinary Design
       • Design considerations: architectural design (space, floor, light, security, etc.), structured wiring, electrical, cooling/mechanical, operations
       • Design areas: design process, space planning, redundancy, site selection, architectural, structural, electrical, mechanical/cooling, fire protection, security, building automation, access providers, telecom spaces, cabinets & racks, cabling pathways, cabling systems, cabling field testing, telecom administration, information technology, commissioning, maintenance
    8. TIA-942 – Discussion Topics
       • For today's discussion, focus on:
         - Data center spaces
         - Data center cabling
         - Electrical
         - Cooling
         - Tier system
    9. Spaces – Functional Areas
       • TIA-942 – five key functional areas:
         (1) Entrance Room (ER)
         (2) Main Distribution Area (MDA)
         (3) Horizontal Distribution Area (HDA)
         (4) Zone Distribution Area (ZDA), optional
         (5) Equipment Distribution Area (EDA)
       • Ideally separate rooms, but not practical for normal organizations
       • Does not include NOC, office space, tape library storage
       Source: Corning – Distribution in the Data Center
    10. Spaces – Functional Areas (diagram of the five functional areas)
        Source: ADC's Data Center Optical Distribution Frame: The Data Center's Main Cross-Connect
    11. Spaces – Optional ZDA
        • Between HDA and EDA
        • Provides modularity
        • Facilitates MACs
        • Top of rack
        Source: Corning – Distribution in the Data Center
    12. Spaces – Reduced Topology
        • Reduced data center topology
          - Consolidated ER/MDA/EDA
          - Applicable to most enterprises
        Source: Ortronics – Standards-Based Data Center Structured Cabling System Design
    13. Spaces – Typical Requirements
        • Location
          - Avoid locations that restrict expansion
          - Redundant access to facility
          - Delivery of large equipment
          - Located away from EMI sources
          - No exterior windows (increased heat & security risk)
          - Provide authorized access & monitoring
        • Size – no magic formula
          - Sized to meet the known requirements of specific equipment
          - Include projected future as well as present requirements
        • Ceiling height
          - Min. 8.5' from finished floor to any obstruction (sprinklers, lighting fixtures, or cameras)
          - Cooling architecture may dictate higher ceilings
          - Min. 18" clearance from water sprinkler heads
        • Flooring / walls
          - Anti-static properties
          - Sealed/painted to minimize dust
          - Light color to enhance lighting
          - Min. distributed floor loading 150 lbf/sq ft, recommended 250 lbf/sq ft
    14. Spaces – Typical Requirements
        • Other equipment
          - UPS, power distribution or conditioner: <= 100 kVA inside room; > 100 kVA in a separate room
        • Lighting
          - Min. 500 lux in the horizontal plane and 200 lux in the vertical plane
          - Lighting on separate circuits/panels
          - Emergency lighting & signs
        • Doors
          - Min. 3' wide x 7' high, no obstructions or removable center
        • Operational parameters
          - Dedicated HVAC system preferred (68-77 F); measured every 10-30 ft at 1.5 ft height
          - HVAC: min. 100-400 sq ft/ton
          - Max. temperature rate of change: 5 F/hr
          - 40% to 55% relative humidity (reduces ESD)
        • Electrical: signal reference grid (SRG) / common bonding network
        • Sprinkler systems: pre-action system or clean agent
        • Security: camera monitoring (int./ext.)
        • Site outside the 100-yr flood plain
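A toy monitoring check against the operational parameters above (68-77 F, 40-55% RH, max 5 F/hr rate of change); the sensor readings are illustrative:

```python
def check_environment(temp_f: float, rh_pct: float,
                      delta_f_per_hr: float) -> list[str]:
    """Return alerts for readings outside the TIA-942 envelope above."""
    alerts = []
    if not 68 <= temp_f <= 77:
        alerts.append(f"temperature {temp_f} F outside 68-77 F")
    if not 40 <= rh_pct <= 55:
        alerts.append(f"humidity {rh_pct}% outside 40-55% RH")
    if abs(delta_f_per_hr) > 5:
        alerts.append(f"temp changing {delta_f_per_hr} F/hr (> 5 F/hr)")
    return alerts

print(check_environment(75, 48, 2))     # [] -> within the envelope
print(check_environment(80, 35, 6.5))   # three alerts
```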
    15. Spaces – Raised vs. Solid Floor
        • Raised floor is a very common notion, but...
          - Older vs. newer equipment airflow (bottom-up vs. front-to-back)
          - Hot aisle / cold aisle: examine the airflow dynamics
          - Cold air wants to fall, but we are pushing it up: requires pressure through perforated tiles
          - As equipment densities increase -> higher heat load -> higher pressure of cold air through a restrictive space
          - What happens to hot air? It flows up, cools, and begins to fall again
          - Its only place to go is to creep into the cold aisle, so air at cabinet tops is warmer
          - Typically see passive components or open spaces near the tops of cabinets
          - Openings/leaks in the flooring affect pressure
          - Both approaches use anti-static tiles or flooring
          - Data & electrical cabling restrictions
          - New build: more expensive
        • Look at your environment to see whether a raised floor makes sense, rather than treating it as the rule of thumb!
    16. Cabling Systems
        • Structured vs. unstructured cabling
        • Backbone cabling
        • Horizontal cabling
        • Cross-connect in the entrance room or main distribution area
        • Main cross-connect (MC) in the main distribution area
        • Horizontal cross-connect (HC) in the telecommunications room, horizontal distribution area, or main distribution area
        • Zone outlet or consolidation point in the zone distribution area
        • Outlet in the equipment distribution area
    17. Cabling Systems (diagram)
        Source: Corning Cable Systems – Just the Technical Facts
    18. Cabling Systems – Transmission Media
        • 100-ohm twisted-pair copper cable
          - Category 5e, 6, or 6A
          - 10GbE: Cat 6 – 37-55 m, Cat 6A – 100 m
        • Multimode fiber optic cable
          - 62.5/125 µm or 50/125 µm
          - 50/125 µm 850 nm laser-optimized MMF
        • Singlemode optical fiber cable
        • 75-ohm coaxial cable
          - Type 734 & 735 cable
          - T1.404 coaxial connector
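A hypothetical helper that picks copper media for a 10GbE run from the distance limits quoted on this slide; the function and its cutoffs are illustrative (Cat 6's reach depends on alien-crosstalk conditions, hence its 37-55 m range):

```python
def copper_for_10gbe(distance_m: float) -> str:
    """Suggest copper media for a 10GbE run, per the slide's distance limits."""
    if distance_m <= 37:
        return "Cat 6 (within even the conservative end of its 37-55 m range)"
    if distance_m <= 100:
        return "Cat 6A"
    return "beyond 100 m copper reach: use fiber"

for d in (30, 85, 140):
    print(f"{d} m -> {copper_for_10gbe(d)}")
```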
    19. Cabling Systems – Under Floor / Overhead
        • Under-floor cabling
          - Less expensive than overhead if a raised floor exists
          - Multilevel trays/paths for fiber/copper/power
          - Cabling in cable trays to minimize airflow blockage
            - Data cables: hot aisle
            - Power cables: cold aisle
          - Provide adequate capacity for growth
          - Electrical: color-coded PDU with locking receptacle; receptacles labeled with PDU/panel ID & breaker #
    20. Cabling Systems – Under Floor / Overhead
        • Overhead
          - Can also be used in raised-floor environments
          - Multilevel cable tray system: bottom layer copper, middle layer fiber, top layer power
          - Suspended from ceiling; min. 12" clearance above each ladder
          - 5" separation from fluorescent lights & power
          - Avoid blocking cooling ducts (overhead cooling) and return air paths
    21. Racks / Cabinets
        • Placement of racks/cabinets
          - Hot aisle / cold aisle: arranged in an alternating pattern (with fronts facing each other)
          - Cold aisles are the front & hot aisles the rear of racks/cabinets; for best results use containment
          - If there is a raised floor: cold aisle - PDU cables; hot aisle - data cable trays
          - Common bonding network (CBN): racks, cable trays, HVAC, PDU, panel boards, raised floor structure, columns tied to common ground
          - Rack clearance: front min. 3 ft (4 ft recommended); back min. 3 ft
    22. Racks / Cabinets (diagram)
        Source: anixter.com
        Still applies for overhead cooling as well
    23. Racks / Cabinets
        • Placement of racks/cabinets
          - Front rails recessed for wire management
          - Switch-panel-switch arrangement
          - Front edge of cabinet on edge of tile
          - Perforated tiles at front of cabinets
          - Provide blank panels in empty spaces
    24. Electrical Considerations
        • Unfortunately, no magic bullet!
          - Manual process for load configuration
          - APC's "Calculating Total Power Requirements for Data Centers" by Richard Sawyer: a framework for calculating requirements
          - Color-coded PDU with locking receptacle; receptacles labeled with PDU/panel ID & breaker #
        • Best practices
          - Multiple power grid connections
          - Dual A-B cording
          - Sub-breakers per relay rack or lineup
          - Intelligent PDUs
          - Generator capacity to include cooling
          - UPS capacity to include cooling and lights
          - Accommodate growth
    25. Cooling Considerations
        • No specific guidelines
        • Basic physics
          - Cooling required = heat generated = electrical load
          - #1 mitigating factor in the DC: heat removal
        • Design implications: address these factors to avoid limits on capacity, redundancy, and efficiency
          - Layout of racks in alternating rows (hot/cold aisle)
          - Location of CRAC units
          - Quantity and location of vents
          - Sizing of ductwork
          - Proper internal configuration of racks; airflow
    26. Cooling Considerations
        • Process (cooling capacity alone is not enough; airflow delivery is required):
          1. Determine critical heat load
          2. Establish critical loads: watts per RLU/rack
          3. Determine the CFM requirements per RLU/rack
          4. Establish a floor plan to balance heat loads; if possible, divide the room into cooling zones by RLU
          5. Determine appropriate air conditioner type(s); account for equipment airflow (front-to-back / side-to-side)
          6. Determine cooling delivery methodology(s): room, row, rack; blank panels/short circuits; cold air containment; special considerations for high BTU
          7. Deploy a comprehensive monitoring system
        RLU: Rack Location Unit; CFM: Cubic Feet / Min.; BTU: British Thermal Unit
    27. Cooling Considerations – Airflow (diagram)
        Supply- and return-based systems; contained systems for best results
        Source: apc.com
    28. Fire Detection and Suppression
        • Significant risk of electrical fires
        • A comprehensive fire detection & suppression system is mission-critical
        • Water detection systems
        • Detection
          - Both heat and smoke detection
          - Airflow patterns determine the location of detection units
          - Interconnect with the fire suppression system, local alarms, monitoring system, etc.
          - Install in accordance with NFPA 72E
          - Install below raised floors and in other concealed areas
        • Suppression
          - Follow the NFPA 75 standard (firewalls)
          - Chemical systems or clean agent (FM-200, Inergen, Ecaro-25 (FE-25), Novec 1230)
          - Sprinkler systems: both flooded and pre-action (prevents accidental discharge)
          - Manual systems (manual pull stations, portable fire extinguishers)
    29. Tier System – TIA-942
        • Four-tier system based on Uptime
          - Based on resilience/capacity of the MEP systems (MEP: mechanical, electrical & plumbing, incl. cable routing)
          - 16 pages of criteria
        • Primary categories: redundancy, telecommunications, architectural and structural, physical security, electrical, mechanical
        • Sample sub-categories: power and cooling delivery paths, initial & ultimate watts/sq ft, support space to raised floor ratio, raised floor height, floor loading pounds/sq ft, utility voltage
    30. Optimal Criticality – Choosing a Tier
        • Balance cost of downtime and TCO (source: apc.com)

        Criticality 1
          Business characteristics: typically small businesses; limited online presence; low dependence on IT; perceive downtime as a tolerable inconvenience
          Effect on system design: numerous single points of failure in all aspects of design; no generator if UPS has 8 minutes of backup time; generally unable to sustain more than a 10-minute power outage

        Criticality 2
          Business characteristics: some online revenue generation; multiple servers; phone system vital to business; dependent on email; some tolerance for scheduled downtime
          Effect on system design: some redundancy in power and cooling systems; generator backup; able to sustain a 24-hour power outage; minimal thought to site selection; vapor barrier; formal data room separate from other areas

        Criticality 3
          Business characteristics: world-wide presence; majority of activity online; VoIP phone system; high dependence on IT; high cost of downtime; highly recognized brand
          Effect on system design: two utility paths (active and passive); redundant power and cooling systems; redundant service providers; able to sustain a 72-hour power outage; careful site selection planning; one-hour fire rating; allows for concurrent maintenance

        Criticality 4
          Business characteristics: multi-million dollar business; majority of revenue from electronic transactions; business model entirely dependent on IT; extremely high cost of downtime
          Effect on system design: two independent utility paths; 2N power and cooling systems; able to sustain a 96-hour power outage; stringent site selection criteria; minimum two-hour fire rating; high physical security; 24/7 onsite maintenance staff
    31. Tier System (source: The Uptime Institute)

        Attribute / Statistic              | Tier I   | Tier II  | Tier III            | Tier IV
        Power & cooling delivery paths     | 1 active | 1 active | 1 active, 1 passive | 2 active
        Redundant components               | N        | N + 1    | N + 1               | 2(N + 1)
        Support space : raised floor ratio | 20%      | 30%      | 80-90%              | 100%
        Initial watts/sq ft                | 20-30    | 40-50    | 40-60               | 50-80
        Ultimate watts/sq ft               | 20-30    | 40-50    | 100-150             | 150+
        Raised floor height                | 12"      | 18"      | 30-36"              | 30-36"
        Floor loading (lbf/sq ft)          | 85       | 100      | 150                 | 150+
        Utility voltage                    | 208, 480 | 208, 480 | 12-15 kV            | 12-15 kV
        Months to implement                | 3        | 3-6      | 15-20               | 15-20
        Year first deployed                | 1965     | 1970     | 1985                | 1995
        Construction $/sq ft               | $450     | $600     | $900                | $1,100+
        Annual IT downtime due to site     | 28.8 hrs | 22.0 hrs | 1.6 hrs             | 0.4 hrs
        Site availability                  | 99.67%   | 99.75%   | 99.98%              | 99.995%
    32. Next / Action Steps
        • Understand TIA-942 (requirements & process)
        • Perform a risk assessment
          - Identify constraints, associated risks, cost of downtime
        • For each sub-system:
          - Determine the current tier
          - All systems need not be Tier IV; pick & choose
        • Work with Finance & Facilities to resolve issues
        • If risks are too high, look at alternatives
    33. Outsourced Data Center
        • If it fits the business model, consider outsourcing
        • An affordable co-location/hosted data center and high uptime (99.995%) are not mutually exclusive
        • Understand the levels of redundancy and the uptime SLA in order to get the best combination of uptime and affordability
        • Balance between budget and availability
    34. Outsourced Data Center
        • What to look for…
          - Claims of Uptime Tier III or IV; most are not certified
          - Hardened data center buildings
          - Data center power & cooling redundancy
          - Telecom entrance redundancy
          - Availability of multiple carriers
          - Physical security
          - SAS 70 data center compliance
          - SLA
    35. Review
        • TIA-942: organized common sense
        • Key design parameters for the disciplines in a data center
        • Tier system
        • Next steps
    36. Resources
        • Useful links
          - Excellent white papers from www.apc.com
          - TIA: http://www.tiaonline.org/
          - Green data center efficiency savings calculator: http://cooling.thegreengrid.org/namerica/WEB_APP/calc_index.html
          - The Green Grid (thegreengrid.org)
          - Department of Energy DC profiling tool: http://www1.eere.energy.gov/industry/datacenters/software.html
          - … and obviously, Google or Bing it.
    37. Questions
    38. Contact Information
        Sri Chalasani
        [email_address]
        248.223.3707
