To 30kW and Beyond: High-Density Infrastructure Strategies that Improve Efficiencies and Cut Costs

Instead of building data centers out, like a city with a growing population supported by sprawling suburbs, data center designers and managers are increasingly finding that it is more efficient to build up, replacing the sprawl with higher-density racks. Today's data center capacities are skyrocketing as newer, denser servers are deployed, but in addition to capitalizing on faster, smarter and smaller computing technologies, organizations can also realize significant cost and space savings by deploying high-density power and cooling infrastructures to support these systems. Discover how high-density cooling, aisle containment and power distribution architectures can help reduce facility space, energy costs and IT budgets.

Speaker notes (excerpts):
  • …can really equal lower cost.
  • …get into that later on in the presentation.
  • …free cooling, and we’ll see the benefits of that.
  • ….and the trend is gonna continue.
  • …and energy is becoming more and more important.
  • …and looking at rack-based cooling.
  • …some areas where it’s very, very effective.
  • …eliminate that waste, or that bypass air.
  • …more and more efficient with the cooling system.
  • …whether it’s the power or the cooling systems.
  • …strategies for enhancing cooling efficiency within the data center.
  • …more efficient as we go forward here.
  • …make the CRAC unit much more efficient.
  • …depending on the operation of the server.
  • …in total, and we’ll talk about that towards the end here.
  • …how does high density equal lower cost?
  • …provisioning for higher density really drive that lower cost.
  • …so it’s really simple to deploy.
  • …very simple as a cooling solution.
  • … very, very reliable cooling all the time.
  • …Turn it back to you Thom.
  • Thanks, Steve. Now I’d like to hand things over to David Martinez, Facilities Coordinator in the Sandia National Laboratories Corporate Computing Facilities. During his more than 25 years with Sandia Labs, David has worked in a variety of capacities. During his tenure, David has seen the data center operations grow from about 20,000 sq. ft. to over 77,000 sq. ft., comprising four unique data center environments. David, thanks for joining us today to discuss how you implemented some of the high-density strategies Steve discussed with the deployment of Sandia’s new “Red Sky” supercomputer. …I’ll just hop right into it here.
  • …so we went in search of a new computer.
  • …computer and more efficient computer.
  • …so this is how we did it:
  • …and about 140 welded connections.
  • …running at 40 hertz, 40-50 hertz.
  • …about 10% gas to get that liquid back to the XDP units.
  • …a very clean installation.
  • …safe thing, very, very effective.
  • …and it passes from there.
  • …time of trouble, if you did have it.
  • …basically how it works.
  • …with our climate here.
  • …have advanced in this slide.
  • …Red Sky in the red.
  • …just by the low tonnage usage.
  • …this is the slide they produced.
  • …huge step in the right direction with this install.
  • …so back to you Thom.

    1. To 30kW and Beyond: High-Density Infrastructure Strategies that Improve Efficiencies and Cut Costs
    2. Emerson Network Power: The global leader in enabling Business-Critical Continuity. Emerson Technologies: Uninterruptible Power, Power Distribution, Surge Protection, Transfer Switching, DC Power, Precision Cooling, High Density Cooling, Racks, Rack Monitoring, Sensors and Controls, KVM, Real-Time Monitoring, Data Center Software. © 2010 Emerson Network Power
    3. Emerson Network Power – An organization with established customers. © 2010 Emerson Network Power
    4. Presentation topics: • Emerson Network Power overview • “High Density Equals Lower Cost: High Density Design Strategies for Improving Efficiency and Performance,” Steve Madara, Vice President and General Manager, Liebert North America Precision Cooling, Emerson Network Power • “Sandia National Laboratories’ Energy Efficient Red Sky Design,” David Martinez, Facilities Coordinator, Computing Infrastructure and Support Operations, Sandia National Laboratories • Question and Answer session. © 2010 Emerson Network Power
    5. High Density Equals Lower Cost: High Density Design Strategies for Improving Efficiency and Performance. Steve Madara, Vice President and General Manager, Liebert North America Precision Cooling, Emerson Network Power. © 2010 Emerson Network Power
    6. Agenda: • Industry trends and challenges • Three strategies for enhancing cooling efficiency • High density equals lower cost. © 2010 Emerson Network Power
    7. Industry Trends and Challenges © 2010 Emerson Network Power
    8. Trends and challenges • Server design – higher ∆T across the server is raising leaving air temperatures (diagram: traditional cooling unit). © 2010 Emerson Network Power
    9. Trends and challenges • Regulatory – ASHRAE 90.1 Standard, 2010 revisions; refrigerant legislation • Discussion around lower dew point limits • Raising cold aisle temperatures – intended to improve cooling capacity and efficiency; monitor the impact on server fans • No cooling. © 2010 Emerson Network Power
    10. Spring 2010 Data Center Users’ Group survey (chart): Average kW per rack – now averaging ~8 kW, rising to an average of 10-12 kW in two years; Maximum kW per rack – now averaging >12 kW, rising to an average of 14-16 kW in two years. Responses binned from 2 kW or less up to greater than 24 kW, plus unsure. © 2010 Emerson Network Power
    11. Spring 2010 Data Center Users’ Group survey: Top data center issues – experienced hot spots 40%, run out of power 26%, experienced an outage 23%, experienced a “water event” 23%, N/A (have not had any issues) 23%, run out of cooling 18%, run out of floor space 16%, excessive energy costs 13%, other 2%. © 2010 Emerson Network Power
    12. Spring 2010 Data Center Users’ Group survey: Implemented or considered technologies (chart; responses shown in the order already implemented / plan to implement / still considering / considered but decided against / will not consider / unsure) – Fluid economizer on chiller plant 14%/5%/14%/10%/28%/29%; Fluid economizer using dry coolers 16%/4%/14%/7%/31%/28%; Air economizer 16%/5%/23%/11%/22%/24%; Cold aisle containment 28%/11%/38%/14%/6%/3%; Containerized/modular data center 5%/2%/24%/18%/35%/16%; Wireless monitoring 7%/5%/44%/9%/18%/18%; Solar array 1%/3%/14%/10%/47%/25%; Rack based cooling 11%/6%/38%/18%/13%/15%; DC power 6%/3%/18%/18%/35%/20%. © 2010 Emerson Network Power
    13. Spring 2010 Data Center Users’ Group survey: Implemented or considered technologies (same chart as slide 12, repeated). © 2010 Emerson Network Power
    14. Impact of design trends on the cooling system (diagram: 4 kW server, design ΔT 21 F, operating ΔT 16 F, ~30% by-pass air at 60 F, 72 F return to the CRAC) • Legacy data center designs with poor airflow management resulted in significant by-pass air – a typical CRAC unit has a design ΔT of around 21 F at 72 F return air – low-ΔT servers and significant by-pass air resulted in an actual lower CRAC ΔT and thus lower efficiency. © 2010 Emerson Network Power
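A minimal sketch of the by-pass-air effect described on slide 14, using the standard sensible-heat relation for air (Q ≈ 1.08 × CFM × ΔT in °F). The numbers below are illustrative assumptions loosely following the slide's 4 kW / 16 °F server example, not figures taken from the deck:

```python
# Illustrative sketch: how by-pass air lowers the effective CRAC delta-T.
# Sensible heat relation for standard air: Q[BTU/hr] ~ 1.08 * CFM * dT[F];
# 1 kW ~ 3412 BTU/hr. All numbers below are assumptions for illustration.

KW_TO_BTUH = 3412.0

def required_cfm(load_kw, delta_t_f):
    """Airflow needed to carry a sensible load at a given temperature rise."""
    return load_kw * KW_TO_BTUH / (1.08 * delta_t_f)

supply_f    = 60.0   # CRAC supply air temperature
server_kw   = 4.0    # rack load (as in the slide example)
server_dt   = 16.0   # operating server delta-T
bypass_frac = 0.30   # share of CRAC airflow that never passes through a server

server_cfm = required_cfm(server_kw, server_dt)
crac_cfm   = server_cfm / (1.0 - bypass_frac)   # CRAC must move extra air
server_out = supply_f + server_dt               # air leaving the servers

# Return air is a mix of hot server exhaust and cold by-passed supply air.
return_f = (1 - bypass_frac) * server_out + bypass_frac * supply_f
print(f"Server airflow: {server_cfm:.0f} CFM, CRAC airflow: {crac_cfm:.0f} CFM")
print(f"Return air {return_f:.1f} F -> effective CRAC dT only {return_f - supply_f:.1f} F")
```

With these assumed numbers the CRAC sees only about an 11 F rise even though the servers run at 16 F, which is the kind of efficiency loss the slide describes.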
    15. Impact of higher temperature-rise servers (diagram: 8 kW server with a 35 F ΔT against a 21 F CRAC ΔT, ~40% by-pass air at 56 F supply, 91 F hot-aisle exhaust) • Even with improved best practices, newer servers can create challenges in existing data centers – a server ΔT greater than the CRAC unit’s ΔT capability requires more cooling-unit airflow to meet the kW server load – and requires even more careful management of the return air to the cooling unit because of the high temperatures exiting the server. © 2010 Emerson Network Power
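Slide 15's 40% by-pass figure falls out of the same airflow arithmetic once the server ΔT (35 °F) exceeds the CRAC's ΔT capability (21 °F); a small sketch, again using the standard sensible-heat relation:

```python
# Sketch: when server delta-T exceeds the CRAC's delta-T capability, the CRAC
# must move more air than the servers draw, and the excess becomes by-pass air.

KW_TO_BTUH = 3412.0

def cfm_for(load_kw, delta_t_f):
    return load_kw * KW_TO_BTUH / (1.08 * delta_t_f)

server_kw, server_dt, crac_dt = 8.0, 35.0, 21.0   # figures from slide 15

server_cfm = cfm_for(server_kw, server_dt)   # air the servers actually draw
crac_cfm   = cfm_for(server_kw, crac_dt)     # air the CRAC must move at 21 F dT
bypass     = 1.0 - server_cfm / crac_cfm

print(f"Server airflow {server_cfm:.0f} CFM vs CRAC airflow {crac_cfm:.0f} CFM")
print(f"Implied by-pass air: {bypass:.0%}")   # ~40%, matching the slide
```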
    16. Introducing Efficiency Without Compromise™ – improving performance of the IT infrastructure and environment: balancing high levels of availability and efficiency; adapting to IT changes for continuous optimization and design flexibility; delivering architectures from 10-60 kW/rack to minimize space and cost; Expertise & Support. © 2010 Emerson Network Power
    17. Three Strategies for Enhancing Cooling Efficiency © 2010 Emerson Network Power
    18. Strategy 1: Getting the highest temperature to the cooling unit • Higher return air temperatures increase the cooling unit capacity and efficiency • Increases the CRAC unit ΔT • Increases the SHR (sensible heat ratio). © 2010 Emerson Network Power
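Slide 18 cites the SHR; a brief illustration with hypothetical coil loads (assumptions, not deck figures) of why warmer return air, kept above the dew point, pushes the SHR toward 1:

```python
# Sensible heat ratio: the fraction of coil capacity doing useful (sensible)
# cooling rather than condensing moisture. The loads below are hypothetical.

def shr(sensible_kw, latent_kw):
    return sensible_kw / (sensible_kw + latent_kw)

# Return air close to the dew point: part of the coil capacity dehumidifies.
print(f"Cool, moist return air: SHR = {shr(85.0, 15.0):.2f}")
# Warmer return air well above the dew point (e.g. with containment).
print(f"Warm, dry return air:   SHR = {shr(99.0, 1.0):.2f}")
```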
    19. Raising the temperature at the room level: without containment, hot-air wrap-around will occur and will limit maximum return temperatures; with containment, hot-air wrap-around is eliminated and server supply temperatures are controlled. • Impacted by – the length of the rack rows (long rows are difficult to predict), floor tile placement, rack consistency and load • Improved with – ducted returns, local high-density modules • Containment can be at the room or zone level • Supply air control provides consistency between containments • Can be used in combination with high-density modules or row-based cooling. © 2010 Emerson Network Power
    20. Strategy 2: Providing the right cooling and airflow • Efficiency is gained when: Server Load (kW) = Cooling Unit Capacity (kW), and Server Airflow = Cooling Unit Airflow • Challenges – rising server ΔT results in higher by-pass air to meet the cooling load – requires variable-speed fans and independent control of the airflow from the cooling capacity. © 2010 Emerson Network Power
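One reason the variable-speed-fan point matters is the fan affinity laws: airflow scales roughly linearly with fan speed while fan power scales roughly with its cube. A short sketch using a hypothetical 8.5 kW fan group (an assumption, not a deck figure):

```python
# Fan affinity laws, illustrated: trimming cooling-unit airflow to match the
# actual server airflow cuts fan power roughly with the cube of the turndown.

rated_fan_kw = 8.5   # hypothetical fan power at 100% airflow
for airflow_fraction in (1.00, 0.80, 0.60):
    fan_kw = rated_fan_kw * airflow_fraction ** 3
    print(f"{airflow_fraction:>4.0%} airflow -> {fan_kw:4.1f} kW fan power")
# 80% airflow needs about half the fan power; 60% airflow needs about a fifth.
```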
    21. Strategy 3: Provide the most efficient heat rejection components • Reduce cooling unit fan power • Maximize use of economization mode to reduce compressor hours of operation (chiller / compressor) • Improve condenser heat transfer and reduce power • Direct cooling – eliminate fans and compressor hours. © 2010 Emerson Network Power
    22. High Density Equals Lower Cost © 2010 Emerson Network Power
    23. High density equals lower cost – adding 2000 kW of IT at 20 kW/rack vs. 5 kW/rack: smaller building for high density; fewer racks for high density; capital equipment costs more; equipment installation costs higher; high-density cooling is more efficient. Cost difference, low density vs. high density (parentheses indicate savings for the high-density design): Building capital costs @ $250/sq. ft. ($1,875,000); Rack and PDU capital costs @ $2,500 each ($750,000); Cooling equipment capital costs $320,000; Installation costs $750,000; Capital cost total ($1,555,000); Cooling operating costs (1 yr) ($420,480); Total net savings of a high-density design ($1,975,480); 5-yr total net savings of high density ($3,657,400). © 2010 Emerson Network Power
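The slide's totals follow directly from its line items (parenthesized values treated as savings); a small sketch reproducing the arithmetic:

```python
# Reproducing the slide-23 cost arithmetic. Negative values are costs avoided
# by the 20 kW/rack design relative to 5 kW/rack; positive values are premiums.

building_capital = -1_875_000   # smaller building @ $250/sq. ft.
rack_pdu_capital =   -750_000   # fewer racks and PDUs @ $2,500 each
cooling_capital  =    320_000   # high-density cooling equipment premium
installation     =    750_000   # higher installation costs
cooling_opex_yr  =   -420_480   # annual cooling operating savings

capital_total = building_capital + rack_pdu_capital + cooling_capital + installation
print(f"Capital cost total: {capital_total:,}")                       # -1,555,000
print(f"1-year net savings: {capital_total + cooling_opex_yr:,}")     # -1,975,480
print(f"5-year net savings: {capital_total + 5 * cooling_opex_yr:,}") # -3,657,400
```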
    24. Total cost of ownership benefits – Traditional cooling: fan power of 8.5 kW per 100 kW of cooling, average entering air temperature of 72-80 F. Liebert XD: fan power of 2 kW per 100 kW of cooling, average entering air temperature of 95-100 F. (Chart: chiller capacity per kW of sensible heat load, broken into latent, fan and sensible components, for traditional CW CRAC, CW enclosed rack and refrigerant modules.) • 65% less fan power • Greater cooling coil effectiveness • 100% sensible cooling • 20% less chiller capacity required • Overall 30% to 45% energy savings, yielding a payback down to 1 year. © 2010 Emerson Network Power
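To put the fan-power figures in context, here is an illustrative scaling to a 1 MW IT load, assuming cooling capacity roughly equal to IT load, year-round operation, and a hypothetical $0.10/kWh tariff (all three are assumptions, not deck figures):

```python
# Illustrative scaling of the slide-24 fan-power figures to a 1 MW IT load.
# Assumptions: cooling capacity ~ IT load, 8760 h/yr, $0.10/kWh tariff.

it_load_kw  = 1000.0
trad_fan_kw = 8.5 / 100.0 * it_load_kw   # traditional CRAC fans
xd_fan_kw   = 2.0 / 100.0 * it_load_kw   # Liebert XD module fans
hours, tariff = 8760, 0.10

saved_kwh = (trad_fan_kw - xd_fan_kw) * hours
print(f"Fan power: {trad_fan_kw:.0f} kW traditional vs {xd_fan_kw:.0f} kW XD")
print(f"Fan energy avoided: {saved_kwh:,.0f} kWh/yr (~${saved_kwh * tariff:,.0f}/yr)")
```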
    25. Solutions transforming high density to high efficiency (product family chart): Liebert XDO20, XDV10 and XDS20 modules, XDR10/40 rear doors, XDH20/32 cooling modules, XDC or XDP cooling infrastructure, and embedded component cooling in supercomputers without server fans (microchannel intercoolers), with capacities spanning roughly 10 kW to more than 100 kW per rack; future pumping units of larger capacity and N+1 capability. New and future product configurations. © 2010 Emerson Network Power
    26. Rear door cooling • Passive rear-door cooling module – no cooling unit fans (server airflow only) – optimal containment system – allows data centers to be designed in a “cascading” mode • Simplifies the cooling requirements – solves the problem for customers without hot and cold aisles – room neutral, does not require airflow analysis – no electrical connections. © 2010 Emerson Network Power
    27. Customer layout (diagram illustrating the cascade effect). © 2010 Emerson Network Power
    28. Direct server cooling without server fans – Liebert XDS configuration • A cooling rack which uses the Clustered Systems cooling technology to move heat directly from the server to the Liebert XD pumped refrigerant system • Heat is never transferred to the air • Provides cooling for 36 1U servers • Available initially in 20 kW capacity, with a 40 kW rack in the future. Benefits: • No cooling module fans or server fans • 8% to 10% smaller IT power requirements • Chill Off 2 testing at 80 F fluid temperature resulted in an effective PUE < 1 • Next opportunity: XDP connection to a cooling tower without a chiller. © 2010 Emerson Network Power
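The slide does not define how its "effective PUE < 1" is computed; one common framing is to keep the fan-equipped IT load as the PUE denominator while the fanless servers plus pumped-refrigerant cooling together draw less than that. The sketch below uses hypothetical values and that assumed definition:

```python
# Hypothetical accounting behind an "effective PUE < 1" style claim: the
# denominator stays at the original fan-equipped IT load, while fanless
# servers (the slide cites 8-10% lower IT power) plus cooling draw less than
# it. The cooling overhead figure here is an assumption, not from the deck.

baseline_it_kw = 100.0                   # servers with internal fans
fanless_it_kw  = baseline_it_kw * 0.91   # ~9% IT power reduction (mid-range)
cooling_kw     = 6.0                     # assumed pumped-refrigerant overhead

effective_pue = (fanless_it_kw + cooling_kw) / baseline_it_kw
print(f"Effective PUE vs. the fan-equipped baseline: {effective_pue:.2f}")  # ~0.97
```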
    29. Energy consumption comparisons (chart): equivalent PUE < 1. © 2010 Emerson Network Power
    30. Industry evolving technology to improve data center cooling efficiency. © 2010 Emerson Network Power
    31. Sandia National Laboratories’ Energy Efficient Red Sky Design – David J. Martinez, Facilities Coordinator, Corporate Computing Facilities, Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000
    32. Why build Red Sky? What happened to consolidation? • Addresses critical national need for High Performance Computing (HPC) • Replaces aging current HPC system. © 2010 Emerson Network Power
    33. System comparison – Thunderbird: 140 racks (large), 50 teraflops total, ~13 kW/rack full load, 518 tons cooling, 12.7M gal water per yr. Red Sky: 36 racks (small), 10 teraflops per rack, ~32 kW/rack full load, 328 tons cooling, 7.3M gal water per yr. © 2010 Emerson Network Power
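A few back-of-the-envelope ratios derived from the comparison table; the only added assumption is treating Red Sky's total compute as 36 racks × 10 teraflops per rack, since the slide quotes Red Sky per rack and Thunderbird in total:

```python
# Derived density ratios from the slide-33 comparison. Assumption: Red Sky's
# total compute = 36 racks * 10 teraflops/rack.

systems = {
    "Thunderbird": {"racks": 140, "teraflops": 50.0,  "tons": 518, "mgal": 12.7},
    "Red Sky":     {"racks": 36,  "teraflops": 360.0, "tons": 328, "mgal": 7.3},
}
for name, s in systems.items():
    print(f"{name:11s}: {s['teraflops'] / s['racks']:6.2f} TF/rack, "
          f"{s['teraflops'] / s['tons']:5.2f} TF/ton of cooling, "
          f"{s['teraflops'] / s['mgal']:5.1f} TF per Mgal of water/yr")
```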
    34. Red Sky and new technologies – the system design implemented three new technologies for power and cooling: Modular Power Distribution Units, Liebert XDP Refrigerant Cooling Units, Glacier Doors. The facility had to be prepared… © 2010 Emerson Network Power
    35. Facility preparation: 3.5 months, zero accidents, 0.5 miles of copper, 650 brazed connections, 400 ft. of carbon steel, 140 welded connections. © 2010 Emerson Network Power
    36. Facility preparation © 2010 Emerson Network Power
    37. Facility preparation © 2010 Emerson Network Power
    38. Facility preparation © 2010 Emerson Network Power
    39. © 2010 Emerson Network Power
    40. New cooling solutions – Glacier Doors: the first rack-mounted, refrigerant-based passive cooling system on the market. XDP Refrigerant Units: a pumping unit that serves as an isolating interface between the building chilled water system and the pumped refrigerant (R-134a) circuit • Operates above the dew point • No compressor • Power is used to cool the computer, not to dehumidify • 100% sensible cooling at 0.13 kW per kW of cooling. © 2010 Emerson Network Power
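For reference, the 0.13 kW of input power per kW of cooling quoted for the XDP stage converts to roughly the following figures; this is a straightforward unit conversion, not additional deck data:

```python
# Unit conversions for the slide-40 figure of 0.13 kW input per kW of cooling.

KW_PER_TON = 3.517   # 1 ton of refrigeration = 3.517 kW of cooling

power_per_kw_cooling = 0.13
cop        = 1.0 / power_per_kw_cooling          # cooling delivered per kW in
kw_per_ton = power_per_kw_cooling * KW_PER_TON   # same figure expressed per ton

print(f"Effective COP of the XDP stage: {cop:.1f}")       # ~7.7
print(f"Equivalent input power: {kw_per_ton:.2f} kW/ton")  # ~0.46
```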
    41. The total cooling solution – how it works: 90% of the heat load is removed with the Liebert XDP-Glacier Door combination; uses the laminar air flow concept; perforated tiles are only needed in the first row. © 2010 Emerson Network Power
    42. Laminar air flow © 2010 Emerson Network Power
    43. How it all comes together (diagram): 45° chiller plant water → Liebert XDP → Glacier Door. © 2010 Emerson Network Power
    44. How it all comes together (diagram): April to September, the chiller plant (0.46 kW of power per ton of cooling) supplies the Liebert XDP units and plug-fan CRAC units (loop temperatures of roughly 52-61 F shown); October to March, a plate-frame heat exchanger provides the cooling at 0.2 kW of power per ton. © 2010 Emerson Network Power
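A brief sketch of what the two per-ton rates on slide 44 imply, assuming a constant cooling load across the two six-month windows the slide names (the constant-load part is an assumption):

```python
# What the slide-44 operating modes imply, assuming a constant cooling load
# across the two six-month windows named on the slide.

chiller_kw_per_ton = 0.46   # April-September: chiller plant
hx_kw_per_ton      = 0.20   # October-March: plate-frame heat exchanger

economizer_saving = 1.0 - hx_kw_per_ton / chiller_kw_per_ton
blended_rate      = (chiller_kw_per_ton + hx_kw_per_ton) / 2

print(f"Plant power per ton during economizer months: {economizer_saving:.0%} lower")
print(f"Simple annual blended rate: {blended_rate:.2f} kW per ton")
```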
    45. Comparison of compute power and footprint © 2010 Emerson Network Power
    46. Tons of cooling used © 2010 Emerson Network Power
    47. Annual water consumption © 2010 Emerson Network Power
    48. Carbon footprint (chart): Red Sky – 203 tonnes CO2e, 46 gha footprint; Thunderbird – 912 tonnes CO2e, 205 gha footprint. © 2010 Emerson Network Power
    49. Chiller plant power consumption and cost (chart): Thunderbird (518 tons of cooling) – 15,954,483 kWh per year at a cost of $1,324,222; Red Sky (328 tons of cooling) – 10,102,452 kWh per year at a cost of $838,504; a 37% reduction. © 2010 Emerson Network Power
    50. Energy consumption (chart): Thunderbird (21 CRACs) – 1,527,604 kWh per year at a cost of $126,791; Red Sky (12 XDPs and 3 CRACs) – 340,437 kWh per year at a cost of $28,256; a 77% reduction. © 2010 Emerson Network Power
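The headline reduction percentages on slides 49 and 50 follow directly from the kWh figures; a quick check:

```python
# Verifying the reduction percentages from the annual kWh figures.

def reduction(before_kwh, after_kwh):
    return 1.0 - after_kwh / before_kwh

chiller = reduction(15_954_483, 10_102_452)   # slide 49: chiller plant
cooling = reduction(1_527_604, 340_437)       # slide 50: CRAC/XDP cooling

print(f"Chiller plant energy reduction: {chiller:.1%}")   # ~36.7%
print(f"Cooling unit energy reduction:  {cooling:.1%}")   # ~77.7%
```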
    51. 51. Q&ASteve Madara, Vice President and GeneralManager, Liebert North America PrecisionCooling, Emerson Network PowerDavid J. Martinez, Facilities Coordinator, CorporateComputing Facilities, Sandia National LaboratoriesThank you for joining us!• Look for more webcasts coming this fall!• Follow @EmrsnNPDataCntr on Twitter © 2010 Emerson Network Power
