Energy-Saving Strategies for Mission-Critical IT Applications
Jorge Murillo, Regional Sales Manager, Emerson Network Power
Emerson Is a Leader in Its Core Global Businesses & Markets
#1 in: Compressors, Controls, Alternators, Fluid Control, Ultrasonic Welding, Garbage Disposers, Appliance Components, Fractional Motors, Storage Solutions, Plumbing Tools, Wet/Dry Vacuums, Pressing Tools/Jaws, CCTV Inspection Systems, AC & DC Power Systems, OEM Embedded Power, Precision Cooling Systems, Control Valves, Measurement Devices
Emerson Electric Co.; Proprietary Information
Soon, Power Will Cost More Than the Server
"In the data center, power and cooling costs more than the IT it supports." (Christian L. Belady, Electronics Cooling, February 2007)
 
 
PUE: Power Usage Effectiveness
PUE = Total Facility Power / IT Equipment Power
Example: PUE = 2
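As a worked check of the formula above, a minimal sketch in Python (the 1,200 kW / 600 kW figures are illustrative, not from the slide):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,200 kW in total to support a 600 kW IT load:
ratio = pue(1200, 600)  # 2.0 -- one watt of overhead for every IT watt
```

A PUE of 2 means that for every watt delivered to IT equipment, another watt goes to power and cooling infrastructure.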
Introducing "CUPS"
CUPS, or Compute Units per Second: a temporary, placeholder measure for what will become the eventual universal metric for data center output.
Data Center Efficiency = Data Center Output / Energy Consumed = CUPS / Watts Consumed
© 2007 Emerson Network Power
 
Energy Logic
Technical analysis of data center energy consumption and opportunities, based upon a detailed model:
- 5,000 sq ft
- 4–5 year server refresh
- No blades or virtualization
- No high-density zones (3 kW per rack / 120 W per sq ft)
- Total compute load about 600 kW
- UPS configuration: 2 x 750 kVA, 1+1 redundant
- Hot-aisle / cold-aisle configuration
- Floor-mount cooling (connected to building chilled-water plant)
http://www.emerson.com/edc/docs/EnergyLogicMetricPaper.pdf
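The model's headline numbers are mutually consistent, as this small sketch shows (the 200-rack count is derived here, not stated on the slide):

```python
# Energy Logic reference model, as quoted on the slide
area_sqft = 5_000           # raised-floor area
density_w_per_sqft = 120    # design power density
rack_kw = 3.0               # per-rack load

total_kw = area_sqft * density_w_per_sqft / 1_000  # 600 kW total compute load
racks = total_kw / rack_kw                         # ~200 racks implied
```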
[Chart: energy consumption breakdown of the model data center; visible label: Other servers 15%]
 
 
 
 
 
Cooling Best Practices: Eliminate External Heat Loads
- Seal above the ceiling
- Seal floor penetrations
- Use a vapor barrier
Cooling Best Practices: Use Hot Aisle / Cold Aisle Configuration
- A duct or drop ceiling minimizes hot air flowing across the cold aisles
- A return plenum allows for better air distribution
Cooling Best Practices: Block Open Spaces
As heat loads increase, hot-air recirculation can become a problem. Block any open spaces that would allow hot and cold air to mix before entering the cabinet:
- Blanking panels over empty equipment slots
- Blocker panels in any excess spaces between racks
ASHRAE TC 9.9     Mission Critical Facilities, Technology Spaces, & Electronic Equipment / Systems Standards To purchase these books – www.tc99.ashraetcs.org
Liebert Precision Cooling Portfolio (room / row / rack / chip; low to high density; chilled water and refrigerant):
Liebert CW, Liebert DS, Liebert ICS, Liebert Challenger 3000, Liebert Challenger ITR, Liebert CRV, Liebert Data Mate, Liebert InteleCool 2, Liebert Mini-Mate 2, Liebert XDH, Liebert XDV, Liebert XDO, Liebert XDR, Liebert XDC, Liebert XDP, Liebert XDF, Liebert XDKW, Liebert XDRW, Liebert XDPW, Liebert XDCF
Blower & Drive Technologies
- Centrifugal blowers: forward curved
- Radial fans: backward curved, direct drive
- Electronically commutated (EC) motors: DC motor with integrated AC-DC conversion; includes 0-10 VDC speed control; 10–30% more efficient than standard forward curved
Test Configurations – CW114

Full fan speed (100%):
             FC Centrifugal w/o VFD | FC Centrifugal w/ VFD | EC Plug Fan in Unit | EC Plug Fan Under Floor
Capacity     117.8 kW               | 117.8 kW              | 118.1 kW            | 125.1 kW
Motor kW     11.3 kW                | 11.3 kW               | 9.6 kW (higher cfm) | 9.5 kW (higher cfm)
SCOP         10.4                   | 10.4                  | 12.3                | 13.2
% Difference Base                   | 0%                    | +18% eff            | +28% eff

Reduced fan speed (base at 100%, others at 80%):
Capacity     117.8 kW               | 100.8 kW              | 100.5 kW            | 106.4 kW
Motor kW     11.3 kW                | 6.1 kW                | 5.3 kW              | 5.2 kW
SCOP         10.4                   | 16.5                  | 19.0                | 20.5
% Difference Base                   | +59% eff              | +83% eff            | +97% eff
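The large part-load gains in this table come from the fan affinity laws: shaft power scales roughly with the cube of fan speed, while SCOP is net sensible capacity divided by motor kW. A sketch checking the VFD case (the cube-law value is the ideal; the measured 6.1 kW also reflects drive and motor losses):

```python
def fan_power_at_speed(full_speed_kw: float, speed_fraction: float) -> float:
    """Ideal fan shaft power at reduced speed (affinity law: power ~ speed**3)."""
    return full_speed_kw * speed_fraction ** 3

def scop(sensible_kw: float, motor_kw: float) -> float:
    """Sensible coefficient of performance: net sensible cooling / fan motor input."""
    return sensible_kw / motor_kw

ideal_kw = fan_power_at_speed(11.3, 0.8)   # ~5.8 kW vs. 6.1 kW measured
base_scop = scop(117.8, 11.3)              # ~10.4, the table's base case
vfd_scop = scop(100.8, 6.1)                # ~16.5 at 80% speed
```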
Underfloor Velocity Considerations EC fans underfloor Centrifugal Fans
Air Economizer Systems: Features
- Integrated economizer dampers with DX or CW cooling
- Integrated unit iCom controller
- 3 stages of cooling: 100% outside air; mixed outside air & cooling; 100% cooling
- Air enthalpy operation with variable-capacity cooling
- T/H sensors on outdoor air, return air, and supply air
- In economizer mode, unit humidification and dehumidification are inhibited
Modes: normal operation; outside-air operation
Compressor Technologies Scroll Compressor Semi-Hermetic Compressor Digital Scroll Compressor
Digital Scroll Capacity Modulation
Digital Scroll Capacity Modulation
- 10–100% continuous capacity
- Scrolls in contact: pumping; scrolls separated (about 1 mm): no pumping
- 27% of full capacity: 4 sec at full capacity, 11 sec at zero capacity per cycle
- 50% of full capacity: 7.5 sec at full capacity, 7.5 sec at zero capacity per cycle
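The quoted percentages follow directly from the modulation duty cycle: time-averaged capacity is the loaded time divided by the total cycle time. A minimal sketch:

```python
def digital_scroll_capacity(loaded_s: float, unloaded_s: float) -> float:
    """Time-averaged capacity fraction over one loaded/unloaded modulation cycle."""
    return loaded_s / (loaded_s + unloaded_s)

# 4 s loaded + 11 s unloaded -> ~27% capacity; 7.5 s + 7.5 s -> 50%
quarter = digital_scroll_capacity(4, 11)
half = digital_scroll_capacity(7.5, 7.5)
```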
Reduced Compressor Cycles
Digital Scroll Energy Savings @ 90% Load and 75°F / 50% RH (10-ton air-cooled unit)
                          Scroll    Digital
Compressor cost           $3,902    $2,714
Cost of humidification    $2,347    $0
Total                     $6,249    $2,714
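The totals are simple sums of the two line items; a quick check of the slide's arithmetic:

```python
# Annualized cost figures from the slide (10-ton air-cooled unit, 90% load)
scroll = {"compressor": 3_902, "humidification": 2_347}
digital = {"compressor": 2_714, "humidification": 0}

scroll_total = sum(scroll.values())     # 6,249
digital_total = sum(digital.values())   # 2,714
savings = scroll_total - digital_total  # 3,535 in favor of the digital scroll
```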
Liebert cooling portfolio map (room / row / rack / chip; low and high density; water and refrigerant), repeated as a section divider.
Liebert XD Energy Efficiency Benefits
- 65% less fan power
- Greater cooling coil effectiveness
- 100% sensible cooling
Traditional cooling only: fan power 8.5 kW per 100 kW of cooling; average entering air temperature of 77°F.
Liebert XD & base cooling: fan power 3.0 kW per 100 kW of cooling (XD alone @ 2 kW per 100 kW); average entering air temperature of 98°F.
Fluid temperature runs 4°F above the dew point to prevent condensation.
[Chart: cooling unit blower resistances]
XD Rack-Based Cooling with Refrigerant
- Base infrastructure (160 kW): XDP or XDC, served by a building chiller or DX system
- Cooling modules (10–35 kW): XDV10, XDO20, XDH20/32
- Door cooling module (20–35 kW): XDR20/35
- Embedded technology
Overhead XD Solutions Solve Space, Power & Cooling Constraints
BEFORE: 5,000 sq ft / 465 sq m. AFTER: 1,768 sq ft / 164 sq m.
65% space freed up; 43% cooling capacity and 33% power capacity saved.
© 2007 Emerson Network Power
Liebert cooling portfolio map (room / row / rack / chip; low and high density; water and refrigerant), repeated as a section divider.
Benefits of Cold Aisle Containment No CAC, Side View Cold Aisle Containment
High Efficiency Cooling Option Cold Aisle Containment
Raised Floor & Cold Aisle Containment: UBS Bank (Knurr)
The Liebert Smart-Aisle Containment
Variable Refrigerant and Air Flow
By monitoring and controlling both refrigeration capacity (compressor) and airflow (fans) across 0–100%, maximum system efficiency is achieved. Independent cooling capacity and airflow operation match server needs.
The Smart-Aisle Efficiency
Smart-Aisle includes:
- Variable speed fans
- Digital Scroll compressor
- Liebert iCom with remote sensors
- Cold aisle containment
[Chart comparing open aisle, basic containment, and SmartAisle; return temperatures shown: 97°F, 92°F, 85°F]
Hot vs. Cold Aisle Containment
[Diagram: hot-aisle containment vs. Smart Aisle (cold-aisle) containment; regions of RH > 55% indicated]
Verizon Wireless iCom Case Study
[Floor plan: units organized into Group 1 through Group 6, with 9 units on standby]
Verizon Wireless iCom Case Study Summary
- Total of 32 DH380A units operating as a single zone
- Reconfigured as 32 units / 6 zones (with 9 units on standby)
- iCom with 6 vNSA switches and 2 iCom wall mounts
- Total cost of upgrades ~$184,000
- Total cost of installation ~$60,000
Verizon Wireless iCom Case Study Summary
- Approximately 200 kW reduction in demand
- About 1,752,000 kWh of unnecessary usage eliminated per year
- Total annual savings ~$211,000
- Total install cost ~$244,000
- Simple payback ~1.2 years
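The case-study figures hang together arithmetically: 200 kW over a full year (8,760 hours) gives the quoted 1,752,000 kWh, and simple payback is install cost over annual savings. The ~$0.12/kWh rate is derived here, not stated in the case study:

```python
kw_saved = 200                   # average demand reduction
kwh_per_year = kw_saved * 8_760  # 1,752,000 kWh avoided annually
annual_savings = 211_000         # USD
install_cost = 244_000           # USD

payback_years = install_cost / annual_savings  # ~1.2 years
implied_rate = annual_savings / kwh_per_year   # ~0.12 USD per kWh
```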
Optimizing Data Center Performance Optimizing performance requires a holistic view of efficiency – more than just energy
Infrastructure Management Solution: Manage the Data Center as a Single Entity
Today's tools have varying degrees of completeness: AutoCAD drawings, Visio, vendor-dependent monitoring databases, email and meetings, individuals, and multiple Excel spreadsheets.
The goal: monitoring and automation, integrated information, planning and management, operational control.
Real-Time Monitoring and Control
SiteScan Web and Nform allow quick equipment assessment and corrective action, plus infrastructure performance trend reporting and capacity management.
Creating Dashboards
MergePoint Infrastructure Explorer
Infrastructure Explorer is a 'single pane of glass' that consolidates information on assets, data center capacities and projects, enabling various teams to gain clear insight into their environments and to Know, Plan and Manage the data center.
© 2010 Avocent Corporation
Reporting the Past and Future
AMIE can report on changes that have occurred in the data center and help anticipate what problems may occur in the future.
Sample report: Monthly Capacity Trends by Floor Plan (historical and future). Data as of: m/dd/yyyy hh:mm:ssAM. Period: Sep 1, 2008 – Feb 28, 2009. Plan: Plan 1. Square footage: 1000. Location: Atlanta, GA.
The report contains Max Capacity, Consumed, and Remaining sections with the same columns; this mock-up repeats identical placeholder rows in all three sections:

            Heat (kW)  Power (kW)  Weight (lbs)  Network Ports  Space (RU)
Sep 2008    40         1000        5000          50             500
Oct 2008    60         34          3500          245            1000
Nov 2008    45         400         2126          300            1257
Dec 2008    40         1000        5000          50             500
Jan 2009    80         800         3500          245            1000
Feb 2009    90         900         2126          300            1257

© 2010 Avocent Corporation
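In a report like this, the Remaining section is just Max Capacity minus Consumed, metric by metric. A sketch with illustrative values (the function name is hypothetical, not part of any AMIE API):

```python
def remaining_capacity(max_cap: dict, consumed: dict) -> dict:
    """Per-metric headroom: max capacity minus consumed, for each tracked metric."""
    return {metric: max_cap[metric] - consumed[metric] for metric in max_cap}

month = remaining_capacity(
    {"heat_kw": 90, "power_kw": 900, "space_ru": 1257},
    {"heat_kw": 45, "power_kw": 400, "space_ru": 300},
)  # {"heat_kw": 45, "power_kw": 500, "space_ru": 957}
```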
Trending Evaluation Based on Resource Usage
Facility / Network Concerns in the Past 3 Years
Management categories: availability; efficient use of resources.
Spring 2008: 1. Heat density (cooling); 2. Power density; 3. Availability (uptime); 4. Adequate monitoring / data center management capabilities; 5. Energy efficiency (energy costs & equipment efficiency); 6. Space constraints / growth
Spring 2009: 1. Heat density (cooling); 2. Energy efficiency; 3. Adequate monitoring / data center management capabilities; 4. Availability (uptime); 5. Power density; 6. Space constraints / growth
Spring 2010: 1. Adequate monitoring / data center management capabilities; 2. Heat density (cooling); 3. Availability (uptime); 4. Energy efficiency; 5. Power density; 6. Space constraints / growth
Efficiency Without Compromise™
Emerson
 


Editor's Notes

  • #8 The new white paper introduces the term "Compute Units per Second" (CUPS). I must emphasize that our goal is to determine what insights can be gained from a metric and to move the industry closer to adopting such a measure. However, we are not proposing or advocating a specific measure.
  • #10 An "average" data center: not bleeding edge, but not outdated; something reasonable, constructed in the last few years.
  • #26 If you cannot adequately seal the room, you cannot control the room.
  • #31 As you can see, the product offering addresses many applications, from high density to low density, at the rack, row, or room level. There are products utilizing green refrigerants or chilled water.
  • #32 Here are the primary blower technologies used in today’s applications.
  • #33 We sent test units to ETL, an independent laboratory in Cortland, NY, to compare various fan systems. Here's a table that summarizes the data. The base case is a CW114 unit running with standard centrifugal fans, without a variable speed drive; the base fan speed is 100% in both halves of the table. The capacity, motor kW and SCOP are listed. Remember that SCOP is the net sensible capacity divided by the motor kW input. In this case, the energy efficiency is 10.4. The next case shows the addition of a variable speed drive. With the drive set at 100% fan speed, there is no change in capacity or fan motor kW. However, when the speed is reduced to 80%, the cooling capacity and fan kW are reduced. The fan kW varies with the cube of the fan speed, so in this case we see about a 59% increase in energy efficiency. The next test case was a unit with direct-drive EC plug fans. These are backward-inclined airfoil fans with a DC motor that allows for speed control. At full speed, we see a slight increase in cooling capacity, due to a higher air volume and lower motor kW, so the resulting energy efficiency is 18% better than the base. But when we slow the EC fan down to 80% speed, the energy efficiency increases to 83% higher than the base case. The last test case placed the EC plug fans underneath the cooling unit, down in the raised floor. The capacity increases due to higher air volume; the fan kW was again maxed out at about 9.5 kW. This results in a 28% increase in energy efficiency. When the speed is decreased, the energy savings are compounded, resulting in a 97% increase in energy efficiency vs. the base case. Note that instead of increasing the air volume for the underfloor case, we could have left the air volume and capacity constant; this would have resulted in a lower motor kW.
  • #36 Let’s start with compressors. There are four technologies we’re going to discuss : Scroll compressors, tandem scroll compressors, Semi-Hermetic compressors with unloaders, and digital scroll compressors.
  • #37 This is an animation of the digital scroll compressor.
  • #38 Digital capacity modulation utilizes Copeland's axial compliance. Copeland scrolls have axial and radial compliance. As opposed to other scroll machines with fixed throws, Copeland compressors are designed to use inherent forces from the discharge pressures to push flanks and tip seals together. This allows for better sealing of the mating surfaces and higher efficiency. The force that keeps the scrolls engaged axially comes from a small cavity above the top scroll which uses a fraction of the discharge pressure. By design the scrolls are made to separate by about 1 mm: if the pressure in the small cavity is released, the scrolls separate. By modulating the pressure in the cavity we can modulate the pumping of the scrolls and thus the capacity produced by the compressor. So you see here that we can hold full capacity by leaving the pressure in the cavity and keeping the scrolls engaged.
  • #39 Both the digital and 4-step semi-hermetic increase reliability by reducing compressor cycling. The digital offers a standard 3-year warranty. By reducing the starts and stops we decrease the wear and tear on the compressors. Note here that we are trying to link our discussions back to the brochure, which may be used at times to present the product.
  • #40 As an example of how you can save energy, we will utilize the LSN program to do some energy analysis runs. I have extracted the energy usage information to make it a bit easier to read. This information was based on a 10-ton air-cooled unit running at 90%, set at 75°F and 50% humidity. As you can see, both the digital and 4-step provide significant savings. Another thing that has to be taken into consideration is the cost of humidification: a normal effect of the refrigeration process is dehumidification. Depending on the temperature and humidity set points, dehumidification takes place and we must then humidify to make up for it. Since the digital and 4-step operate at partial capacities, they will dehumidify less than a unit at full load.
  • #43 Locating cooling units closer to the load, as in Liebert XD, reduces the energy required to move the air and results in less mixing of hot and cold air. Due to the much lower blower resistances with the cooling units located closer to the racks, fan power is typically 3 kW per 100 kW of cooling with Liebert XD, compared to 8.5 kW per 100 kW of cooling for traditional cooling. Micro-channel coils provide minimal air pressure drop losses and improved thermal heat transfer, and there is no need to over-chill data centers to eliminate hot spots.
  • #44 The Liebert XD units are connected with the Liebert XD piping system, which makes it easy to plan and expand the system in response to a growing heat load. The key is to put the necessary piping in place in advance and then add cooling units (with quick connects and flexible piping) and pump units / chillers as the need arises for more cooling capability. The current cooling modules are XDO, XDV and XDH. The module that is being introduced now is the XDR, which is located on the rear of the rack. Future solutions include embedded cooling and chip cooling. This technology can cool more than 100 kW per rack.
  • #45 Key points: Here is a look at the results of implementing Energy Logic's strategies in terms of space, power and cooling constraints. If you apply all 10 strategies in their totality, you will free up 65 percent of the space. From 5,000 square feet, you'll only need 1,768 square feet. And we are not assuming all racks are moved to high density: only half of the racks are high density, at 12 kW per rack. If you went for higher-density racks, the space savings would be even greater. As it is, though, you are saving two thirds of the space. Your required UPS capacity will go down by 2 x 500 kVA, which means you have one third of your total UPS capacity available for your growth and expansion projects. The load on the cooling plant will go down from 350 tons to 200 tons, so you are saving 40 percent in terms of the utilization you can add. We did not take into account the further savings that could be had by optimizing the chilled-water plant by installing VFDs.
  • #47 Hot spots: at these densities, we can't cool 10 kW per rack with traditional cooling.
  • #52 Let's review these 3 scenarios. On the left is the open aisle: a lot of air doesn't pass through the servers and is bypassed. This is probably what most sites have today; good, but it could be better. Basic containment, the middle graph, results in greater pressure under the floor and in the cold aisle. This results in a CFM reduction from the CRAC (good: this lowers the fan losses), more CFM across the servers (lower server delta-T), and more CFM leakage losses around and through the racks and through the floor. There is less air mixing, and the result is higher return temperatures back to the cooling unit and a nice boost in cooling efficiency. But there is still more air delivered than necessary. With SmartAisle, on the right side, we are adjusting the cooling to exactly match the needs within the containment: no more, no less. SmartAisle results in a better match between the server CFM requirements and the output of the CRAC. Also, due to less mixing, the return to the CRAC is higher, resulting in more efficiency from the CRAC. To achieve this kind of energy improvement, up to 33%, you need to use variable cooling technologies. You know about VFDs and EC fans on Liebert CW. You know about digital scroll on Liebert DS. What about controlling the air on DS?
  • #59 The issue is: what kind of tools do you have to manage this squeeze? What it comes down to is that the tools most companies have just can't really meet the challenge. (Read slide.) What we find are spreadsheets, AutoCAD drawings, Visio diagrams, and people we call "walking heads" because they carry everything in their heads, all with varying degrees of completeness. Where we are focused is on managing that data center as one single entity. We refer to it as Data Center Service Management, or "DCSM". And the solution we provide is just that …
  • #61 Dashboards can be expressed in many different ways; this user uses kW to determine capacity, cost to run the data center, and estimated days available on generator.
  • #63 Planning Palette: This palette is the primary display for selecting projects by completion date. It contains a calendar with references to current and future projects. The calendar days can contain icons that represent a roll-up of project status for that day. In addition to the calendar there is a list of projects, which is updated with a date selection. The user can create a new project, or view an existing project, by selecting the project name or selecting Create New Project (+). If the user selects a project from the list, the current view will update to reflect the changes in the selected project and the Project Details Palette will be populated. Product strategy: aimed at the proactive segment; integrate with operations, integrate with change management.
  • #65 Talking points: The top concerns of the DCUG membership over the last 3 years have remained focused on 3 core categories. Availability. Efficient use of resources: energy efficiency has certainly been the hot topic publicly, and cooling, power and space constraints have also been key issues (looking at the rapid change in complexity, it is easy to understand why); this is also driving a closer working relationship between IT and facilities. Adequate monitoring & management capabilities: how can I manage all this complexity in an efficient and effective way?
  • #66 <Optional slide to use when want to summarize all categories with functional blocks at a glance. Use to explode into detail breakouts>