US Trends in Data Centre Design with NREL Examples of Large Energy Savings

Summary
Background
Information Technology Systems
Environmental Conditions
Air Management
Cooling Systems
Electrical Systems
Other Opportunities for Energy Efficient Design
Data Center Metrics & Benchmarking

Transcript

  • 1. NREL is a national laboratory of the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, operated by the Alliance for Sustainable Energy, LLC. US Trends in Data Centre Design with NREL Examples of Large Energy Savings. Understanding and Minimising the Costs of Data Centre Based IT Services Conference, University of Liverpool. Otto Van Geet, PE. June 17, 2013.
  • 2. Cost and Infrastructure Constraints. [Chart: Total Annual Electrical Cost (Compute + Facility) in millions of dollars vs. P.U.E., split into HPC and Facility portions. Assumes a ~20 MW HPC system and $1M per MW-year utility cost.]
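A minimal sketch (not from the deck) of the arithmetic behind this chart, assuming the stated ~20 MW IT load and $1M per MW-year utility cost; the function name and example PUE values are illustrative:

```python
# Illustrative sketch: annual electricity cost split implied by the chart's
# assumptions of a ~20 MW IT (HPC) load and $1M per MW-year utility cost.

IT_LOAD_MW = 20.0          # assumed HPC/IT load
COST_PER_MW_YEAR = 1.0e6   # assumed utility cost, $ per MW-year

def annual_cost_millions(pue: float) -> tuple[float, float]:
    """Return (IT cost, facility overhead cost) in $M/yr for a given PUE."""
    it_cost = IT_LOAD_MW * COST_PER_MW_YEAR
    facility_cost = (pue - 1.0) * IT_LOAD_MW * COST_PER_MW_YEAR
    return it_cost / 1e6, facility_cost / 1e6

for pue in (1.06, 1.5, 1.9):
    it, fac = annual_cost_millions(pue)
    print(f"PUE {pue:.2f}: IT ${it:.1f}M + facility ${fac:.1f}M = ${it + fac:.1f}M/yr")
```

At PUE 1.9 the facility overhead nearly doubles the bill (~$38M/yr total), while at PUE 1.06 it adds only about $1.2M/yr, which is the point of the chart.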
  • 3. BPG Table of Contents • Summary • Background • Information Technology Systems • Environmental Conditions • Air Management • Cooling Systems • Electrical Systems • Other Opportunities for Energy Efficient Design • Data Center Metrics & Benchmarking
  • 4. Safe Temperature Limits: CPUs ~65°C (149°F), GPUs ~75°C (167°F), Memory ~85°C (185°F). CPU, GPU & memory represent ~75-90% of the heat load.
  • 5. Environmental Conditions. Data center equipment's environmental conditions should fall within the ranges established by ASHRAE as published in the Thermal Guidelines book. Environmental specifications (°C, at equipment intake; reference: ASHRAE 2008, 2011):
    Temperature (data centers, ASHRAE): Recommended 18°–27°C; Allowable 15°–32°C (A1) to 5°–45°C (A4).
    Humidity (data centers, ASHRAE): Recommended 5.5°C DP to 60% RH and 15°C DP; Allowable 20%–80% RH.
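As an illustration only (the helper below is not part of the presentation), the quoted temperature envelopes can be expressed as a simple classification against the A1 allowable class:

```python
# Illustrative sketch: classify a rack-inlet dry-bulb temperature against
# the ASHRAE envelopes quoted on this slide (A1 allowable class shown).

RECOMMENDED = (18.0, 27.0)   # degC, ASHRAE recommended range for data centers
ALLOWABLE_A1 = (15.0, 32.0)  # degC, ASHRAE class A1 allowable range

def classify_inlet(temp_c: float) -> str:
    """Return which ASHRAE envelope a rack-inlet temperature falls in."""
    if RECOMMENDED[0] <= temp_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE_A1[0] <= temp_c <= ALLOWABLE_A1[1]:
        return "allowable (A1)"
    return "outside the A1 allowable range"

print(classify_inlet(24.0))   # "recommended" -- e.g. a 24 C supply temperature
print(classify_inlet(30.0))   # "allowable (A1)"
```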
  • 6. 2011 ASHRAE Allowable Ranges. [Chart of allowable ranges plotted against dry-bulb temperature.]
  • 7. Psychrometric Bin Analysis. [Psychrometric chart: humidity ratio (lb water/lb dry air) vs. dry-bulb temperature (°F), plotting Boulder, Colorado TMY3 weather data against the Class 1 recommended and allowable ranges.] Design conditions (0.4%): 91.2°F db, 60.6°F wb.
  • 8. Estimated Savings. Baseline system: DX cooling with no economizer; load: 1 ton of cooling, constant year-round; efficiency (COP): 3; total energy: 10,270 kWh/yr.
    Results (hours / energy in kWh):                Recommended range    Allowable range
    Zone 1: DX cooling only                         25 / 8               2 / 1
    Zone 2: Multistage indirect evap. + DX (H80)    26 / 16              4 / 3
    Zone 3: Multistage indirect evap. only          3 / 1                0 / 0
    Zone 4: Evap. cooler only                       867 / 97             510 / 57
    Zone 5: Evap. cooler + outside air              6055 / 417           1656 / 99
    Zone 6: Outside air only                        994 / 0              4079 / 0
    Zone 7: 100% outside air                        790 / 0              2509 / 0
    Total                                           8,760 / 538          8,760 / 160
    Estimated % savings                             95%                  98%
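A quick check of the savings percentages, using only the totals from the table above (the script itself is illustrative):

```python
# Check of the slide's savings figures from the bin-analysis totals.
BASELINE_KWH = 10_270     # DX cooling, no economizer, 1 ton year-round, COP 3
RECOMMENDED_KWH = 538     # total annual cooling energy, recommended-range envelope
ALLOWABLE_KWH = 160       # total annual cooling energy, allowable-range envelope

for label, kwh in [("recommended", RECOMMENDED_KWH), ("allowable", ALLOWABLE_KWH)]:
    savings = 1.0 - kwh / BASELINE_KWH
    print(f"{label} range: {savings:.0%} savings")   # ~95% and ~98%
```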
  • 9. Data Center Efficiency Metric • Power Usage Effectiveness (P.U.E.) is an industry standard data center efficiency metric. • The ratio of power used or lost by data center facility infrastructure (pumps, lights, fans, conversions, UPS…) to power used by compute. • Not perfect; some folks play games with it. • A 2011 survey estimates the industry average is 1.8. • In a typical data center, half of the power goes to things other than compute capability. P.U.E. = ("IT power" + "Facility power") / "IT power"
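A minimal sketch of the PUE definition quoted on this slide (the function and example figures are illustrative; 1.8 is the 2011 survey average and 1.06 the NREL ESIF value cited later in the deck):

```python
# Minimal sketch of the PUE definition on this slide.

def pue(it_power_kw: float, facility_power_kw: float) -> float:
    """PUE = (IT power + facility power) / IT power; always > 1.0."""
    return (it_power_kw + facility_power_kw) / it_power_kw

# A typical data center (~half the power outside compute) vs. NREL ESIF:
print(pue(1000.0, 800.0))   # ~1.8, the 2011 survey average
print(pue(1000.0, 60.0))    # ~1.06, the NREL ESIF figure quoted later
```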
  • 10. PUE – Simple and Effective
  • 11. [Chart: Data Center PUE plotted alongside outdoor temperature (°F) over time; PUE axis roughly 0.75–1.45, temperature axis roughly -20–100°F.]
  • 12. "I am re-using waste heat from my data center on another part of my site and my PUE is 0.8!" ASHRAE & friends (DOE, EPA, TGG, 7x24, etc.) do not allow reused energy in PUE, and PUE is always >1.0. Another metric has been developed by The Green Grid: ERE – Energy Reuse Effectiveness. http://www.thegreengrid.org/en/Global/Content/white-papers/ERE
  • 13. ERE – Adds Energy Reuse. [Energy-flow diagram: utility power feeding cooling, UPS, PDU and IT loads, with rejected-energy and reused-energy paths labeled (a)–(g).]
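A minimal sketch of the ERE idea, following the Green Grid definition in the white paper linked above (total facility energy minus reused energy, divided by IT energy); the numbers below are illustrative, not measurements from the deck:

```python
# Illustrative sketch of ERE (Energy Reuse Effectiveness), per the
# Green Grid white paper linked above; example numbers are made up.

def ere(total_facility_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    """ERE = (total energy - reused energy) / IT energy; can drop below 1.0."""
    return (total_facility_kwh - reused_kwh) / it_kwh

# E.g. a PUE-1.06 facility that reuses ~30% of its IT heat elsewhere on site:
it = 1000.0
total = 1.06 * it
print(ere(total, reused_kwh=0.3 * it, it_kwh=it))   # ~0.76
```

Unlike PUE, ERE rewards reuse and can legitimately fall below 1.0, which is what the "my PUE is 0.8" claim on the previous slide was really describing.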
  • 14. DOE/NREL Research Support Facility (Credit: Haselden Construction) • More than 1300 people in DOE office space on NREL's campus • 33,445 m2 • Design/build process with required energy goals: 50% energy savings from code; LEED Platinum • Replicable: process, technologies, cost • Site, source, carbon, cost ZEB:B – includes plug loads and datacenter • Firm fixed price – US $22.8/m2 construction cost (not including $2.5/m2 for PV from PPA/ARRA) • Opened June 10, 2010 (first phase)
  • 15. RSF Datacenter • Fully contained hot aisle – custom aisle floor and door seals; ensure equipment is designed for cold-aisle containment and installed to pull cold air, not hot air • 1.18 annual PUE • ERE = 0.9 • Control hot aisle based on return temperature of ~90°F • Waste heat used to heat the building • Outside air and evaporative cooling • Low fan energy design • 176 m2. Credit: Marjorie Schott/NREL
  • 16. [Slide contains no text.]
  • 17. Data Center Load GROWTH (40+ kW in 2 years) since NO recharge!
  • 18. Move to Liquid Cooling • Server fans are inefficient and noisy. – Liquid doors are an improvement, but we can do better! • Power densities are rising, making component-level liquid cooling solutions more appropriate. • Liquid benefits: – Thermal stability, reduced component failures. – Better waste heat re-use options. – Warm water cooling; reduce/eliminate condensation. – Provide cooling with higher temperature coolant. • Eliminate expensive & inefficient chillers. • Save wasted fan energy and use it for computing. • Unlock your cores and overclock to increase throughput!
  • 19. Liquid Cooling – Overview. Water and other liquids (dielectrics, glycols and refrigerants) may be used for heat removal. • Liquids typically use LESS transport energy (14.36 air-to-water horsepower ratio for the example shown on the slide). • Liquid-to-liquid heat exchangers have closer approach temperatures than liquid-to-air (coils), yielding increased outside-air hours.
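A back-of-the-envelope illustration (not from the deck) of why liquids need far less transport energy: for the same heat load and temperature rise, water carries the heat in a tiny fraction of the volumetric flow that air needs, so pumps move far less fluid than fans do.

```python
# Back-of-the-envelope comparison (illustrative assumptions): volumetric
# flow needed to carry the same heat load at the same temperature rise.

def volumetric_flow_m3_s(heat_kw: float, delta_t_k: float,
                         cp_kj_per_kg_k: float, density_kg_m3: float) -> float:
    """Flow = Q / (rho * cp * dT) for a sensible-heat transport fluid."""
    return heat_kw / (density_kg_m3 * cp_kj_per_kg_k * delta_t_k)

HEAT_KW, DELTA_T = 10.0, 10.0                 # e.g. a 10 kW rack, 10 K rise
air = volumetric_flow_m3_s(HEAT_KW, DELTA_T, 1.006, 1.2)    # air properties
water = volumetric_flow_m3_s(HEAT_KW, DELTA_T, 4.18, 998.0) # water properties
print(f"air:   {air:.2f} m^3/s")              # ~0.83 m^3/s
print(f"water: {water * 1000:.2f} L/s")       # ~0.24 L/s
print(f"air needs ~{air / water:,.0f}x the volumetric flow")
```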
  • 20. 2011 ASHRAE Liquid Cooling Guidelines. NREL ESIF HPC (HP hardware) uses 24°C supply, 40°C return – W4/W5.
  • 21. NREL HPC Data Center – High Performance Computing showcase facility • 10 MW, 929 m2 • Leverage favorable climate • Use direct water-to-rack cooling • DC manager responsible for ALL DC cost including energy! • Waste heat captured and used to heat labs & offices • World's most energy efficient data center, PUE 1.06! • Lower CapEx and OpEx. Leveraged expertise in energy efficient buildings to focus on a showcase data center. "Chips to bricks" approach. • Operational 1-2013, petascale+ HPC capability in 8-2013 • 20-year planning horizon – 5 to 6 HPC generations.
  • 22. Critical Data Center Specs • Warm water cooling, 24°C – Water is a much better working fluid than air: pumps trump fans. – Utilize high quality waste heat, 40°C or warmer. – 90%+ of IT heat load to liquid. • High power distribution – 480 VAC, eliminate conversions. • Think outside the box – Don't be satisfied with an energy efficient data center nestled on a campus surrounded by inefficient laboratory and office buildings. – Innovate, integrate, optimize. Dashboards report instantaneous, seasonal and cumulative PUE values.
  • 23. NREL ESIF Data Center Cross Section • Data center equivalent of the "visible man" – Reveal not just boxes with blinky lights, but the inner workings of the building as well. – Tour views into the pump room and mechanical spaces. – Color-coded pipes, LCD monitors.
  • 24. Data Center • 2.5 MW day-one capacity (utility cost $500K/yr/MW) • 10 MW ultimate capacity • Petaflop • No vapor compression for cooling
  • 25. Data Center – Summer Cooling Mode. PUE: typical data center = 1.5–2.0; NREL ESIF = 1.04* (*30% more energy efficient than your typical "green" data center).
  • 26. Data Center – Winter Cooling Mode. ERE – Energy Reuse Effectiveness: how efficiently are we using the waste heat to heat the rest of the building? NREL ESIF = 0.7 (we use 30% of the waste heat; more with future campus loops). [Diagram: data center waste heat feeding the high bay, office and conference heating loops, plus a future campus heating loop.]
  • 27. Data Center – Cooling Strategy. [Diagram shows 95°F and 75°F air streams.] • Water-to-rack cooling for high performance computers handles 90% of the total load. • Air cooling for legacy equipment handles 10% of the total load.
  • 28. PUE 1.0X – Focus on the "1". True efficiency requires 3-D optimization across facility PUE, IT power consumption and energy re-use. Facility PUE: we all know how to do this!
  • 29. PUE 1.0X – Focus on the "1" (build continues). IT power consumption: increased work per watt; reduce or eliminate fans; component-level heat exchange; newest processors are more efficient.
  • 30. PUE 1.0X – Focus on the "1" (build continues). Energy re-use: direct liquid cooling; higher return water temps; holistic view of data center planning.
  • 31. What's Next? ✓ Energy efficient supporting infrastructure. ✓ Pumps, large pipes, high voltage (380 to 480 V) electrical to the rack. ✓ Efficient HPC for the planned workload. ✓ Capture and re-use waste heat. Can we manage and "optimize" workflows, with a varied job mix, within a given energy "budget"? Can we do this as part of a larger "ecosystem"? (Steve Hammond)
  • 32. Other Factors – DC as part of the Campus Energy System. DemandSMART: comprehensive demand response. Balancing supply and demand on the electricity grid is difficult and expensive; end users that provide a balancing resource are compensated for the service. [Chart: annual electricity demand as a percent of available capacity across winter, spring, summer and fall.] 4 MW solar; use waste heat; better rates, shed load.
  • 33. Parting Thoughts • Energy efficient data centers – been there, done that. – We know how; let's just apply best practices. – Don't fear H2O: liquid cooling will be increasingly prevalent. • Metrics will lead us into sustainability. – If you don't measure/monitor it, you can't manage it. – As PUE has done, ERE, Carbon Use Effectiveness (CUE), etc. will help drive sustainability. • Energy efficient and sustainable computing – it's all about the "1". – 1.0 or 0.06? Where do we focus? Compute & energy reuse. • Holistic approaches to energy management. – Lots of open research questions. – Projects may get an energy allocation rather than a node-hour allocation.
  • 34. QUESTIONS? Otto VanGeet, 303.384.7369, Otto.VanGeet@nrel.gov. NREL RSF: 50% of code energy use, net zero annual energy, $22.8/m2 construction cost.
  • 35. Water Considerations. "We shouldn't use evaporative cooling, water is scarce." • Thermoelectric power generation (coal, oil, natural gas and nuclear) consumes about 1.1 gallons per kWh, on average. • This amounts to about 9.6 M gallons per MW-year. • We estimate about 2.5 M gallons of water consumed per MW-year for on-site evaporative cooling towers at NREL. • If chillers need 0.2 MW per MW of HPC power, then chillers have an impact of 2.375 M gallons per year per MW. • Actuals will depend on your site, but evap. cooling doesn't necessarily result in a net increase in water use. • Low energy use = lower water use. Energy reuse uses NO water! (NREL PIX 00181)
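A quick check of the 9.6 M gallons per MW-year figure from the 1.1 gal/kWh assumption (illustrative arithmetic only):

```python
# Check of the slide's ~9.6 M gallons per MW-year figure for
# thermoelectric generation, from the 1.1 gal/kWh average.
GAL_PER_KWH = 1.1       # average water consumption of thermoelectric generation
HOURS_PER_YEAR = 8760
KW_PER_MW = 1000

gal_per_mw_year = GAL_PER_KWH * HOURS_PER_YEAR * KW_PER_MW
print(f"{gal_per_mw_year / 1e6:.1f} M gallons per MW-year")   # ~9.6 M
```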
  • 36. Data Center Efficiency • Choices regarding power, packaging, cooling, and energy recovery in data centers drive TCO. • Why should we care? • Carbon footprint. • Water usage. • Mega$ per MW-year. • Cost: OpEx ~ IT CapEx! • A less efficient data center takes away power and dollars that could otherwise be used for compute capability.
  • 37. Holistic Thinking • Approach to cooling: air vs. liquid, and where? – Components, liquid doors or CRACs, … • What is your "ambient" temperature? – 55°F, 65°F, 75°F, 85°F, 95°F, 105°F … – 13°C, 18°C, 24°C, 30°C, 35°C, 40.5°C … • Electrical distribution: – 208 V or 480 V? • "Waste" heat: – How hot? Liquid or air? Throw it away or use it?
  • 38. Liquid Cooling – New Considerations • Air cooling: – Humidity – Fan failures – Air-side economizers, particulates • Liquid cooling: – pH & bacteria – Dissolved solids – Corrosion inhibitors, etc. • When considering liquid cooled systems, insist that providers adhere to the latest ASHRAE water quality spec or it could be costly.
  • 39. 2011 ASHRAE Liquid Cooling Guidelines
  • 40. 2011 ASHRAE Thermal Guidelines. 2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance. White paper prepared by ASHRAE Technical Committee TC 9.9.
  • 41. Energy Savings Potential: Economizer Cooling. [Map: energy savings potential for the recommended envelope, Stage 1: economizer cooling. Source: Billy Roberts, NREL]
  • 42. Data Center Energy • Data centers are energy intensive facilities. – 10-100x more energy intensive than an office. – Server racks well in excess of 30 kW. – Power and cooling constraints in existing facilities. • Data center inefficiency steals power that would otherwise support compute capability. • Important to have a DC manager responsible for ALL DC cost including energy!
  • 43. Energy Savings Potential: Economizer + Direct Evaporative Cooling. [Map: energy savings potential for the recommended envelope, Stage 2: economizer + direct evap. cooling. Source: Billy Roberts, NREL]
  • 44. Energy Savings Potential: Economizer + Direct Evap. + Multistage Indirect Evap. Cooling. [Map: energy savings potential for the recommended envelope, Stage 3: economizer + direct evap. + multistage indirect evap. cooling. Source: Billy Roberts, NREL]
  • 45. Data Center Energy Efficiency • ASHRAE 90.1 2011 requires an economizer in most data centers. • ASHRAE Standard 90.4P, Energy Standard for Data Centers and Telecommunications Buildings. • PURPOSE: To establish the minimum energy efficiency requirements of data centers and telecommunications buildings, for: design, construction, and a plan for operation and maintenance. • SCOPE: This standard applies to: new, new additions, and modifications to data centers and telecommunications buildings or portions thereof and their systems. • Will set minimum PUE based on climate. • More detail at: https://www.ashrae.org/news/2013/ashrae-seeks-input-on-revisions-to-data-centers-in-90-1-energy-standard-scope
  • 46. Energy Conservation Measures
    1. Reduce the IT load – virtualization & consolidation (up to 80% reduction).
    2. Implement contained hot aisle and cold aisle layout – curtains, equipment configuration, blank panels, cable entrance/exit ports.
    3. Install economizer (air or water) and evaporative cooling (direct or indirect).
    4. Raise discharge air temperature. Install VFDs on all computer room air conditioning (CRAC) fans (if used) and network the controls.
    5. Reuse data center waste heat if possible.
    6. Raise the chilled water (if used) set-point – increasing chilled water temp by 1°C reduces chiller energy use by about 3% (see the sketch after this list).
    7. Install high efficiency equipment including UPS, power supplies, etc.
    8. Move chilled water as close to the server as possible (direct liquid cooling).
    9. Consider a centralized high efficiency water-cooled chiller plant – air-cooled ≈ 2.9 COP, water-cooled ≈ 7.8 COP.
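A rough illustration of measure 6's rule of thumb. The compounding form below is an assumption for illustration; the actual relationship depends on the chiller and plant:

```python
# Rough illustration of measure 6: the ~3% chiller energy savings per
# degree C of chilled-water set-point increase. Assumes the rule of thumb
# compounds per degree; real chiller behavior varies by plant.

def chiller_energy_factor(setpoint_raise_c: float,
                          savings_per_degree: float = 0.03) -> float:
    """Fraction of baseline chiller energy after raising the CHW set-point."""
    return (1.0 - savings_per_degree) ** setpoint_raise_c

for raise_c in (1, 3, 5):
    factor = chiller_energy_factor(raise_c)
    print(f"+{raise_c} degC set-point: ~{1 - factor:.0%} chiller energy savings")
```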
  • 47. Equipment Environmental Specification. Air inlet to IT equipment is the important specification to meet; outlet temperature is not important to IT equipment.
  • 48. Key Nomenclature. Recommended range (statement of reliability): preferred facility operation; most values should be within this range. Allowable range (statement of functionality): robustness of equipment; no values should be outside this range. [Diagram: max allowable rack intake temperature, max recommended, over-temp, recommended range, under-temp, min recommended, min allowable, allowable range.]
  • 49. Improve Air Management • Typically, more air is circulated than required. • Air mixing and short circuiting lead to: – Low supply temperature – Low delta T • Use hot and cold aisles. • Improve isolation of hot and cold aisles: – Reduce fan energy – Improve air-conditioning efficiency – Increase cooling capacity. Hot aisle/cold aisle configuration decreases mixing of intake & exhaust air, promoting efficiency. Source: http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf
  • 50. Isolate Cold and Hot Aisles. [Diagram annotations: supply air 70-80°F vs. 45-55°F; return air 95-105°F vs. 60-70°F.] Source: http://www1.eere.energy.gov/femp/pdfs/eedatacenterbestpractices.pdf
  • 51. Adding Air Curtains for Hot/Cold Isolation. Photo used with permission from the National Snow and Ice Data Center. http://www.nrel.gov/docs/fy12osti/53939.pdf
  • 52. Courtesy of Henry Coles, Lawrence Berkeley National Laboratory.
  • 53. Three (3) Cooling Device Categories. 1 – Rack cooler: APC (water), Knürr CoolTherm (water), Knürr CoolLoop (water), Rittal (water). 2 – Row cooler: APC 2* (water), Liebert (refrigerant). 3 – Passive door cooler: IBM (water), Vette/Coolcentric (water), Liebert (refrigerant), SUN (refrigerant). [Diagrams show rack-level containment, row-level containment and passive door configurations with cooling-water connections at the server front.] Courtesy of Henry Coles, Lawrence Berkeley National Laboratory.
  • 54. "Chill-off 2" Evaluation of Close-coupled Cooling Solutions. [Comparison chart; arrow indicates the direction of less energy use.] Courtesy of Geoffrey Bell and Henry Coles, Lawrence Berkeley National Laboratory.
  • 55. Cooling Takeaways • Use a central plant (e.g. chiller/CRAHs) vs. CRAC units. • Use centralized controls on CRAC/CRAH units to prevent simultaneous humidifying and dehumidifying. • Move to liquid cooling (room, row, rack, chip). • Consider VSDs on fans, pumps, chillers, and towers. • Use air- or water-side free cooling. • Expand humidity range and improve humidity control (or disconnect).