Energy Logic: Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems

A White Paper from the Experts in Business-Critical Continuity
Executive Summary

A number of associations, consultants and vendors have promoted best practices for enhancing data center energy efficiency. These practices cover everything from facility lighting to cooling system design, and they have proven useful in helping some companies slow or reverse the trend of rising data center energy consumption. However, most organizations still lack a cohesive, holistic approach for reducing data center energy use.

Emerson Network Power analyzed the available energy-saving opportunities and identified the top ten. Each of these ten opportunities was then applied to a 5,000-square-foot data center model based on real-world technologies and operating parameters. Through the model, Emerson Network Power was able to quantify the savings of each action at the system level, as well as identify how energy reduction in some systems affects consumption in supporting systems.

The model demonstrates that reductions in energy consumption at the IT equipment level have the greatest impact on overall consumption because they cascade across all supporting systems. This finding led to the development of Energy Logic, a vendor-neutral roadmap for optimizing data center energy efficiency that starts with the IT equipment and progresses to the support infrastructure.

This paper shows how Energy Logic can deliver a 50 percent or greater reduction in data center energy consumption without compromising performance or availability. The approach has the added benefit of easing the three most critical constraints faced by data center managers today: power, cooling and space. In the model, the ten Energy Logic strategies freed up two-thirds of floor space, one-third of UPS capacity and 40 percent of precision cooling capacity.

All of the technologies used in the Energy Logic approach are available today, and many can be phased into the data center as part of regular technology upgrades or refreshes, minimizing capital expenditures. The model also identified gaps in existing technologies that, if closed, could enable greater energy reductions and help organizations make better decisions about the most efficient technologies for a particular data center.
Introduction

The double impact of rising data center energy consumption and rising energy costs has elevated the importance of data center efficiency as a strategy to reduce costs, manage capacity and promote environmental responsibility.

Data center energy consumption has been driven by the demand within almost every organization for greater computing capacity and increased IT centralization. While this was occurring, global electricity prices increased 56 percent between 2002 and 2006.

The financial implications are significant; estimates of annual power costs for U.S. data centers now range as high as $3.3 billion.

This trend impacts data center capacity as well. According to the Fall 2007 survey of the Data Center Users Group (DCUG®), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.

There is general agreement that improvements in data center efficiency are possible. In a report to the U.S. Congress, the Environmental Protection Agency concluded that best practices could reduce data center energy consumption by 50 percent by 2011.

The EPA report included a list of top ten energy-saving best practices as identified by the Lawrence Berkeley National Lab. Other organizations, including Emerson Network Power, have distributed similar information, and there is evidence that some best practices are being adopted.

The Spring 2007 DCUG survey found that 77 percent of respondents already had their data center arranged in a hot-aisle/cold-aisle configuration to increase cooling system efficiency, 65 percent use blanking panels to minimize recirculation of hot air, and 56 percent have sealed the floor to prevent cooling losses.

While progress has been made, an objective, vendor-neutral evaluation of efficiency opportunities across the spectrum of data center systems has been lacking. This has made it difficult for data center managers to prioritize efficiency efforts and tailor best practices to their own equipment and operating practices.

This paper closes that gap by outlining a holistic approach to energy reduction, based on quantitative analysis, that enables a 50 percent or greater reduction in data center energy consumption.
Data Center Energy Consumption

The first step in prioritizing energy-saving opportunities was to gain a solid understanding of data center energy consumption. Emerson Network Power modeled energy consumption for a typical 5,000-square-foot data center (Figure 1) and analyzed how energy is used within the facility. Energy use was categorized as either "demand side" or "supply side."

Demand-side systems are the servers, storage, communications and other IT systems that support the business. Supply-side systems exist to support the demand side.

In this analysis, demand-side systems (processors, server power supplies, other server components, storage and communication equipment) account for 52 percent of total consumption. Supply-side systems (UPS, power distribution, cooling, lighting and building switchgear) account for the remaining 48 percent.

Information on the data center and infrastructure equipment and the operating parameters on which the analysis was based is presented in Appendix A. Note that all data centers are different and the savings potential will vary by facility. At minimum, however, this analysis provides an order-of-magnitude comparison of data center energy reduction strategies.

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side.

Figure 1. Analysis of a typical 5,000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.

Computing equipment, 52 percent of total (demand side): processor 15%, server power supply 14%, other server components 15%, storage 4%, communication equipment 4%.
Power and cooling, 48 percent of total (supply side): cooling 38%, UPS 5%, PDU 1%, building switchgear/MV transformer 3%, lighting 1%.

Category | Average Power Draw*
Computing | 588 kW
Lighting | 10 kW
UPS and distribution losses | 72 kW
Cooling power draw for computing and UPS losses | 429 kW
Building switchgear/MV transformer/other losses | 28 kW
TOTAL | 1127 kW

* This represents the average power draw (kW). Daily energy consumption (kWh) can be captured by multiplying the power draw by 24.
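Per the footnote to Figure 1, the values above are average power draws, so energy and cost follow by simple multiplication. The Python sketch below turns the Figure 1 categories into daily energy and an annual cost; the $0.10/kWh electricity rate is an illustrative assumption, not a figure from the paper.

```python
# Convert the average power draws from Figure 1 into daily and annual
# energy and an illustrative cost. The $0.10/kWh rate is an assumption
# for illustration only; it does not come from the paper.
LOADS_KW = {
    "computing": 588,
    "lighting": 10,
    "ups_and_distribution_losses": 72,
    "cooling": 429,
    "switchgear_and_transformer_losses": 28,
}
RATE_USD_PER_KWH = 0.10  # assumed electricity rate

total_kw = sum(LOADS_KW.values())  # 1127 kW average draw
daily_kwh = total_kw * 24          # per the Figure 1 footnote
annual_kwh = daily_kwh * 365
annual_cost = annual_kwh * RATE_USD_PER_KWH

print(f"Total draw: {total_kw} kW")
print(f"Daily energy: {daily_kwh:,} kWh")
print(f"Annual cost at $0.10/kWh: ${annual_cost:,.0f}")
```

At these rates the model facility would consume roughly 27,000 kWh per day, which illustrates why even single-digit percentage savings are financially significant.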
To see the cascade effect at work: in the 5,000-square-foot data center used to analyze energy consumption, a 1 Watt reduction at the server component level (processor, memory, hard disk, etc.) results in an additional 1.84 Watts of savings across the power supply, power distribution system, UPS system, cooling system, and building entrance switchgear and medium-voltage transformer (Figure 2). Consequently, every Watt of savings that can be achieved at the processor level creates a total of 2.84 Watts of savings for the facility.

Figure 2. With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of 2.84 Watts.

The Cascade Effect: 1 Watt saved at the server component (processor, memory, hard disk, etc.)
• saves an additional 0.18 W in DC-DC conversion (cumulative saving: 1.18 W)
• and 0.31 W in AC-DC conversion (cumulative: 1.49 W)
• and 0.04 W in power distribution (cumulative: 1.53 W)
• and 0.14 W in the UPS (cumulative: 1.67 W)
• and 1.07 W in cooling (cumulative: 2.74 W)
• and 0.10 W in building switchgear and transformer (cumulative: 2.84 W)
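The cumulative values in Figure 2 can be reproduced by treating each supply-side stage as a multiplier on the load beneath it. A minimal sketch follows; the per-stage factors are back-calculated from Figure 2's cumulative numbers rather than taken from any vendor's published efficiency ratings.

```python
# Reproduce the Figure 2 cascade: 1 W saved at a server component grows
# to 2.84 W of facility-level savings. The per-stage factors below are
# back-calculated from Figure 2's cumulative values, not taken from
# published efficiency specifications.
STAGE_FACTORS = [
    ("DC-DC conversion", 1.18 / 1.00),
    ("AC-DC conversion", 1.49 / 1.18),
    ("Power distribution", 1.53 / 1.49),
    ("UPS", 1.67 / 1.53),
    ("Cooling", 2.74 / 1.67),
    ("Switchgear/transformer", 2.84 / 2.74),
]

def facility_savings(component_watts: float) -> float:
    """Cumulative facility-level savings for a component-level reduction."""
    saved = component_watts
    for stage, factor in STAGE_FACTORS:
        saved *= factor
        print(f"{stage:<24} cumulative saving: {saved:.2f} W")
    return saved

facility_savings(1.0)  # final line prints 2.84 W, matching Figure 2
```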
The Energy Logic Approach

Energy Logic takes a sequential approach to reducing energy costs, applying the ten technologies and best practices that exhibited the most potential in the order in which they have the greatest impact.

While the sequence is important, Energy Logic is not intended to be a step-by-step approach in the sense that each step can only be undertaken after the previous one is complete. The energy-saving measures included in Energy Logic should be considered a guide. Many organizations will already have undertaken some measures from the end of the sequence, or will have to deploy some technologies out of sequence to remove existing constraints to growth.

The first step in the Energy Logic approach is to establish an IT equipment procurement policy that exploits the energy efficiency benefits of low-power processors and high-efficiency power supplies. As these technologies are specified in new equipment, inefficient servers will be phased out and replaced with higher-efficiency units, creating a solid foundation for an energy-optimized data center.

Power management software has great potential to reduce energy costs and should be considered as part of an energy optimization strategy, particularly for data centers that have large differences between peak and average utilization rates. Other facilities may choose not to employ power management because of concerns about response times. A significant opportunity exists within the industry to enhance the sophistication of power management and make it an even more powerful tool for managing energy use.

The next step involves IT projects that may not be driven by efficiency considerations but have an impact on energy consumption:

• Blade servers
• Server virtualization

These technologies have emerged as best-practice approaches to data center management and play a role in optimizing a data center for efficiency, performance and manageability.

Once policies and plans have been put in place to optimize IT systems, the focus shifts to supply-side systems. The most effective approaches to infrastructure optimization include:

• Cooling best practices
• 415V AC power distribution
• Variable capacity cooling
• Supplemental cooling
• Monitoring and optimization

Emerson Network Power has quantified the savings that can be achieved through each of these actions individually and as part of the Energy Logic sequence (Figure 3). Note that savings for supply-side systems look smaller when taken as part of Energy Logic because those systems are by then supporting a smaller load.

Reducing Energy Consumption and Eliminating Constraints to Growth

Employing the Energy Logic approach in the model data center reduced energy use by 52 percent without compromising performance or availability.

In its unoptimized state, the 5,000-square-foot data center model used to develop the Energy Logic approach supported a total compute load of 529 kW and a total facility load of 1127 kW. Through the optimization strategies presented here, this facility was transformed to enable the same level of performance using significantly less power and space. Total compute load was reduced to 367 kW, while rack density was increased from 2.9 kW per rack to 6.1 kW per rack.

This reduced the number of racks required to support the compute load from 210 to 60 and eliminated the power, cooling and space limitations constraining growth (Figure 4). Total energy consumption was reduced to 542 kW, and the total floor space required for IT equipment was reduced by 65 percent (Figure 5).

Energy Logic is suitable for every type of data center; however, the sequence may be affected by facility type. Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low-power processors and high-efficiency power supplies. Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology. Figure 6 shows how compute load and type of operation influence priorities.
Figure 3. Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the actions included in the Energy Logic approach work together to produce a 585 kW reduction in energy use.

Energy Saving Action | Savings Independent of Other Actions | Energy Logic Savings with the Cascade Effect | Cumulative Savings | ROI
Lower power processors | 111 kW (10%) | 111 kW (10%) | 111 kW | 12 to 18 mo.
High-efficiency power supplies | 141 kW (12%) | 124 kW (11%) | 235 kW | 5 to 7 mo.
Power management features | 125 kW (11%) | 86 kW (8%) | 321 kW | Immediate
Blade servers | 8 kW (1%) | 7 kW (1%) | 328 kW | TCO reduced 38%*
Server virtualization | 156 kW (14%) | 86 kW (8%) | 414 kW | TCO reduced 63%**
415V AC power distribution | 34 kW (3%) | 20 kW (2%) | 434 kW | 2 to 3 mo.
Cooling best practices | 24 kW (2%) | 15 kW (1%) | 449 kW | 4 to 6 mo.
Variable capacity cooling: variable speed fan drives | 79 kW (7%) | 49 kW (4%) | 498 kW | 4 to 10 mo.
Supplemental cooling | 200 kW (18%) | 72 kW (6%) | 570 kW | 10 to 12 mo.
Monitoring and optimization: cooling units work as a team | 25 kW (2%) | 15 kW (1%) | 585 kW | 3 to 6 mo.

* Source for blade impact on TCO: IDC. ** Source for virtualization impact on TCO: VMware.

Figure 4. Energy Logic removes constraints to growth in addition to reducing energy consumption.

Constraint | Unoptimized | Optimized | Capacity Freed Up
Data center space (sq ft) | 4988 | 1768 | 3220 (65%)
UPS capacity (kVA) | 2 x 750 | 2 x 500 | 2 x 250 (33%)
Cooling plant capacity (tons) | 350 | 200 | 150 (43%)
Building entrance switchgear and genset (kW) | 1169 | 620 | 549 (47%)
[Figure 5: two floor-plan diagrams. "Before Energy Logic" shows the unoptimized layout: rows of racks (R) in a hot-aisle/cold-aisle arrangement served by chilled water CRAC units, plus a UPS room, battery room and control center. "After Energy Logic" shows the optimized layout: far fewer racks, supplemental cooling modules (XDV) serving the high-density rows, and a 65 percent space saving. Legend: P1/P2 = stage one power distribution, sides A and B; D1/D2 = stage two power distribution, sides A and B; R = rack; XDV = supplemental cooling module; CW CRAC = chilled water computer room air conditioner.]

Figure 5. The top diagram shows the unoptimized data center layout. The lower diagram shows the data center after the Energy Logic actions were applied. Space required to support data center equipment was reduced from 5,000 square feet to 1,768 square feet (65 percent).
Figure 6. The Energy Logic approach can be tailored to the compute load and type of operation.

• 24-hour operations, I/O intensive: 1. lowest power processors; 2. high-efficiency power supplies; 3. blade servers; 4. power management.
• 24-hour operations, compute intensive: 1. virtualization; 2. lowest power processors; 3. high-efficiency power supplies; 4. power management; 5. consider mainframe architecture.
• Daytime operations, I/O intensive: 1. power management; 2. low-power processors; 3. high-efficiency power supplies; 4. blade servers.
• Daytime operations, compute intensive: 1. virtualization; 2. power management; 3. low-power processors; 4. high-efficiency power supplies.
• Applicable to all profiles: cooling best practices, 415V AC distribution, variable capacity cooling, supplemental cooling, monitoring and optimization.

The Ten Energy Logic Actions

1. Processor Efficiency

In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, Thermal Design Power (TDP) serves as a proxy for server power consumption.

The typical TDP of processors in use today is between 80 and 103 Watts (91 W average). For a price premium, processor manufacturers provide lower-voltage versions of their processors that consume on average 30 Watts less than standard processors (Figure 7). Independent research studies show these lower-power processors deliver the same performance as higher-power models (Figure 8).

Figure 7. Intel and AMD offer a variety of low power processors that deliver average savings between 27 W and 38 W.

Vendor | Sockets | Speed (GHz) | Standard | Low power | Saving
AMD | 1 | 1.8-2.6 | 103 W | 65 W | 38 W
AMD | 2 | 1.8-2.6 | 95 W | 68 W | 27 W
Intel | 2 | 1.8-2.6 | 80 W | 50 W | 30 W

In the 5,000-square-foot data center modeled for this paper, low-power processors create a 10 percent reduction in overall data center power consumption.

2. Power Supplies

As with processors, many of the server power supplies in use today operate at efficiencies below what is currently available. The U.S. EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. In the model, we assume the unoptimized data center uses power supplies that average 79 percent efficiency across a mix of servers ranging from four years old to new.
Best-in-class power supplies are available today that deliver efficiency of 90 percent. Use of these power supplies reduces power draw within the data center by 124 kW, or 11 percent of the 1127 kW total.

As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others, and this is particularly important in dual-corded devices, where power supply utilization can average less than 30 percent. Figure 9 shows power supply efficiencies at different loads for two power supply models. At 20 percent load, model A has an efficiency of approximately 88 percent, while model B has an efficiency closer to 82 percent.

[Figure 8: bar chart of transactions per second (higher is better) across five load levels for four systems: Opteron 2218 HE 2.6 GHz, Opteron 2218 2.6 GHz, Woodcrest LV 5148 2.3 GHz and Woodcrest 5140 2.3 GHz. The standard- and low-power parts deliver essentially equal throughput. Source: AnandTech.]

Figure 8. Performance results for standard- and low-power processor systems using the American National Standards Institute AS3AP benchmark.

[Figure 9: power supply efficiency curves for models A and B from 0 to 100 percent of full load, with markers indicating the typical PSU load in a redundant configuration, the typical configuration at 100 percent CPU utilization, the maximum configuration and the nameplate rating.]

Figure 9. Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration.
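The facility-level value of a more efficient power supply follows from the identity input power = output power / efficiency. The sketch below applies the 79 percent and 90 percent efficiencies cited above to a hypothetical server drawing 300 W on the DC side; that load is an arbitrary illustration, not a figure from the model.

```python
# Wall draw for a given DC load at the two efficiencies cited in the
# text (79% installed average vs. 90% best in class). The 300 W DC
# load is an arbitrary illustration, not a figure from the model.
def wall_draw_w(dc_load_w: float, efficiency: float) -> float:
    """AC input power required to deliver dc_load_w at the DC output."""
    return dc_load_w / efficiency

dc_load = 300.0
old_draw = wall_draw_w(dc_load, 0.79)  # about 380 W
new_draw = wall_draw_w(dc_load, 0.90)  # about 333 W
print(f"Saving per server: {old_draw - new_draw:.1f} W at the plug")
# Each Watt saved here cascades further through distribution, UPS and
# cooling, as shown in Figure 2.
```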
Figure 9 also highlights another opportunity to increase efficiency: sizing power supplies closer to actual load. Notice that the maximum configuration is about 80 percent of the nameplate rating, and the typical configuration is 67 percent of the nameplate rating. Server manufacturers should allow purchasers to choose power supplies sized for a typical or maximum configuration.

3. Power Management Software

Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. (Figure 10).

Server power consumption remains relatively high as server load decreases (Figure 11). In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent of the energy of the same facility operating at 100 percent capacity.

Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.

In the model, we assume that idle power draw is 80 percent of peak power draw without power management, and that it drops to 45 percent of peak power draw when power management is enabled. With this scenario, power management saves an additional 86 kW, or eight percent of the unoptimized data center load.

[Figure 10: daily utilization curve for a typical business data center, climbing from about 5 a.m. to a midday plateau and falling after 5 p.m.]

Figure 10. Daily utilization for a typical business data center.

[Figure 11: AC power input versus percent CPU time for a Dell PowerEdge 2400 web/SQL server, showing input power remaining nearly flat while CPU utilization varies widely.]

Figure 11. Low processor activity does not translate into low power consumption.
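The mechanism behind this saving can be sketched as a weighted average of peak-hour and idle-hour draw. In the sketch below, the 80 percent and 45 percent idle-draw ratios are the model's stated assumptions, while the 100 kW peak load and ten idle hours per day are illustrative placeholders; the paper does not specify a duty cycle here.

```python
# Effect of enabling power management, using the model's stated ratios:
# idle draw is 80% of peak without power management and 45% with it.
# The 100 kW peak load and 10 idle hours per day are illustrative
# assumptions; the paper does not specify a duty cycle here.
PEAK_KW = 100.0
IDLE_HOURS = 10.0
PEAK_HOURS = 24.0 - IDLE_HOURS

def daily_energy_kwh(idle_fraction: float) -> float:
    """Daily energy with servers idling at the given fraction of peak."""
    return PEAK_KW * PEAK_HOURS + PEAK_KW * idle_fraction * IDLE_HOURS

before = daily_energy_kwh(0.80)  # power management disabled
after = daily_energy_kwh(0.45)   # power management enabled
saving = before - after
print(f"Saving: {saving:.0f} kWh/day ({saving / before:.1%} of daily energy)")
```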
4. Blade Servers

Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in reducing energy consumption.

Blade servers consume about 10 percent less power than equivalent rack-mount servers because multiple servers share common power supplies, cooling fans and other components.

In the model, we see a one percent reduction in total energy consumption when 20 percent of rack-based servers are replaced with blade servers. More importantly, blades facilitate the move to a high-density data center architecture, which can significantly reduce energy consumption.

5. Server Virtualization

As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.

In our model, we assume that 25 percent of servers are virtualized, with eight non-virtualized physical servers replaced by one virtualized physical server. We also assume that the applications being virtualized were residing on single-processor and two-processor servers, and that the virtualized applications are hosted on servers with at least two processors.

Implementing virtualization provides an incremental eight percent reduction in total data center power draw for the 5,000-square-foot facility.
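The server-count arithmetic behind these assumptions can be checked in a few lines. In the sketch below, the 1,064-server population comes from Appendix A (Figure 16); rounding the host count up to a whole server is our own convention.

```python
# Server-count arithmetic behind the virtualization assumption: 25% of
# the population is virtualized at an 8:1 consolidation ratio. The
# 1,064-server total is from Appendix A (Figure 16); rounding the host
# count up to a whole server is our own convention.
import math

TOTAL_SERVERS = 1064
VIRTUALIZED_SHARE = 0.25
CONSOLIDATION_RATIO = 8  # physical servers replaced per virtualized host

candidates = round(TOTAL_SERVERS * VIRTUALIZED_SHARE)  # 266 servers
hosts = math.ceil(candidates / CONSOLIDATION_RATIO)    # 34 hosts
retired = candidates - hosts                           # 232 servers
print(f"{candidates} servers consolidate onto {hosts} hosts; "
      f"{retired} physical servers are retired")
```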
6. Cooling Best Practices

Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Further potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. ASHRAE has published several excellent papers on these best practices.

Computational fluid dynamics (CFD) can be used to identify inefficiencies and optimize data center airflow (Figure 12). Many organizations, including Emerson Network Power, offer CFD imaging as part of data center assessment services focused on improving cooling efficiency.

Figure 12. CFD imaging can be used to evaluate cooling efficiency and optimize airflow. This image shows hot air being recirculated as it is pulled back toward the CRAC, which is poorly positioned.

Additionally, temperatures in the cold aisle may be able to be raised if current temperatures are below 68°F, and chilled water temperatures can often be raised from 45°F to 50°F.

In the model, cooling system efficiency is improved five percent simply by implementing best practices. This reduces overall facility energy costs by one percent with virtually no investment in new technology.

7. 415V AC Power Distribution

The critical power system represents another opportunity to reduce energy consumption; however, even more than with other systems, care must be taken to ensure that reductions in energy consumption are not achieved at the cost of reduced equipment availability.

Most data centers use a type of UPS called a double-conversion system. These systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source. UPS systems that do not convert the incoming power (line-interactive or passive standby systems) can operate at higher efficiencies because they avoid the losses associated with the conversion process, but they may compromise equipment protection because they do not fully condition incoming power.

A bigger opportunity exists downstream from the UPS. In most data centers, the UPS provides power at 480V, which is then stepped down via a transformer, with accompanying losses, to 208V in the power distribution system. These stepdown losses can be eliminated by converting UPS output power to 415V.

The 415V three-phase input provides 240V single-phase, line-to-neutral input directly to the server (the phase-to-neutral voltage of a 415V three-phase system is 415 divided by the square root of 3, or about 240V; see Figure 13). This higher voltage not only eliminates stepdown losses but also enables an increase in server power supply efficiency. Servers and other IT equipment can handle 240V AC input without any issues.

In the model, an incremental two percent reduction in facility energy use is achieved by using 415V AC power distribution.

[Figure 13: one-line diagrams of the two distribution schemes. Traditional 208V distribution: the UPS delivers 480V to a PDU transformer, which steps power down to 208V at the panelboard feeding the server, storage and communication equipment power supplies (12V DC output). 415V distribution: the UPS output is converted to 415V three-phase, and the panelboard delivers 240V phase-to-neutral directly to the equipment power supplies.]

Figure 13. 415V power distribution provides a more efficient alternative to using 208V power.

8. Variable Capacity Cooling

Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency.

Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads. Digital Scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without turning compressors on and off.

Typically, CRAC fans run at a constant speed and deliver a constant volume of airflow. Converting these fans to variable frequency drive fans allows fan speed and power draw to be reduced as load decreases. Fan power is directly proportional to the cube of fan rpm, so a 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs, with a payback of less than one year.
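The fan affinity law cited above is easy to verify numerically. A minimal sketch:

```python
# Fan affinity law: fan power scales with the cube of fan speed, so a
# 20 percent speed reduction saves nearly half the fan power.
def fan_power_ratio(speed_ratio: float) -> float:
    """Fraction of full-speed fan power at a given fraction of speed."""
    return speed_ratio ** 3

ratio = fan_power_ratio(0.80)       # 0.512
print(f"Power at 80% speed: {ratio:.1%} of full-speed power")
print(f"Savings: {1 - ratio:.1%}")  # about 48.8 percent
```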
In the chilled-water-based air conditioning system used in this analysis, the use of variable frequency drives provides an incremental saving of four percent in data center power consumption.

9. High-Density Supplemental Cooling

Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW).

This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks (Figure 14) and pull hot air directly from the hot aisle and deliver cold air to the cold aisle.

Figure 14. Supplemental cooling enables higher rack densities and improved cooling efficiency.

Supplemental cooling units can reduce cooling costs by 30 percent compared with traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. Supplemental units also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment.

Refrigerant is delivered to the supplemental cooling modules through an overhead piping system which, once installed, allows cooling modules to be easily added or relocated as the environment changes.

In the model, 20 racks at 12 kW density per rack use high-density supplemental cooling, while the remaining 40 racks (at 3.2 kW density) are supported by the traditional room cooling system. This creates an incremental six percent reduction in overall data center energy costs. As the facility evolves and more racks move to high density, the savings will increase.

10. Monitoring and Optimization

One consequence of rising equipment densities has been increased diversity within the data center. Rack densities are rarely uniform across a facility, and this can create cooling inefficiencies if monitoring and optimization are not implemented. Room cooling units on one side of a facility may be humidifying the environment based on local conditions while units on the opposite side of the facility are dehumidifying.

Cooling control systems can monitor conditions across the data center and coordinate the activities of multiple units to prevent conflicts and increase teamwork (Figure 15).
[Figure 15: diagram of six room air conditioning units operating in teamwork mode. Without coordination, units on one side of the room run humidifiers while units on the opposite side dehumidify and run compressors under an unbalanced load; with system-level control coordinating the units over a network switch or hub, the control disables heaters in the "cold zone" and the conflict is eliminated.]

Figure 15. System-level control reduces conflict between room air conditioning units operating in different zones in the data center.

In the model, an incremental saving of one percent is achieved as a result of system-level monitoring and control.

Other Opportunities for Savings

Energy Logic prioritizes the most effective energy reduction strategies, but it is not intended to be a comprehensive list of energy-reducing measures. In addition to the actions in the Energy Logic strategy, data center managers should consider the feasibility of the following:

• Consolidate data storage from direct attached storage to network attached storage. Also, faster disks consume more power, so consider reorganizing data so that less frequently used data resides on slower archival drives.

• Use economizers where appropriate to allow outside air to support data center cooling during colder months, creating opportunities for energy-free cooling. With today's high-density computing environment, economizers can be cost effective in many more locations than might be expected.

• Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.

What the Industry Can Learn from the Energy Logic Model

The Energy Logic model not only prioritizes actions for data center managers; it also provides a roadmap for the industry to deliver the information and technologies that data center managers can use to optimize their facilities. Here are specific actions the industry can take to support increased data center efficiency:

1. Define universally accepted metrics for processor, server and data center efficiency

There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption.
Today, processor manufacturers offer a range of server processors from which a customer must select the right processor for a given application. What is lacking is an easy-to-understand, easy-to-use measure, comparable to the miles-per-gallon automotive fuel efficiency ratings developed by the U.S. Department of Transportation, that can help buyers select the ideal processor for a given load. A performance-per-Watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed.

This same philosophy could be applied at the facility level. An industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. The PUE ratio developed by The Green Grid provides a measure of infrastructure efficiency, but not of total facility efficiency; for the unoptimized model in this paper, for example, PUE works out to roughly 1127 kW of total facility power divided by 588 kW of IT load, or about 1.9. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.

2. More sophisticated power management

While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology because its impact on availability is not clearly established. As more tools become available to manage power management features, and as data becomes available to confirm that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.

3. Matching power supply capacity to server configuration

Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for the maximum configuration. Server manufacturers should consider making these options available, and users need to be educated about the impact power supply sizing has on energy consumption.

4. Designing for high density

A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High-density environments employing blade and virtualized servers are actually economical, as they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.

5. High-voltage distribution

415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.
6. Integrated measurement and control

Data that can be easily collected from IT systems and the racks that support them has yet to be effectively integrated with support system controls. This level of integration would allow IT systems, applications and support systems to be managed more effectively based on actual conditions at the IT equipment level.

Conclusion

Data center managers and designers, IT equipment manufacturers and infrastructure providers must all collaborate to truly optimize data center efficiency.

For data center managers, there are a number of actions that can be taken today to significantly drive down energy consumption while freeing physical space and power and cooling capacity to support growth.

Energy reduction initiatives should begin with policies that encourage the use of efficient IT technologies, specifically low-power processors and high-efficiency power supplies. This will allow more efficient technologies to be introduced into the data center as part of the normal equipment replacement cycle.

Power management software should also be considered in applications where it is appropriate, as it may provide greater savings than any other single technology, depending on data center utilization.

IT consolidation projects also play an important role in data center optimization. Both blade servers and virtualization contribute to energy savings and support a high-density environment that facilitates true optimization.

The final steps in the Energy Logic optimization strategy focus on infrastructure systems, employing a combination of best practices and efficient technologies to increase the efficiency of power and cooling systems.

Together, these strategies created a 52 percent reduction in energy use in the 5,000-square-foot data center model developed by Emerson Network Power while removing constraints to growth. Appendix B shows how these savings are achieved over time as legacy technologies are phased out and savings cascade across systems.
Appendix A: Data Center Assumptions Used to Model Energy Use

To quantify the results of the efficiency improvement options presented in this paper, a hypothetical data center that has not been optimized for energy efficiency was created. The ten efficiency actions presented in this paper were then applied to this facility sequentially to quantify results.

This 5,000-square-foot hypothetical data center has 210 racks with an average heat density of 2.8 kW/rack. The racks are arranged in a hot-aisle/cold-aisle configuration. Cold aisles are four feet wide, and hot aisles are three feet wide. Based on this configuration and these operating parameters, average facility power draw was calculated to be 1127 kW. Following are additional details used in the analysis.

Servers
• Age is based on an average server replacement cycle of 4-5 years.
• Processor Thermal Design Power averages 91 W per processor.
• All servers have dual redundant power supplies. Average DC-DC conversion efficiency is assumed to be 85 percent, and average AC-DC conversion efficiency is assumed to be 79 percent for the mix of servers from four years old to new.
• Daytime power draw is assumed to exist for 14 hours on weekdays and 4 hours on weekend days. Nighttime power draw is 80 percent of daytime power draw.
• See Figure 16 for more details on server configuration and operating parameters.

Figure 16. Server operating parameters used in the Energy Logic model.

 | Single socket | Two sockets | Four sockets | More than four | Total
Number of servers | 157 | 812 | 84 | 11 | 1064
Daytime power draw (Watts/server) | 277 | 446 | 893 | 4387 | -
Nighttime power draw (Watts/server) | 247 | 388 | 775 | 3605 | -
Total daytime power draw (kW) | 44 | 362 | 75 | 47 | 528
Total nighttime power draw (kW) | 39 | 315 | 65 | 38 | 457
Average server power draw (kW) | 41 | 337 | 70 | 42 | 490
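As a consistency check, the average power draw row in Figure 16 can be reproduced from the schedule stated above (daytime draw for 14 hours on weekdays and 4 hours on weekend days, nighttime draw for the remaining hours of the week). A minimal sketch:

```python
# Consistency check: reproduce Figure 16's average server power draw
# from the stated schedule (daytime draw 14 h on weekdays and 4 h on
# weekend days; nighttime draw the remaining hours of the week).
DAY_KW, NIGHT_KW = 528, 457       # totals from Figure 16
day_hours = 14 * 5 + 4 * 2        # 78 daytime hours per week
night_hours = 24 * 7 - day_hours  # 90 nighttime hours per week

avg_kw = (DAY_KW * day_hours + NIGHT_KW * night_hours) / (24 * 7)
print(f"Average server power draw: {avg_kw:.0f} kW")  # 490 kW, per Figure 16
```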
Storage
• Storage type: network attached storage.
• Capacity is 120 terabytes.
• Average power draw is 49 kW.

Communication Equipment
• Routers, switches and hubs required to interconnect the servers, storage and access points through the local area network and provide secure access to public networks.
• Average power draw is 49 kW.

Power Distribution Units (PDUs)
• Provide 208V, 3-phase output through whips and rack power strips to power servers, storage, communication equipment and lighting (average load is 539 kW).
• Input from the UPS is 480V, 3-phase.
• Efficiency of power distribution is 97.5 percent.

UPS System
• Two double-conversion 750 kVA UPS modules arranged in a dual redundant (1+1) configuration with input filters for power factor correction (power factor = 91 percent).
• The UPS receives 480V input power from the distribution board and provides 480V, 3-phase power to the power distribution units on the data center floor.
• UPS efficiency at part load: 91.5 percent.

Cooling System
• The cooling system is chilled water based.
• Total sensible heat load on the precision cooling system includes heat generated by the IT equipment, UPS and PDUs, building egress and human load.
• Cooling system components:
  - Eight 128.5 kW chilled-water-based precision cooling units placed at the end of each hot aisle, including one redundant unit.
  - The chilled water source is a chiller plant consisting of three 200-ton chillers (n+1) with matching condensers for heat rejection and four chilled water pumps (n+2).
  - The chillers, pumps and air conditioners are powered from the building distribution board (480V, 3-phase).
  - Total cooling system power draw is 429 kW.

Building Substation
• The building substation provides 480V, 3-phase power to the UPS and cooling systems.
• Average load on the building substation is 1099 kW.
• Utility input is a 13.5 kV, 3-phase connection.
• The system consists of a transformer with isolation switchgear on the incoming line, and switchgear, circuit breakers and a distribution panel on the low-voltage line.
• Substation, transformer and building entrance switchgear composite efficiency is 97.5 percent.
Appendix B: Timing of Benefits from Efficiency Improvement Actions

Cumulative yearly savings (kW):

Category | Efficiency Improvement | Estimated Savings (kW) | Year 1 | Year 2 | Year 3 | Year 4 | Year 5
IT Policies | Low power processors | 111 | 6 | 22 | 45 | 78 | 111
IT Policies | Higher efficiency power supplies | 124 | 12 | 43 | 68 | 99 | 124
IT Policies | Server power management | 86 | 9 | 26 | 43 | 65 | 86
IT Projects | Blade servers | 7 | 1 | 7 | 7 | 7 | 7
IT Projects | Virtualization | 86 | 9 | 65 | 69 | 86 | 86
Best Practices | 415V AC power distribution | 20 | 0 | 0 | 20 | 20 | 20
Best Practices | Cooling best practices | 15 | 15 | 15 | 15 | 15 | 15
Infrastructure Projects | Variable capacity cooling | 49 | 49 | 49 | 49 | 49 | 49
Infrastructure Projects | High density supplemental cooling | 72 | 0 | 57 | 72 | 72 | 72
Infrastructure Projects | Monitoring and optimization | 15 | 0 | 15 | 15 | 15 | 15
Total | | 585 | 100 | 299 | 402 | 505 | 585
Emerson Network Power
1050 Dearborn Drive
P.O. Box 29186
Columbus, Ohio 43229
800.877.9222 (U.S. & Canada Only)
614.888.0246 (Outside U.S.)
Fax: 614.841.6022
EmersonNetworkPower.com
Liebert.com

Note: Energy Logic is an approach developed by Emerson Network Power to provide organizations with the information they need to more effectively reduce data center energy consumption. It is not directly tied to a particular Emerson Network Power product or service. We encourage use of the Energy Logic approach in industry discussions on energy efficiency and will permit use of the Energy Logic graphics with the following attribution: The Energy Logic approach developed by Emerson Network Power. To request a graphic, send an email to energylogic@emerson.com.

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability, for damages resulting from use of this information or for any errors or omissions. Specifications subject to change without notice. © 2007 Liebert Corporation. All rights reserved throughout the world. Trademarks or registered trademarks are property of their respective owners. Liebert and the Liebert logo are registered trademarks of the Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of Emerson Electric Co. WP154-158-117 SL-24621

Emerson Network Power. The global leader in enabling Business-Critical Continuity™. EmersonNetworkPower.com

AC Power | Connectivity | DC Power | Embedded Computing | Embedded Power | Monitoring | Outside Plant | Power Switching & Controls | Precision Cooling | Racks & Integrated Cabinets | Services