Data Center Cooling Strategies for Efficiency - Techniques to Reduce Your Energy Bill by 20-80%
Data center cooling is a hot topic. When you consider the challenges of cooling the latest-generation servers, the growing cost of infrastructure equipment, and ever-growing concern about energy efficiency, it's easy to understand the focus.
To view the recorded webinar presentation, please visit http://www.42u.com/cooling-strategies-webinar.htm
Make IT a hero! It’s good business and it’s good for the planet too!
The A and B projections are for improvements in server technology efficiency.
Use thermostatically controlled variable-speed fans.

In some situations, a 3-phase power distribution strategy can be preferable because it reduces PDU requirements and increases open space in racks:
• Lower cost of cabling: loads above 5 kW within a rack can be handled by a single 3-phase circuit instead of multiple single-phase circuits, which translates into lower cabling cost.
• Higher reliability of electrical infrastructure: bringing 3-phase power into the rack and using rackmount PDUs with phase-level metering allows data center operators to better balance loads across all three phases. Balanced loads minimize harmonics and overheated neutral wires.
• Higher reliability of IT infrastructure: bringing fewer circuits into each rack maximizes airflow both under the raised floor and within the racks. Better airflow keeps IT equipment from overheating.
• Scalability: the higher capacity of 3-phase circuits leaves more room for future growth. More equipment can be plugged in without bringing down power to existing equipment.

If a data center operator chooses 3-phase power for some or all of the racks, the next decision point is whether to bring a wye supply or a delta supply to the rack. A wye supply also pulls a neutral wire into the rack, whereas a delta supply does not.

Develop modular rack design points for low-, medium-, and high-density rack configurations. The most significant efficiency strategy is simple airflow control: use blanking panels and floor grommets to ensure proper airflow. The next level of airflow control is containment.
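The single-circuit capacity point above (loads above 5 kW on one 3-phase circuit) can be checked with a little arithmetic. The sketch below is a hypothetical helper, not part of the webinar; it compares the usable capacity of a 208 V / 30 A 3-phase circuit against a 120 V / 20 A single-phase circuit, applying the customary 80% continuous-load derating:

```python
import math

def circuit_capacity_kw(voltage: float, amps: float, phases: int,
                        derating: float = 0.8) -> float:
    """Usable capacity of a branch circuit in kW.

    An 80% continuous-load derating is applied by default.
    For 3-phase, `voltage` is the line-to-line voltage (e.g. 208 V).
    """
    if phases == 3:
        watts = math.sqrt(3) * voltage * amps
    else:
        watts = voltage * amps
    return watts * derating / 1000.0

# One 3-phase circuit comfortably covers a >5 kW rack load;
# the same load would need several single-phase circuits.
print(f"208V/30A 3-phase: {circuit_capacity_kw(208, 30, phases=3):.1f} kW")
print(f"120V/20A 1-phase: {circuit_capacity_kw(120, 20, phases=1):.1f} kW")
```

At roughly 8.6 kW versus 1.9 kW per circuit, the cabling-reduction argument follows directly.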
Without moving to newer technologies and best practices, data center density will reach physics limits
At all rack design points, hot/cold aisle configurations will be more efficient. Depending on the number of high-density racks, meaning those operating at 10 kW of load or higher, you should consider either high-density checkerboarding or close-coupled cooling. TIA 492?
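The decision rule above can be stated in a few lines. The 10 kW threshold comes from the slide; the function name and the wording of the recommendations are illustrative only:

```python
def cooling_approach(rack_kw: float) -> str:
    """Suggest a cooling layout based on rack load.

    Racks at 10 kW or above get high-density treatment, per the
    guideline above; below that, standard hot/cold aisles suffice.
    """
    if rack_kw >= 10:
        # Spread hot racks out (checkerboarding) or bring
        # cooling to the rack (close-coupled cooling).
        return "high-density checkerboarding or close-coupled cooling"
    return "standard hot/cold aisle configuration"

print(cooling_approach(4))
print(cooling_approach(12))
```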
Effect of adding one 12", 7U blanking panel to the middle of a rack:
• Conventional wisdom is correct: blanking panels are essential
• Other data center clichés are also borne out
• Eliminate leaks in the floor
• Manage floor tile permeability
Planning is everything, and best practices make the difference. Planning data center layouts is critical. It’s becoming common to find data centers laid out with a hot/cold aisle configuration. Closing up the airflow gaps is another step in the right direction. Better yet is isolation of the hot and cold aisles.

Here are two diagrams from an Accenture study done in conjunction with Lawrence Berkeley National Lab and the Silicon Valley Leadership Group. They examined two isolation alternatives: hot aisle isolation and cold aisle isolation. The results of the study indicated that cold aisle isolation is the more efficient of the two, and the report highlighted its efficiency gains. Three findings seem most significant relative to efficiency: the dramatic reduction in fan energy translated into a 1.8% overall energy reduction; raising the chilled water temperature to 50°F, coupled with a water-side economizer, reduced overall energy consumption by an additional 2.8%; and, while difficult to express in efficiency terms, the 30-49% increase in CRAH capacity is clearly significant, particularly for those of you who are reaching your CRAH and CRAC capacity limits. Cold aisle isolation provided additional savings in other areas as well. You can get a copy of the full report from our website at 42U.com.
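To put the study's percentages in perspective, here is a quick back-of-the-envelope calculation. The 1.8% and 2.8% figures are from the study cited above; the baseline consumption and electricity rate are hypothetical numbers chosen only for illustration:

```python
baseline_kwh = 5_000_000     # hypothetical annual facility consumption, kWh
rate_per_kwh = 0.10          # hypothetical electricity rate, $/kWh

fan_reduction = 0.018        # 1.8% overall savings from reduced fan energy
economizer_reduction = 0.028 # additional 2.8% from 50°F chilled water
                             # plus a water-side economizer

total_fraction = fan_reduction + economizer_reduction
saved_kwh = baseline_kwh * total_fraction
saved_dollars = saved_kwh * rate_per_kwh

print(f"Combined savings: {total_fraction:.1%}")
print(f"= {saved_kwh:,.0f} kWh, about ${saved_dollars:,.0f} per year")
```

Even modest-sounding percentages add up quickly at data center scale.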
Allowing for warmer water temperatures while still maintaining high-density loads could save a great deal of money even with conventional chillers. Depending on the operating condition of the chiller plant, the energy savings can be realized annually when converting an existing 10,000-square-foot data center.
Typical chiller plants provide chilled water at <45°F, which is then passed to the CRAC units in the data center. Raising the water temperature lowers operating cost, but the cooling solution must be able to handle the warmer water while providing the same amount of cooling. Close-coupled cooling allows for high-density installations while using water as warm as 70°F. This warmer water temperature allows more hours each year in which evaporative or dry air-side economizers/coolers can be used, lowering operating cost through reduced electricity usage.
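A common rule of thumb, which is an assumption here rather than a figure from the webinar, is that chiller energy drops by roughly 1-2% for each 1°F increase in chilled water supply temperature. The sketch below applies that rule to estimate the savings from moving a 45°F plant toward the 70°F water that close-coupled cooling can tolerate:

```python
def chiller_energy_savings(baseline_f: float, raised_f: float,
                           pct_per_degree: float = 0.015) -> float:
    """Estimated fractional chiller energy savings from warmer supply water.

    pct_per_degree is a rule-of-thumb value (~1-2% per °F); actual
    savings depend on the specific chiller plant and climate.
    """
    return max(0.0, (raised_f - baseline_f) * pct_per_degree)

for target_f in (50, 55, 70):
    saving = chiller_energy_savings(45, target_f)
    print(f"45°F -> {target_f}°F: ~{saving * 100:.1f}% chiller energy saved")
```

At 1.5% per °F, the 45°F-to-70°F move suggests chiller savings on the order of a third, before counting the extra economizer hours.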
Focus on complete data center cooling designs:
• Provide complete cooling redundancy
• Include a comprehensive monitoring and alarm system at all component levels

Understand the complete facility:
• Hot/cold aisle – common aisle for exhaust
• Placement of vented or cutout floor tiles
• Air flow paths
• Consider alternate cabinet configurations

Supplemental cooling systems:
• Close-coupled cooling – an energy-efficient, cost-effective way
• Group common products together
• Develop component installation standards
• Plan for new cooling solutions
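The monitoring-and-alarm item above can be sketched in a few lines. The inlet thresholds below follow the ASHRAE recommended range for air-cooled IT equipment (about 64.4-80.6°F); the class names and sample data are hypothetical:

```python
from dataclasses import dataclass

# ASHRAE-recommended inlet temperature range for IT equipment, in °F
INLET_LO_F, INLET_HI_F = 64.4, 80.6

@dataclass
class Reading:
    rack: str
    inlet_f: float  # temperature measured at the rack's cold-aisle inlet

def alarms(readings: list[Reading]) -> list[str]:
    """Return an alarm message for every rack outside the recommended range."""
    msgs = []
    for r in readings:
        if r.inlet_f > INLET_HI_F:
            msgs.append(f"{r.rack}: inlet {r.inlet_f}°F too hot (> {INLET_HI_F}°F)")
        elif r.inlet_f < INLET_LO_F:
            msgs.append(f"{r.rack}: inlet {r.inlet_f}°F too cold (< {INLET_LO_F}°F)")
    return msgs

sample = [Reading("A1", 72.0), Reading("A2", 84.5), Reading("B1", 61.0)]
for msg in alarms(sample):
    print(msg)
```

A real deployment would poll sensors continuously and feed a central alarm console, but the threshold logic is the same at every component level.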
• Get electrical and mechanical studies
• Find engineering firms with data center experience
• Consider instrumenting your data center to enhance visibility
• Eat your spinach (blanking panels, leaks)
• Take advantage of outstanding, free PG&E Pacific Energy Center classes (state-wide)
• Keep an eye on emerging technologies (flywheel UPS, rack-level cooling)