White Paper: Extending the Life of Your Data Center

Learn about heat transfer door (HTD) cooling and how it neutralizes heat at the source. This white paper also includes cost analyses of what's required to retrofit a data center versus the cost to build a new one, as well as a case study about a large midwestern university's data center expansion.

Transcript

• 1. Extending the Life of Your Data Center: Retrofitting with BLACK BOX® passive water cooling at the rack level. Cold Front Coolant Management Systems. Cold Front Heat Transfer Door. 724-746-5500 | blackbox.com
• 2. Table of Contents

Introduction ................................................. 3
Build or Renovate ............................................ 4
Add Heat Density As You Grow ................................. 4
Heat Transfer Door Cooling ................................... 4
Cost Analyses ................................................ 5
Retrofit Example ............................................. 5
University Case Study ........................................ 7
Summary ...................................................... 9
About Black Box .............................................. 9

We're here to help! If you have any questions about your application, our products, or this white paper, contact Black Box Tech Support at 724-746-5500 or go to blackbox.com and click on "Talk to Black Box." You'll be live with one of our technical experts in less than 30 seconds.
• 3. Introduction

The proliferation of new application development in software as a service, along with new virtualization technologies, has created a conflict in many data centers between today's outdated capabilities and tomorrow's growing needs. The cost of building and managing data center resources continues to increase yearly, and the yearly cost to power and cool data centers makes up a majority of operating expenses. Legacy data centers waste at least 50% of the energy they consume managing heat generated by IT systems.

Most data centers are not new; they are housed in buildings using practices that may be 20 years old and that have not caught up with the latest trends in IT rack power densities. Fully populated racks can dissipate 7 to 25 kW per rack, and high-end servers can dissipate more than 40 kW per rack. This level of density requires power and cooling densities that exceed typical current capabilities. Furthermore, most legacy data centers were not designed to use their maximum capabilities, best practices have not been implemented, and cooling has been matched to IT equipment requirements far less than optimally. The result is a common situation, identified by the Uptime Institute, in which data centers consume 2.0 to 2.6 times the cooling required by the IT equipment, wasting energy and power and reducing the amount of IT that can be housed in the structure.

By implementing best practices and optimizing the performance of the existing air cooling infrastructure, data center operators can improve the performance of the specified cooling infrastructure to 70% efficiency. The question data center owners must ask is whether air cooling at 70% efficiency is acceptable, and whether they can sustain that performance as computing technologies push power and cooling demands beyond current requirements. What can operators expect from their environment if cooling requirements grow from 12 kW to as much as 25 kW per rack?
• 4. Build or Renovate

With the adoption of virtualization, cloud computing, and consolidation, most data centers that have reached the upper limit of energy consumption have essentially two choices: build new or renovate. New data centers built with performance-enhancing cooling, whether fully optimized air, liquid cooling, or hybrid systems, have improved efficiency. But the cost to build a new data center is prohibitive: according to the Uptime Institute, the total capital expense of building a new data center ranges from $18,000 to $25,000 or more per kilowatt. The second alternative is renovation. While renovation raises the challenges of improving existing space and operating a data center through a rebuilding project, retrofitting lets you achieve what you want at a fraction of the cost.

Add Heat Density As You Grow

This paper proposes that by retrofitting existing data centers with passive water cooling at the rack level, data center operators can increase IT rack power densities at a fraction of the cost of building a new data center. Retrofitting also allows a deliberate approach to improvement, adding resources on an as-needed basis rather than building out an entire space at one time. Passive water cooling at the rack level enables upgrades with no disruption to IT operations while achieving significant savings in energy consumption, not to mention avoiding the capital expense of constructing a new data center.

Heat Transfer Door (HTD) Cooling

Passive heat transfer doors neutralize heat directly at the source: the rack. These modules replace the standard rear doors on IT equipment racks. Server fans draw air through the chassis and then through the door's liquid-filled fin-and-tube coil assembly, which removes the heat before the air reenters the data center. An HTD uses specially designed coils that maintain airflow through the rack with negligible resistance, and it can sensibly cool up to 35 kW of heat per rack. Taking up a minimum of floor space, the HTD is a flexible, efficient, and space-saving cooling solution for data centers.

In a retrofit, an HTD system may be fed by a redundant pumping system, a Coolant Management System (CMS), that creates a secondary loop controlling the heat exchanger temperature to avoid water condensation risks. Because the doors are passive, the energy such a system consumes is negligible, a significant energy savings compared with air-driven cooling. In some existing facilities, the new cooling system can instead be connected directly to a chiller, without a CMS. Figure 1 shows a typical system layout. This system can remove as much as 35 kW per rack at the recommended new American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) conditions. It has been designed to minimize pressure drop compared with the existing perforated door, resulting in no meaningful temperature rise inside the rack: IT equipment tests by IBM® and Dell® have shown less than a 1.8°F (1°C) increase in computer CPU junction temperature, which is negligible.

Figure 1 (diagram): typical HTD system layout.
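As a rough plausibility check (not from the white paper), the standard sensible-heat equation shows what a 35-kW door implies for rack airflow and coil water flow. This is a minimal sketch: the physical constants are standard, but the 20°C air-side and 10°C water-side temperature rises are assumed values chosen for illustration.

```python
# Back-of-envelope check of what a 35-kW heat transfer door implies.
# Constants are standard; the temperature rises below are assumptions.

AIR_DENSITY = 1.2   # kg/m^3, air near 25 degrees C
AIR_CP = 1005.0     # J/(kg*K), specific heat of air
WATER_CP = 4186.0   # J/(kg*K), specific heat of water

def rack_airflow_m3s(heat_kw, air_delta_t_c):
    """Airflow the server fans must move to carry heat_kw at the given air rise."""
    return heat_kw * 1000.0 / (AIR_DENSITY * AIR_CP * air_delta_t_c)

def coil_water_flow_ls(heat_kw, water_delta_t_c):
    """Water flow (liters/s) the door coil needs to absorb heat_kw."""
    return heat_kw * 1000.0 / (WATER_CP * water_delta_t_c)  # ~1 kg of water per liter

load_kw = 35.0                             # the paper's stated per-rack maximum
air = rack_airflow_m3s(load_kw, 20.0)      # assumed 20 degree C air-side rise
water = coil_water_flow_ls(load_kw, 10.0)  # assumed 10 degree C water-side rise
print(f"{load_kw} kW rack: ~{air:.2f} m^3/s of air (~{air * 2119:.0f} CFM), "
      f"~{water:.2f} L/s of coil water")
# -> roughly 1.45 m^3/s (~3,075 CFM) of air and ~0.84 L/s of water
```

Both results are in the range ordinary server fans and building chilled-water branches can deliver, which is consistent with the paper's claim that the door adds negligible airflow resistance.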
• 5. Figure 2 is a photograph of a passive HTD. HTD units work with a number of currently available enclosures using transition frames, and each HTD can be fitted to an enclosure within minutes without disrupting IT operations.

Cost Analyses

Figure 3 compares the cost to retrofit a data center with the cost to build a new one. This analysis is based on data derived from customers and the Uptime Institute. DataSite Orlando is a managed services data center housed in a former telco site. The facility was designed to optimize power and cooling efficiency, using best practices to improve performance in its conventionally cooled areas. In its higher-density areas, DataSite Orlando uses passive liquid cooling, which, in addition to enabling higher-density loads, freed up 75 percent of the center as white space. DataSite can now cool up to 200 watts per square foot with standard air cooling and up to 600 watts per square foot with passive liquid cooling. DataSite Orlando is the flagship property in BURGES property + company's high-tech real estate portfolio. According to Jeff Burges, president and founder of BURGES property + company, increasing heat density in an existing structure can be accomplished in a two-step process: first, upgrade and optimize the existing structure using best practices; second, add higher-density cooling where needed.

In this example, a 10,000-square-foot, 350-kW data center is looking to increase capacity to 1.5 megawatts. Two options exist:
• Upgrade the existing facility and increase density from 35 watts/sf to 150 watts/sf; OR
• Build a new facility.

In the upgrade scenario, the cost to upgrade the power and cooling infrastructure supporting the existing 350-kW load is estimated at $1,100 per kilowatt, and the power and cooling infrastructure for the new 1,150-kW load is estimated at $11,250 per kW (assuming no new space construction cost). The combined cost totals over $13M. In comparison, building a new data center with an IT load of 1.5 megawatts at $12,500 per kW would total nearly $19M, roughly 40% more than the cost of retrofitting. In addition, time, planning, and market risk increase significantly under a new-build scenario.

Retrofit Example
• Retrofitting and densifying existing facilities is most economical.
• It is cheaper and faster.
• It makes state-of-the-art cooling available at lower cost.

Figure 3 (diagram): Current: 350 kW at 35 W/ft² across 10,000 ft². Desired: 125 W/ft² across the same 10,000 ft², with 350 kW in 2,800 ft² and 900 kW in 7,200 ft².
• 6. Figures 4 and 5 illustrate the cost comparison for each option. An added benefit of using passive, localized liquid cooling is a redistribution of space, condensing the space requirement for IT into 25 percent of the existing footprint. The comparisons are based on very conservative estimates.

Figure 4: Retrofit vs. new build, 1.5-MW data center

Retrofit costs                     Upgrade existing power   New power infrastructure   Totals
kW addressed                       350                      1,150                      1,500
Cost per kW                        $1,100                   $11,250
Total cost                         $385,000                 $12,937,500                $13,322,500

New build costs
Total load (kW)                    1,500
New power + cooling cost per kW    $12,500
Total P&C cost for new build*      $18,750,000

Retrofit saves 29% over new build.
*Excludes cost/kW for non-P&C infrastructure (land, building, etc.), estimated at $300/SF.

In larger data centers, the comparison is equally compelling, as shown in Figure 5:

Figure 5: Retrofit vs. new build, 5-MW data center

Retrofit costs                     Upgrade existing power   New power infrastructure   Totals
kW addressed                       1,000                    4,000                      5,000
Cost per kW                        $1,100                   $11,250
Total cost                         $1,100,000               $45,000,000                $46,100,000

New build costs
Total load (kW)                    5,000
New power + cooling cost per kW    $12,500
Total P&C cost for new build*      $62,500,000

Retrofit saves 26% over new build.
*Excludes cost/kW for non-P&C infrastructure (land, building, etc.), estimated at $300/SF.

Key assumptions (shown with both figures)
New data center construction cost for the power + cooling engine (Uptime Institute): $12,500/kW
Infrastructure portion of total cost: 10% ($1,250/kW), leaving power + cooling at $11,250/kW
Existing power reconfiguration cost ($500/kW) + cooling retrofit cost ($600/kW) = $1,100/kW

Scenarios                Current   Target   Increase
Load (W/SF)              100       500      400
DC critical area (SF)    10,000    10,000   —
Total load (kW)          1,000     5,000    4,000
Implied kW per rack      3.0       15.0     12
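The arithmetic behind both figures reduces to the same three per-kW rates. A minimal sketch follows; the rates are the white paper's own, and the function simply restates how the totals are assembled.

```python
# Retrofit-vs-new-build arithmetic behind Figures 4 and 5.
# The per-kW rates are taken from the white paper's key assumptions.

UPGRADE_EXISTING = 1_100  # $/kW: power reconfiguration ($500) + cooling retrofit ($600)
NEW_INFRA = 11_250        # $/kW: new power + cooling infrastructure in a retrofit
NEW_BUILD = 12_500        # $/kW: new-build power + cooling engine (Uptime Institute)

def compare(existing_kw, added_kw):
    """Print retrofit vs. new-build cost for a given existing load and expansion."""
    total_kw = existing_kw + added_kw
    retrofit = existing_kw * UPGRADE_EXISTING + added_kw * NEW_INFRA
    new_build = total_kw * NEW_BUILD
    print(f"{total_kw:,} kW: retrofit ${retrofit:,} vs. new build ${new_build:,} "
          f"-> retrofit saves {1 - retrofit / new_build:.0%}")

compare(350, 1_150)    # Figure 4: $13,322,500 vs. $18,750,000, ~29% savings
compare(1_000, 4_000)  # Figure 5: $46,100,000 vs. $62,500,000, ~26% savings
```

Running this reproduces both figures' totals exactly, which is also why the savings percentage shrinks slightly as the existing load becomes a smaller share of the target load.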
• 7. University Case Study

A large midwestern university's advanced computing center provides high-performance computing and storage for cutting-edge science, engineering, and social science research across the school. It needed additional cooling capacity and power distribution to support a new 1,200-node HPC cluster. Constraints on data center power, space, and chilled water capacity ruled out traditional precision air conditioning. The university selected passive liquid cooling using heat transfer door technology as the basis for the cluster expansion. The design consisted of 50 rear door heat exchangers, which act as passive radiators to cool the exhaust air from each enclosure. The units were connected by hoses to five 150-kW Coolant Management Systems (CMS), which transferred the heat to the building's primary chilled water loop.

The data center faced a number of challenges when evaluating cooling methods. The facility is housed below ground, with limited overhead space and shared chiller resources. It has only shallow raised floors, so air is delivered from the ceiling, and that cooling was not uniformly distributed. Before the density increase, the data center consumed 500 kW, not including the power to run the 12 traditional 20- and 30-ton computer room air conditioners (CRACs) needed to cool it. Once densified, the data center was able to increase compute power to 888 kW and eliminate three of the CRACs from the area where the rear door heat exchangers were installed. Retrofitting the existing data center dropped the capital expense from a potential $60M for a new data center to about $1M for the retrofit. Because the heat transfer doors operate above the dew point, the selection also eliminated the need for additional pumps and systems to remove condensation.

Each CMS expends little energy, consuming no more than 2.5 kW. At an electric rate of $0.05 per kWh, the low cost of operating the CMS units means the data center saves more than $130,000 in operating expenses over a five-year period; with utility prices increasing, the savings could top $200,000 in ten years. In addition, these examples don't account for the expense of unused cooling capacity or power. Eliminating the unused, excess cooling and power typical of many older data centers makes the financial benefits of the retrofit even more significant.
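The paper does not show the arithmetic behind the $130,000 figure. The following is a hypothetical reconstruction of one way the numbers work out: the 25-kW draw per eliminated CRAC is an assumption chosen for illustration and is not from the paper, while the CMS draw and electric rate are the paper's.

```python
# Hypothetical reconstruction of the case study's five-year operating savings.
# ASSUMPTION: each eliminated CRAC drew ~25 kW; this figure is NOT from the paper.
# The CMS draw (<= 2.5 kW each) and the $0.05/kWh rate are from the case study.

HOURS_PER_YEAR = 8760
RATE = 0.05              # $/kWh

crac_kw = 3 * 25.0       # three eliminated CRAC units (assumed per-unit draw)
cms_kw = 5 * 2.5         # five Coolant Management Systems (from the paper)

annual = (crac_kw - cms_kw) * HOURS_PER_YEAR * RATE
print(f"Annual savings: ${annual:,.0f}; five-year: ${5 * annual:,.0f}")
# -> Annual savings: $27,375; five-year: $136,875
#    (consistent with the paper's "more than $130,000")
```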
The following charts illustrate a generic example comparing HTD cooling to both non-passive in-row cooling and traditional computer room air conditioners (CRACs). In many cases, the initial capital cost of the HTD solution is comparable to the other solutions; given the HTD's significantly lower power consumption and ongoing maintenance, its operating costs are far lower. When reviewing total cost of ownership, the HTD is therefore a very compelling solution.

Figures 7 and 8 show a typical example comparing the solutions on total cost of ownership (TCO), defined as Day 1 capital plus cumulative operating expense (OPEX) over the time horizon shown. In this example, the assumed load per rack is 12 kW. The capital expense (CAPEX) is likely understated in the CRAC scenario, because containment in a 12-kW/rack environment would add cost, and a taller raised floor would likely be necessary (most traditional raised floors cannot deliver the airflow needed to cool 12 kW per rack). The energy cost assumption is today's U.S. average of roughly $0.10/kWh; most urban areas are more expensive, and the model does not reflect energy cost increases in future years. All of this would make CRACs, and the higher-power in-row units, even less attractive than shown below.

Figure 6: Assumptions and outputs of the generic example

                                            RDHx (HTD)   CRAC   In-row, no HACS
Number of racks                             28           28     30
Cooling units (doors/CRACs/in-rows)         28           8      38
CDUs/manifolds needed                       5            —      5
Cooling equipment footprint (sq. ft.)       75           320    518
Cooling equipment power consumption (kW)    13           64     42
Total square feet needed if new             775          840    1,268
Power draw per cooling unit (kW)            —            8.0    1.1
Power draw per CDU/manifold (kW)            2.5          —      —
Total power draw, all units (kW)            13           64     42
• 8. Figure 7 (charts): Day 1 capital expenses (scale $0 to $600,000) and annual operating expenses (scale $0 to $120,000) for the HTD, CRAC/CRAH, and in-row (no HACS) solutions.

Figure 8 (chart): Total cost of ownership, in thousands of dollars (scale $0 to $1,000), from initial cost through Year 4 for the HTD, CRAC/CRAH, and in-row (no HACS) solutions.
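Using Figure 6's total power draws and the $0.10/kWh rate, a minimal sketch of the electricity side of Figure 7's OPEX follows. Figure 7's exact CAPEX bars are not legible in this transcript, so only operating cost is computed, and maintenance is excluded; both omissions mean the real OPEX gap is wider than shown here.

```python
# Electricity-only operating cost from Figure 6's total cooling power draws.
# The paper defines TCO = Day 1 capital + cumulative OPEX; CAPEX values are
# not recoverable from this transcript, so only the OPEX term is computed.
# Maintenance is also excluded, understating the gap between the options.

HOURS_PER_YEAR = 8760
RATE = 0.10  # $/kWh, the U.S.-average energy cost the paper assumes

draw_kw = {"HTD": 13, "CRAC/CRAH": 64, "In-row (no HACS)": 42}  # Figure 6 totals

for name, kw in draw_kw.items():
    annual = kw * HOURS_PER_YEAR * RATE  # annual electricity cost of cooling gear
    print(f"{name}: ~${annual:,.0f}/yr electricity; "
          f"~${4 * annual:,.0f} cumulative over Figure 8's 4-year horizon")
# -> HTD ~$11,388/yr; CRAC/CRAH ~$56,064/yr; in-row ~$36,792/yr
```

Even on electricity alone, the HTD's cumulative OPEX advantage over four years is large enough to offset a substantial CAPEX difference, which is the shape Figure 8's TCO curves show.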
• 9. Summary

Passive liquid cooling opens options for retrofitting a data center for optimized performance, at an initial cost comparable to traditional cooling options, with the opportunity to increase cooling capacity without disrupting operations. Over the long term, liquid cooling is significantly more cost-effective for data center operators. The question of passive liquid cooling isn't if, but when.

About Black Box

Black Box Network Services is a leading provider of cabling and cabinet infrastructure products, serving 175,000 clients in 141 countries with 195 offices throughout the world. The Black Box® Catalog and Web site offer more than 118,000 products, including Elite™ Cabinets, Intelli-Pass™ Biometric Access Control for Cabinets, Cold Front Cooling Solutions, and ClimateCab™. Find what you need to build and expand your data center or communications closet at blackbox.com. From our most popular cables to industrial cabinets and everything in between, Black Box has all the components of a communications infrastructure.

© Copyright 2011. All rights reserved. Black Box® and the Double Diamond logo are registered trademarks, and Elite™, Intelli-Pass™, and ClimateCab™ are trademarks, of BB Technologies, Inc. Any third-party trademarks appearing in this white paper are acknowledged to be the property of their respective owners.
