2. Emerson is a Leader in its Core Global Businesses & Markets
#1 Compressors, #1 Controls, #1 Alternators, #1 Fluid Control, #1 Ultrasonic Welding, #1 Garbage Disposers, #1 Appliance Components, #1 Fractional Motors, #1 Storage Solutions, #1 Plumbing Tools, #1 Wet/Dry Vacuums, #1 Pressing Tools/Jaws, #1 CCTV Inspection Systems, #1 AC & DC Power Systems, #1 OEM Embedded Power, #1 Precision Cooling Systems, #1 Control Valves, #1 Measurement Devices
Emerson Electric Co.; Proprietary Information
3. Soon, Power Will Cost More Than the Server
"In the data center, power and cooling costs more than the IT it supports." (Christian L. Belady, Electronics Cooling, February 2007)
6. PUE: Power Usage Effectiveness
PUE = Total Facility Power / IT Equipment Power
Example: PUE = 2
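The slide's ratio can be written as a one-line helper. The function name and the sample wattages below are illustrative, not from the deck:

```python
# Hedged sketch: PUE exactly as defined on the slide above.
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# The slide's example of PUE = 2: the facility draws twice the IT load,
# i.e. every kW of compute carries a second kW of power/cooling overhead.
print(pue(1000.0, 500.0))  # 2.0
```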
43. XD Rack-Based Cooling with Refrigerant
Base infrastructure (160 kW): XDP or XDC, fed by a building chiller or DX system
Cooling modules (10-35 kW): XDV10, XDO20, XDR20/35, XDH20/32
Door cooling module: 20-35 kW; embedded technology
52. Hot vs. Cold Aisle Containment
(diagram labels: RH>55%, SMART AISLE, Hot Aisle)
54. Verizon Wireless iCom Case Study
(chart labels: Group 1 through Group 6, Stand By (9))
57. Optimizing Data Center Performance Optimizing performance requires a holistic view of efficiency – more than just energy
58. Infrastructure Management Solution: Manage the Data Center as a Single Entity
Today, varying degrees of completeness: AutoCAD drawings, Visio, vendor-dependent monitoring, databases, email and meetings, individuals, multiple Excel spreadsheets
Progression: Monitoring and Automation, Integrated Information, Planning and Management, Operational Control
64. Facility / Network Concerns in the Past 3 Years
Management categories: Availability, Efficient use of resources
Spring 2008 (ranked): 1) Heat density (cooling), 2) Power density, 3) Availability (uptime), 4) Adequate monitoring / data center management capabilities, 5) Energy efficiency (energy costs & equipment efficiency), 6) Space constraints / growth
Spring 2009 (ranked): 1) Heat density (cooling), 2) Energy efficiency (energy costs & equipment efficiency), 3) Adequate monitoring / data center management capabilities, 4) Availability (uptime), 5) Power density, 6) Space constraints / growth
Spring 2010 (ranked): 1) Adequate monitoring / data center management capabilities, 2) Heat density (cooling), 3) Availability (uptime), 4) Energy efficiency (energy costs & equipment efficiency), 5) Power density, 6) Space constraints / growth
The new white paper introduces the term "Compute Units per Second" (CUPS). I must emphasize that our goal is to determine what insights can be gained from such a metric and to move the industry closer to adopting one; we are not proposing or advocating a specific measure.
An "average" data center: not bleeding edge, but not outdated. Something reasonable, constructed in the last few years.
If you cannot adequately seal the room, you cannot control the room.
As you can see, the product offering addresses many applications, from high density to low density, at the rack, row, or room level. There are products utilizing green refrigerants or chilled water.
Here are the primary blower technologies used in today’s applications.
We sent test units to ETL, an independent laboratory in Cortland, NY, to compare various fan systems. Here's a table that summarizes the data. The base case is a CW114 unit running with standard centrifugal fans, without a variable speed drive, so in both base cases on this table the fan speed is 100%. The capacity, motor kW, and SCOP are listed. Remember that SCOP is the net sensible capacity divided by the motor kW input; in this case, the energy efficiency is 10.4.

The next case shows the addition of a variable speed drive. With the drive set at 100% fan speed, there is no change in capacity or fan motor kW. However, when the speed is reduced to 80%, the cooling capacity and fan kW are reduced. The fan kW varies with the cube of the fan speed, so in this case we see about a 59% increase in energy efficiency.

The next test case was a unit with direct-drive EC plug fans. These are backward-inclined airfoil fans with a DC motor that allows for speed control. At full speed, we see a slight increase in cooling capacity, due to a higher air volume, and lower motor kW. So at full speed the resulting energy efficiency is 18% better than the base. But when we slow the EC fan down to 80% speed, the energy efficiency increases to 83% higher than the base case.

The last test case placed the EC plug fans underneath the cooling unit, down in the raised floor. The capacity increased due to higher air volume, and the fan kW was again maxed out at about 9.5 kW. This results in a 28% increase in energy efficiency. When the speed is decreased, the energy savings are compounded, resulting in a 97% increase in energy efficiency versus the base case. Note that instead of increasing the fan speed for the underfloor case, we could have left the air volume and capacity constant; this would have resulted in a lower motor kW.
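The cube-law claim above can be checked with a few lines. This is a simplified sketch: it assumes sensible capacity scales roughly linearly with airflow, which is why its ~56% result comes close to, but does not exactly match, the ~59% measured at ETL:

```python
# Hedged sketch of the fan affinity law used in the ETL comparison.
def fan_power_ratio(speed_ratio):
    """Fan power varies with the cube of fan speed (affinity law)."""
    return speed_ratio ** 3

def scop_ratio(speed_ratio):
    """Assumption: sensible capacity scales ~linearly with airflow while fan
    power scales with the cube of speed, so SCOP improves by speed/speed**3."""
    return speed_ratio / fan_power_ratio(speed_ratio)

base_scop = 10.4                      # base case from the text: CW114, no VSD
print(fan_power_ratio(0.8))           # 0.512 -> ~51% of full-speed fan power
print(scop_ratio(0.8))                # 1.5625 -> ~56% gain (ETL measured ~59%)
print(round(base_scop * scop_ratio(0.8), 1))
```

The gap between ~56% and the measured 59% is expected: real coils gain some capacity from the higher return-air temperature differential at reduced airflow.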
Let's start with compressors. There are four technologies we're going to discuss: scroll compressors, tandem scroll compressors, semi-hermetic compressors with unloaders, and digital scroll compressors.
This is an animation of the digital scroll compressor.
Digital capacity modulation utilizes Copeland's axial compliance. Copeland scrolls have both axial and radial compliance. As opposed to other scroll machines with fixed throws, Copeland compressors are designed to use inherent forces from the discharge pressure to push the flanks and tip seals together. This allows for better sealing of the mating surfaces and higher efficiency. The force that keeps the scrolls engaged axially comes from a small cavity above the top scroll, which uses a fraction of the discharge pressure. By design, the scrolls can separate by about 1 mm: if the pressure in the small cavity is released, the scrolls separate and stop pumping. By modulating the pressure in the cavity we can modulate the pumping of the scrolls and so the capacity produced by the compressor. So you see here that we can deliver full capacity by leaving the pressure in the cavity and keeping the scrolls engaged, or unload the compressor by venting it.
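The modulation described above is time-based: over a repeating cycle, the scrolls are engaged for part of the time and separated for the rest, so average capacity is simply the loaded-time fraction. A minimal sketch, with the function name and the 15-second cycle chosen for illustration:

```python
# Hedged sketch of digital scroll capacity modulation (duty-cycle averaging).
# Assumption: scrolls engaged = ~100% pumping; cavity vented (scrolls
# separated ~1 mm) = ~0% pumping. Average capacity over one modulation
# cycle is then just the fraction of the cycle spent loaded.
def average_capacity(loaded_seconds, cycle_seconds):
    """Capacity delivered (% of full) over one modulation cycle."""
    return 100.0 * loaded_seconds / cycle_seconds

# e.g. scrolls engaged for 9 s of a 15 s cycle -> 60% capacity
print(average_capacity(9, 15))  # 60.0
```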
Both the digital and 4-step semi-hermetic increase reliability by reducing compressor cycling. The digital offers a standard 3-year warranty. By reducing the starts and stops, we decrease the wear and tear on the compressors. Note here that we are trying to link our discussions back to the brochure, which may be used at times to present the product.
As an example of how you can save energy, we will utilize the LSN program to do some energy analysis runs. I have extracted the energy usage information to make it a bit easier to read. This information is based on a 10-ton air-cooled unit running at 90% load, set at 75°F and 50% humidity. As you can see, both the digital and 4-step provide significant savings. Another thing that has to be taken into consideration is the cost of humidification: a normal side effect of the refrigeration process is dehumidification. Depending on the temperature and humidity set points, dehumidification takes place and we must then humidify to make up for it. Since the digital and 4-step operate at partial capacities, they will dehumidify less than a unit at full load.
Locating cooling units closer to the load, as in Liebert XD, reduces the energy required to move the air and results in less mixing of hot and cold air. Due to the much lower blower resistances with the cooling units located closer to the racks, fan power is typically 3 kW per 100 kW of cooling with Liebert XD, compared to 8.5 kW per 100 kW of cooling for traditional cooling. Microchannel coils provide minimal air-pressure-drop losses and improved thermal heat transfer, and there is no need to over-chill data centers to eliminate hot spots.
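The two fan-power figures quoted above imply a saving that is easy to quantify; the calculation below just restates those numbers, nothing more:

```python
# Hedged worked example using the per-100 kW fan-power figures from the text.
traditional_kw_per_100kw = 8.5   # traditional room-level cooling (from text)
xd_kw_per_100kw = 3.0            # Liebert XD close-coupled cooling (from text)

savings_kw = traditional_kw_per_100kw - xd_kw_per_100kw
savings_pct = 100 * savings_kw / traditional_kw_per_100kw
print(savings_kw)                # 5.5 kW saved per 100 kW of cooling
print(round(savings_pct, 1))     # ~64.7% less fan power
```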
The Liebert XD units are connected with the Liebert XD piping system, which makes it easy to plan and expand the system in response to a growing heat load. The key is to put the necessary piping in place in advance and then add cooling modules (with quick connects and flexible piping) and pump units or chillers as the need for more cooling capability arises. The current cooling modules are the XDO, XDV, and XDH. The module being introduced now is the XDR, which is located on the rear of the rack. Future solutions include embedded cooling and chip cooling; this technology can cool more than 100 kW per rack.
Key points: Here is a look at the results of implementing Energy Logic's strategies in terms of space, power, and cooling constraints. If you apply all 10 strategies in their totality, you will free up 65 percent of the space: from 5,000 square feet, you'll only need 1,768 square feet. And we are not assuming all racks are moved to high density; only half the racks are high-density at 12 kW per rack. If you went for higher-density racks, the space savings would be even greater. As it is, though, you are saving about two-thirds of the space. Your required UPS capacity will go down by 2 x 500 kVA, which means you have one-third of your total UPS capacity available for your growth and expansion projects. The load on the cooling plant will go down from 350 tons to 200 tons, so you are saving about 40 percent in terms of the utilization you can add. We did not take into account the further savings that could be had by optimizing the chilled water plant by installing VFDs.
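The space and cooling percentages quoted above can be checked directly from the before/after figures in the text:

```python
# Hedged check of the Energy Logic figures quoted in the notes above.
space_before, space_after = 5000, 1768        # square feet (from text)
cooling_before, cooling_after = 350, 200      # cooling-plant tons (from text)

space_freed_pct = 100 * (space_before - space_after) / space_before
cooling_freed_pct = 100 * (cooling_before - cooling_after) / cooling_before
print(round(space_freed_pct, 1))    # ~64.6 -> the "65 percent of the space"
print(round(cooling_freed_pct, 1))  # ~42.9 -> roughly the "40 percent" cited
```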
Hot spots, because at these densities we can't cool 10 kW per rack with traditional cooling.
Let's review these 3 scenarios. On the left is the open aisle. There is a lot of air that doesn't pass through the servers and is bypassed. This is probably what most sites have today: good, but it could be better. Basic containment, the middle graph, results in greater pressure under the floor and in the cold aisle. This results in a CFM reduction from the CRAC (good: this lowers the fan losses), more CFM across the servers (lower server delta-T), and more CFM leakage losses around and through the racks and through the floor. There is less air mixing, and the result is higher return temperatures back to the cooling unit and a nice boost in cooling efficiency. But there is still more air delivered than necessary. With SmartAisle, on the right side, we are adjusting the cooling to exactly match the needs within the containment: no more, no less. SmartAisle results in a better match between the server CFM requirements and the output of the CRAC. Also, due to less mixing, the return air to the CRAC is warmer, resulting in more efficiency from the CRAC. To achieve this kind of energy improvement, up to 33%, you need to use variable cooling technologies. You know about VFDs and EC fans on Liebert CW. You know about digital scroll on Liebert DS. What about controlling the air on DS …
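One way to picture "no more, no less" is a simple feedback loop that trims fan speed until supply airflow matches server demand. This is a hypothetical sketch, not Liebert's actual SmartAisle control logic; the function name, setpoint, gain, and speed limits are all illustrative:

```python
# Hypothetical sketch of SmartAisle-style airflow matching: a proportional
# loop on cold-aisle differential pressure. Positive pressure error means
# the CRAC is supplying more CFM than the servers are drawing.
def adjust_fan_speed(speed_pct, pressure_pa, setpoint_pa=5.0, gain=2.0):
    """Nudge fan speed toward the point where supply CFM matches server CFM."""
    error = pressure_pa - setpoint_pa          # +ve: oversupplying air
    new_speed = speed_pct - gain * error       # slow the fans when oversupplying
    return max(30.0, min(100.0, new_speed))    # clamp to a safe operating band

print(adjust_fan_speed(80.0, 7.5))  # oversupply  -> speed trimmed to 75.0
print(adjust_fan_speed(80.0, 3.0))  # undersupply -> speed raised to 84.0
```

Because fan power follows the cube law discussed earlier, even small trims like these compound into the large energy gains the slide attributes to SmartAisle.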
The issue is: what kind of tools do you have to manage this squeeze? What it comes down to is that the tools most companies have just can't really meet the challenge. (READ Slide) What we find is that there are spreadsheets, AutoCAD drawings, Visio diagrams, and people that we call "walking heads" because they carry everything in their heads, and they all have varying degrees of completeness. Where we are focused is on managing that data center as one single entity. We refer to it as Data Center Service Management, or "DCSM," and the solution we provide is just that …
Dashboards can be expressed in many different ways; this user uses kW to determine capacity, cost to run the data center, and estimated days available on generator.
Planning Palette: This palette is the primary display for selecting projects by completion date. It contains a calendar with references to current and future projects. The calendar days can contain icons that represent a roll-up of project status for that day. In addition to the calendar, there is a list of projects which is updated with a date selection. The user can create a new project or view an existing project by selecting the project name or by selecting Create New Project (+). If the user selects a project from the list, the current view will update to reflect the changes in the selected project and the Project Details Palette will be populated. Product strategy: aimed at the proactive segment; integrate with operations and with change management.
Talking Points:
The top concerns of the DCUG membership over the last 3 years have remained focused on 3 core categories:
- Availability
- Efficient use of resources: energy efficiency has certainly been the hot topic publicly, but cooling, power, and space constraints have also been key issues (looking at the rapid change in complexity, it is easy to understand why). This is also driving a closer working relationship between IT and facilities.
- Adequate monitoring & management capabilities: how can I manage all this complexity in an efficient and effective way?
<Optional slide to use when you want to summarize all categories with functional blocks at a glance. Use to explode into detail breakouts>