Deep Freeze - Design

A Liquid Cooling Solution for HP Cloud Computing

DEEP FREEZE™ and Nano-Cooling Technology: Next Generation Solution for Cooling Blade Servers

CASE STUDY & VALUE PROPOSITION

September 2011

Presented by:

©2011 Mobee Communications, LTD, Deep Freeze Technology Corporation, NGN Data Services Corporation & Global Access Advisors, LLC. All rights reserved. The information contained in this article is proprietary. As such, no part of this article may be copied or reproduced by any means, electronic or otherwise, without the express permission of Deep Freeze Technology Corporation, NGN Data Services Corporation and Global Access Advisors, LLC.
Deep Freeze™

The Deep Freeze™ blade server cooling concept is the chief component of an overall data center design strategy. It is a "cold-plate" technology evolution that is both "closed-loop" and "chassis-based", representing the most efficient cooling design in the market. It is an independent (after-market, retro-fit) product and a closed cooling system (100% self-contained), based on cold plate technology (metal composites as the cooling structure), using ionized water (a non-damaging, electro-sensitive fluid) circulating through a chassis-based cooling design (an actual blade-server component, replacing the relatively inefficient fan).

Beneficial Highlights

As a product, Deep Freeze™ is:
• an independent unit
• based upon a retro-fit design
• designed as an after-market unit serving the $6B blade server industry

The Deep Freeze™ product will:
• drastically reduce the maintenance costs of blade server management
• dramatically increase the efficiency of the data center computing power
• obviate the need for expensive CRAC units and other equipment
• facilitate the "green design" for data center construction and operation
Deep Freeze™ and Nano-Cooling Technology: Next Generation Solution for Cooling Blade Servers [1]

Executive Summary

It takes about 1,000 times more energy to move a data byte than it does to perform a computation with it once it arrives. Additionally, the time taken to complete a computation is currently limited by how long it takes to do the moving, all of which produces heat, which slows the processing even further.

Air-cooling can go some way to removing this heat, which is why computers have fans inside. Emerging technologies have begun to substitute liquid-cooling agents because a given volume of water can hold 4,000 times more waste heat than air.

Deep Freeze Technology Corp has developed a revolutionary liquid-cooling approach that does not involve connectivity to external CRACs (computer room air conditioners), chillers, etc.: its Direct Liquid Cooling Platform for the HP Matrix Blade Technology (Patent Pending): Deep Freeze™.

The Deep Freeze™ design obviates the need for CFD analysis and maximizes the power-to-cooling ratio while saving real estate. The company has prototyped a plug-in, after-market replacement for the CPU fans that incorporates the liquid cooling strategy. The beta model works for the HP ProLiant [2] series, but designs also exist for horizontal (across corporate offerings) and vertical (different alloys, densities, etc.) applications.

[1] A blade server is a server chassis housing multiple thin, modular electronic circuit boards, known as server blades. Each blade is an individual server, often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adaptor (HBA) and other input/output (IO) ports. Blades typically come with two advanced technology attachments (ATAs) or SCSI drives.
[2] HP holds the number 1 position in the worldwide server market with a 31.5% factory revenue share for 1Q11. HP's 10.8% revenue growth was led by increased demand for both its x86-based ProLiant servers and Itanium-based Integrity servers.
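The executive summary's 4,000-times figure can be sanity-checked from textbook volumetric heat capacities (density times specific heat). A minimal sketch, using standard room-temperature material properties rather than figures from this paper; it lands near 3,500x, the same order of magnitude as the claim:

```python
# Back-of-envelope check of the "water holds ~4,000x more waste heat than
# air" claim, using standard room-temperature material properties.
# Volumetric heat capacity = density * specific heat.

RHO_WATER, CP_WATER = 1000.0, 4186.0  # kg/m^3, J/(kg*K)
RHO_AIR, CP_AIR = 1.2, 1005.0         # kg/m^3, J/(kg*K)

water_per_m3 = RHO_WATER * CP_WATER   # J absorbed per m^3 per kelvin
air_per_m3 = RHO_AIR * CP_AIR

print(f"Water: {water_per_m3:,.0f} J/(m^3*K)")
print(f"Air:   {air_per_m3:,.0f} J/(m^3*K)")
print(f"Ratio: ~{water_per_m3 / air_per_m3:,.0f}x")  # roughly 3,500x
```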
Background

Increasing operational expenses (energy costs [3], space provisioning [4], etc.) are forcing companies to cool their data centers more efficiently. The ubiquitous blade server [5] exacerbates the problem. A single blade rack consumes more than 25 kW, four times the power required for a standard server. Much of that energy is converted to heat, so cooling blade servers presents its own unique set of challenges for the temperature maintenance strategies of a server room or data center.

With traditional standard rack servers, cooling was often a function of offsetting temperature variations: by assessing hardware deployment, the simple calculation of heat produced would yield the resulting "cool air" required to be pumped into the environment to maintain temperatures within the hardware's operating limits.

But mixing hot and cold air is exactly the wrong approach to cooling blade servers. Specific amounts of cold air need to be deployed to the blade rack directly and quickly, while the heated air produced by energy consumption must be ventilated quickly away from the rack.

Because blade racks require more precise ventilation, computational fluid dynamics (CFD) is often used to model airflow movements through a data center. By assessing the variables of the server area's physical properties and cooling capabilities, CFD can predict the appropriate airflow mixture between hot and cold air, and thus accurately predict the amount of cold air necessary to cool the datacenter and the most efficient pathways of cold air circulation directly to the servers.

[3] Blade servers allow more processing power in less rack space, simplifying cabling (up to an 85% reduction) and reducing power consumption. The advantage of blade servers comes not only from the consolidation benefits of housing several servers in a single chassis, but also from the consolidation of associated resources (like storage and networking equipment) into a smaller architecture that can be managed through a single interface.
[4] U is the standard unit of measure for designating the vertical usable space, or height, of racks (metal frames designed to hold hardware devices) and cabinets (enclosures with one or more doors). This unit of measurement refers to the space between shelves on a rack. 1U is equal to 1.75 inches. For example, a rack designated as 20U has 20 rack spaces for equipment and 35 (20 times 1.75) inches of vertical usable space. Rack and cabinet spaces, and the equipment which fits into them, are measured in U.
[5] The leading manufacturers in the $5.6B blade server technology market (in order of market share): HP, IBM, Dell, Cisco, Siemens, Fujitsu, Oracle, Sun, and NEC.
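Footnote 4's rack-unit arithmetic reduces to a one-line conversion. The sketch below simply restates the footnote's numbers; the helper name is illustrative, not an industry API:

```python
# Rack-unit arithmetic from footnote 4: 1U = 1.75 inches of vertical space.

INCHES_PER_U = 1.75

def usable_height_inches(rack_u: int) -> float:
    """Vertical usable space of a rack of the given U height, in inches."""
    return rack_u * INCHES_PER_U

print(usable_height_inches(20))  # 35.0 inches, the footnote's 20U example
print(usable_height_inches(42))  # 73.5 inches for a common 42U blade rack
```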
Performing CFD calculations can be quite challenging for server arrays deploying both blade and traditional servers. Because blade servers require more directed cooling, mechanical engineers had to dispense with traditional datacenter airflow cooling approaches. Reliance on "raised floor" concepts (cold air pumped through perforated floors) gave way to "in-row cooling" (alternating columns of cold and hot air, with the cold air forced horizontally, from the back of the rack to the front). Currently, two emerging trends seem to be gaining favor: indirect liquid cooling and direct immersion techniques.

Indirect Liquid Cooling

With liquid cooling, cold fluid, usually water, is piped to a special water-cooling heat sink, called a water block, on the processor. While a standard heat sink has metal fins to increase its surface area with the air around the server, a water block consists of a metal pipe that goes through a conductive metal block. The processor heats the block; cold water travels into the block, cooling it back down and warming the water, which is then piped out to a radiator, which cools it again. As a much better conductor of heat than air, water is a more efficient cooling agent. And because water goes straight to the server, there is no need to factor in hot-cold mixing or CFD.

Liquid cooling is common in supercomputing and high performance computing (HPC), where facility operators manage computing clusters producing high heat loads. Rising heat densities have spurred predictions that liquid cooling would be more widely adopted, but some data center managers remain wary of having water near their equipment.

While liquid cooling is a proven technology, it does require a fair degree of capital overhead. In addition to server room air ducts and electric sockets for cooling units, liquid cooling requires the installation of piping and failsafe systems (in case of a leak or other malfunction). The real estate benefits sought through the utilization of blade server technology are often offset by the additional space required by the extra cooling units needed for liquid cooling.
Direct Immersion Techniques

Direct immersion (submersibles) is another innovation in blade server cooling. Blade server racks are entirely submersed into tubs of cooled, non-static mineral oils.

Though not entirely "revolutionary", proponents suggest that mineral oil coolants have been "optimized for data centers" and can support heat loads of up to 100 kilowatts per 42U rack, far beyond current average heat loads of 4 to 8 kilowatts a rack and high-density loads of 12 to 30 kilowatts per rack. These systems are designed to comply with fire codes and the Clean Water Act, and integrate with standard power distribution units (PDUs) and network switches.

Some mineral oil-style coolants can be messy to maintain. Proponents say the coolant can be drained for enclosure-level maintenance, and individual servers can be removed for work. Detractors suggest that the real estate utilization of horizontal bathing tubs is substituting one issue for another.

Rather than approaching the challenge of exploding heat removal requirements using limited, traditional measures, what's needed is a shift in approach. Because the constant in data center heat loads has been rapid, unpredictable change, new approaches to cooling are replacing traditional measures as best practice. Cooler parts last longer: when parts stay below the specified maximum thermal limit, they operate more consistently, and voltage fluctuations that can lead to data errors and crashes are minimized.

A 2010 survey of nearly 100 members of the Data Center Users Group revealed that data center managers' top three concerns were density of heat and power (83 percent), availability (52 percent), and space constraints/growth (45 percent). Answering these concerns requires an approach that delivers the required reliability and the flexibility to grow, while providing the lowest cost of ownership possible.

The industry seeks a solution that:
• can effectively and efficiently address high-density zones
• supports flexible options that are easily scalable
• incorporates technologies that improve energy efficiency, and
• becomes an element of a system that is easy to maintain and support

Deep Freeze™ is one such viable solution.
Deep Freeze Technical Approach

Deep Freeze™ is predicated upon cold plate technology: a liquid-cooled heat dissipater. The technology uses an aluminum or other alloy "plate" containing internal tubing through which a liquid coolant is forced, to absorb heat transferred to the plate by transistors and other components mounted on it (Fig. 1).

For blade servers, with their compact design and increasing power densities, cold plates represent viable contact cooling mechanisms. When air-cooled heat sinks are inadequate, liquid-cooled cold plates are the ideal high-performance heat transfer solution.

Cold plate technologies utilize varying geometries and coolants to provide a range of thermal performances. The lower the thermal resistance, the better the performance of the cold plate.

As a chassis or component-level approach, Deep Freeze™ represents a superior technology.

Fig. 1. Design
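The thermal-resistance point lends itself to a short worked example. Cold plates are commonly characterized by a thermal resistance R_th in degrees C per watt, with case temperature approximately equal to coolant inlet temperature plus R_th times the heat load. The resistance and wattage values below are illustrative assumptions, not Deep Freeze™ specifications:

```python
# Illustrative cold-plate comparison (assumed values, not vendor data).
# A plate's performance is summarized by its thermal resistance R_th:
#   case temperature = coolant inlet temperature + R_th * heat load.

def case_temp_c(coolant_in_c: float, r_th_c_per_w: float, load_w: float) -> float:
    """Steady-state case temperature for a cold plate with resistance R_th."""
    return coolant_in_c + r_th_c_per_w * load_w

# Assumed example: a 130 W processor with 25 degC coolant at the inlet.
for r_th in (0.05, 0.15):  # degC/W; lower resistance means a better plate
    t = case_temp_c(coolant_in_c=25.0, r_th_c_per_w=r_th, load_w=130.0)
    print(f"R_th = {r_th:.2f} degC/W -> case temperature {t:.1f} degC")
```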
The Deep Freeze™ design contemplates using a copper fluid path and ionized water as the cooling fluid. Like a car radiator, the liquid CPU design circulates a cooled liquid through a heat sink attached to the blade processor. Deep Freeze™ technology passes ionized water, which acts as a heat sink, through its module (Fig. 2). The heat is transferred from the hot processor to the heat sink module. The hot liquid then moves through the Deep Freeze™ heat sink module and into its unit, transferring the heat into the ambient air outside the blade.

The heat exchange takes place inside the cooled interior of the Deep Freeze unit, and the cooled liquid travels back into the blade through the heat sink module to continue the process. An essential aspect of the Deep Freeze™ technology is that cooling occurs in a close-coupled environment. This allows the heat exchange between the nano-chiller and the Deep Freeze unit to occur without heating the room outside the unit.

Fig. 2. Heat Dissipation Principle
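The closed loop described above obeys a simple energy balance: the heat carried away equals mass flow times specific heat times the coolant temperature rise (Q = m_dot * c_p * delta_T). A minimal sketch with assumed processor power and temperature rise, since neither figure is published here:

```python
# Energy-balance sketch for the closed liquid loop (assumed figures).
# Q = m_dot * c_p * delta_T gives the water flow needed to carry a
# processor's heat away at a chosen coolant temperature rise.

CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_kg_s(heat_w: float, delta_t_k: float) -> float:
    """Mass flow of water needed to absorb heat_w with a delta_t_k rise."""
    return heat_w / (CP_WATER * delta_t_k)

# Assumed: a 130 W blade processor and a 5 K allowable coolant rise.
flow = required_flow_kg_s(heat_w=130.0, delta_t_k=5.0)
print(f"Required flow: {flow * 1000:.1f} g/s "
      f"(~{flow * 60:.2f} L/min, taking 1 kg of water as 1 L)")
```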
Benefits of Closed Loop Cooling

There are basic fundamentals to contemporary data center management: (1) the higher power consumption of modern blade servers produces more heat [6]; (2) almost all power consumed by rack-mounted equipment is converted to sensible heat; and (3) this sensible heat increases the temperature in the environment.

A 2010 HP Technical Study [7] surveyed the various cooling strategies and their effects upon a representative example of power consumption in a 42U IT equipment rack: ProLiant DL160 G6 1U servers (42 servers @ 383 W per server). The cooling requirement was computed:

Heat Load = 42 servers × 383 W × 3.413 BTU/hr per watt = 54,901 BTU/hr
54,901 BTU/hr ÷ 12,000 BTU/hr per ton = 4.58 tons

HP determined that the increasing heat loads created by the latest server systems require more aggressive cooling strategies than the traditional open-area approach (Fig. 3).

Figure 3. Cooling strategies based on server density/power per rack (HP 2010). [The chart plots density (nodes per rack) against power (8 to 40 kW per rack), ranking strategies from traditional open-area cooling through cold/hot aisle containment and supplemented data center cooling to closed-loop cooling and, at the highest densities, future chassis/component-level cooling technologies.]

[6] The sensible heat load is typically expressed in British Thermal Units per hour (BTU/hr) or watts, where 1 W equals 3.413 BTU/hr. The rack's heat load in BTU/hr can be calculated as follows: Heat Load = Power [W] × 3.413 BTU/hr per watt. In the United States, cooling capacity is often expressed in "tons" of refrigeration, which is derived by dividing the sensible heat load by 12,000 BTU/hr per ton.
[7] "Cooling Strategies for IT Equipment" (September 2010). Hewlett Packard Development Company.
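Footnote 6's formulas and the HP study's rack numbers can be verified in a few lines; the sketch below reproduces the 54,901 BTU/hr and 4.58-ton results from the figures quoted above:

```python
# Reproduces the rack heat-load arithmetic cited above (footnote 6 constants,
# HP 2010 study figures: 42 ProLiant DL160 G6 servers at 383 W each).

SERVERS_PER_RACK = 42
WATTS_PER_SERVER = 383.0
BTU_PER_WATT_HR = 3.413   # 1 W of sensible heat = 3.413 BTU/hr
BTU_PER_TON = 12_000.0    # 1 ton of refrigeration = 12,000 BTU/hr

def rack_heat_load_btu_hr(servers: int, watts_each: float) -> float:
    """Sensible heat load of a fully populated rack, in BTU/hr."""
    return servers * watts_each * BTU_PER_WATT_HR

def cooling_tons(btu_hr: float) -> float:
    """Refrigeration capacity required to remove the given heat load."""
    return btu_hr / BTU_PER_TON

load = rack_heat_load_btu_hr(SERVERS_PER_RACK, WATTS_PER_SERVER)
print(f"Heat load: {load:,.0f} BTU/hr")                    # ~54,901 BTU/hr
print(f"Cooling required: {cooling_tons(load):.2f} tons")  # 4.58 tons
```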
Of the cooling strategies commercially available, HP concluded that closed loop cooling is "the best solution for high-density systems consuming many kilowatts of power. These systems have separate cool air distribution and warm air return paths that are isolated from the open room air. Closed-loop systems typically use heat exchangers that use chilled water for removing heat created by IT equipment. Since they are self-contained, closed-loop cooling systems offer flexibility and are adaptable to a wide range of locations and environments. Closed-loop cooling systems can also accommodate a wide range of server and power densities."

Deep Freeze™ is a closed-loop cooling design which performs at the chassis or component level. It is the "future cooling design" predicted in the HP study.

Competitive Landscape

Currently, there are three entrants in the blade server cooling technology space suggesting variations on the "future cooling design" theme.

1. IBM: In July 2010, IBM announced the successful pilot launch of its newly developed zero-emissions liquid-cooled blade server: Aquasar. IBM worked closely with Wolverine's MicroCool Division to develop innovative liquid cooling components within this new high performance computer. It consumes 40 percent less energy compared to a similar system using air-cooling technology. The IBM blade server relies upon a proprietary MicroCool "cold plate" and integrated Wolverine copper liquid cooling loops. The design claims to maintain an entire electronic footprint below 80 degrees C, with a 60 degrees C inlet fluid made up of water. There have been several challenges to IBM's "green assertions" (www.flickr.com/photos/ibm_research_zurich/4537326383/). The pilot operation also utilizes the waste heat from the computer to warm the external structures. IBM collaborated for over three years, at a cost in excess of $22M.

2. Google: In 2009, Google patented a "server sandwich" design in which two motherboards are attached to either side of a liquid-cooled heat sink. Drawings submitted with the patent illustrate Google's design and how it might be implemented in a data center (http://www.datacenterknowledge.com/archives/2010/07/06/google-patents-liquid-cooled-server-sandwich/).
The diagram depicts the "server sandwich" assemblies deployed in a row of racks, with each assembly connected to supply and return pipes for liquid cooling, which are housed in the hot aisle. The illustration of the heat sink provides a view of the grooves where processors for the motherboards would fit onto either side, allowing the heat sink to cool two motherboards at once.

The liquid cooling design patented by Google features custom motherboards with components attached to both sides. Heat-generating processors are placed on the side of the motherboard that comes in contact with the heat sink, which is an aluminum block containing tubes that carry cooling fluid. Components that produce less heat, like memory chips, are placed on the opposite side of the motherboard, adjacent to fans that provide air-cooling for these components.

Motherboards are attached to either side of the heat sink, creating a "server sandwich" assembly that can be housed in a rack. The diagrams submitted with the patent depict cabinets filled with 10 of these liquid-cooled assemblies, suggesting each takes up 4U in a rack. Similar to cold plate technology, the heat sink can cool heat loads of up to 80 kilowatts per rack in some implementations. Google's patent says the heat sink could be configured to use either chilled water or a liquid coolant.

3. Hardcore Computers: In April 2010, Hardcore Computer, Inc., announced the launch of Liquid Blade™, the first total liquid submersion blade server. The initial Liquid Blade™ server platform, which is powered by two Intel® 5500 or 5600 series Xeon® processors running on an Intel® S5500HV reference motherboard, addresses several major datacenter challenges: power, cooling and space. Hardcore Computer's patented technology submerges all of the heat-producing components of the Liquid Blade.

Hardcore's Liquid Blade technology contends that it is 1,350 times more efficient than air at heat removal and increases compute density because far less space is required between components. With little heat escaping into the datacenter, the need for air conditioning and air moving equipment is minimized. The net result is a much smaller physical and carbon footprint for the datacenter. As an added benefit, there is no need for special fire protection systems to cover the servers.
This is because all of the blade components are submerged, so there is no oxygen exposure; without oxygen there is no potential for sustainable fire.

The major criticism of the Hardcore Computer product is that it relies extensively on proprietary parts. In order to upgrade, most parts will need to be purchased through Hardcore Computer, thus limiting consumer options. Other complaints range from "sizeable footprint" to "messy operations".

Deep Freeze™: A Comparative Study

In October 2010, Hardcore Computer engaged a third-party vendor to develop construction budgets for two 3.2 megawatt datacenters: one using air-cooling architecture and the other equipped with Liquid Blade™ servers. In that study of equivalent compute power facilities, each datacenter was designed to house 6,397 servers utilizing the same 2-CPU-per-server technology.

Not surprisingly, the Liquid Blade™ servers significantly outperformed their air-cooled competitor in the three key areas: physical space needs, power density and cooling load.

In June 2011, Deep Freeze™ was comparatively tested, using the same methodology and criteria. The results are as follows:

Comparative Capacity

Deep Freeze requires far fewer physical servers due to its virtualization methodology.

Table 1. Comparative Capacity Analysis
Auxiliary Equipment

Deep Freeze's™ closed loop, chassis/component design obviates the need for substantial investments in traditional CRAC architectures.

Table 2. Auxiliary Equipment Comparison
Cooling Load

As each source of heat generation was examined and compared, the cooling loads of the exterior walls, host, lighting, servers and the UPS system were accounted for. As demonstrated in Table 4, the chiller capacity required for both the Liquid Blade™ and Air-Cooled suites is substantially greater than that of the chiller-less solution, the primary reason being that Deep Freeze™ facilitates a smaller footprint to cool, while still maintaining the data center computing capacity.

Table 4. Cooling Load Comparison

Power Consumption

Comparing the Deep Freeze™ design with the Air-Cooled and Liquid Blade™ suites illustrates that the latter suites' costs from auxiliary equipment are significantly higher.

Table 5. Power Consumption
Construction Costs

Table 6 compares construction costs. Though all suites have identical computing capacity, the capital costs to construct both the Air-Cooled and the Liquid Blade™ architectures are, on average, 175% higher than those of the Deep Freeze™ suite.

Table 6. Construction Costs Comparison

Total Cost of Ownership (TCO)

The Deep Freeze™ approach to data center design, architecture and cooling methodologies (after-market, retro-fit design) results in a significant overall savings in estimated TCO. As a retro-fitted, after-market product, Deep Freeze™ units (replacing the fans installed at blade-server manufacture) will substantially decrease the TCO due to the power efficiencies realized and cooling expenditures reduced.

Table 7. TCO
Deep Freeze™: The Value Proposition Realized

Green Design and the "Whole System Approach"

Power and cooling issues can be articulated separately for the purpose of explanation and analysis, but effective deployment of a total virtualization solution requires a system-level view. The shift toward virtualization, with its new challenges for physical infrastructure, re-emphasizes the need for integrated solutions using a holistic approach, that is, considering everything together and making it work as a system [8].

All system components should communicate and interoperate. Demands and capacities must be managed in real time, preferably at the rack level, to ensure efficiency.

A recent and significant datacenter science study concluded that "Virtualization is an undisputed leap forward in data center evolution - it saves energy, it increases computing throughput, it frees up floor space, it facilitates load migration and disaster recovery. Less well known is the extent to which virtualization's entitlement can be multiplied if power and cooling infrastructure is optimized to align with the new, leaner IT profile. In addition to the financial savings obtainable, these same power and cooling solutions answer a number of functionality and availability challenges presented by virtualization." [9]

Two major challenges that virtualization poses to physical infrastructure are the need for dynamic power and cooling systems, and the rack-level, real-time management of capacities.

These challenges have been met by Deep Freeze's™ closed-loop, chassis/component cooling architecture and its real-time capacity management module. These solutions are based on design principles that resolve functional challenges, reduce power consumption, and increase efficiency.

[8] Niles, Suzanne. "Virtualization: Optimizing Power and Cooling to Maximize Benefits", 2011. APC Data Center Science Center.
[9] Ibid., at page 19.
The comprehensive Deep Freeze™ solution is a self-sufficient green-energy data center that uses ultra-efficient cooling methods for both blade and structure design. This challenge is met by taming the cooling plant's energy consumption and by designing a self-sufficient green building that uses alternative energy solutions, including solar energy, to offset auxiliary energy requirements such as lighting.

The second aspect involves cooling the blades at the CPU level, this being the most efficient method to extract heat from the blade and, more importantly, from the rack. As part of the cooling solution, the UPS and storage components were physically placed in a separate area in order to control cooling with a variable airflow and to maintain a constant temperature in the surrounding space.

Deep Freeze™ data centers account for the integration of solar and natural gas solutions as an integral part of the self-sustained ability of the data center. By utilizing grid-tie solar systems and natural gas generators, the load is reduced both on and off the grid. By diverting the solar production to the UPS network and by simulating the generators on a grid-like infrastructure, the effective redirection of solar-produced electricity through the generator into the UPS packs reduces the load on the generator system.

This holistic technology uses no moving components and requires minimal energy resources. Since the Deep Freeze™ modules do not themselves consume energy in the data center, the sole energy usage comes from computing power.

In addition to offering a comprehensive model for new green energy data centers, Deep Freeze is also capable of reducing upgrade expenditures on existing equipment, making it the ideal solution for existing data centers looking to drastically reduce maintenance costs by optimizing cooling without replacing costly equipment. Deep Freeze close-coupled liquid CPU cooling allows for optimization of space in existing data centers, resulting in an increase in energy savings and the elimination of the cost of building additional space.
SIPNOC CASE STUDY
Antonis Valamontes, President, Mobee Communications, LTD

Mobee Communications, Ltd contracted NGN Data Services in 2010 to deploy the Deep Freeze™ product in their SIPNOC and to design a Tier-3 class data center in the United States. The data center had to reliably support a 1,500-server computing capacity, while integrating solar and natural gas options as part of a self-sustaining micro grid design within a 1,500 sq. ft. environmental footprint.

Mobee Communications, LTD is a venture-backed start-up that offers mobile IP telephony through its Virtual SIPNOC design. By designing their SIPNOC site with Deep Freeze™ technology, NGN Data Services provided Mobee with the confidence that Mobee's environmental needs regarding power, cooling, humidity and micro grid capability would be met.

The primary goal was to build a self-sustainable facility that could operate efficiently on and off the grid. Building the facility in Florida presented its own set of unique challenges due to the intense heat and humidity levels. NGN's solution was to integrate solar energy and natural gas power generation as the primary sources of energy, while designing the grid as a backup system; i.e., what could be considered an "on-grid UPS", making the grid available should we choose to use it, but not mandatory.

In effect, NGN created the "micro grid". The micro grid approach is extremely cost efficient due to its ability to build excess energy and then push it to the grid. Everything produced in the system is made for consumption and not for return.

The design incorporates a two-shell building, a building within a building, creating a 6" air pocket between the outer and inner walls. The purpose of this approach was to create a natural insulator, the same way feathers create tiny air pockets in sleeping bags and comforters to insulate and reduce the escape of heat. For the under-lining of the roof space, NGN used an "icing" approach to create an R-factor and further insulate the building to prevent any outside air from entering.

The need to retain the computing capacity in a confined space, as well as Mobee's specific requirements for virtualization, made the HP Matrix blade system with the C7000 enclosures our top choice.
The HP Matrix's superior energy management solution aligned perfectly with the NGN Data Services green energy data suite model, and by combining it with the Deep Freeze™ solution, cooling optimization with no additional energy consumption costs was achieved.

While designing the server room, NGN isolated the servers in their own space. Every other accessory, hardware and storage device was designated for assignment to a smaller space with a controlled air environment. When completed, the network had an estimated storage capacity of 850 terabytes and a computing capacity of over 900 virtual servers, all with dedicated NIC interfaces.

The data center server room where heat is generated by the blades is referred to as the "hot room"; the adjacent room where the storage and UPS are located is referred to as the "cold room." The temperatures in both the hot and cold rooms are maintained at a constant 70°F. The temperature is controlled electronically by variable airflow vents located throughout the building.
Since the Deep Freeze™ modules were deployed to extract heat at the CPU level, the need to use large, heavy chillers to cool down the server space was eliminated. NGN, anticipating the higher heat of the server room, designed the adjacent room to act as a natural heat exchanger and divided the two rooms with a glass wall, resulting in an efficient heat "exchanger" incorporating a holistic method of temperature control that required no additional energy consumption. As a result, the glass became a natural heat exchanger transferring 14,000 BTU/hr of heat, the equivalent of cooling 1,900 virtual servers or two full racks of eight C7000 enclosures. NGN also selected a green, carbon-neutral fire suppression system called Aero-K that creates zero ozone depletion, zero ecological hazards and zero contribution to global warming.

A main objective of Mobee Communications, LTD was to become a premier mobile IP carrier and to operate a globally distributed system. Our need for extensible grid computing in a totally virtualized environment that could be rapidly deployed anywhere in the world, with no loss in reliability or performance, was actualized by the Deep Freeze Technology Corp's green energy micro grid model.
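For readers who want to sanity-check the glass-wall figure, a steady-state conduction estimate (Q = k * A * delta_T / t) lands in the same range. The pane area, thickness, and room-to-room temperature difference below are illustrative assumptions on our part; NGN's actual dimensions are not given in the text:

```python
# Hypothetical conduction estimate for the glass partition described above.
# Q = k * A * delta_T / t. Area, thickness, and temperature difference are
# illustrative assumptions, not NGN's measured figures.

K_GLASS = 0.96       # thermal conductivity of glass, W/(m*K)
W_TO_BTU_HR = 3.413  # 1 W = 3.413 BTU/hr

def conduction_w(area_m2: float, thickness_m: float, delta_t_k: float) -> float:
    """Steady-state conductive heat flow through a flat pane, in watts."""
    return K_GLASS * area_m2 * delta_t_k / thickness_m

# Assumed: a 12 m^2 pane, 12 mm thick, 4.3 K across the hot/cold rooms.
q = conduction_w(area_m2=12.0, thickness_m=0.012, delta_t_k=4.3)
print(f"~{q:,.0f} W = ~{q * W_TO_BTU_HR:,.0f} BTU/hr")  # near the cited 14,000 BTU/hr
```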
Conclusion

Deep Freeze™ & Green Data Center Architecture: The Value Proposition Defined

Temperature management is, in itself, a comprehensive solution for self-sustaining green energy data centers. Deep Freeze's™ "plug & play", retro-fitted liquid cooling technology provides an after-market, closed-loop liquid cooling solution at the CPU level. Deep Freeze™ obviates the need to replace existing blade servers and reduces dependencies upon external CRAC architectures.

Deep Freeze™ technology is the prime cost-effective cooling technology in the industry today, representing the paradigm shift in deploying and cooling high-performance computing environments. No other cooling method delivers such a marked reduction in cost, energy consumption and space, simultaneously providing the ultimate green energy, eco-friendly data suite solution.

Beyond the benefits of Deep Freeze™ as the ultimate unified solution to cooling optimization and overhead cost reduction, the virtualization methodology and "green data center architecture" save money, increase computing power and conserve energy. By offering environmentally and spatially conscious solutions, Deep Freeze™ nano-chiller technology has become the next evolution in green energy data centers.

The Deep Freeze™ CPU chilling technology + the NGN virtualization methodology + the NGN "green" data center architecture model = an across-the-board solution for reducing both the costs and the environmental footprint of high performance computing. Benefits include:

• Environmentally-friendly approach/design
• Enhanced space, performance, efficiency and liquid cooling
• Energy selective: deployable in areas where energy is limited or expensive
• Increases capacity of existing stand-alone data centers
• Designs can be rapidly deployed as "one-offs" or in pod-like units
• Ideal for advanced military applications or natural disaster recovery efforts
Contact Information

Deep Freeze™
c/o Global Access Advisors
info@globalaccessadvisors.com
