How Row-based Data Center Cooling Works
White Paper 208, Revision 0
Executive summary
by Paul Lin and Victor Avelar

Row-based data center cooling is normally regarded as a “cold air supply” architecture that uses row-based coolers. However, row-based cooling is actually a “hot air capture” architecture that neutralizes hot air from IT equipment before it has a chance to mix with the surrounding air in the room. This paper discusses common misconceptions about row-based cooling, explains how row-based cooling removes hot air, and describes key design attributes that maximize the effectiveness of this approach.

White Papers are now part of the Schneider Electric white paper library produced by Schneider Electric’s Data Center Science Center, DCSC@Schneider-Electric.com.
Introduction

Close-coupled cooling, row-oriented architecture, and in-row cooling are some common names used to describe row-based cooling in data centers. Throughout this paper the term row cooler¹ will be used to describe this architecture. In addition, the term “pod” will be used to describe how row coolers are typically deployed (see sidebar for definition).

Although row-based cooling technology has been around for about 10 years, there is still confusion surrounding how this type of cooling works. Specifically, there are three common misconceptions about how it works and how it is deployed:

• Misconception 1: Row coolers require turning vanes. It is commonly believed that row coolers require turning vanes to direct cold air to the front of the racks. In fact, some data center designers, operators, and manufacturers design turning vanes at the outlet of row-based cooling equipment to try to blow cold air directly into the racks.

• Misconception 2: Row coolers are needed in every row. It is commonly believed that every row in the data center requires row cooler(s). It is counterintuitive to think that you can cool an entire pod if all row coolers are in one row and none in the opposing row.

• Misconception 3: Row coolers can’t cool loads outside of their pod. It is commonly believed that row coolers can only cool the loads inside the pod they are located in. Similarly, it is often thought that extra row-based cooling capacity in one pod cannot help cool another pod.

This paper uses scientific evidence and the core principle of hot air capture to explain why these are misconceptions. The paper then describes three key row-based cooling design attributes that maximize hot air capture. For more information on the fundamentals of row-based cooling, see White Paper 130, Choosing between Room, Row, and Rack-based Cooling for Data Centers.
The goal of any data center cooling system is to remove the heat added to the air by the IT
equipment. A metric for measuring the effectiveness of how hot IT exhaust air is captured by
cooling equipment (or how cold air is supplied to IT inlets) is called the capture index2
.
Capture index is based solely on the airflow patterns associated with the supply of cold air to,
or the removal of hot air from a rack. The capture index is typically a rack-by-rack metric and
has values between zero and 100%; higher values generally imply good cooling performance
and scalability of a cooling architecture.
The hot air capture index is the fraction of hot air exhausted by IT equipment that is captured directly by the row coolers within the same pod as the IT equipment. This is the principal metric that quantifies the effectiveness of row-based cooling. This is illustrated in Figure 1, where 78% of the hot IT exhaust air is captured by the cooler. Figure 2 shows a CFD simulation of the temperature contours of a data center with row coolers.
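As a simple illustration of the bookkeeping behind this metric, the sketch below computes a rack-by-rack hot air capture index from airflow values and checks it against a 90% design target. All rack names and airflow numbers here are hypothetical examples, not data from this paper:

```python
# Sketch: hot air capture index as the fraction of each rack's exhaust
# airflow that is captured directly by row coolers in the same pod.
# All rack names and airflow numbers below are made-up examples.

def hot_air_capture_index(exhaust_cfm: float, captured_cfm: float) -> float:
    """Fraction (0-100%) of a rack's hot exhaust captured by in-pod coolers."""
    if exhaust_cfm <= 0:
        raise ValueError("rack exhaust airflow must be positive")
    # Captured airflow can never exceed what the rack exhausts.
    return 100.0 * min(captured_cfm, exhaust_cfm) / exhaust_cfm

racks = {                      # rack -> (exhaust airflow, captured airflow), CFM
    "R1": (800.0, 780.0),
    "R2": (800.0, 624.0),      # 78% captured, as in the Figure 1 example
    "R3": (600.0, 480.0),
}

for name, (exhaust, captured) in racks.items():
    ci = hot_air_capture_index(exhaust, captured)
    status = "OK" if ci >= 90.0 else "review layout"
    print(f"{name}: capture index {ci:.0f}% ({status})")
```

A rack falling below the target flags a layout that needs changes (cooler placement, blanking panels, containment) before deployment.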
¹ Note that the principles in this paper are described with the assumption that the row coolers are floor-mounted as opposed to ceiling-mounted.
² J.W. VanGilder, S.K. Shrivastava, “Capture Index: An Airflow-Based Rack Cooling Performance Metric,” ASHRAE Transactions 2007, Volume 113, Part 1.
What is a data center “pod”?

A data center pod is a cluster of IT cabinets combined with power and cooling infrastructure that is deployed as a unit. Rooms are planned in advance for a number of pods, but the pods may be separately deployed or upgraded over time. Pods are typically assembled on-site in a room to a standard design, but may be partially or extensively pre-manufactured. In its most common form, a pod is a pair of rows of cabinets sharing a hot aisle. Pod-based design is a recommended best practice for larger data centers.
How Row-based Data Center Cooling Works (Schneider Electric Data Center Science Center, Rev 0)
Figure source: J.W. VanGilder, S.K. Shrivastava, Capture Index: An Airflow-Based Rack Cooling Performance Metric
A useful application of the hot air capture index is optimizing the placement of racks and cooling units in a pod. The design goal is to ensure that all exhaust air is captured by the cooling units so that there is no net heating of the room. Rack-by-rack capture indices explicitly show how much of each rack’s airflow is captured, so that design changes can be iterated until an acceptable solution is found. A hot air capture index of at least 90% is the general design target indicating an effective pod layout. The following design practices can help attain or exceed this value.
• Use blanking panels and brush strips – IT exhaust air can escape the hot aisle through open unused “U” positions in IT racks and through open areas around cable penetrations at the top and bottom of the rack. Installing blanking panels in open “U” positions and brush strips around cable penetrations is very effective at increasing the hot air capture index. For more information on airflow management with blanking panels, see White Paper 44, Improving Rack Cooling Performance Using Airflow Management Blanking Panels.
• Optimize row cooler placement – Row coolers should be placed in between IT racks
and generally not at the end of a row in order to maximize hot air capture. For uniform
rack loading, the coolers should be distributed evenly along the length of the hot aisle.
For non-uniform rack loading, coolers should be clustered closer to the higher loads.
• Use side air distribution units – Many types of switches and routers employ side-to-side airflow. This airflow configuration is generally incompatible with the typical front-to-back airflow of IT enclosures. Side air distribution units allow switch and router enclosures to be placed side by side while still maintaining proper airflow, thereby consuming less floor space. See White Paper 50, Cooling Solutions for Rack Equipment with Side-to-Side Airflow.

Figure 1: Illustration of hot air capture index
Figure 2: CFD simulation illustrating hot air capture (the center aisle is the hot aisle; the blue objects represent row coolers)
• Avoid fan trays – Fan trays or roof fans employed in racks can actually degrade air
management as the induced airflow patterns are incompatible with the front-to-back IT
airflow.
• Use deeper IT racks – Over the last 10 years, IT equipment has trended toward deeper form factors. Using shallow-depth racks (e.g., 900 mm / 35 in) with this IT equipment can constrict the movement of hot exhaust air out of the back of the rack. Deeper racks allow more room for cabling, which decreases resistance to airflow.
• Use air containment – The further row coolers are from IT racks, the more difficult it is to prevent IT exhaust air from recirculating back into IT equipment inlets. Containing the hot aisle with row coolers can improve cooling system efficiency, especially at densities below 3 kW/rack. For more information on airflow management and containment, see White Paper 135, Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency, and White Paper 153, Implementing Hot and Cold Air Containment in Existing Data Centers.
Debunking the misconceptions

The three misconceptions mentioned in the introduction reflect a misunderstanding of how row coolers accomplish hot air capture. This section first describes three key design attributes of row coolers, which are then used as the basis for debunking each misconception.
Design attributes of row coolers
Row coolers capture hot IT exhaust air and neutralize it before it has a chance to mix with the
surrounding air in the room or recirculate to the front of the IT rack. Capturing 100% of hot
air can improve energy efficiency and eliminate hot spots. There are three key design
attributes that help row coolers capture and neutralize hot air:
• Back-to-front airflow
• Rack-based footprint
• Variable cooling capacity devices
Back-to-front airflow
Row coolers are designed with back-to-front airflow using small fans distributed across the
height of the cooler. Since the rear of the row cooler is in the hot aisle, the fans are able to
collect the hot IT exhaust air directly and uniformly (top to bottom) from the hot aisle as
shown in Figure 3. It has been shown, based on computational fluid dynamic (CFD) analysis
and performance testing, that the hot air capture index is generally highest for IT racks
closest to the row cooler. Conversely, the hot air capture index decreases for IT racks further
away from the row cooler. This relationship brings about the benefit of the second design
attribute, rack-based footprint.
Figure 3: Row cooler showing vertically-aligned fans and back-to-front airflow
Rack-based footprint
Row coolers are designed with a footprint similar to an IT rack (full or half rack width). This
design attribute allows you to distribute row coolers throughout a pod of IT racks. The
maximum distance between any row cooler and IT equipment exhaust is normally within 3
meters (10 feet). The implication of this distributed cooling layout is that the hot air capture
index is improved for the entire pod. When the hot exhaust air of one IT rack is beyond the
“reach” of a distant row cooler, a closer row cooler will capture the majority of the hot exhaust
air in that part of the pod.
If all of the hot exhaust air in a pod is captured by distributed coolers in that pod (i.e. 100%
hot air capture), then by definition there can be no hot spots at the intakes of IT equipment.
Figure 4 shows an example layout of distributed row coolers. The quantity and locations of
row coolers are determined according to rack densities and the extent of containment.
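As a rough illustration of this sizing step, the sketch below estimates a pod’s row-cooler count from its rack heat loads. The 30 kW per-cooler capacity and the rack loads are hypothetical planning numbers, not figures from this paper:

```python
# Sketch: rough row-cooler count for a pod from rack heat loads.
# The 30 kW per-cooler capacity and the rack loads are hypothetical
# planning numbers, not figures from this paper.
import math

def coolers_required(rack_kw: list[float], cooler_kw: float,
                     redundancy: int = 0) -> int:
    """N coolers to cover the pod load, plus optional redundant units."""
    total_kw = sum(rack_kw)
    return math.ceil(total_kw / cooler_kw) + redundancy

pod = [5.0] * 10 + [3.0] * 10          # 10 racks at 5 kW, 10 at 3 kW
n = coolers_required(pod, cooler_kw=30.0, redundancy=1)   # N+1
print(f"Pod load {sum(pod):.0f} kW -> {n} row coolers (N+1)")
```

In practice the count would also be adjusted for non-uniform loading and for whether the pod is contained, per the placement guidance above.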
Variable cooling capacity devices
Row coolers are designed with EC fans which can adjust fan speed to change airflow and
output cooling capacity. Traditional CRAHs typically only control airflow to maintain raised
floor static pressure, and is not related to the IT load. This design attribute allows row
coolers to balance the cooling capacity with the heat load by sensing the inlet temperature of
nearby racks or the IT room.
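The behavior described above can be sketched as a simple incremental fan-speed control step. The setpoint, gain, and fan-speed limits below are illustrative assumptions, not values from any particular product:

```python
# Sketch: incremental fan-speed control for a row cooler with EC fans.
# The setpoint, gain, and speed limits are illustrative assumptions.

def next_fan_speed(speed_pct: float, inlet_temp_c: float,
                   setpoint_c: float = 24.0, gain: float = 5.0) -> float:
    """Raise fan speed when nearby rack inlets run hot, lower it when cool."""
    error = inlet_temp_c - setpoint_c          # positive means inlets too warm
    speed = speed_pct + gain * error
    return max(30.0, min(100.0, speed))        # keep within EC fan range

speed = 50.0
for t in (24.0, 26.0, 27.0, 24.0):             # sensed inlet temperatures
    speed = next_fan_speed(speed, t)
    print(f"inlet {t:.0f} C -> fan {speed:.0f}%")
```

A real unit would also modulate its chilled water valve; the point of the sketch is simply that capacity follows the sensed IT load rather than a fixed duct pressure.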
Correcting misconception 1: Row coolers require turning vanes

If one doesn’t understand the premise of hot air capture, it’s easy to conclude that turning vanes are required to blow cold air in the direction of the IT equipment. However, the design attributes described above show that turning vanes are not required. If 100% of the hot IT exhaust air is captured and cooled before it has a chance to mix with surrounding air in the room, the remaining space in the room becomes a cold air supply plenum³. Therefore, it doesn’t matter where the cold air (from the row cooler) goes; it only matters that the hot air (from the IT equipment) is being captured and neutralized.
Turning vanes are used by some manufacturers to direct cold air into the adjacent racks, which indicates they don’t understand hot air capture. The use of turning vanes not only increases capital cost and the width of the cold aisle, but also causes airflow problems. The airflow leaving the turning vanes is at a higher velocity than, and perpendicular to, the airflow of the adjacent IT rack. The high-velocity jet flow creates regions of low pressure in front of the IT rack, interfering with rack airflow. (This is known as Bernoulli’s principle, the same principle that allows wings to lift an airplane.) Unlike the gentle and fairly uniform “sucking” of hot IT exhaust in the hot aisle, the strong jets blanketing the rack inlets create highly variable conditions from rack to rack. (Jet flow is characterized by localized pressure and velocity changes.) Note that turning vanes also have a significant pressure drop, requiring more fan power.

³ Even if the initial room air temperature is higher than the row cooler supply air temperature, over time the room reaches steady state and will be approximately the same temperature as the row cooler supply air. This assumes relatively tight room construction with minimal infiltration.

Figure 4: Illustration of row coolers distributed across a row of racks (diagram shows three cooling units interspersed among four racks; rack rears face a hot aisle formed against a wall or opposing row)

CFD simulation of airflow leaving the turning vanes (air velocity vectors shown)
Correcting misconception 2: Row coolers are needed in every row
Many believe that row coolers can only cool the racks located in the same row as the coolers. The design attributes described above show that, with a high hot air capture index, it doesn’t matter if all the row coolers are on one side of a two-row pod (or in any other placement combination).

CFD modeling and actual installations show it is possible for row coolers in a single row to cool racks in both rows, as illustrated in Figure 5. The percentage value on each rack represents its hot air capture index. Note that in the figure, all of the IT inlets are exposed to the blue (neutralized) air, while the red region in the middle is the hot aisle.
Correcting misconception 3: Row coolers can’t cool loads outside
their pod
This misconception has been communicated in other ways including:
• Row coolers are for spot cooling pods only
• Row coolers can’t cool racks in another pod
• Row coolers can’t cool a large room
• Row coolers can’t cool tape storage units on the perimeter of my data center
• N+1 cooling redundancy with row cooling means that you need a redundant cooling
unit in every pod
The key fact underlying this misconception is that the hot air capture index measures the percentage of a rack’s hot exhaust air that is captured directly by row cooler(s) in the same pod as the rack. By definition, even if 100% of a rack’s hot air were to make its way back to a row cooler far outside its pod, its hot air capture index would be 0%.
Therefore, the most predictable way to cool stand-alone ancillary IT gear, such as storage
arrays, is to place a row cooler next to the gear, assuring a high hot air capture index. In
essence, this creates a miniature pod with up to 100% hot air capture. This is analogous to
placing a perforated tile in front of the stand-alone gear, in a raised floor cooling architecture.
Figure 5: Example CFD analysis with all row coolers in one row (3D view looking at the top of the racks; average 5 kW/rack with no cooling redundancy or containment). Rack-by-rack hot air capture indices range from 91% to 100%, with the hot aisle between the two rows and arrows indicating the supply air direction of the row coolers.
The main difference is that a perforated tile operates on the premise of cold air capture, which in this case is limited by how much airflow (and kW of cooling capacity) is provided, as discussed in White Paper 121, Airflow Uniformity Through Perforated Tiles in a Raised-Floor Data Center.
What if the stand-alone IT gear doesn’t have a row cooler in its pod? How would this gear be cooled in a row-based cooling architecture? The answer is tied to the “rack-based footprint” and “variable cooling capacity devices” design attributes of row coolers. The rack-based footprint allows coolers to be distributed across the entire data center, which creates a close-coupled cooling effect with all heat loads. The variable cooling capacity devices over-supply cold air to the cold aisle after sensing an increased room temperature. Containing the hot aisle with row coolers can also improve hot air capture, especially at densities below 3 kW/rack. Will containment have a negative impact on cooling stand-alone IT gear? The easiest way to answer this is by comparing two identical data center layouts, each with 65 racks, 235 kW of IT load, and two stand-alone IT devices: one layout with containment and one without.
Figure 6A shows the results of a CFD analysis with 14 row coolers distributed evenly throughout the data center. The density of the 20 wide racks is 5 kW/rack and of the 43 narrow racks is 3 kW/rack. The two ancillary racks in the upper right-hand corner represent stand-alone IT gear. The hot air capture index (in percent) is shown on each rack. Both the 5 kW and 1 kW stand-alone devices have an average inlet air temperature of 22˚C (71.6˚F). Figure 6B shows the results of a CFD analysis of the same data center except with hot aisle containment, to illustrate its effect on hot air capture and on the inlet air temperatures of the stand-alone IT devices. Most of the racks have a 100% hot air capture index. Both stand-alone IT devices have an average inlet air temperature of 21˚C (69.8˚F).
Figure 6A: Example CFD analysis with distributed row coolers (3D view looking at the top of the racks; 235 kW total load; 65 total racks; wide racks = 5 kW/rack, narrow racks = 3 kW/rack; no cooling redundancy or containment; 5 kW and 1 kW stand-alone devices). Rack-by-rack hot air capture indices range from 59% to 100%, and the stand-alone devices show a 0% capture index.

Figure 6B: Example CFD analysis with centralized row coolers (3D view looking at the top of the racks; 235 kW total load; 65 total racks; wide racks = 5 kW/rack, narrow racks = 3 kW/rack; no cooling redundancy but with containment; 5 kW and 1 kW stand-alone devices). Nearly all racks show 99–100% hot air capture indices, and the stand-alone devices again show 0%.
The figures above help to visualize the effect of row coolers on hot air capture index values and on cooling stand-alone IT devices. Row coolers are variable-capacity devices that are capable of over-supplying cold air to the cold aisle. When the system is in balance, the supply temperature from the cooler is equal to the IT inlet temperature. When ancillary equipment (i.e., stand-alone devices) is initially added outside of the pod, the temperature of the overall room environment increases due to air mixing. Nearby row-based cooling units sense this increase and respond by increasing cooling capacity, thereby neutralizing the hot air. This increased cooling capacity is achieved through increased airflow (row coolers increase fan speed) and/or a lower supply air temperature (row coolers increase chilled water flow). These adjustments occur during the initial transient but remain unchanged once the data center reaches steady state.
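The sensible-cooling balance behind this response can be sketched numerically. The air properties below are standard values; the pod load, airflow, and temperature rise are hypothetical:

```python
# Sketch: sensible cooling balance behind the "pick up the ancillary
# load" behavior. Air properties are standard; loads are hypothetical.

RHO_AIR = 1.2       # air density, kg/m^3
CP_AIR = 1.005      # specific heat of air, kJ/(kg*K)

def sensible_kw(airflow_m3s: float, delta_t_k: float) -> float:
    """Heat removed by an airstream: Q = rho * cp * V * dT (kW)."""
    return RHO_AIR * CP_AIR * airflow_m3s * delta_t_k

# Pod coolers initially remove about 60 kW at 4.0 m^3/s with a 12.44 K rise.
base = sensible_kw(4.0, 12.44)
# A 5 kW stand-alone device is added outside the pod; the coolers can absorb
# it by raising airflow, lowering supply temperature, or both. Here, airflow
# alone is increased while the air-side temperature rise is held constant.
airflow_needed = (base + 5.0) / (RHO_AIR * CP_AIR * 12.44)
print(f"base {base:.1f} kW; airflow {4.0:.2f} -> {airflow_needed:.2f} m^3/s")
```

Once the extra load is matched, the room settles at a new steady state, which is why the adjustments described above do not continue indefinitely.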
The steps describing how the row-based coolers “pick up the ancillary load” are the same even when containment is used. In either case we expect the ancillary equipment to operate at higher temperatures than the row-based racks, because it is not directly coupled to row coolers in its own pod. The further stand-alone IT gear is from row coolers, the higher its inlet air temperature will be. Should you be concerned about the elevated ancillary equipment temperature? Oftentimes IT equipment can tolerate ASHRAE’s recommended maximum temperature of 27˚C (80.6˚F). If this is not acceptable, it may be appropriate to place a row cooler close to the ancillary equipment rack. For more information on cooling loads outside of a pod, see White Paper 139, Cooling Entire Data Centers Using Only Row Cooling, and White Paper 134, Deploying High-Density Pods in a Low-Density Data Center.
Conclusion

Row-based cooling was designed with a focus on capturing hot exhaust air from IT equipment in a pod. The higher the hot air capture index values, the less hot air recirculation occurs, which decreases temperature variations across racks (such as hot spots). It doesn’t matter where the cold air goes; it matters where the hot air comes from. Row-based cooling units can also help to cool ancillary equipment or racks outside of their pod, although the hot air capture index is not used as a performance metric in that case; instead, row coolers rely on the close-coupled cooling process described in this paper. Understanding these principles helps to debunk the three misconceptions described in this paper and provides a foundation for effective deployment of row-based cooling.
About the authors

Paul Lin is a Senior Research Analyst at Schneider Electric’s Data Center Science Center. He is responsible for data center design and operation research, and consults with clients on risk assessment and design practices to optimize the availability and efficiency of their data center environments. Before joining Schneider Electric, Paul worked as an R&D Project Leader at LG Electronics for several years. He is designated a “Data Center Certified Associate,” an internationally recognized validation of the knowledge and skills required of a data center professional, and is a registered HVAC professional engineer. Paul holds a master’s degree in mechanical engineering from Jilin University with a background in HVAC and thermodynamic engineering.

Victor Avelar is the Director of Schneider Electric’s Data Center Science Center. He is responsible for data center design and operations research, and consults with clients on risk assessment and design practices to optimize the availability and efficiency of their data center environments. Victor holds a bachelor’s degree in mechanical engineering from Rensselaer Polytechnic Institute and an MBA from Babson College. He is a member of AFCOM and the American Society for Quality.