This document discusses common mistakes that compromise cooling performance in data centers and network rooms. It examines five categories of typical mistakes: airflow in the rack, rack layout, load distribution, cooling settings, and the layout of air delivery and return vents. Mistakes such as omitting blanking panels, improper rack layout, and obstructed airflow create hot spots, decrease fault tolerance, and reduce cooling efficiency and capacity, increasing cooling costs by 10-25% over the lifetime of the data center. The document provides solutions to each problem: standardized racks, blanking panels, a proper hot-aisle/cold-aisle layout, unobstructed airflow, and policies that prevent these issues from recurring.
Cooling in Data Centers
Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network Rooms

White Paper 49, Revision 2
by Neil Rasmussen
Executive summary

Avoidable mistakes that are routinely made when installing cooling systems and racks in data centers or network rooms compromise availability and increase costs. These unintentional flaws create hot-spots, decrease fault tolerance, decrease efficiency, and reduce cooling capacity. Although facilities operators are often held accountable for cooling problems, many problems are actually caused by improper deployment of IT equipment outside of their control. This paper examines these typical mistakes, explains their principles, quantifies their impacts, and describes simple remedies.

Contents

Introduction
Basic airflow requirements
Airflow in the rack cabinet
Layout of racks
Distribution of loads
Cooling settings
Layout of air delivery and return vents
Prevention via policies
Conclusion
Resources

White papers are now part of the Schneider Electric white paper library produced by Schneider Electric's Data Center Science Center, DCSC@Schneider-Electric.com
Introduction

Most data centers and network rooms have a variety of basic design and configuration flaws that prevent them from achieving their potential cooling capacity and prevent them from delivering cool air where it is needed. These problems are generally unrecognized because computer rooms have typically been operated at power densities well below their design values. However, recent increases in the power density of new IT equipment are pushing data centers to their design limits and revealing that many data centers are incapable of providing effective cooling as expected.
In addition to the decrease in system availability that can result from under-performing
cooling systems, significant costs are incurred. This paper will describe common design
flaws that can cause the efficiency of the cooling system to decrease by 20% or more.
Separate studies from Lawrence Berkeley National Laboratories and from Schneider Electric
concluded that a typical data center exhibits electrical power consumption as shown in
Figure 1, where the electrical power consumed by the cooling system is comparable to the
power consumed by the entire IT load. A 20% loss of cooling efficiency translates to an
increase in total electrical power consumption of 8%, which over the 10 year life of a 500kW
data center translates to a cost of wasted electricity of approximately $700,000. This
significant waste is avoidable for little or no cost.
Figure 1: Breakdown of electricity consumption of a typical data center. IT loads 44%, cooling 38%, power system 15%, lighting 3%.
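To make the arithmetic above concrete, here is a minimal sketch that reproduces the estimate under stated assumptions: the 500kW figure is read as the IT load, the Figure 1 percentages are applied, and an electricity rate of $0.09/kWh is assumed (the paper does not state a rate).

```python
# Rough reproduction of the savings estimate above. Assumptions not stated in the
# paper: the 500 kW figure is taken as the IT load, the Figure 1 percentages apply,
# and electricity costs $0.09/kWh.

it_load_kw = 500.0                       # assumed interpretation of "500kW data center"
total_kw = it_load_kw / 0.44             # Figure 1: IT loads are 44% of total consumption
cooling_kw = 0.38 * total_kw             # Figure 1: cooling is 38% of total consumption

efficiency_loss = 0.20                   # a 20% loss of cooling efficiency
extra_kw = efficiency_loss * cooling_kw  # additional electrical power drawn by cooling

print(f"Extra power as a share of total: {extra_kw / total_kw:.0%}")  # ~8%

hours = 10 * 8760                        # 10-year life
rate = 0.09                              # assumed electricity rate, $/kWh
print(f"Wasted electricity over 10 years: ${extra_kw * hours * rate:,.0f}")  # roughly $700,000
```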
The sub-optimization of the data center cooling system arises from a variety of sources.
These problem sources include the design and specification of the cooling plant itself, and
they include how the complete system delivers the cool air to the load. This paper focuses
on cooling problems related to the distribution of cooling air and setup issues related to the
deployment of IT equipment, for the following reasons:
• There are practical, feasible, and proven solutions
• Many fixes can be implemented in existing data centers
• Large improvements can result from little or no investment
• Both IT people and facilities people can contribute to fixing them
• Solutions are independent of facility or geographic location
• They lend themselves to correction through policies that are simple to implement
The paper breaks down the common flaws into five contributing categories, and addresses
each in turn:
• Airflow in the rack itself
• Layout of racks
• Distribution of loads
• Cooling settings
• Layout of air delivery and return vents
For each category, a number of issues are described along with a simple description of the
theory of the problem and how it impacts availability and total cost of ownership (TCO). This
information is summarized in the tables.
Finally, a number of policies are described which, if implemented, can significantly improve
data center availability and reduce TCO.
Basic airflow requirements

The airflow in and around the rack cabinet is critical to cooling performance. The key to understanding rack airflow is to recognize the fundamental principle that IT equipment cares about two things:
1. That appropriate conditioned air is presented at the equipment air intake
2. That the airflow in and out of the equipment is not restricted.
The two key problems that routinely occur and prevent the ideal situation are:
1. The CRAC air becomes mixed with hot exhaust air before it gets to the equipment air intake
2. The equipment airflow is blocked by obstructions.
The common theme throughout the next sections is that well-intentioned implementation
decisions that appear to be inconsequential actually create the two problems above, and that
the common solutions routinely used to address the symptoms of these problems significantly
compromise availability and increase costs.
Airflow in the rack cabinet

Although the rack is frequently thought of as a mechanical support, it provides a very critical function in preventing hot exhaust air from equipment from circulating back into the equipment air intake. The exhaust air is slightly pressurized, and this combined with the suction at the equipment intake leads to a situation where the exhaust air is induced to flow back into the equipment intake. The magnitude of this effect is much greater than the magnitude of the effect of buoyancy of hot exhaust air, which many people believe should naturally cause the hot exhaust air to rise away from the equipment. The rack and its blanking panels provide a natural barrier, which greatly increases the length of the air recirculation path and consequently reduces the equipment intake of hot exhaust air.

The very common practice of omitting blanking panels can be found to greater or lesser degrees in 90% of data centers, despite the fact that all major manufacturers of IT equipment specifically advise that blanking panels be used. The resulting recirculation problem can lead to a 15°F (8°C) rise in the temperature of IT equipment. A detailed description of this effect along with experimental data is found in White Paper 44, Improving Rack Cooling Performance Using Blanking Panels (related resource). Blanking panels modify rack airflow as shown in Figure 2. Installing blanking panels is a simple process that can be implemented in almost any data center at very low cost.
Figure 2: Air recirculation through a missing blanking panel. 2A (left): without blanking panels. 2B (right): with blanking panels.
Many configured racks exhibit other flaws that have the same effect as omitting blanking
panels. Using wide racks with inset rails allows recirculation around the sides of the rack
rails. Using shelves to mount IT equipment prevents the use of blanking panels and provides
wide-open paths for exhaust air recirculation. Some standard 19” racks have inherent air
recirculation paths built-in around the rails and at the top and bottom of the enclosure. In
these cases the installation of blanking panels cannot completely control recirculation. Many
racks have simply not been designed to work effectively in a high density IT environment.
Standardizing on the right rack and using blanking panels can greatly reduce recirculation
and reduce hot-spots.
The benefits of reducing hot-spot temperature by using blanking panels and selecting racks
that control recirculation are clear and offer obvious benefits in terms of system availability.
However, there are other less obvious but very significant benefits that require explanation.
Recirculation impacts fault tolerance
Rack systems with significant recirculation have reduced fault tolerance and reduced
maintainability when contrasted with properly implemented systems. In most installations,
cooling is provided by an array of CRAC units feeding a common air supply plenum. In such
an arrangement, it is often possible to maintain facility cooling with one CRAC system off-line
due to failure or maintenance; the remaining CRAC units are able to pick up the required
load. Recirculation compromises this fault tolerance capability in the following ways:
• Lower CRAC return air temperature due to recirculation causes the remaining CRAC units to operate at reduced capacity, and the system is unable to meet the cooling capacity requirement (a rough sensible-capacity sketch follows this list)
• Higher air feed velocities needed to overcome recirculation effects cannot be maintained by the remaining systems, causing increased recirculation and overheating at the loads.
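The first point follows from the sensible-heat relation for air: at a fixed airflow, a CRAC's usable sensible capacity scales with the difference between return and supply air temperature, so mixing cool supply air into the return stream directly erodes capacity. A minimal sketch of that relation follows; the airflow and temperature values are illustrative assumptions, not figures from this paper.

```python
# Minimal sketch of CRAC sensible capacity versus return-air temperature.
# The airflow and temperatures below are illustrative assumptions, not paper values.

def sensible_capacity_kw(airflow_m3_s: float, return_c: float, supply_c: float) -> float:
    """Q [kW] = airflow * air density * specific heat * (return - supply temperature)."""
    rho_air = 1.2    # kg/m^3, approximate density of air
    cp_air = 1.005   # kJ/(kg*K), approximate specific heat of air
    return airflow_m3_s * rho_air * cp_air * (return_c - supply_c)

separated = sensible_capacity_kw(5.0, return_c=30.0, supply_c=13.0)  # hot/cold air well separated
recirc = sensible_capacity_kw(5.0, return_c=24.0, supply_c=13.0)     # recirculation cools the return air

print(f"30 C return: {separated:.0f} kW, 24 C return: {recirc:.0f} kW "
      f"({1 - recirc / separated:.0%} capacity lost)")
```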
Recirculation impacts total cost of ownership
The availability issues of overheating and fault tolerance make a compelling case for the use
of standardized racks and blanking panels. However, the TCO consequences of recirculation
are dramatic and make the case overwhelming.
The largest life cycle cost related to cooling is the cost of electricity to operate cooling
equipment and fans. The amount of cooling Wattage or Tonnage required by a data center is
not affected by recirculation; however the efficiency of the cooling systems is significantly and
adversely affected. This means that recirculation will increase the costs related to electricity.
Furthermore, the costs compound as shown in Figure 3.
Figure 3 illustrates the sequence of consequences that typically occur from attempts to deal
with the primary symptom of recirculation, namely, hot-spots. The two most common
responses to hot-spots are to reduce the CRAC supply temperature, or to increase CRAC
capacity, or a combination of both. These responses have significant unforeseen costs as
described in the figure. Controlling recirculation by design and policy as described in this
paper can be done for very little cost and avoids the consequences shown in the figure.
Figure 3: Cascade of financial consequences of recirculation. Air recirculation produces hot spots, typically 15°F (8°C) above nominal. The two common responses each carry costs. Decreasing the CRAC temperature set point by 9°F (5°C) reduces CRAC capacity and increases unwanted dehumidification, reducing CRAC efficiency (10% lost) and increasing make-up humidification (5% lost). Adding 20% more CRAC capacity returns cooler air to the units, wasting excess fan power (5% lost) and further reducing CRAC efficiency (5% lost). The combined result is 10-25% more cooling cost, or $300,000 to $700,000 wasted for a 500kW load over a 10 year lifetime.
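As a rough cross-check of the figure, the sketch below converts the listed percentage penalties into a dollar range using the Figure 1 ratios and an assumed electricity rate of $0.08/kWh (the rate and the mapping of penalties to scenarios are assumptions, not values given in the paper); the result lands near the $300,000 to $700,000 range cited above.

```python
# Hypothetical check of the dollar range in Figure 3, using the percentage losses listed
# in the figure, the Figure 1 ratios, and an assumed electricity rate of $0.08/kWh.

losses = {
    "unwanted dehumidification (CRAC efficiency)": 0.10,
    "make-up humidification": 0.05,
    "excess fan power": 0.05,
    "reduced CRAC efficiency from cooler return air": 0.05,
}
low = losses["unwanted dehumidification (CRAC efficiency)"]  # single largest penalty: ~10%
high = sum(losses.values())                                  # all penalties combined: ~25%

cooling_kw = 500 * 0.38 / 0.44                  # cooling power for a 500 kW IT load (Figure 1)
cooling_cost_10yr = cooling_kw * 10 * 8760 * 0.08
print(f"Extra cooling cost over 10 years: "
      f"${low * cooling_cost_10yr:,.0f} to ${high * cooling_cost_10yr:,.0f}")
```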
Restriction of airflow starves equipment of fresh air, resulting in overheating. Furthermore,
air restriction at the front or rear of the rack encourages recirculation through rack space
without blanking panels. Therefore, it is critical to use racks that have very high door
ventilation, and racks that have enough room in the back of the rack to prevent cable bundles
from obstructing airflow. Users sometimes choose shallow racks believing that this will
increase the floor space utilization but then are unable to utilize the density because of
thermal limits due to cable-related airflow obstruction.
Table 1: Summary of rack airflow design flaws with consequences

Design flaw: No blanking panels; equipment on shelves; use of 23 inch (584mm) racks without rail-brushes
  Availability consequences: Hot spots, particularly at the tops of racks; loss of cooling redundancy
  TCO consequences: Electricity costs; reduced capacity of CRAC; humidifier maintenance; water consumption
  Solution: Use blanking panels; do not use shelves; use racks that have no open space outside the rails; add brushes outside rails on wide racks

Design flaw: Under-rack wire openings without brushes
  Availability consequences: Hot spots, particularly at the tops of racks; loss of static pressure in raised floor; loss of cooling redundancy
  TCO consequences: Reduced efficiency of CRAC
  Solution: Use brushes or gasketing on under-rack wire openings

Design flaw: Glass doors; doors with low ventilation
  Availability consequences: Overheating; amplification of problems relating to blanking panels
  TCO consequences: Decreases space and rack utilization
  Solution: Use fully vented doors front and rear

Design flaw: Use of fan trays and roof fans
  Availability consequences: Very little benefit
  TCO consequences: Wasted capital; wasted electricity; same investment could have been used for useful purpose
  Solution: Do not use fan trays or roof fans

Design flaw: Shallow racks
  Availability consequences: Cable obstructions cause overheating
  TCO consequences: Decreases space and rack utilization
  Solution: Use racks with enough depth to allow free air around cables
In addition to the passive means of controlling rack airflow described above, rack-based fan systems can be used to control rack air distribution. Some rack fan systems, like fan trays and roof fans, offer little benefit. Other fan systems, like systems that distribute under-floor air to the front of the rack, or scavenge exhaust air from the rear of the rack, can significantly improve rack airflow, reduce recirculation effects, and increase rack power handling capability. Detailed discussion of these systems can be found in White Paper 46, Power and Cooling for Ultra-High Density Racks and Blade Servers (related resource). Standardizing on a rack that is designed for retrofit of effective supplemental air fan units provides for future high-density capability.
Layout of racks

Proper rack airflow control as described in the previous section is essential to effective cooling, but is not sufficient by itself. Proper layout of racks in a room is a critical part of ensuring that air of the appropriate temperature and quantity is available at the rack. Flow of air to the rack is key.

The objective of correct rack layout is again to control recirculation, that is, to prevent CRAC air from becoming mixed with hot exhaust air before it reaches the equipment air intake. The
design principle is the same: to separate the hot exhaust air from the equipment intake air to
the maximum extent possible.
The solution to this problem is well known. By placing racks in rows and reversing the direction that alternate rows of racks face, recirculation can be dramatically reduced. The
principles of this solution are described by the Uptime Institute in their white paper Alternating
Cold and Hot Aisles Provides More Reliable Cooling for Server Farms.
Despite the clear advantages of the hot-aisle-cold-aisle system, surveys show that approximately 25% of data centers and network rooms put racks in rows that face in the same direction. Putting racks in the same direction causes significant recirculation, virtually assures that there will be hot-spot problems, and guarantees that the cost of operating the system will be increased significantly. The costs will vary by installation and are again illustrated by the previous Figure 3.

The effective application of the hot-aisle-cold-aisle technique consists of more than simply putting racks into alternating rows. Of the 75% of installations that do use the hot-aisle-cold-aisle technique, over 30% do not correctly arrange the air distribution and return systems to properly supply the rows. This is discussed later in the section titled Layout of air delivery and return vents.
Of the sites that face racks in the same direction and do not use hot-aisle-cold-aisle techniques, surveys conducted by Schneider Electric indicate that the majority are due to a management directive based on the cosmetic appearance of the data center. The surveys further suggest that these unfortunate directives would never have been made if the crippling consequences had been made clear.
For systems laid out with racks all facing the same way, many of the techniques described in
this paper will be much less effective. If alternating racks is not possible, then one effective
way to deal with hot-spots in this environment is to apply a supplemental air distribution unit
to the affected racks.
Table 2: Summary of rack layout design flaws with consequences

Design flaw: Racks all facing in the same direction; hot-aisle-cold-aisle not implemented
  Availability consequences: Hot spots; loss of cooling redundancy; loss of cooling capacity; humidifier failures
  TCO consequences: Excess power consumption; water consumption; humidifier maintenance
  Solution: Use hot-aisle-cold-aisle layout

Design flaw: Not in rows
  Availability consequences: Same problems
  TCO consequences: Same
  Solution: Arrange racks in rows

Design flaw: In rows but not tightly butted
  Availability consequences: Same problems
  TCO consequences: Same
  Solution: Bay racks together; do not space racks out
Distribution of loads

The location of loads, particularly high power loads, can stress the capabilities of a data center. Pockets of high density loads typically occur when high-density, high-performance servers are packed into one or more racks. This situation can give rise to hot-spots in the data center and cause the operators to take corrective action such as decreasing the air temperature set point, or adding CRAC units. These actions give rise to the negative consequences summarized in Figure 3.

For these reasons, there is a significant advantage to spreading the load out where feasible. Fortunately, fiber and Ethernet connections are not adversely affected by spreading equipment out. Typically, the desire to co-locate such devices is driven by IT people who believe it is more convenient to co-locate devices. People attempting to co-locate high power loads should be advised of the availability advantages and cost savings which accrue from spreading out the load (a simple per-rack load sketch follows Table 3).

There are other options for high power racks that can avoid adverse cooling impacts. For a more complete discussion of the subject of dealing with high power racks, consult White Paper 46, Power and Cooling for Ultra-High Density Racks and Blade Servers (related resource).
Table 3: Summary of load distribution design flaws with consequences

Design flaw: Concentrated loads
  Availability consequences: Hot spots; loss of cooling redundancy
  TCO consequences: Excess power consumption
  Solution: Spread loads out as evenly as possible
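As a rough illustration of why spreading helps, the sketch below compares per-rack heat load when the same group of servers is concentrated in a few racks versus spread across more of them; the per-server load and the per-rack cooling design limit are hypothetical values, not figures from this paper.

```python
# Hypothetical illustration of concentrating servers versus spreading them out.
# All numbers are made-up examples, not figures from this paper.

server_kw = 0.5           # heat load per server, kW (assumed)
servers = 40              # servers to deploy
rack_limit_kw = 6.0       # per-rack cooling design limit, kW (assumed)

def per_rack_load_kw(num_racks: int) -> float:
    """Average heat load per rack when the servers are split across num_racks racks."""
    return servers * server_kw / num_racks

for racks in (2, 5, 10):
    load = per_rack_load_kw(racks)
    status = "OK" if load <= rack_limit_kw else "exceeds design limit, hot-spot risk"
    print(f"{racks:>2} racks: {load:4.1f} kW per rack ({status})")
```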
Cooling settings

The previous discussions described the adverse consequences of reducing the CRAC air temperature setting. Air conditioning performance is maximized when the CRAC output air temperature is highest. Ideally, if there were zero recirculation, the CRAC output temperature would be the same 68-77°F (20-25°C) desired for the computer equipment. This situation is not realized in practice and the CRAC output air temperature is typically somewhat lower than the computer air intake temperature. However, when the air distribution practices described in this paper are followed, the CRAC temperature set point can be maximized. To maximize capacity and optimize performance, the CRAC set point should not be set lower than that required to maintain the desired equipment intake temperatures.
Although the CRAC temperature set point is dictated by the design of the air distribution
system, the humidity may be set to any preferred value. Setting humidity higher than that
required has significant disadvantages. First, the CRAC unit will exhibit significant coil
condensation and dehumidify the air. The dehumidification function detracts from the air-
cooling capacity of the CRAC unit significantly. To make matters worse, humidifiers must
replace the water removed from the air. This can waste thousands of gallons of water per
year in a typical data center, and humidifiers are a significant source of heat, which must be
cooled and consequently further detracts from the capacity of the CRAC unit. This situation
is compounded when significant recirculation exists because the lower temperature CRAC air
condenses more readily. Therefore it is essential not to operate a data center at higher
humidity than necessary.
Some data centers, including most early data centers, had high velocity paper or forms printers. These printers can generate significant static charge. To control static discharge, a standard of around 50% relative humidity was developed for data
centers. However, for data centers without large high-speed forms printers, 35% relative humidity will control static charge. Operating a data center at 35% relative humidity
instead of 45% or 50% can save significant amounts of water and energy, especially if there
is significant recirculation.
An additional problem can occur in data centers with multiple CRAC units equipped with
humidifiers. It is extremely common in such cases for two CRAC units to be wastefully
fighting each other to control humidity. This can occur if the return air to the two CRAC units
is at slightly different temperatures, or if the calibrations of the two humidity sensors disagree,
or if the CRAC units are set to different humidity settings. One CRAC unit will be dehumidify-
ing the air while another is humidifying the air. This mode of operation is extremely wasteful,
yet is not readily apparent to the data center operators.
The problem of wasteful CRAC humidity fighting can be corrected by either A) central humidity control, B) coordinated humidity control among the CRAC units, C) turning off one or more humidifiers in the CRACs, or D) using deadband settings. Each of these techniques has advantages, which will not be discussed in detail in this paper. When the problem occurs, the most feasible way to correct it in typical systems with independent CRACs is by verifying that systems are set to the same settings and are properly calibrated, and then expanding the deadband humidity setting, which is available on most CRAC units. When the deadband setting is set to +/-5% the problem will usually be corrected.
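To illustrate why a deadband helps, here is a minimal control-logic sketch (not taken from any CRAC vendor's controls; the set point, deadband, and sensor disagreement are illustrative assumptions). With a shared set point and a +/-5% deadband, two units whose humidity readings disagree by a few percent both stay idle instead of one humidifying while the other dehumidifies.

```python
def humidity_action(measured_rh: float, setpoint_rh: float = 45.0, deadband: float = 5.0) -> str:
    """Illustrative CRAC humidity logic with a symmetric deadband around the set point."""
    if measured_rh < setpoint_rh - deadband:
        return "humidify"
    if measured_rh > setpoint_rh + deadband:
        return "dehumidify"
    return "idle"

# Two CRACs sampling the same room air, with a 4% disagreement between their sensors.
room_rh = 45.0
crac_a, crac_b = room_rh - 2.0, room_rh + 2.0

print("No deadband:   ", humidity_action(crac_a, deadband=0.0), "/", humidity_action(crac_b, deadband=0.0))
print("+/-5% deadband:", humidity_action(crac_a), "/", humidity_action(crac_b))
```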
Table 4: Summary of cooling setting design flaws with consequences

Design flaw: Humidity set too high
  Availability consequences: Hot spots; loss of cooling redundancy
  TCO consequences: Excess power consumption; water consumption; humidifier maintenance
  Solution: Set humidity at 35-50%

Design flaw: Multiple CRAC units fighting to control humidity of the same space
  Availability consequences: Loss of cooling redundancy; loss of cooling capacity
  TCO consequences: Excess power consumption; water consumption; humidifier maintenance
  Solution: Set all units to the same setting; set 5% dead-band on humidity set points; use centralized humidifiers; turn off unnecessary humidifiers
Layout of air delivery and return vents

Rack airflow and rack layout are key elements to direct air to maximize cooling performance. However, one final ingredient is required to ensure peak performance, which is the layout of air delivery and return vents. Improper location of these vents can cause CRAC air to mix with hot exhaust air before reaching the load equipment, giving rise to the cascade of performance problems and costs described previously. Poorly located delivery or return vents are very common and can erase almost all of the benefit of a hot-aisle-cold-aisle design.

The key to air delivery vents is to place them as close to the equipment air intakes as possible and keep the cool air in the cold aisles. For under-floor air distribution, this means keeping the vented tiles in the cold aisles only. Overhead distribution can be just as effective as a raised floor distribution system, but again the key is that the distribution vents be located over the cold aisles, and the vents be designed to direct the air directly downward into the cold aisle (not laterally using a diffusing vent). In either overhead or under floor systems, any vents located where equipment is not operational should be closed, since these sources end up returning air to the CRAC unit at low temperature, increasing dehumidification and decreasing CRAC performance.
The key to air return vents is to place them as close to the equipment exhausts as possible
and collect the hot air from the hot aisles. In some cases, an overhead dropped ceiling
plenum is used and the return vents can be easily aligned with the hot aisles. When a high,
open, bulk return ceiling is used, the best approach is to locate the returns of the CRAC unit
as high up in the ceiling as possible and, where possible, spread out the return using
ductwork in an attempt to align returns with the hot aisles. Even a crude return plenum with
only a few return vents crudely aligned with hot aisles is preferred over a single bulk return
at the side of the room.

For smaller rooms without a raised floor or ductwork, upflow or downflow CRAC units are often
located in a corner or along a wall. In these cases, it can be difficult to align cool air
delivery with cold aisles and hot air return with hot aisles. Performance will be compromised
in these situations. However, it is possible to improve the performance of these systems as
follows:

• For upflow units, locate the unit near the end of a hot aisle and add ducts to bring cool
air to points over cold aisles as far away from the CRAC unit as possible.

• For downflow units, locate the unit at the end of a cold aisle oriented to blow air down
the cold aisle, and add either a dropped ceiling plenum return or hanging ductwork returns
with return vents located over the hot aisles.

> Sealing cable cutouts
Cable cutouts in a raised floor environment cause significant unwanted air leakage and should
be sealed. This lost air, known as bypass airflow, contributes to IT equipment hot spots and
cooling inefficiencies, and increases infrastructure costs.
Many sites ignore unsealed floor openings and believe that inadequate cooling capacity is the
problem. As a result, additional cooling units are purchased to address the overheating. In
fact, those supplementary units may not be needed.
One alternative to minimize the cost of additional cooling capacity is to seal cable cutouts.
The installation of raised floor grommets both seals the air leaks and increases static
pressure under the raised floor. This improves cool air delivery through the perforated floor
tiles.
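To get a rough sense of the scale of the bypass airflow described in the sidebar above, the short sketch below applies the standard orifice-flow relation, Q = Cd x A x sqrt(2 x dP / rho), to a single unsealed cutout. The cutout size, underfloor static pressure, and discharge coefficient are illustrative assumptions, not figures from this paper.

    # Back-of-the-envelope sketch: leakage through one unsealed cable cutout,
    # using the standard orifice-flow equation. All inputs are assumed values.
    from math import sqrt

    Cd = 0.6                 # typical discharge coefficient for a sharp-edged opening
    area_m2 = 0.15 * 0.20    # assumed 150 mm x 200 mm cable cutout
    dP_pa = 12.5             # assumed underfloor static pressure (~0.05 in. w.c.)
    rho = 1.2                # air density, kg/m^3

    q_m3_per_s = Cd * area_m2 * sqrt(2 * dP_pa / rho)
    q_cfm = q_m3_per_s * 2118.9   # 1 m^3/s = 2118.9 CFM

    print(f"Estimated bypass airflow per unsealed cutout: {q_cfm:.0f} CFM")
    # Cool air that returns to the CRAC without ever passing an equipment intake.

Under these assumptions a single cutout leaks on the order of 170 CFM, which is why sealing dozens of cutouts can restore noticeable cooling capacity without purchasing additional CRAC units.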
A study of poorly placed return grilles reveals a major underlying root cause: personnel notice
that some aisles are hot and some are cold, assume this is an undesirable condition, and
attempt to remedy it by moving cool air vents to hot aisles and hot air returns to cold aisles.
The very condition that a well-designed data center attempts to achieve, the separation of hot
and cool air, is assumed to be a defect, and personnel take action to mix the air,
compromising the performance and increasing the costs of the system. People do not understand
that hot aisles are supposed to be hot.
Obviously, the arrangement of the distribution and return vents is easiest to establish when
the data center is constructed. Therefore, it is essential to have a room layout with row
locations and orientations before the ventilation system is designed.
Table 5
Summary of air delivery and return design flaws with consequences

Design flaws: Hot air return location not over hot aisle; dropped-ceiling lamp with integral air return located over cold aisle
• Availability consequences: Hot spots, particularly at the tops of racks; loss of cooling redundancy
• TCO consequences: Electricity costs; reduced capacity of CRAC; humidifier maintenance; water consumption
• Solutions: Locate hot air returns over the hot aisle; do not use lamps with integral air returns over cold aisles, or block the return

Design flaws: Overhead delivery vents over hot aisles; vented floor tile in hot aisle
• Availability consequences: Hot spots; loss of cooling redundancy
• TCO consequences: Electricity costs; reduced capacity of CRAC; humidifier maintenance; water consumption
• Solutions: For overhead delivery, always locate delivery vents over cold aisles; for raised floor delivery, always locate delivery vents in cold aisles

Design flaws: Vented floor tile near no load; overhead delivery vent open above no load; peripheral holes in raised floor for conduits, wires, pipes
• Availability consequences: Small
• TCO consequences: Electricity costs; reduced capacity of CRAC
• Solution: Close vents or openings located where there is no load

Design flaw: Low height of return vent in high ceiling area
• Availability consequences: Loss of CRAC capacity; loss of cooling redundancy
• TCO consequences: Electricity costs; reduced capacity of CRAC; humidifier maintenance; water consumption
• Solution: Use dropped ceiling for return plenum, or extend duct to collect return air at a high point
Prevention via policies

By following the guidelines of this paper it is possible to make new data centers that are
significantly more available, with fewer hot spots, and less costly to operate. Some of the
techniques described can be implemented in existing data centers, but others are impractical
on live systems. Naturally, it is best to avoid the problems in the first place. Surveys by
Schneider Electric suggest that most of the defects in cooling system design are unintentional
and would not have been made if the facilities or IT staff had understood the importance of
proper air distribution to the performance, availability, and cost of the data center. One way
to effectively communicate the key factors to the parties involved is through the use of
policies.
Table 6
Suggested data center design policies

Policy: Use hot-aisle-cold-aisle rack layout
Justification: The separation of hot and cold air reduces hot spots, increases fault tolerance, and significantly reduces the consumption of electricity. It is a well known fact that facing all rows in the same direction causes each row to be fed hot exhaust air from the row in front of it, leading to overheating and dramatically reduced air conditioner performance.

Policy: Use blanking panels in unused positions in all racks
Justification: Blanking panels prevent hot exhaust air from equipment from returning to the equipment intake, preventing hot spots and increasing equipment life. All server and storage manufacturers specify that blanking panels should be used.

Policy: Use gaskets or brushes on all under-rack wire openings in raised floors
Justification: The purpose of the raised floor air distribution system is to deliver cool air to the equipment intakes, which are located on the front of the racks. Openings below the racks feed cool air to the equipment exhaust, bypassing the equipment and reducing the performance of the cooling system.

Policy: Do not attempt to correct the temperature in hot aisles; they are supposed to be hot
Justification: The purpose of the hot aisle is to separate the hot exhaust air from the cool equipment intake air. Any attempt to defeat this function will compromise the design of the system, reduce equipment reliability, and increase operating cost. The exhaust air from the equipment is supposed to be hot, and the hot aisle is intended to take this hot air back to the air conditioning system. Having the hot aisle be hot helps ensure that the equipment intakes on the cold aisle are kept cold.

Policy: Standardize racks
Justification: Racks serve an essential function as part of the cooling system and are not simply mechanical supports. Rack features that prevent exhaust air from reaching equipment intakes, provide for proper ventilation, provide space for cabling without airflow obstruction, and allow the retrofit of high-density supplemental cooling equipment should be part of a rack standard.

Policy: Spread out high density loads
Justification: Concentrating high power loads in one location will compromise the operation of those loads and typically increase data center operating costs. Fault tolerance in the air delivery system is typically compromised when high power loads are concentrated. The entire data center temperature and humidity controls may need to be altered in a way that compromises cooling capacity and increases cooling cost.
Establishing policies can force constructive discussions to occur. In addition to establishing
policies, communication can be facilitated by signage or labeling. An example of a label
placed on the rear of racks in hot aisles is shown in Figure 4. Personnel, particularly IT
staff, will often view the hot aisle as an undesirable problem or defect; this label helps
them understand why one area of the data center is hotter than another.
Figure 4
Label communicating the purpose of the hot aisle

THIS IS A HOT AISLE
To maximize the availability of the IT equipment this aisle is intentionally hot. The
arrangement of the racks and the use of blanking panels prevent the equipment exhaust air
from returning to the equipment air intakes. This decreases equipment operating temperature,
increases equipment life, and saves energy.
Conclusion

The air distribution system is a part of the data center that is not well understood, and
facility operators and IT personnel often take actions involving airflow that have unintended
and adverse consequences for both availability and cost.

Flawed airflow implementation has not been a serious problem in the past, due to the low power
density of data centers. However, recent increases in power density are beginning to test
the capacity of cooling systems and give rise to hot spots and unexpected limitations of
cooling capacity.
Decisions such as facing all racks in the same direction are often made for cosmetic reasons,
to project a particular image; but as users and customers become more educated, they will
conclude that operators who do not implement airflow correctly are inexperienced, which is the
opposite of the original intent.

Adopting a number of simple policies, and providing a simple justification for them, can
achieve alignment between IT and Facilities staff, resulting in maximum availability and
optimized TCO.
About the author
Neil Rasmussen is a Senior VP of Innovation for Schneider Electric. He establishes the
technology direction for the world’s largest R&D budget devoted to power, cooling, and rack
infrastructure for critical networks.
Neil holds 19 patents related to high-efficiency and high-density data center power and cooling
infrastructure, and has published over 50 white papers related to power and cooling systems,
many published in more than 10 languages, most recently with a focus on the improvement of
energy efficiency. He is an internationally recognized keynote speaker on the subject of high-
efficiency data centers. Neil is currently working to advance the science of high-efficiency,
high-density, scalable data center infrastructure solutions and is a principal architect of the APC
InfraStruXure system.
Prior to founding APC in 1981, Neil received his bachelor's and master's degrees in electrical
engineering from MIT, where he did his thesis on the analysis of a 200 MW power supply for a
tokamak fusion reactor. From 1979 to 1981 he worked at MIT Lincoln Laboratories on flywheel
energy storage systems and solar electric power systems.
Resources
Improving Rack Cooling Performance Using Blanking Panels
White Paper 44

Power and Cooling for Ultra-High Density Racks and Blade Servers
White Paper 46

White Paper Library
whitepapers.apc.com

TradeOff Tools™
tools.apc.com
Contact us
For feedback and comments about the content of this white paper:
Data Center Science Center
DCSC@Schneider-Electric.com
If you are a customer and have questions specific to your data center project:
Contact your Schneider Electric representative