Keeping IT cool
MARKET FEATURE
DATA CENTRES
As data centres try to keep pace with rapidly changing usage patterns, so do
their cooling techniques. Industry experts share insights and updates on best
practices and technologies.
By Rajiv Pillai | Features Writer
To state the importance of data centres in the present age of instant connectivity and information dissemination is to state the obvious. As data centres try to keep pace with rapidly changing usage patterns, so do their cooling techniques.
Pierre Havenga, Managing
Director at Emerson Network
Power for the Middle East
and Africa region, gives an
interesting analysis: “If we
look at the telecom industry, a
few years ago, approximately
90% was voice-centric and
10% was data-centric. Now,
it’s approximately 70% data-
centric and 30% voice-centric.
This is mainly driven by
applications on smartphones
and people are spending
more time downloading apps
or text messaging, creating
demand for storage of data.
You don’t record all the voice
communications between
people on cell phones, but
you need to record all the
data. And that’s what’s driving
the need for data centres.” In
light of this, maintaining and cooling data centres have gained primacy.
A whitepaper by Emerson
Network Power, a business of
Emerson, reveals that cooling
systems – comprising cooling
and air movement equipment
– account for 38% of energy
consumption in data centres.[1]
As Havenga puts it simply, “You
have to reject heat from the
data centre; servers generate
heat and heat has to be
rejected.”
Don’t lose your cool
Cooling failure is not an option
for data centres. In Havenga’s
view, the loss of revenue could
amount to millions of dollars
per day if a data centre is
unavailable, with losses varying from industry to industry. “For example, for the
telecom industry, the losses are
quite huge,” he says. However,
that could be the least of the
problems. As Bart Holsters,
Operations Manager at Cofely
Besix Facility Management,
points out, a cooling failure will
result in loss of uptime, with
the servers eventually shutting
down and the electronic
equipment getting damaged.
Mohammad Abusaa, a Business and Project Development Professional with HH Angus and Associates and a veteran of data centre cooling, paints a clear picture of the stakes involved when cooling fails in various sectors:
“The critical nature of cooling
for a data centre can be
understood from the fact that
in many cases, losing cooling
for less than five minutes
could cause the IT equipment
to fail. In some high-density
applications, the time could
be less than two minutes. The
criticality of IT systems’ failure
is gauged by the function of the
data centre. In other words, a
temporary failure of an airport
data centre is certainly much
more critical than the temporary
failure of Twitter’s data centre,
though some may argue that.”
The causes of such cooling failure, he says, can be directly related to the cooling system itself – the failure of pumps, fans or chillers – or, at times, indirectly related to it, as in the case of power outages.
Abusaa elaborates that when
failure occurs in the cooling
system, standby equipment
or paths are brought online
to ensure continuous supply
of cooling to the IT space.
Therefore, attributes like
redundancy and standby should
be factored in at the design
stage of cooling systems. When
failure occurs in the power
supply, an Uninterruptible
Power Supply (UPS) device
connected to the critical parts
of a cooling system – usually
the distribution components –
will maintain the operation of
the cooling distribution network,
while the backup generators
come online, thereby providing
sufficient power to bring the
cooling generation system back
online within minutes of losing
power. In Abusaa’s view, this is
the usual contingency procedure
in case of a cooling failure.
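For readers who want to see that sequence spelt out, here is a minimal sketch in Python of the contingency logic Abusaa describes. The component names, runtimes and start-up times are illustrative assumptions, not figures from any vendor's control system.

```python
# Illustrative sketch of the contingency logic described above.
# Component names, timings and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class CoolingPlant:
    duty_online: bool = True      # duty chillers/CRAC units running
    standby_online: bool = False  # redundant (N+1) units held in reserve

def handle_cooling_fault(plant: CoolingPlant) -> str:
    """Equipment fault: bring the standby path online immediately."""
    plant.duty_online = False
    plant.standby_online = True
    return "Standby cooling path online; supply to the IT space maintained."

def handle_power_outage(ups_runtime_min: float, generator_start_min: float) -> str:
    """Power fault: the UPS keeps the cooling distribution (pumps, fans) alive
    while generators start and restore the cooling generation plant."""
    if generator_start_min <= ups_runtime_min:
        return (f"Generators online in {generator_start_min:.0f} min; "
                f"UPS bridged the gap ({ups_runtime_min:.0f} min available).")
    return "WARNING: UPS bridge is shorter than the generator start time."

if __name__ == "__main__":
    print(handle_cooling_fault(CoolingPlant()))
    print(handle_power_outage(ups_runtime_min=10, generator_start_min=2))
```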
Cooling solutions
Since cooling is imperative for data centres, ASHRAE has defined standards for their cooling requirements, which normally dictate the operating conditions.
Håkan Lenjesson, Market
Area Director at Systemair for
the Middle East and Turkey
region, says that ASHRAE has
been broadening the operating
ranges and also recommending
a very low Power Usage
Effectiveness (PUE). Havenga
adds: “Today, ASHRAE’s
recommended conditions
range from 18 degrees C to
27 degrees C. However, the
allowable range can even
go up to 35-40 degrees C,
depending on the server
technology.”
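As a back-of-the-envelope illustration of how PUE relates to the cooling share quoted earlier, consider the short sketch below. The facility figures are assumptions chosen only so that cooling comes to the 38% share cited in the Emerson whitepaper; they are not measurements from any site mentioned in this article.

```python
# Illustrative PUE calculation (hypothetical numbers, not measured data).
# PUE = total facility energy / IT equipment energy; 1.0 is the theoretical ideal.

def pue(it_mwh: float, cooling_mwh: float, other_mwh: float) -> float:
    """Power Usage Effectiveness from annual energy use by subsystem."""
    total_mwh = it_mwh + cooling_mwh + other_mwh
    return total_mwh / it_mwh

# Assume a facility where IT draws 5,000 MWh/year and cooling accounts for
# roughly 38% of total consumption (the share cited in the whitepaper),
# with the remainder going to power distribution, lighting and ancillaries.
it = 5_000.0        # MWh/year consumed by servers, storage and network
cooling = 3_800.0   # MWh/year consumed by chillers, CRAC units and fans
other = 1_200.0     # MWh/year for UPS losses, lighting and ancillaries

print(f"PUE = {pue(it, cooling, other):.2f}")   # -> PUE = 2.00
```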
With the ranges and
requirements defined, the next
step is to decide on a cooling
solution, as there are several
options available in the market.
Havenga says that most data
centres currently adopt the
traditional direct expansion
technology, which is applicable
everywhere in the world.
“Then there is free cooling,
where you have fresh air
coming directly from outside,”
he points out, and adds, “Or
indirect cooling, where you are
cooling a medium, typically
water. Even further, there are
adiabatic solutions, which
are an enhancement of the
cooling capacity of the chiller.
They increase the free cooling capacity.”
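A simplified way to picture how these options are combined in practice is an economiser control loop that picks a cooling mode from the outdoor air temperature. The sketch below is only illustrative; the setpoint and thresholds are assumptions, not figures given by the interviewees.

```python
# Minimal sketch of free-cooling / DX mode selection, with assumed thresholds.

def select_cooling_mode(outdoor_temp_c: float, supply_setpoint_c: float = 24.0) -> str:
    """Pick a cooling mode from outdoor conditions (illustrative logic only)."""
    if outdoor_temp_c <= supply_setpoint_c - 5:
        return "full free cooling"               # outside air alone can do the job
    if outdoor_temp_c <= supply_setpoint_c:
        return "partial free cooling + DX trim"  # economiser assists mechanical cooling
    return "direct expansion (DX)"               # too warm outside; compressors carry the load

for t in (12.0, 22.0, 38.0):
    print(f"{t:>5.1f} degC outdoors -> {select_cooling_mode(t)}")
```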
Havenga reveals that the
latest technology in the market
is evaporative free cooling.
“This is a way of cooling a
data centre without using a
compressor; so basically, all-
year-round cooling,” he says.
He believes that the main driver
behind this is energy savings,
adding that energy is the single
biggest cost incurred by a data
centre, which has led to various
advancements in technology,
InRow cooling being one of them.
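To give a feel for why evaporative approaches save compressor energy, here is a rough calculation of how much water must evaporate to reject a given amount of server heat. The 1 MW load is an assumption; the latent heat of vaporisation is a standard physical figure.

```python
# Rough arithmetic: water evaporation needed to reject an assumed heat load.
# Evaporating water does the heat-rejection work a compressor would otherwise do.

HEAT_LOAD_KW = 1_000.0            # assumed IT heat load to reject, in kW
LATENT_HEAT_KJ_PER_KG = 2_400.0   # latent heat of vaporisation of water at
                                  # typical cooling-water temperatures (approx.)

evaporation_kg_per_s = HEAT_LOAD_KW / LATENT_HEAT_KJ_PER_KG
evaporation_m3_per_day = evaporation_kg_per_s * 86_400 / 1_000  # 1,000 kg is ~1 m3

print(f"{evaporation_kg_per_s:.2f} kg/s of water evaporated "
      f"(~{evaporation_m3_per_day:.0f} m3/day) rejects {HEAT_LOAD_KW:.0f} kW of heat.")
```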
Figure 1 (Image source: Håkan Lenjesson)
How Google does it
At Google data centres, the company often uses water instead of chillers, as an energy-efficient way to cool…
“Hot Huts”
Google has designed custom cooling systems for
its server racks. The systems are called “Hot Huts”,
because they serve as temporary homes for the hot
air that leaves the servers – sealing it away from the
rest of the data centre floor. Fans on top of each Hot
Hut unit pull hot air from behind the servers through
water-cooled coils. The chilled air leaving the Hot Hut
returns to the ambient air in the data centre, where the servers can draw the chilled air in to cool themselves and complete the cycle.
Evaporative cooling
As hot water from the data centre flows down the
towers through a material that speeds evaporation,
some of the water turns to vapour. A fan lifts this
vapour, removing the excess heat in the process, and
the tower sends the cooled water back into the data
centre.
Using seawater
Google’s facility in Hamina, Finland, uses seawater
to cool without chillers. The company has chosen
Hamina for its cold climate and its location on the
Gulf of Finland. The cooling system pumps cold water
from the sea to the facility, transfers heat from the
operations to the seawater through a heat exchanger,
and, then, cools this water before returning it to the
Gulf. Since this approach provides all the needed
cooling year round, Google claims to not have installed
any mechanical chillers.
(Information source: https://www.google.ae/about/datacenters/efficiency/internal/#water-and-cooling)
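To illustrate the order of magnitude behind the seawater approach described in the sidebar above, a rough sensible-heat calculation shows the seawater flow required. The heat load and allowable temperature rise are assumptions; only the specific heat of water is a standard figure.

```python
# Rough sizing sketch for heat rejection to seawater (load figures assumed).
# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)

HEAT_LOAD_KW = 1_000.0        # assumed IT heat load, kW
CP_WATER_KJ_PER_KG_K = 4.19   # specific heat of water
DELTA_T_K = 8.0               # assumed allowable seawater temperature rise

flow_kg_per_s = HEAT_LOAD_KW / (CP_WATER_KJ_PER_KG_K * DELTA_T_K)
flow_l_per_s = flow_kg_per_s  # 1 kg of water is roughly 1 litre

print(f"Rejecting {HEAT_LOAD_KW:.0f} kW with an {DELTA_T_K:.0f} K rise "
      f"needs roughly {flow_l_per_s:.0f} L/s of seawater.")
```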
He stresses that capacity
management is related to
the phasing of data centre
construction, expansion or
phasing out of IT loads within
the data centre, variation in
the cooling load profile within
the day/month/year and
variation in the cooling load
requirements within the same
data hall or even at the server
rack level.
The other challenges are
maintenance-related. Abusaa
stresses that it is crucial to
understand that a facility that
is running 24/7 and has strict
security access regulations
will have its design and
operation challenges when
it comes to maintenance. He
says, “For example, while
having a Computer Room Air
Conditioning Unit (CRAC)
installed in a specific location
is the most efficient solution, for
security and access reasons,
the CRAC unit might need to
be relocated to ensure that
the maintenance personnel
do not have access to the IT
equipment, as there might
be a possibility of accidently
damaging the IT equipment
while maintaining the CRAC
unit.”
Combating ‘waste’ heat
“IT, data centres and server equipment consume electricity and emit heat as a ‘waste product’,” says Ziad Youssef, Vice President of IT Business - UAE, Gulf Countries at Schneider Electric. “In such an enclosed environment with sensitive technology, the heat can be damaging.” As data centres experience exponential growth in the region, he believes that new solutions to curtail the simultaneous rise in energy costs will be essential. “One such solution is cooling – which is critical to the smooth functioning of a data centre, and to the maintenance of hardware carrying mission-critical enterprise data,” he says.

And then, there is always the issue related to humidity (See Figure 1) and air quality.
“Similar to hospitals and other critical facilities, maintaining control of the Indoor Air Quality to avoid contamination is crucial,” Abusaa reveals, and adds, “This is not only achieved through filtration but also through the design of data centres and operation and maintenance guidelines.” He is emphatic that filtration, dehumidification, access control and other practices should address the air-quality issue.
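As a final illustration, the kind of monitoring this implies can be pictured as a simple check of supply-air conditions against an operating envelope. The temperature band below is the ASHRAE recommended range quoted earlier in this article; the relative-humidity band is purely an assumed placeholder, not an ASHRAE figure.

```python
# Illustrative supply-air envelope check. The 18-27 degC band is the ASHRAE
# recommended range quoted earlier; the humidity band is an assumption.

def in_envelope(temp_c: float, rh_percent: float,
                temp_band=(18.0, 27.0), rh_band=(30.0, 60.0)) -> bool:
    """Return True if supply air sits inside the assumed operating envelope."""
    return (temp_band[0] <= temp_c <= temp_band[1]
            and rh_band[0] <= rh_percent <= rh_band[1])

readings = [(22.5, 45.0), (29.0, 50.0), (24.0, 70.0)]  # (degC, %RH) sample readings
for temp, rh in readings:
    status = "OK" if in_envelope(temp, rh) else "ALERT"
    print(f"{temp:.1f} degC / {rh:.0f}% RH -> {status}")
```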
Reference
1. http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/enterprise-data-center_24622.pdf