SlideShare a Scribd company logo
1 of 8
Download to read offline
BY BILL KOSIK, PE, CEM, BEMP, LEED AP BD+C, HP Data Center Facilities Consulting, Chicago
M
ission critical facilities
support a wide vari-
ety of vital operations
where facility failure
will result in compli-
cations that range from serious disrup-
tions to business operations, to circum-
stances that can jeopardize life safety
of the general public. To minimize or
eliminate the chance of facility system
failure, mission critical facilities have
three hallmarks that make them dif-
ferent from other type of commercial
buildings:
1. The facility must support opera-
tions that run continuously without
shutdowns due to equipment failure or
maintenance. Seasonal or population
changes within the facility have a small
impact on the energy use profile; gen-
erally, the facility is internally loaded
with heavy electrical consumption.
2. Redundant power and cooling
systems are required to support the
24/7/365 operation. Depending on the
level of redundancy, there will be addi-
tional efficiency losses in the power and
cooling systems brought on by running
the equipment at small percentages of
the capacity.
3. The technical equipment used in
the facility, such as computers; medical
and laboratory equipment; and monitor-
ing , communications, and surveillance
systems, will have high power require-
ments that translate into heat gain and
energy use.
Putting these hallmarks together,
mission critical facilities need to run
continuously, providing less efficient
power and cooling to technical equip-
ment that has very high electrical
requirements, all without failure or
impacts from standard maintenance
procedures. This is why energy use (and
ways to reduce it) in mission critical
facilities has been, and will continue
to be, of great concern. This is true
whether the mission critical facility is a
laboratory, hospital, data center, police/
fire station, or another type of essential
operation.
And due to constant advances in
the design of technical equipment, the
strategies and tactics used for reduc-
ing facility energy consumption need
to anticipate how future changes will
impact building design, codes, stan-
dards, and other guidelines. Fortu-
nately, the technical equipment will
generally become more energy-efficient
over time with improvements in design.
This can reduce facility energy use in
two ways: the equipment will use less
Mission critical facilities, such as data centers, are judged carefully on
their energy use. Engineers should focus on the codes and standards
that dictate energy performance and how building energy performance
can be enhanced.
Energy performance
in mission critical facilities
32 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com
Learning
objectives
Ⅲ Understand the various
ways to measure energy use
in mission critical facilities.
Ⅲ Learn about the codes and
standards that dictate energy
performance.
Ⅲ Learn about the codes,
standards, and organizations
that govern energy perfor-
mance.
energy, and the energy of the power and
cooling systems will also decrease.
Data centers are one segment of the
mission critical facility industry that
arguably see the highest rate of change
in how the facilities are designed, pri-
marily based on the requirements of
technical equipment, servers, storage
devices, and networking gear. Data
centers will have the highest concen-
tration of technical equipment on a sq
ft or percentage of total power demand
as compared to other mission critical
facilities. A change in the specifica-
tions or operating conditions of the
computers in a data center facility will
have a ripple effect that runs through
all aspects of the power and cooling
systems (see Figure 1). Moreover, IT
equipment manufacturers are develop-
ing next generation technology that can
significantly reduce overall energy use
and environmental impact of data cen-
ters. This is a good thing, but with it
brings new design challenges that need
to be addressed in codes, standards, and
guidelines.
For data centers and the broader
range of commercial buildings, there
are myriad programs, guidelines, and
codes intended to keep energy use as
low as possible. Publications from
ASHRAE, Lawrence Berkeley Nation-
al Laboratory, U.S. Green Building
Council, and the U.S. Environmental
Protection Agency are good examples
of technical but practical resources aid-
ing in data center strategy.
But how did all of these come about?
To understand the path forward, it is
equally important to know how we
got here. Similar to the rapid evolu-
tion of power and cooling systems in
data centers, many of thedocuments
released by these groups were devel-
oped in response by changes and new
thinking in the data center design and
construction industry.
Energy-efficiency programs
for buildings
In the United States, one of the first
programs developed by the federal gov-
ernment that spawned several broader
energy efficiency initiatives is the 1977
U.S. National Energy Plan. This was
developed as a blueprint identifying
energy efficiency as a priority because
“conservation is the quickest, cheapest,
most practical source of energy.” This
plan became the basis for many other
building energy use reduction programs
that would typically start out at the fed-
eral level and eventually trickle down
to state and local government.
During this time, one of the most
widely used building efficiency stan-
dards was published for the first time:
ASHRAE Standard 90-1975: Energy
Conservation in New Building Design.
Because no comprehensive national
standard existed at the time, this was
the first opportunity for many architects
and engineers to objectively calculate
the energy costs of their designs and
to increase energy efficiency. Since its
initial release, the standard has been
renamed ASHRAE Standard 90.1:
Energy Standard for Buildings Except
Low-Rise Residential Buildings and
has been put on a 3-year maintenance
33www.csemag.com Consulting-Specifying Engineer • MARCH 2015
Figure 1: Using IT equipment that can run in an environment with 26 C supply air (top) enables the use of different cooling tech-
nology than IT equipment that runs with 20 C supply air. This allows for a 15% reduction in HVAC system energy use. All graphics
courtesy: HP Data Center Facilities Consulting
34 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com
cycle. For example, the 2013 edition
of Standard 90.1 improves minimum
energy efficiency by approximately
37% from the 2004 edition for regulated
loads. It is typical that each new release
of the standard will contain significant
energy-efficiency requirements.
With the proliferation of communica-
tions and computing technology at the
end of the 20th century, building codes
and standards, especially Standard 90.1,
needed to reflect how technology was
impacting building design, especially
power, cooling, control, and commu-
nication systems. Changes in power
density for high-technology commer-
cial buildings began to create situations
that made it difficult for certain building
designs to meet the Standard 90.1 mini-
mum energy use requirements. Also,
when following the prescriptive mea-
sures in Standard 90.1, the results show
that the energy saved by better wall and
roof insulation, glazing technology, and
lighting is a small fraction of the energy
consumption of computers and other
technical equipment.
Without adapting the standards to
reflect how data center facilities and IT
equipment are evolving, it would become
increasingly difficult to judge the effi-
ciency of data center facilities against
the standard. But without addressing the
operation and energy consumption of the
computers themselves, an opportunity to
develop a holistic, optimal energy use
strategy for the data center would be
lost. The engineering community and the
IT manufacturers, backed up by publicly
reviewed, industry-accepted standards
and guidelines, needed to take a promi-
nent role in attacking this challenge.
ASHRAE 90.1 language
It is interesting to study how the
ASHRAE 90.1 standards issued in 2001
dealt with high electrical density equip-
ment, such as what is typically seen in a
data center. Keep in mind that around the
beginning of the decade in 2000, high-
end corporate servers consisted of a sin-
gle 33-MHz 386 CPU, 4 MB RAM, and
two 120 MB hard drives and were scat-
tered about in offices where they were
needed, a far cry from the state-of-the-art.
If needed, mainframe computers would
reside in a separate data processing room.
Overall, the electrical intensity of the
computer equipment was far less than
what is commonly seen today in large
corporate enterprises. The language in
Standard 90.1 at that time talked about
“computer server rooms” and was writ-
ten specifically to exclude the computer
equipment from the energy-efficiency
requirements, rather than stipulating
requirements to make things more effi-
cient. The exclusions dealt primarily with
humidification and how to define base-
line HVAC systems used in comparing
energy use to the proposed design. At
that time, the generally held beliefs were
the computer systems were very suscep-
tible to failure if exposed to improper
environmental conditions and therefore
should not have to meet certain parts of
the standard that could result in a delete-
rious situation.
Knowing this, data center industry
groups were already developing energy
efficiency and environmental operating
guidelines. And as the use of computers
continued to increase and centralized data
centers were beginning to show up in
increasing numbers of building designs, it
was necessary that ASHRAE play a more
important role in this process
New language for data centers
With the release of ASHRAE Stan-
dard 90.1-2007, based on input from the
the data center community, including
ASHRAE’s TC9.9 for Mission Critical
Facilities, data centers could no longer
be treated as an exception in the energy
standard. There were several proposed
amendments to Standard 90.1-2007
that included specific language, but it
wouldn’t be until the release of Standard
90.1-2010 where data center-specific lan-
guage was used in the standard. The sec-
tions in the standard relating to data cen-
ters took another big leap forward with
the release of the 2013 edition, which
contains specific energy performance
requirements for data centers, including
the ability to use power usage effective-
ness (PUE) as a measure of conformity
with the standard.
Standard 90.1 certainly has come a
long way, but, as expected in the technol-
ogy realm, computers continue to evolve
and change the way they impact on the
built environment. This includes many
aspects of a building design, includ-
ing overall facility size, construction
type, and electrical distribution system
and cooling techniques. This places an
unprecedented demand on developing
Energy performance
Figure 2: The ASHRAE thermal classes are plotted on a psychrometric chart.
35www.csemag.com Consulting-Specifying Engineer • MARCH 2015
timely, relevant building energy codes,
standards and guidelines because, as his-
tory has shown, a lot of change can occur
in a short amount of time. And because
the work to develop a standard needs
to be concluded well before the formal
release of the document, the unfortunate
reality is that portions of the document
will already be out of date when released.
Synergy in energy use efficiency
In the past decade, many of the manu-
facturers of power and cooling equipment
have created product lines designed spe-
cifically for use in data centers. Some of
this equipment has evolved from existing
lines, and some has been developed from
the ground up. Either way, the major man-
ufacturers understand that the character-
istics of a data center require specialized
equipment and product solutions. Within
this niche there are a number of novel
approaches that show potential based on
actual installed performance and market
acceptance. The thermal requirements of
the computers have really been the cata-
lyst for developing many of these novel
approaches; state-of-the-art data centers
have IT equipment (mainly servers) with
inlet temperature requirements of 75 to
80 F and higher. (The ASHRAE Thermal
Guideline classes of inlet temperatures
go as high as 113 F.) This has enabled
designs for compressorless cooling, rely-
ing solely on cooling from outside air- or
water-cooled systems using heat rejec-
tion devices (cooling towers, dry cool-
ers, close-circuit coolers, etc.). Even in
climates with temperature extremes that
go beyond the temperature requirements,
owners are taking a calculated risk and
not installing compressorized cooling
equipment based on the large first-cost
reduction (see Figure 2).
How are these high inlet tempera-
tures being used to reduce overall
energy use and improve operations? A
small sampling:
Ⅲ Depending on the type of computing
equipment, during stretches of above-
normal temperatures, the computer pro-
cessor can be slowed down intentionally,
effectively reducing the heat output of the
computers and lessening the overall cool-
ing load of the data center. This allows the
facility to be designed around high inlet
temperatures and also provides an added
level of protection if outside temperatures
go beyond what is predicted. This strat-
egy really demonstrates the power of how
interconnected facility and IT systems can
provide feedback and feed forward to each
other to achieve an operational goal.
Ⅲ Cooling technologies such as immer-
sion cooling are fundamentally different
from most data center cooling systems. In
this application, the servers are complete-
ly immersed in a large tank of a mineral
oil-like solution, keeping the entire com-
Figure 3: Using refrigerant-free cooling systems, the compressor power is reduced
as the temperature drops. The free cooling pump will run generally when the com-
pressors are off.
Some of the seminal events that acted as catalysts to jump-start energy
efficiency improvements in buildings,both residential and commercial,
stem from incidents that happened far from the shores of the United States.
As a result, federal and state governments (and the general public) were
exposed firsthand to the consequences of unstable worldwide energy sup-
plies. Arguably the most infamous example of this hit the United States in
1973. And it hit hard.
The 1973 oil crisis started when the members of the Organization ofArab
Petroleum Exporting Countries (OAPEC) started an oil embargo in response
to world political events. Six months later, the prices of oil imported into
the U.S. rose from $3 per barrel to nearly $12. In addition to massive cost
increases for gasoline and heating oil, this event brought on a decade of
high inflation where prices of energy and various material commodities
rose greatly, triggering fears of an era of resource scarcity with economic,
political, and security stresses. From 1973 to 1974, residential fuel oil rose
from $0.75/million Btu to $1.82/million Btu, a 143% increase. Electricity
costs also spiked: from $5.86/million Btu in 1973 to $7.42/million Btu in
1974. This was a 27% increase in electricity cost in just 1 year.
The 1973 oil crisis is not the only tumultuous event that has threatened
energy supplies in the U.S., but this particular event sparked the greatest
debate on energy efficiency in the built environment in the U.S. to date.
Also, during this time the unsafe levels of water- and air-borne pollution
attributedtotheextractionandproductionofenergyweremakingheadlines,
putting pressure on private industry and government to develop laws that
would protect the welfare of U.S. citizens, and guarantee a cost-effective
and secure source of energy. These programs became part of a greater
effort, which included the industrial sector, appliances, electronics, and
electricity generation.
What the 1970s oil crisis taught us
36 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com
puter, inside and outside, at a consistent
temperature. This approach has a distinct
advantage: It reduces the facility cool-
ing system energy by using liquid cool-
ing and heat-rejection devices only (no
compressors), and it reduces the energy
of the servers as well. Since the servers
are totally immersed, the internal cooling
fans are not needed and the energy used
in powering these fans is eliminated.
Ⅲ Manufacturers also have developed
methods to apply refrigerant phase-
change technology to data center cooling
that, with certain evaporating/condensing
temperatures, does not require any pumps
or compressors, offering a large reduc-
tion in energy use as compared to the
ASHRAE 90.1 minimum energy require-
ments. Other refrigerant-based systems
can be used with economizer cycles using
the refrigerant as the free-cooling medium
(see Figure 3).
Ⅲ Cooling high-density server cabinets
(>30 kW) poses a challenge due the large
intensive electrical load. One solution to
cool such server cabinets is to provide a
close-coupled system using fans and a
cooling coil on a one-to-one basis with
the cabinet. In addition to using water and
refrigerants R134A, R407C, and R410
in close-coupled installations, refriger-
ant R744, also known as carbon dioxide
(CO2
), is also being employed. CO2
cool-
ing is used extensively in industrial and
commercial refrigeration due to its low
toxicity and efficient heat absorption.
Also, the CO2
can be pumped or oper-
ated in a thermo-syphon arrangement.
Trends in energy use, performance
When we talk about reducing energy
use in data centers, we need to have a
two-part discussion focusing on energy
use from the computer itself (processor,
memory, storage, internal cooling fans)
and from the cooling and power equip-
ment required to keep the computer run-
ning. One way to calculate the energy use
of the entire data center operation is to
imagine a boundary that surrounds both
the IT equipment and the power/cooling
systems, both inside and outside the data
center proper. Inside this boundary are
systems that support the data center, as
well as others that support the areas of
the facility that keep the data center run-
ning, such as control rooms, infrastruc-
ture spaces, mechanical rooms, and other
technical rooms. After these systems are
identified, it is easier to categorize and
develop strategies to reduce the energy
use of the individual power and cooling
systems within the boundary.
Take the total of this annual energy use
(in kWh), add it to the annual energy use
of the IT equipment, and then divide this
total by the annual energy use of the IT
systems (see Figure 4). This is the defi-
nition of PUE, which was developed by
The Green Grid a number of years ago.
But there is one big caveat: PUE does not
address scenarios where the IT equipment
energy use is reduced below a predeter-
mined minimum energy performance.
PUE is a metric that focuses on the facil-
ity energy use, and treats the IT equipment
energy use as a static value unchangeable
by the facilities team. This is a heavily
debated topic because using PUE could
create a disincentive to reduce the IT
energy. In any event, the goal of an over-
all energy-reduction strategy must include
both the facility and IT equipment.
To demonstrate exemplary perfor-
mance and to reap the energy-savings
benefits that come from the synergistic
relationship between the IT and facility
systems, the efficiency of the servers,
storage devices, and networking gear can
be judged against established industry
benchmarks. Unfortunately, this is not a
straightforward (or standardized) exer-
cise in view of the highly varying busi-
ness models that drive how the IT equip-
Energy performance
Figure 4: Power usage effectiveness (PUE) is the industry standard for benchmarking
data center energy use, according to data from The Green Grid.
Mission critical facilities are broadly defined as containing any
operation that, if interrupted, will cause a negative impact on
business activities, ranging from losing revenue to jeopardizing legal
conformity to, in extreme cases, loss of life. Data centers, hospitals, labo-
ratories, public safety centers, and military installations are just a few
of the many types of buildings that could be considered mission critical.
While there are several formal codes and standards, such as NFPA
70: National Electric Code, various hospital administrative codes and a
presidential directive set up to guard against failure of critical infrastructure
in the United States, there is no uniform definition of a mission critical
facility.But to maintain continuous operation of the facility and the internal
processes taking place, redundant power and cooling systems must be
present in varying degrees of reliability.
The redundant systems, regardless of the type of mission critical facil-
ity, will cause energy use inefficiencies to some degree. Using multiple
paths of power, cooling, and ventilation distribution will likely result in
less efficient operation of fans, pumps, chillers, transformers, and more.
This is not always true, but it certainly poses challenges to determining
the most effective way to run redundant systems— especially when
each distribution path will likely contain multiple sensors, actuators, and
other safety devices.
Many codes acknowledge that systems that support life safety and
guard against hazards will be exempt from requirements that apply to
noncritical power and cooling systems. However, sometimes it is not
apparent where the boundary lies between mission critical and non-
mission critical.
What defines a mission critical facility?
PUE =
⌺ power delivered to data center
=
P
mechanical + P
electrical + P
other
⌺ IT equipment power use P
IT
37www.csemag.com Consulting-Specifying Engineer • MARCH 2015
ment will operate, and the application of
strategies such as virtualized servers and
workload shifting.
To illustrate how energy can be
reduced beyond what a standard enter-
prise server will consume, some next-
generation enterprise servers will have
multiple chassis, each housing very small
yet powerful high-density cartridge com-
puters, with each server chassis capable of
containing close to 200 servers. Arrange-
ments like this can have similar power
use profiles to the previous generation,
but by using more effective components
(processor, memory, graphics card, etc.)
and sophisticated power use manage-
ment algorithms, comparing the comput-
ing work output with the electrical power
input demonstrates that these computers
have faster processing speeds and use
higher performing memory and graphics
cards, yet use less energy than the previ-
ous generation. But this is not an anomaly
or a one-off situation. For example, study-
ing the trends of supercomputers over the
past two decades, it is evident that these
computers are also on the same path of
making the newest generation of comput-
ers more efficient than the previous. As an
example, in the last 5 years alone, the met-
ric of megaFLOPS per kW, the “miles per
gallon” for the high-performance comput-
ing world, has increased 4.6 times while
the power has increased only 2.3 times
(see Figure 5).
The progression of computers
It is important to understand that many
of the high-performance computing sys-
tems that are at the top of their class are
direct water-cooled. Using water at higher
temperatures will reduce (or eliminate) the
compressor energy in the central cooling
plant. Using direct water-cooling also
allows more efficient processor, graphics
card, and memory performance by keep-
ing the internal temperatures more stable
and consistent as compared to air-cooling
where temperatures within the server
enclosure may not be even due to chang-
es in airflow through the server. As more
higher-end corporate servers move toward
water-cooling, areas in the energy codes
that discuss air-handling
fan motor power will have
to be reevaluated because
a much smaller portion
of the data center will be
cooled by air, creating a
significant reduction in fan
motor power. Fan power
limitations and strategies
for reducing energy use
certainly will still apply,
but they will make a much
smaller contribution to the
overall consumption.
Historically, one of the
weak points in enterprise
server energy use was the
turndown ratio. This com-
pares electrical power draw
to IT workload. It used to
be that an idle server, with
no workload, would draw
close to 50% of its maxi-
mum power just sitting in
an idle state. Knowing that
in most instances servers would be idle
or running at very low workloads, a huge
amount of energy was being used with-
out producing any computing output. As
server virtualization became more preva-
lent (which increased the minimum work-
loads by running several virtualized serv-
ers on one physical server), the situation
improved. But it was still clear that there
was a lot of room for improvement and
the turndown ratio had to be improved.
The result is today’s server technology
allows for a much closer matching of
actual computer workload to the electri-
cal power input (see Figure 6).
There is movement in the IT industry
to create the next wave of computers,
ones that are designed with a completely
new approach and using components that
are currently mostly in laboratories in
various stages of development. The most
innovative computing platforms in use
Figure 5: Since 2005, the power for the world’s top
supercomputers has increased tenfold (kW curve)
while the performance has increased over 140 times.
Even though the computers used in this dataset are
usually purpose-built, extremely powerful computers,
this type of performance is indicative of where enter-
prise servers are headed.
Figure 6: As server power management has become more sophisticated, the ratio of
power at idle (no workload) compared to full power has decreased by more than 50%
since 2007. This will result in a more optimized data center energy use strategy.
38 Consulting-Specifying Engineer • MARCH 2015
today, even ones that have advanced designs enabling extreme
high-performance while significantly reducing energy use,
use the same types of fundamental building blocks that have
been used for decades. From a data center facilities standpoint,
whether air or water is used for the cooling medium, as long as
the computer maintains the same fundamental design, the same
cooling and power strategies will remain as they are today,
allowing for only incremental efficiency improvements. And
even as the densities of the servers become greater (increasing
power draw per data center area), the same approximate data
center size is required, albeit with reductions in the computer
room due to the high-density as compared with a lower density
application.
But what if an entirely new approach to designing comput-
ers comes about? And what if this new approach dramatically
changes how we design data centers? Processing the torrent
of data and using it to create meaningful business results will
continue to push the electrical capacity in the data center
needed to power IT equipment. And, as we’ve seen over the
past decade, the pressure of the IT industry’s energy use may
force energy-efficiency trade-offs that result in a sub-optimal
outcome vis-a-vis balancing IT capacity, energy source, and
total cost of ownership. While no one can predict when this
tipping point will come or when big data will reach the limit
of available capacity, the industry must find ways to improve
efficiency, or it will face curtailed growth. These improve-
ments have to be made using a holistic process, including all
of the constituents that have a vested interest in a continued
energy and cost-aware growth of the IT industry.
The bottom line: In the next few years the data center design
and construction industry will have to continue to be an active
member in the evolution of IT equipment and will need to
come up with creative design solutions for revising codes and
standards, such as ASHRAE 90.1, making sure there is a clear
understanding of the ramifications of the IT equipment to the
data center facility. As developments in computing technology
research begin to manifest into commercially available prod-
ucts, it is likely that the most advanced computing platforms
won’t immediately replace standard servers; a specific type
of workload, such as very big data or real-time analytics will
require a new type of computing architecture. And even though
this technology is still in the development phase, it gives us
a good indication that a breakthrough in server technology is
coming in the near future. And this technology could rewrite
today’s standards for data center energy efficiency.
Bill Kosik is a distinguished technologist at HP Data Center
Facilities Consulting. He is the leader of “Moving toward Sus-
tainability,” which focuses on the research, development, and
implementation of energy-efficient and environmentally responsible
design strategies for data centers. Kosik collaborates with clients,
developing innovative design strategies for cooling high-density
environments, and creating scalable cooling and power models. He
is a member of the Consulting-Specifying Engineer advisory board.
Energy performance
BITZER US, Inc. // www.bitzerus.com // +1 (770) 718-2900
ECOLINE VARISPEED
ECOLINE ECOLINE VARISPEED TRANSCRITICAL CO2
In refrigeration and air conditioning systems, energy efficiency and environmental protection play an
increasingly decisive role. Innovative reciprocating compressors from the market leader will prepare you today
for the demands of tomorrow. Whether it is natural refrigerants, precise capacity control or optimized
compressor technology, with BITZER you will find the best eco-efficient solution for every application.
input #14 at www.csemag.com/information
stay
informed
Stay current
with technology and trends
in electrical, mechanical,
lighting, and fire/life safety.
To subscribe, visit
www.csemag.com/subscribe
cse_stayInformed_QTR.indd 1 11/7/2013 12:01:55 PM
Copyright of Consulting-Specifying Engineer is the property of CFE Media and its content
may not be copied or emailed to multiple sites or posted to a listserv without the copyright
holder's express written permission. However, users may print, download, or email articles for
individual use.

More Related Content

What's hot

How green standards are changing data center design and operations
How green standards are changing data center design and operationsHow green standards are changing data center design and operations
How green standards are changing data center design and operationsSchneider Electric
 
Data center design 2
Data center design 2Data center design 2
Data center design 2Aditya Gupta
 
The smart grid - Supply and demand side equivalent solutions
The smart grid - Supply and demand side equivalent solutionsThe smart grid - Supply and demand side equivalent solutions
The smart grid - Supply and demand side equivalent solutionsSchneider Electric
 
The impact of power management on building performance and energy costs
The impact of power management on building performance and energy costsThe impact of power management on building performance and energy costs
The impact of power management on building performance and energy costsBassam Gomaa
 
Energy solutions for federal facilities : How to harness sustainable savings ...
Energy solutions for federal facilities : How to harness sustainable savings ...Energy solutions for federal facilities : How to harness sustainable savings ...
Energy solutions for federal facilities : How to harness sustainable savings ...Schneider Electric
 
Extending the Life of Existing Switchgear
Extending the Life of Existing SwitchgearExtending the Life of Existing Switchgear
Extending the Life of Existing SwitchgearSchneider Electric
 
DataCenter:: Infrastructure Presentation
DataCenter:: Infrastructure PresentationDataCenter:: Infrastructure Presentation
DataCenter:: Infrastructure PresentationMuhammad Asad Rashid
 
[Webinar Presentation] Best Practices for IT/OT Convergence
[Webinar Presentation] Best Practices for IT/OT Convergence[Webinar Presentation] Best Practices for IT/OT Convergence
[Webinar Presentation] Best Practices for IT/OT ConvergenceSchneider Electric
 
Implementing energy efficient data centers
Implementing energy efficient  data centersImplementing energy efficient  data centers
Implementing energy efficient data centersSchneider Electric India
 
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...Schneider Electric
 
ComEd Retro-Commissioning Program
ComEd Retro-Commissioning ProgramComEd Retro-Commissioning Program
ComEd Retro-Commissioning ProgramIllinois ASHRAE
 
How green standards are changing data center design and operations
How green standards are changing data center design and operationsHow green standards are changing data center design and operations
How green standards are changing data center design and operationsSchneider Electric
 
Types of Prefabricated Modular Data Centers
Types of Prefabricated Modular Data CentersTypes of Prefabricated Modular Data Centers
Types of Prefabricated Modular Data CentersSchneider Electric
 
Lime Energy Banking Brochure
Lime Energy Banking BrochureLime Energy Banking Brochure
Lime Energy Banking Brochurelimeenergysurveys
 
Optimizing machine, line, and process efficiency in manufacturing operations
Optimizing machine, line, and process efficiency in manufacturing operationsOptimizing machine, line, and process efficiency in manufacturing operations
Optimizing machine, line, and process efficiency in manufacturing operationsSchneider Electric
 
Designing a metering system for small and mid sized buildings
Designing a metering system for small and mid sized buildingsDesigning a metering system for small and mid sized buildings
Designing a metering system for small and mid sized buildingsSchneider Electric India
 
Retrofit, build, or go cloud/colo? Choosing your best direction
Retrofit, build, or go cloud/colo?  Choosing your best directionRetrofit, build, or go cloud/colo?  Choosing your best direction
Retrofit, build, or go cloud/colo? Choosing your best directionSchneider Electric
 
Data Center Decisions: Build versus Buy
Data Center Decisions: Build versus BuyData Center Decisions: Build versus Buy
Data Center Decisions: Build versus BuyVISI
 

What's hot (20)

How green standards are changing data center design and operations
How green standards are changing data center design and operationsHow green standards are changing data center design and operations
How green standards are changing data center design and operations
 
Office-building-catalogue-India
Office-building-catalogue-IndiaOffice-building-catalogue-India
Office-building-catalogue-India
 
Data center design 2
Data center design 2Data center design 2
Data center design 2
 
DCD_FOCUS on POWER Article
DCD_FOCUS on POWER ArticleDCD_FOCUS on POWER Article
DCD_FOCUS on POWER Article
 
The smart grid - Supply and demand side equivalent solutions
The smart grid - Supply and demand side equivalent solutionsThe smart grid - Supply and demand side equivalent solutions
The smart grid - Supply and demand side equivalent solutions
 
The impact of power management on building performance and energy costs
The impact of power management on building performance and energy costsThe impact of power management on building performance and energy costs
The impact of power management on building performance and energy costs
 
Energy solutions for federal facilities : How to harness sustainable savings ...
Energy solutions for federal facilities : How to harness sustainable savings ...Energy solutions for federal facilities : How to harness sustainable savings ...
Energy solutions for federal facilities : How to harness sustainable savings ...
 
Extending the Life of Existing Switchgear
Extending the Life of Existing SwitchgearExtending the Life of Existing Switchgear
Extending the Life of Existing Switchgear
 
DataCenter:: Infrastructure Presentation
DataCenter:: Infrastructure PresentationDataCenter:: Infrastructure Presentation
DataCenter:: Infrastructure Presentation
 
[Webinar Presentation] Best Practices for IT/OT Convergence
[Webinar Presentation] Best Practices for IT/OT Convergence[Webinar Presentation] Best Practices for IT/OT Convergence
[Webinar Presentation] Best Practices for IT/OT Convergence
 
Implementing energy efficient data centers
Implementing energy efficient  data centersImplementing energy efficient  data centers
Implementing energy efficient data centers
 
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...
[Oil & Gas White Paper] Best Practices Support Success in the Open Natural Ga...
 
ComEd Retro-Commissioning Program
ComEd Retro-Commissioning ProgramComEd Retro-Commissioning Program
ComEd Retro-Commissioning Program
 
How green standards are changing data center design and operations
How green standards are changing data center design and operationsHow green standards are changing data center design and operations
How green standards are changing data center design and operations
 
Types of Prefabricated Modular Data Centers
Types of Prefabricated Modular Data CentersTypes of Prefabricated Modular Data Centers
Types of Prefabricated Modular Data Centers
 
Lime Energy Banking Brochure
Lime Energy Banking BrochureLime Energy Banking Brochure
Lime Energy Banking Brochure
 
Optimizing machine, line, and process efficiency in manufacturing operations
Optimizing machine, line, and process efficiency in manufacturing operationsOptimizing machine, line, and process efficiency in manufacturing operations
Optimizing machine, line, and process efficiency in manufacturing operations
 
Designing a metering system for small and mid sized buildings
Designing a metering system for small and mid sized buildingsDesigning a metering system for small and mid sized buildings
Designing a metering system for small and mid sized buildings
 
Retrofit, build, or go cloud/colo? Choosing your best direction
Retrofit, build, or go cloud/colo?  Choosing your best directionRetrofit, build, or go cloud/colo?  Choosing your best direction
Retrofit, build, or go cloud/colo? Choosing your best direction
 
Data Center Decisions: Build versus Buy
Data Center Decisions: Build versus BuyData Center Decisions: Build versus Buy
Data Center Decisions: Build versus Buy
 

Similar to Energy performance in mission critical facilities

COMMON PROBLEMS AND CHALLENGES IN DATA CENTRES
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRESCOMMON PROBLEMS AND CHALLENGES IN DATA CENTRES
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRESKamran Hassan
 
Express Computer
Express ComputerExpress Computer
Express ComputerAniket Patange
 
Commercial Overview DC Session 4 Introduction To Energy In The Data Centre
Commercial Overview   DC Session 4   Introduction To Energy In The Data CentreCommercial Overview   DC Session 4   Introduction To Energy In The Data Centre
Commercial Overview DC Session 4 Introduction To Energy In The Data Centrepaul_mathews
 
Improvements in Data Center Management
Improvements in Data Center ManagementImprovements in Data Center Management
Improvements in Data Center ManagementScottMadden, Inc.
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul pptRAHUL KAUSHAL
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSDaniel Bodenski
 
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationCompu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationDan Ephraim
 
Full chapter in a single perfect format 2
Full chapter in a single perfect format 2Full chapter in a single perfect format 2
Full chapter in a single perfect format 2Snehasis Panigrahi
 
Data center m&e
Data center  m&eData center  m&e
Data center m&eJebaraj D.A.L
 
Optimizing The Data Centre Environment
Optimizing The Data Centre EnvironmentOptimizing The Data Centre Environment
Optimizing The Data Centre Environmentmixalisg
 
The next wave of GreenIT
The next wave of GreenITThe next wave of GreenIT
The next wave of GreenITMadhu Selvarangam
 
VET4SBO Level 3 module 1 - unit 4 - 0.009 en
VET4SBO Level 3   module 1 - unit 4 - 0.009 enVET4SBO Level 3   module 1 - unit 4 - 0.009 en
VET4SBO Level 3 module 1 - unit 4 - 0.009 enKarel Van Isacker
 
Aged Data Center Infrastructure.pptx
Aged Data Center Infrastructure.pptxAged Data Center Infrastructure.pptx
Aged Data Center Infrastructure.pptxSchneider Electric
 
Google ppt. mis
Google ppt. misGoogle ppt. mis
Google ppt. misjayadevans89
 
Advantages of a modern DCS in coal handling
Advantages of a modern DCS in coal handlingAdvantages of a modern DCS in coal handling
Advantages of a modern DCS in coal handlingSchneider Electric
 
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...Knurr USA
 
Datacenter Strategy, Design, and Build
Datacenter Strategy, Design, and BuildDatacenter Strategy, Design, and Build
Datacenter Strategy, Design, and BuildChristopher Kelley
 
Green datacenters
Green datacentersGreen datacenters
Green datacentersCloudbells.com
 

Similar to Energy performance in mission critical facilities (20)

COMMON PROBLEMS AND CHALLENGES IN DATA CENTRES
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRESCOMMON PROBLEMS AND CHALLENGES IN DATA CENTRES
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRES
 
Express Computer
Express ComputerExpress Computer
Express Computer
 
009
009009
009
 
Commercial Overview DC Session 4 Introduction To Energy In The Data Centre
Commercial Overview   DC Session 4   Introduction To Energy In The Data CentreCommercial Overview   DC Session 4   Introduction To Energy In The Data Centre
Commercial Overview DC Session 4 Introduction To Energy In The Data Centre
 
Improvements in Data Center Management
Improvements in Data Center ManagementImprovements in Data Center Management
Improvements in Data Center Management
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul ppt
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
 
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationCompu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
 
Full chapter in a single perfect format 2
Full chapter in a single perfect format 2Full chapter in a single perfect format 2
Full chapter in a single perfect format 2
 
Data center m&e
Data center  m&eData center  m&e
Data center m&e
 
Optimizing The Data Centre Environment
Optimizing The Data Centre EnvironmentOptimizing The Data Centre Environment
Optimizing The Data Centre Environment
 
Cooling in Data Centers
Cooling in Data CentersCooling in Data Centers
Cooling in Data Centers
 
The next wave of GreenIT
The next wave of GreenITThe next wave of GreenIT
The next wave of GreenIT
 
VET4SBO Level 3 module 1 - unit 4 - 0.009 en
VET4SBO Level 3   module 1 - unit 4 - 0.009 enVET4SBO Level 3   module 1 - unit 4 - 0.009 en
VET4SBO Level 3 module 1 - unit 4 - 0.009 en
 
Aged Data Center Infrastructure.pptx
Aged Data Center Infrastructure.pptxAged Data Center Infrastructure.pptx
Aged Data Center Infrastructure.pptx
 
Google ppt. mis
Google ppt. misGoogle ppt. mis
Google ppt. mis
 
Advantages of a modern DCS in coal handling
Advantages of a modern DCS in coal handlingAdvantages of a modern DCS in coal handling
Advantages of a modern DCS in coal handling
 
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...
Energy Logic: Reducing Data Center Energy Consumption by Creating Savings tha...
 
Datacenter Strategy, Design, and Build
Datacenter Strategy, Design, and BuildDatacenter Strategy, Design, and Build
Datacenter Strategy, Design, and Build
 
Green datacenters
Green datacentersGreen datacenters
Green datacenters
 

Recently uploaded

OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube ExchangerAnamika Sarkar
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacingjaychoudhary37
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 

Recently uploaded (20)

Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ďťżTube Exchanger
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacing
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 

Energy performance in mission critical facilities

  • 1. BY BILL KOSIK, PE, CEM, BEMP, LEED AP BD+C, HP Data Center Facilities Consulting, Chicago M ission critical facilities support a wide vari- ety of vital operations where facility failure will result in compli- cations that range from serious disrup- tions to business operations, to circum- stances that can jeopardize life safety of the general public. To minimize or eliminate the chance of facility system failure, mission critical facilities have three hallmarks that make them dif- ferent from other type of commercial buildings: 1. The facility must support opera- tions that run continuously without shutdowns due to equipment failure or maintenance. Seasonal or population changes within the facility have a small impact on the energy use profile; gen- erally, the facility is internally loaded with heavy electrical consumption. 2. Redundant power and cooling systems are required to support the 24/7/365 operation. Depending on the level of redundancy, there will be addi- tional efficiency losses in the power and cooling systems brought on by running the equipment at small percentages of the capacity. 3. The technical equipment used in the facility, such as computers; medical and laboratory equipment; and monitor- ing , communications, and surveillance systems, will have high power require- ments that translate into heat gain and energy use. Putting these hallmarks together, mission critical facilities need to run continuously, providing less efficient power and cooling to technical equip- ment that has very high electrical requirements, all without failure or impacts from standard maintenance procedures. This is why energy use (and ways to reduce it) in mission critical facilities has been, and will continue to be, of great concern. This is true whether the mission critical facility is a laboratory, hospital, data center, police/ fire station, or another type of essential operation. And due to constant advances in the design of technical equipment, the strategies and tactics used for reduc- ing facility energy consumption need to anticipate how future changes will impact building design, codes, stan- dards, and other guidelines. Fortu- nately, the technical equipment will generally become more energy-efficient over time with improvements in design. This can reduce facility energy use in two ways: the equipment will use less Mission critical facilities, such as data centers, are judged carefully on their energy use. Engineers should focus on the codes and standards that dictate energy performance and how building energy performance can be enhanced. Energy performance in mission critical facilities 32 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com Learning objectives Ⅲ Understand the various ways to measure energy use in mission critical facilities. Ⅲ Learn about the codes and standards that dictate energy performance. Ⅲ Learn about the codes, standards, and organizations that govern energy perfor- mance.
  • 2. energy, and the energy of the power and cooling systems will also decrease. Data centers are one segment of the mission critical facility industry that arguably see the highest rate of change in how the facilities are designed, pri- marily based on the requirements of technical equipment, servers, storage devices, and networking gear. Data centers will have the highest concen- tration of technical equipment on a sq ft or percentage of total power demand as compared to other mission critical facilities. A change in the specifica- tions or operating conditions of the computers in a data center facility will have a ripple effect that runs through all aspects of the power and cooling systems (see Figure 1). Moreover, IT equipment manufacturers are develop- ing next generation technology that can significantly reduce overall energy use and environmental impact of data cen- ters. This is a good thing, but with it brings new design challenges that need to be addressed in codes, standards, and guidelines. For data centers and the broader range of commercial buildings, there are myriad programs, guidelines, and codes intended to keep energy use as low as possible. Publications from ASHRAE, Lawrence Berkeley Nation- al Laboratory, U.S. Green Building Council, and the U.S. Environmental Protection Agency are good examples of technical but practical resources aid- ing in data center strategy. But how did all of these come about? To understand the path forward, it is equally important to know how we got here. Similar to the rapid evolu- tion of power and cooling systems in data centers, many of thedocuments released by these groups were devel- oped in response by changes and new thinking in the data center design and construction industry. Energy-efficiency programs for buildings In the United States, one of the first programs developed by the federal gov- ernment that spawned several broader energy efficiency initiatives is the 1977 U.S. National Energy Plan. This was developed as a blueprint identifying energy efficiency as a priority because “conservation is the quickest, cheapest, most practical source of energy.” This plan became the basis for many other building energy use reduction programs that would typically start out at the fed- eral level and eventually trickle down to state and local government. During this time, one of the most widely used building efficiency stan- dards was published for the first time: ASHRAE Standard 90-1975: Energy Conservation in New Building Design. Because no comprehensive national standard existed at the time, this was the first opportunity for many architects and engineers to objectively calculate the energy costs of their designs and to increase energy efficiency. Since its initial release, the standard has been renamed ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings and has been put on a 3-year maintenance 33www.csemag.com Consulting-Specifying Engineer • MARCH 2015 Figure 1: Using IT equipment that can run in an environment with 26 C supply air (top) enables the use of different cooling tech- nology than IT equipment that runs with 20 C supply air. This allows for a 15% reduction in HVAC system energy use. All graphics courtesy: HP Data Center Facilities Consulting
  • 3. 34 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com cycle. For example, the 2013 edition of Standard 90.1 improves minimum energy efficiency by approximately 37% from the 2004 edition for regulated loads. It is typical that each new release of the standard will contain significant energy-efficiency requirements. With the proliferation of communica- tions and computing technology at the end of the 20th century, building codes and standards, especially Standard 90.1, needed to reflect how technology was impacting building design, especially power, cooling, control, and commu- nication systems. Changes in power density for high-technology commer- cial buildings began to create situations that made it difficult for certain building designs to meet the Standard 90.1 mini- mum energy use requirements. Also, when following the prescriptive mea- sures in Standard 90.1, the results show that the energy saved by better wall and roof insulation, glazing technology, and lighting is a small fraction of the energy consumption of computers and other technical equipment. Without adapting the standards to reflect how data center facilities and IT equipment are evolving, it would become increasingly difficult to judge the effi- ciency of data center facilities against the standard. But without addressing the operation and energy consumption of the computers themselves, an opportunity to develop a holistic, optimal energy use strategy for the data center would be lost. The engineering community and the IT manufacturers, backed up by publicly reviewed, industry-accepted standards and guidelines, needed to take a promi- nent role in attacking this challenge. ASHRAE 90.1 language It is interesting to study how the ASHRAE 90.1 standards issued in 2001 dealt with high electrical density equip- ment, such as what is typically seen in a data center. Keep in mind that around the beginning of the decade in 2000, high- end corporate servers consisted of a sin- gle 33-MHz 386 CPU, 4 MB RAM, and two 120 MB hard drives and were scat- tered about in offices where they were needed, a far cry from the state-of-the-art. If needed, mainframe computers would reside in a separate data processing room. Overall, the electrical intensity of the computer equipment was far less than what is commonly seen today in large corporate enterprises. The language in Standard 90.1 at that time talked about “computer server rooms” and was writ- ten specifically to exclude the computer equipment from the energy-efficiency requirements, rather than stipulating requirements to make things more effi- cient. The exclusions dealt primarily with humidification and how to define base- line HVAC systems used in comparing energy use to the proposed design. At that time, the generally held beliefs were the computer systems were very suscep- tible to failure if exposed to improper environmental conditions and therefore should not have to meet certain parts of the standard that could result in a delete- rious situation. Knowing this, data center industry groups were already developing energy efficiency and environmental operating guidelines. And as the use of computers continued to increase and centralized data centers were beginning to show up in increasing numbers of building designs, it was necessary that ASHRAE play a more important role in this process New language for data centers With the release of ASHRAE Stan- dard 90.1-2007, based on input from the the data center community, including ASHRAE’s TC9.9 for Mission Critical Facilities, data centers could no longer be treated as an exception in the energy standard. There were several proposed amendments to Standard 90.1-2007 that included specific language, but it wouldn’t be until the release of Standard 90.1-2010 where data center-specific lan- guage was used in the standard. The sec- tions in the standard relating to data cen- ters took another big leap forward with the release of the 2013 edition, which contains specific energy performance requirements for data centers, including the ability to use power usage effective- ness (PUE) as a measure of conformity with the standard. Standard 90.1 certainly has come a long way, but, as expected in the technol- ogy realm, computers continue to evolve and change the way they impact on the built environment. This includes many aspects of a building design, includ- ing overall facility size, construction type, and electrical distribution system and cooling techniques. This places an unprecedented demand on developing Energy performance Figure 2: The ASHRAE thermal classes are plotted on a psychrometric chart.
  • 4. 35www.csemag.com Consulting-Specifying Engineer • MARCH 2015 timely, relevant building energy codes, standards and guidelines because, as his- tory has shown, a lot of change can occur in a short amount of time. And because the work to develop a standard needs to be concluded well before the formal release of the document, the unfortunate reality is that portions of the document will already be out of date when released. Synergy in energy use efficiency In the past decade, many of the manu- facturers of power and cooling equipment have created product lines designed spe- cifically for use in data centers. Some of this equipment has evolved from existing lines, and some has been developed from the ground up. Either way, the major man- ufacturers understand that the character- istics of a data center require specialized equipment and product solutions. Within this niche there are a number of novel approaches that show potential based on actual installed performance and market acceptance. The thermal requirements of the computers have really been the cata- lyst for developing many of these novel approaches; state-of-the-art data centers have IT equipment (mainly servers) with inlet temperature requirements of 75 to 80 F and higher. (The ASHRAE Thermal Guideline classes of inlet temperatures go as high as 113 F.) This has enabled designs for compressorless cooling, rely- ing solely on cooling from outside air- or water-cooled systems using heat rejec- tion devices (cooling towers, dry cool- ers, close-circuit coolers, etc.). Even in climates with temperature extremes that go beyond the temperature requirements, owners are taking a calculated risk and not installing compressorized cooling equipment based on the large first-cost reduction (see Figure 2). How are these high inlet tempera- tures being used to reduce overall energy use and improve operations? A small sampling: Ⅲ Depending on the type of computing equipment, during stretches of above- normal temperatures, the computer pro- cessor can be slowed down intentionally, effectively reducing the heat output of the computers and lessening the overall cool- ing load of the data center. This allows the facility to be designed around high inlet temperatures and also provides an added level of protection if outside temperatures go beyond what is predicted. This strat- egy really demonstrates the power of how interconnected facility and IT systems can provide feedback and feed forward to each other to achieve an operational goal. Ⅲ Cooling technologies such as immer- sion cooling are fundamentally different from most data center cooling systems. In this application, the servers are complete- ly immersed in a large tank of a mineral oil-like solution, keeping the entire com- Figure 3: Using refrigerant-free cooling systems, the compressor power is reduced as the temperature drops. The free cooling pump will run generally when the com- pressors are off. Some of the seminal events that acted as catalysts to jump-start energy efficiency improvements in buildings,both residential and commercial, stem from incidents that happened far from the shores of the United States. As a result, federal and state governments (and the general public) were exposed firsthand to the consequences of unstable worldwide energy sup- plies. Arguably the most infamous example of this hit the United States in 1973. And it hit hard. The 1973 oil crisis started when the members of the Organization ofArab Petroleum Exporting Countries (OAPEC) started an oil embargo in response to world political events. Six months later, the prices of oil imported into the U.S. rose from $3 per barrel to nearly $12. In addition to massive cost increases for gasoline and heating oil, this event brought on a decade of high inflation where prices of energy and various material commodities rose greatly, triggering fears of an era of resource scarcity with economic, political, and security stresses. From 1973 to 1974, residential fuel oil rose from $0.75/million Btu to $1.82/million Btu, a 143% increase. Electricity costs also spiked: from $5.86/million Btu in 1973 to $7.42/million Btu in 1974. This was a 27% increase in electricity cost in just 1 year. The 1973 oil crisis is not the only tumultuous event that has threatened energy supplies in the U.S., but this particular event sparked the greatest debate on energy efficiency in the built environment in the U.S. to date. Also, during this time the unsafe levels of water- and air-borne pollution attributedtotheextractionandproductionofenergyweremakingheadlines, putting pressure on private industry and government to develop laws that would protect the welfare of U.S. citizens, and guarantee a cost-effective and secure source of energy. These programs became part of a greater effort, which included the industrial sector, appliances, electronics, and electricity generation. What the 1970s oil crisis taught us
  • 5. 36 Consulting-Specifying Engineer • MARCH 2015 www.csemag.com puter, inside and outside, at a consistent temperature. This approach has a distinct advantage: It reduces the facility cool- ing system energy by using liquid cool- ing and heat-rejection devices only (no compressors), and it reduces the energy of the servers as well. Since the servers are totally immersed, the internal cooling fans are not needed and the energy used in powering these fans is eliminated. Ⅲ Manufacturers also have developed methods to apply refrigerant phase- change technology to data center cooling that, with certain evaporating/condensing temperatures, does not require any pumps or compressors, offering a large reduc- tion in energy use as compared to the ASHRAE 90.1 minimum energy require- ments. Other refrigerant-based systems can be used with economizer cycles using the refrigerant as the free-cooling medium (see Figure 3). Ⅲ Cooling high-density server cabinets (>30 kW) poses a challenge due the large intensive electrical load. One solution to cool such server cabinets is to provide a close-coupled system using fans and a cooling coil on a one-to-one basis with the cabinet. In addition to using water and refrigerants R134A, R407C, and R410 in close-coupled installations, refriger- ant R744, also known as carbon dioxide (CO2 ), is also being employed. CO2 cool- ing is used extensively in industrial and commercial refrigeration due to its low toxicity and efficient heat absorption. Also, the CO2 can be pumped or oper- ated in a thermo-syphon arrangement. Trends in energy use, performance When we talk about reducing energy use in data centers, we need to have a two-part discussion focusing on energy use from the computer itself (processor, memory, storage, internal cooling fans) and from the cooling and power equip- ment required to keep the computer run- ning. One way to calculate the energy use of the entire data center operation is to imagine a boundary that surrounds both the IT equipment and the power/cooling systems, both inside and outside the data center proper. Inside this boundary are systems that support the data center, as well as others that support the areas of the facility that keep the data center run- ning, such as control rooms, infrastruc- ture spaces, mechanical rooms, and other technical rooms. After these systems are identified, it is easier to categorize and develop strategies to reduce the energy use of the individual power and cooling systems within the boundary. Take the total of this annual energy use (in kWh), add it to the annual energy use of the IT equipment, and then divide this total by the annual energy use of the IT systems (see Figure 4). This is the defi- nition of PUE, which was developed by The Green Grid a number of years ago. But there is one big caveat: PUE does not address scenarios where the IT equipment energy use is reduced below a predeter- mined minimum energy performance. PUE is a metric that focuses on the facil- ity energy use, and treats the IT equipment energy use as a static value unchangeable by the facilities team. This is a heavily debated topic because using PUE could create a disincentive to reduce the IT energy. In any event, the goal of an over- all energy-reduction strategy must include both the facility and IT equipment. To demonstrate exemplary perfor- mance and to reap the energy-savings benefits that come from the synergistic relationship between the IT and facility systems, the efficiency of the servers, storage devices, and networking gear can be judged against established industry benchmarks. Unfortunately, this is not a straightforward (or standardized) exer- cise in view of the highly varying busi- ness models that drive how the IT equip- Energy performance Figure 4: Power usage effectiveness (PUE) is the industry standard for benchmarking data center energy use, according to data from The Green Grid. Mission critical facilities are broadly defined as containing any operation that, if interrupted, will cause a negative impact on business activities, ranging from losing revenue to jeopardizing legal conformity to, in extreme cases, loss of life. Data centers, hospitals, labo- ratories, public safety centers, and military installations are just a few of the many types of buildings that could be considered mission critical. While there are several formal codes and standards, such as NFPA 70: National Electric Code, various hospital administrative codes and a presidential directive set up to guard against failure of critical infrastructure in the United States, there is no uniform definition of a mission critical facility.But to maintain continuous operation of the facility and the internal processes taking place, redundant power and cooling systems must be present in varying degrees of reliability. The redundant systems, regardless of the type of mission critical facil- ity, will cause energy use inefficiencies to some degree. Using multiple paths of power, cooling, and ventilation distribution will likely result in less efficient operation of fans, pumps, chillers, transformers, and more. This is not always true, but it certainly poses challenges to determining the most effective way to run redundant systems— especially when each distribution path will likely contain multiple sensors, actuators, and other safety devices. Many codes acknowledge that systems that support life safety and guard against hazards will be exempt from requirements that apply to noncritical power and cooling systems. However, sometimes it is not apparent where the boundary lies between mission critical and non- mission critical. What defines a mission critical facility? PUE = ⌺ power delivered to data center = P mechanical + P electrical + P other ⌺ IT equipment power use P IT
  • 6. 37www.csemag.com Consulting-Specifying Engineer • MARCH 2015 ment will operate, and the application of strategies such as virtualized servers and workload shifting. To illustrate how energy can be reduced beyond what a standard enter- prise server will consume, some next- generation enterprise servers will have multiple chassis, each housing very small yet powerful high-density cartridge com- puters, with each server chassis capable of containing close to 200 servers. Arrange- ments like this can have similar power use profiles to the previous generation, but by using more effective components (processor, memory, graphics card, etc.) and sophisticated power use manage- ment algorithms, comparing the comput- ing work output with the electrical power input demonstrates that these computers have faster processing speeds and use higher performing memory and graphics cards, yet use less energy than the previ- ous generation. But this is not an anomaly or a one-off situation. For example, study- ing the trends of supercomputers over the past two decades, it is evident that these computers are also on the same path of making the newest generation of comput- ers more efficient than the previous. As an example, in the last 5 years alone, the met- ric of megaFLOPS per kW, the “miles per gallon” for the high-performance comput- ing world, has increased 4.6 times while the power has increased only 2.3 times (see Figure 5). The progression of computers It is important to understand that many of the high-performance computing sys- tems that are at the top of their class are direct water-cooled. Using water at higher temperatures will reduce (or eliminate) the compressor energy in the central cooling plant. Using direct water-cooling also allows more efficient processor, graphics card, and memory performance by keep- ing the internal temperatures more stable and consistent as compared to air-cooling where temperatures within the server enclosure may not be even due to chang- es in airflow through the server. As more higher-end corporate servers move toward water-cooling, areas in the energy codes that discuss air-handling fan motor power will have to be reevaluated because a much smaller portion of the data center will be cooled by air, creating a significant reduction in fan motor power. Fan power limitations and strategies for reducing energy use certainly will still apply, but they will make a much smaller contribution to the overall consumption. Historically, one of the weak points in enterprise server energy use was the turndown ratio. This com- pares electrical power draw to IT workload. It used to be that an idle server, with no workload, would draw close to 50% of its maxi- mum power just sitting in an idle state. Knowing that in most instances servers would be idle or running at very low workloads, a huge amount of energy was being used with- out producing any computing output. As server virtualization became more preva- lent (which increased the minimum work- loads by running several virtualized serv- ers on one physical server), the situation improved. But it was still clear that there was a lot of room for improvement and the turndown ratio had to be improved. The result is today’s server technology allows for a much closer matching of actual computer workload to the electri- cal power input (see Figure 6). There is movement in the IT industry to create the next wave of computers, ones that are designed with a completely new approach and using components that are currently mostly in laboratories in various stages of development. The most innovative computing platforms in use Figure 5: Since 2005, the power for the world’s top supercomputers has increased tenfold (kW curve) while the performance has increased over 140 times. Even though the computers used in this dataset are usually purpose-built, extremely powerful computers, this type of performance is indicative of where enter- prise servers are headed. Figure 6: As server power management has become more sophisticated, the ratio of power at idle (no workload) compared to full power has decreased by more than 50% since 2007. This will result in a more optimized data center energy use strategy.
  • 7. 38 Consulting-Specifying Engineer • MARCH 2015 today, even ones that have advanced designs enabling extreme high-performance while significantly reducing energy use, use the same types of fundamental building blocks that have been used for decades. From a data center facilities standpoint, whether air or water is used for the cooling medium, as long as the computer maintains the same fundamental design, the same cooling and power strategies will remain as they are today, allowing for only incremental efficiency improvements. And even as the densities of the servers become greater (increasing power draw per data center area), the same approximate data center size is required, albeit with reductions in the computer room due to the high-density as compared with a lower density application. But what if an entirely new approach to designing comput- ers comes about? And what if this new approach dramatically changes how we design data centers? Processing the torrent of data and using it to create meaningful business results will continue to push the electrical capacity in the data center needed to power IT equipment. And, as we’ve seen over the past decade, the pressure of the IT industry’s energy use may force energy-efficiency trade-offs that result in a sub-optimal outcome vis-a-vis balancing IT capacity, energy source, and total cost of ownership. While no one can predict when this tipping point will come or when big data will reach the limit of available capacity, the industry must find ways to improve efficiency, or it will face curtailed growth. These improve- ments have to be made using a holistic process, including all of the constituents that have a vested interest in a continued energy and cost-aware growth of the IT industry. The bottom line: In the next few years the data center design and construction industry will have to continue to be an active member in the evolution of IT equipment and will need to come up with creative design solutions for revising codes and standards, such as ASHRAE 90.1, making sure there is a clear understanding of the ramifications of the IT equipment to the data center facility. As developments in computing technology research begin to manifest into commercially available prod- ucts, it is likely that the most advanced computing platforms won’t immediately replace standard servers; a specific type of workload, such as very big data or real-time analytics will require a new type of computing architecture. And even though this technology is still in the development phase, it gives us a good indication that a breakthrough in server technology is coming in the near future. And this technology could rewrite today’s standards for data center energy efficiency. Bill Kosik is a distinguished technologist at HP Data Center Facilities Consulting. He is the leader of “Moving toward Sus- tainability,” which focuses on the research, development, and implementation of energy-efficient and environmentally responsible design strategies for data centers. Kosik collaborates with clients, developing innovative design strategies for cooling high-density environments, and creating scalable cooling and power models. He is a member of the Consulting-Specifying Engineer advisory board. Energy performance BITZER US, Inc. // www.bitzerus.com // +1 (770) 718-2900 ECOLINE VARISPEED ECOLINE ECOLINE VARISPEED TRANSCRITICAL CO2 In refrigeration and air conditioning systems, energy efficiency and environmental protection play an increasingly decisive role. Innovative reciprocating compressors from the market leader will prepare you today for the demands of tomorrow. Whether it is natural refrigerants, precise capacity control or optimized compressor technology, with BITZER you will find the best eco-efficient solution for every application. input #14 at www.csemag.com/information stay informed Stay current with technology and trends in electrical, mechanical, lighting, and fire/life safety. To subscribe, visit www.csemag.com/subscribe cse_stayInformed_QTR.indd 1 11/7/2013 12:01:55 PM
  • 8. Copyright of Consulting-Specifying Engineer is the property of CFE Media and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.