Data Center Associate Certification Exam
Course Transcript Study Guide
Fundamentals of Availability
Transcript
Slide 1
Welcome to the Data Center University™ course on Fundamentals of Availability.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 3: Learning Objective
At the end of this course, you will be able to:
• Understand the key terms associated with availability
• Understand the difference between availability and reliability
• Recognize threats to availability
• Calculate cost of downtime
Slide 4: Introduction
In our rapidly changing business world, highly available systems and processes are of critical importance
and are the foundation upon which successful businesses rely. So much so, that according to the National
Archives and Records Administration in Washington, D.C., 93% of businesses that have lost availability in
their data center for 10 days or more have filed for bankruptcy within one year. The cost of one episode of
downtime can cripple an organization. Take, for example, an e-business: in a case of downtime, it could potentially lose thousands or even millions of dollars in revenue, and its top competitor is only a mouse-click away. The loss therefore translates not only into lost revenue but also into lost customer loyalty. The challenge of maintaining a highly available network is no longer just the responsibility of the IT
departments; rather, it extends to management and department heads, as well as the boards which
govern company policy. For this reason, having a sound understanding of the factors that lead to high
availability, threats to availability, and ways to measure availability is imperative regardless of your business
sector.
Slide 5: Measuring Business Value
Measuring business value begins with an understanding of the Physical Infrastructure.
Physical Infrastructure is the foundation upon which Information Technology (IT) and telecommunication networks reside.
Physical Infrastructure consists of racks, power, cooling, fire prevention/security, management, and services.
Slide 6: Measuring Business Value
Business value for an organization, in general terms, is based on three core objectives:
1. Increasing revenue
2. Reducing costs
3. Better utilizing assets
Regardless of the line of business, these three objectives ultimately lead to improved earnings and cash
flow. Investments in Physical Infrastructure are made because they both directly and indirectly impact these
three business objectives. Managers purchase items such as generators, air conditioners, physical security
systems, and Uninterruptible Power Supplies to serve as “insurance policies.” For any network or data
center, there are risks of downtime from power and thermal problems, and investing in Physical
Infrastructure mitigates these and other risks. So how does this impact the three core business objectives
above (revenue, cost, and assets)? Revenue streams are slowed or stopped, business costs / expenses
are incurred, and assets are underutilized or underproductive when systems are down. Therefore, the more
efficient the strategy is in reducing downtime from any cause, the more value it has to the business in
meeting all three objectives.
Slide 7: Measuring Business Value
Historically, assessment of Physical Infrastructure business value was based on two core criteria: availability
and upfront costs. Increasing the availability (uptime) of the Physical Infrastructure system and ultimately of
the business processes allows a business to continue to bring in revenues and better optimize the use (or
productivity) of assets. Imagine a credit card processing company whose systems are unavailable – credit
card purchases cannot be processed, halting the revenue stream for the duration of the downtime. In
addition, employees are not able to be productive without their systems online. And minimizing the upfront
cost of the Physical Infrastructure results in a greater return on that investment. If the Physical Infrastructure
cost is low and the risk / cost of downtime is high, the business case becomes easier to justify.
While these arguments still hold true, today’s rapidly changing IT environments are dictating an additional
criterion for assessing Physical Infrastructure business value: agility. Business plans must be agile to deal
with changing market conditions, opportunities, and environmental factors. Investments that lock resources
limit the ability to respond in a flexible manner. And when this flexibility or agility is not present, lost
opportunity is the predictable result.
Slide 8: Five 9’s of Availability
A term that is commonly used when discussing availability is ‘five nines.’ Although widely used, the term is
often misleading and misunderstood. Five nines refers to a network that is accessible 99.999% of the time.
We’ll explain why the term can be misleading a little later in the course.
Slide 9: Key Terms
There are many additional terms associated with availability, business continuity and disaster recovery.
Before we go any further, let’s define some of these terms.
Reliability is the ability of a system or component to perform its required functions under stated conditions
for a specified period of time.
Availability, on the other hand, is the degree to which a system or component is operational and accessible
when required for use. It can be viewed as the likelihood that the system or component is in a state to
perform its required function under given conditions at a given instant in time. Availability is determined by a
system’s reliability, as well as its recovery time when a failure does occur. When systems have long
continuous operating times, failures are inevitable. Availability is often looked at because, when a failure
does occur, the critical variable now becomes how quickly the system can be recovered. In the data center,
having a reliable system design is the most critical variable, but when a failure occurs, the most important
consideration must be getting the IT equipment and business processes up and running as fast as possible
to keep downtime to a minimum.
Slide 10: Key Terms
Upon considering any availability or reliability value, one should always ask for a definition of failure. Moving
forward without a clear definition of failure is like advertising the fuel efficiency of an automobile as “miles
per tank” without defining the capacity of the tank in liters or gallons. To address this ambiguity, one should
start with one of the following two basic definitions of a failure.
According to the IEC (International Electro-technical Commission) there are two basic definitions of a failure:
1. The termination of the ability of the product as a whole to perform its required function.
2. The termination of the ability of any individual component to perform its required function but not
the termination of the ability of the product as a whole to perform.
Slide 11: Key Terms
MTBF, or Mean Time Between Failures, is a basic measure of a system’s reliability. It is typically expressed
in hours. The higher the MTBF, the higher the reliability of the product.
MTTR, or Mean Time to Recover (or Repair), is the expected time to recover a system from a failure. This may
include the time it takes to diagnose the problem, the time it takes to get a repair technician onsite, and the
time it takes to physically repair the system. Similar to MTBF, MTTR is represented in units of hours. MTTR
impacts availability and not reliability. The longer the MTTR, the worse off a system is. Simply put, if it takes
longer to recover a system from a failure, the system is going to have a lower availability. As the MTBF goes
up, availability goes up. As the MTTR goes up, availability goes down.
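As a minimal illustration, assuming a hypothetical system with an MTBF of 100,000 hours and an MTTR of 1 hour (figures chosen purely for illustration), steady-state availability can be computed as MTBF / (MTBF + MTTR):

```python
# Minimal sketch: steady-state availability from assumed MTBF and MTTR figures,
# and the expected downtime per year that availability implies.

HOURS_PER_YEAR = 8760

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_minutes_per_year(avail: float) -> float:
    """Convert an availability fraction into expected minutes of downtime per year."""
    return (1.0 - avail) * HOURS_PER_YEAR * 60.0

mtbf = 100_000.0   # assumed mean time between failures, in hours
mttr = 1.0         # assumed mean time to recover, in hours

a = availability(mtbf, mttr)
print(f"Availability: {a:.5%}")                                                    # ~99.999%
print(f"Expected downtime: {downtime_minutes_per_year(a):.1f} minutes per year")   # ~5.3
```

Note that five nines (99.999%) corresponds to roughly five minutes of downtime per year, which is why both the failure rate and the recovery time matter.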
Slide 12: The Limitations of 99.999%
As mentioned before, five nines is a misleading term because its use has become diluted. Five nines has
been used to refer to the amount of time that the data center is powered up and available. In other words,
a data center that has achieved five nines is powered up 99.999% of the time. However, loss of power is only one
part of the equation. The other part of the availability equation is reliability.
Let’s take for example two data centers that are both considered 99.999% available. In one year, Data
Center A lost power once, but it lasted for a full 5 minutes. Data Center B lost power 10 times, but for only
30 seconds each time. Both Data Centers were without power for a total of 5 minutes each. The missing
detail is the recovery time. Anytime systems lose power, there is a recovery time in which servers must be
rebooted, data must be recovered, and corrupted systems must be repaired. The Mean Time to Recover
process could take minutes, hours, days, or even weeks. Now, if you consider again the two data centers
that have experienced downtime, you will see that Data Center B, with its 10 power outages, will actually
have a much longer total duration of downtime than the data center that had only one occurrence of
downtime. Data Center B will accumulate significantly more recovery time. It is
because of this dynamic that reliability is equally important to this discussion of availability. Reliability of a
data center speaks to the frequency of downtime in a given time frame; there is an inverse
relationship in that as operating time increases, reliability decreases. Availability, however, is only the percentage of
time the system is operational over a given duration.
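To see the effect of recovery time on the example above, the following sketch assumes a hypothetical recovery (reboot, data recovery, repair) time of 15 minutes after each outage; that figure is an assumption for illustration only, not from the course:

```python
# Illustrative sketch: two data centers lose power for the same total time,
# but recovery time after each outage makes their effective downtime very different.
# The 15-minute recovery time per outage is an assumed, illustrative value.

RECOVERY_MINUTES_PER_OUTAGE = 15.0

def effective_downtime_minutes(outages: int, minutes_per_outage: float) -> float:
    """Outage time plus recovery time, accumulated across all outages."""
    return outages * (minutes_per_outage + RECOVERY_MINUTES_PER_OUTAGE)

dc_a = effective_downtime_minutes(outages=1, minutes_per_outage=5.0)   # one 5-minute outage
dc_b = effective_downtime_minutes(outages=10, minutes_per_outage=0.5)  # ten 30-second outages

print(f"Data Center A: {dc_a:.0f} minutes of effective downtime")   # 20 minutes
print(f"Data Center B: {dc_b:.0f} minutes of effective downtime")   # 155 minutes
```

Even though both data centers were without power for five minutes, the repeated recovery cycles leave Data Center B with far more effective downtime.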
Slide 13: Factors that Affect Availability and Reliability
It should be obvious that there are numerous factors that affect data center availability and reliability. Some
of these include AC Power conditions, lack of adequate cooling in the data center, equipment failure, natural
and artificial disasters, and human errors.
Slide 14: AC Power Conditions
Let’s look first at the AC power conditions. Power quality anomalies are organized into seven categories
based on wave shape:
1. Transients
2. Interruptions
3. Sag / Undervoltage
4. Swell / Overvoltage
5. Waveform distortion
6. Voltage fluctuations
7. Frequency variations
Slide 15: Inadequate Cooling
Another factor that poses a significant threat to availability is a lack of cooling in the IT environment.
Whenever electrical power is being consumed, heat is being generated. In the data center environment,
where a massive quantity of heat is generated, the potential exists for significant downtime unless this
heat is removed from the space.
Slide 16: Inadequate Cooling
Oftentimes, cooling systems may be in place in the data center; however, if the cooling is not distributed
properly, hot spots can occur.
Slide 17: Inadequate Cooling
Hot spots within the data center further threaten availability. In addition, inadequate cooling significantly
detracts from the lifespan and availability of IT equipment. It is recommended that when designing the data
center layout, a hot aisle/cold aisle configuration be used. Hot spots can also be alleviated by the use of
properly sized cooling systems and supplemental spot coolers and air distribution units.
Slide 18: Equipment Failures
The health of IT equipment is an important factor in ensuring a highly available system, as equipment
failures pose a significant threat to availability. Failures can occur for a variety of reasons, including
damage caused by prolonged exposure to improper utility power. Other causes include prolonged exposure to
elevated or decreased temperatures and humidity, component failure, and equipment age.
Slide 19: Natural and Artificial Disasters
Disasters also pose a significant threat to availability. Hurricanes, tornadoes, floods, and the blackouts that
often follow these disasters all create tremendous opportunity for downtime. In
many of these cases, downtime is prolonged due to damage sustained by the power grid or the physical site
of the data center itself.
Slide 20: Human Error
According to Gartner Group, the largest single cause of downtime is human error or personnel issues. One
of the most common causes of intermittent downtime in the data center is poor training. Data center staff or
contractors should be trained on procedures for application failures/hangs, system update/upgrades, and
other tasks that can create problems if not done correctly.
Slide 21: Human Error
Another problem is poor documentation. As staff sizes have shrunk, and with all the changes in the data
center due to rapid product cycles, it’s harder and harder to keep the documentation current. Patches can
go awry as incorrect software versions are updated. Hardware fixes can fail if the wrong parts are used.
Slide 22: Human Error
Another area of potential downtime is management of systems. System Management has fragmented from
a single point of control to vendors, partners, ASPs, outsource suppliers, and even a number of internal
groups. With a variety of vendors, contractors and technicians freely accessing the IT equipment, errors are
inevitable.
Slide 23: Cost of Downtime
It is important to understand the cost of downtime to a business, and specifically, how that cost changes as
a function of outage duration. Lost revenue is often the most visible and easily identified cost of downtime,
but it is only the tip of the iceberg when discussing the real costs to the organization. In many cases, the
cost of downtime per hour remains constant. In other words, a business that loses at a rate of 100 dollars
per hour in the first minute of downtime will also lose at the same rate of 100 dollars per hour after an hour
of downtime. An example of a company that might experience this type of profile is a retail store, where a
constant revenue stream is present. When the systems are down, there is a relatively constant rate of loss.
Slide 24: Cost of Downtime
Some businesses, however, may lose the most money in the first 500 milliseconds of downtime and then
lose very little thereafter. For example, a semiconductor fabrication plant loses the most money in the first
moments of an outage because when the process is interrupted, the silicon wafers that were in production
can no longer be used and must be scrapped.
Slide 25: Cost of Downtime
And yet others may lose at a lower rate for a short outage (since revenue is not lost but simply delayed),
and as the duration lengthens, there is an increased likelihood that the revenue will not be recovered.
Regarding customer satisfaction, a short duration may often be acceptable, but as the duration increases,
more customers will become increasingly upset. An example of this might be a car dealership, where
customers are willing to delay a transaction for a day. With significant outages, however, public knowledge
often results in damaged brand perception and inquiries into company operations. All of these activities
result in a downtime cost that begins to accelerate quickly as the duration becomes longer.
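As a rough illustration of these three cost profiles, the sketch below models downtime cost as a function of outage duration; the dollar figures and functional forms are invented for illustration and are not actual loss models:

```python
# Hypothetical cost-of-downtime profiles as a function of outage duration (hours).
# All dollar figures are invented placeholders.

def constant_rate(hours: float, rate_per_hour: float = 100.0) -> float:
    """Loss accrues at a steady rate, e.g. a retail store."""
    return rate_per_hour * hours

def front_loaded(hours: float, scrap_loss: float = 50_000.0,
                 trailing_rate: float = 10.0) -> float:
    """Most of the loss occurs the instant the process is interrupted,
    e.g. wafers scrapped in a semiconductor fabrication plant."""
    return scrap_loss + trailing_rate * hours if hours > 0 else 0.0

def accelerating(hours: float, base_rate: float = 50.0) -> float:
    """Loss grows faster than linearly as delayed sales turn into lost sales
    and brand damage sets in, e.g. the car dealership example."""
    return base_rate * hours ** 2

for h in (0.5, 1, 4, 24):
    print(f"{h:>4} h: constant={constant_rate(h):>9,.0f}  "
          f"front-loaded={front_loaded(h):>9,.0f}  "
          f"accelerating={accelerating(h):>11,.0f}")
```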
Slide 26: Cost of Downtime
Costs associated with downtime can be classified as direct and indirect. Direct costs are easily identified
and measured in terms of hard dollars. Examples include:
1. Wages and costs of employees that are idled due to the unavailability of the network. Although
some employees will be idle, their salaries and wages continue to be paid. Other employees may
still do some work, but their output will likely be diminished.
2. Lost Revenues are the most obvious cost of downtime because if you cannot process customers,
you cannot conduct business. Electronic commerce magnifies the problem, as eCommerce sales
are entirely dependent on system availability.
3. Wages and cost increases due to induced overtime or time spent checking and fixing systems. The
same employees that were idled by the system failure are probably the same employees that will
go back to work and recover the system via data entry. They not only have to do their ‘day job’ of
processing current data, but they must also re-enter any data that was lost due to the system crash,
or enter new data that was handwritten during the system outage. This means additional hours of
work, most often on an overtime basis.
4. Depending on the nature of the affected systems, the legal costs associated with downtime can be
significant. For example, if downtime problems result in a significant drop in share price,
shareholders may initiate a class-action suit if they believe that management and the board were
negligent in protecting vital assets. In another example, if two companies form a business
partnership in which one company’s ability to conduct business is dependent on the availability of
the other company’s systems, then, depending on the legal structure of the partnership, the first
company may be liable to the second for profits lost during any significant downtime event.
Indirect costs are not easily measured, but impact the business just the same. In 2000, Gartner Group
estimated that 80% of all companies calculating downtime were including indirect costs in their calculations
for the first time. Examples include: reduced customer satisfaction; lost opportunity of customers that may have gone to
direct competitors during the downtime event; damaged brand perception; and negative public relations.
Slide 27: Cost of Downtime by Industry Sector
A business’s downtime costs are directly related to the industry sector. For
example, Energy and Telecommunications organizations may experience lost revenues on the order of
2 to 3 million dollars an hour. Manufacturing, Financial Institutions, Information Technology, Insurance,
Retail and Pharmaceuticals all stand to lose over 1 million dollars an hour.
Slide 28: Calculating Cost of Downtime
There are many ways to calculate cost of downtime for an organization. For example, one way to estimate
the revenue lost due to a downtime event is to look at normal hourly sales and then multiply that figure by
the number of hours of downtime.
Remember, however, that this is only one component of a larger equation and, by itself, seriously
underestimates the true loss. Another example is loss of productivity.
The most common way to calculate the cost of lost productivity is to first take an average of the hourly
salary, benefits and overhead costs for the affected group. Then, multiply that figure by the number of hours
of downtime.
Because companies are in business to earn profits, the value employees contribute is usually greater than
the cost of employing them. Therefore, this method provides only a very conservative estimate of the labor cost of downtime.
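A minimal sketch of these two estimates in Python, where the hourly sales figure, loaded labor cost, headcount, and outage length are all placeholder assumptions:

```python
# Minimal sketch of the two downtime-cost estimates described above.
# Every input value is a placeholder assumption, not course data.

def lost_revenue(hourly_sales: float, downtime_hours: float) -> float:
    """Revenue estimate: normal hourly sales multiplied by hours of downtime."""
    return hourly_sales * downtime_hours

def lost_productivity(avg_hourly_cost: float, affected_employees: int,
                      downtime_hours: float) -> float:
    """Labor estimate: average loaded hourly cost (salary, benefits, overhead)
    multiplied by affected headcount and hours of downtime. This is a
    conservative floor, since employees normally produce more value than they cost."""
    return avg_hourly_cost * affected_employees * downtime_hours

downtime = 4.0   # assumed outage length, in hours
revenue = lost_revenue(hourly_sales=25_000.0, downtime_hours=downtime)
labor = lost_productivity(avg_hourly_cost=65.0, affected_employees=120,
                          downtime_hours=downtime)

print(f"Estimated lost revenue:      ${revenue:,.0f}")
print(f"Estimated lost productivity: ${labor:,.0f}")
print(f"Combined (still a partial estimate): ${revenue + labor:,.0f}")
```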
Slide 29: Summary
• To stay competitive in today’s global marketplace, businesses must strive to achieve high levels of
availability and reliability. 99.999% availability is the ideal operating condition for most businesses.
• Power outages, inadequate cooling, natural and artificial disasters, and human errors pose a
significant barrier to high availability.
• The direct and indirect costs of downtime in many business sectors can be exorbitant, and are often
enough to bankrupt many organizations.
• Therefore it is critical for businesses today to calculate their level of availability in order to reduce
risks and increase overall reliability and availability.
Slide 30: Thank You!
Thank you for participating in this course.
Examining Fire Protection Methods in the Data Center
Transcript
Slide 1
Welcome to the Data Center University™ course on Examining Fire Protection Methods in the Data Center.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the attachments link to download supplemental information for this course.
Click the Notes tab to read a transcript of the narration.
Slide 3: Learning Objectives
At the completion of this course, you will be able to:
• Explain the importance of fire protection for data centers
• Identify the main goals of a data center fire protection system
• Explain the basic theory of fire suppression
• Differentiate the classes of fire and the stages of combustion
• Recognize the different methods of fire detection, fire communication and fire suppression
• Identify the different types of fire suppression agents and devices appropriate for data centers
Slide 4: Introduction
Throughout history, fire has systematically wreaked havoc on industry. Today’s data centers and network
rooms are under enormous pressure to maintain seamless operations. Some companies risk losing millions
of dollars with one data center catastrophe.
Slide 5: Introduction
In fact, industry studies tell us that 43% of businesses that closed due to fire never reopen and 29% of those
that do reopen fail within 3 years. With these statistics in mind, it is imperative that all businesses prepare
themselves for unforeseen disasters. The good news is that the most effective method of fire protection is fire
prevention. At the completion of this course you will be one step closer to understanding the industry
safeguarding methods that are used to protect a data center’s hottest commodity: information.
Slide 6: Introduction
This course will discuss the prevention, theory, detection, communication and suppression of fire specific to
data centers.
Slide 7: National Fire Protection Association
Let us start by discussing the National Fire Protection Association or the NFPA. The NFPA is a worldwide
organization that was established in 1896 to protect the public against the dangers of fire and electricity. The
NFPA’s mission is to “reduce the worldwide burden of fire and other hazards on the quality of life by
developing and advocating scientifically based consensus codes and standards, research, training, and
education”.
The NFPA is responsible for creating fire protection standards, one of them being NFPA 75. NFPA 75 is the
standard for the protection of computer or data processing equipment. One notable addition to NFPA 75, made
in 1999, allows data centers to continue to power electronic equipment upon activation of a
Gaseous Agent Total Flooding System, which we will discuss later in detail. This exception was made for
data centers that meet the following risk considerations:
• Economic loss that could result from:
• Loss of function or loss of records
• Loss of equipment value
• Loss of life
• and the risk of fire threat to the installation, to occupants or exposed property within that installation
It’s important to note that the NFPA continually updates its standards to accommodate the ever-changing data
center environment. Please note that while the NFPA sets the worldwide standards for fire protection, in most
cases the Authority Having Jurisdiction (AHJ) has final say in what can or cannot be used for fire protection
in a facility. Now that we have identified the standards and guidelines of fire protection for a data center,
let’s get started with some facts about fire protection.
Slide 8: Prevention
Fire prevention provides more protection than any type of fire detection device or fire suppression
equipment available. In general, if the data center is incapable of breeding fire, there will be no threat of fire
damage to the facility. To promote prevention within a data center environment it is important to eliminate as
many fire causing factors as possible. A few examples to help achieve this are:
• When building a new data center, ensure that it is built far from any other buildings that may pose a
fire threat to the data center
• Enforce a strict no smoking policy in IT and control rooms
• The data center should be void of any trash receptacles
• All office furniture in the data center must be constructed of metal. (Chairs may have seat
cushions.)
• The use of acoustical materials such as foam or fabric or any material used to absorb sound is not
recommended in a data center
Even if a data center is considered fireproof, it is important to safeguard against downtime in the event that
a fire does occur. Fire protection now becomes the priority.
Slide 9: System Objectives of Data Center Fire Protection
The main goal of a data center fire protection system is to contain a fire without threatening the lives of
personnel and to minimize downtime. With this in mind, if a fire were to break out, there are three system
objectives that must be met. The first objective is to detect the presence of a fire. The second objective is to
communicate the threat to both the authorities and occupants. Finally, the last objective is to suppress the
fire and limit any damage. Being familiar with common technologies associated with fire detection,
communication, and suppression allows IT managers to better specify a fire protection strategy for their data
center. Prior to the selection of a detection, communication or suppression system, a design engineer must
assess the potential hazards and issues associated with the given data center.
Slide 10: Tutorial on Fire
When discussing fire protection it’s important that we first understand the basic theory behind fire. This
section will provide a tutorial on fire. We will cover the following topics:
• The Fire Triangle
• The classes of Fire and
• Fire’s Stages of Combustion
Slide 11: The Fire Triangle
The “fire triangle” represents the three elements that must interact in order for fire to exist. These elements
are heat, oxygen and fuel. Fuel is defined as a material used to produce heat or power by burning. When
considering fire in a data center, fuel is anything that has the capability to catch fire, such as servers, cables,
or flooring. As you can see, when one of these factors is taken away, the fire can no longer exist. This is the
basic theory behind fire suppression.
Slide 12: Classes of Fire
Fire can be categorized into five classes: Class A, B, C, D, and K. As you can see from the Classes of Fire chart,
Class A represents fires involving ordinary combustible materials such as paper, wood, cloth and some
plastics.
Class B fires are fires involving flammable liquids and gases such as oil, paint, lacquer, petroleum and
gasoline. Class C fires involve live electrical equipment. Class C fires are usually Class A or Class B fires
that have electricity present. Class D fires involve combustible metals or combustible metal alloys such as
magnesium, sodium and potassium. The last class is Class K fires. These fires involve cooking appliances
that use cooking agents such as vegetable or animal oils and fats. Generally, Class A, B and C fires are the
most common classes of fire that one may encounter in a data center. This chart represents all of the
different classes of fire that are able to be extinguished successfully with a basic fire extinguisher. Later in
the course, we will discuss several types of extinguishing agents used in data centers.
Slide 13: Stages of Combustion
The next step in categorizing a fire is to determine what stage of combustion it is in. The four stages of
combustion are:
1. The incipient stage or pre-combustion stage,
2. The visible smoke stage,
3. The flaming fire stage, and lastly,
4. The intense heat stage.
As these stages progress, the risk of property damage and the risk to life increase drastically. All of these
categories play an important role in fire protection, specifically in data centers. By studying the classes of fire
and the stages of combustion it is easy to determine what type of fire protection system will best suit the
needs of a data center.
Slide 14: Fire Detection Devices
Now that we have completed our tutorial on fire, let us look at some fire detection devices. There are three
main types of fire detection devices; they are:
1. Smoke detectors
2. Heat detectors and
3. Flame detectors
For the purposes of protecting a data center, smoke detectors are the most effective. Heat detectors and
flame detectors are not recommended for use in data centers, as they do not provide detection in the incipient
stages of a fire and therefore do not provide early warning for the protection of high value assets. Smoke
detectors are far more effective forms of protection in data centers simply because they are able to detect a
fire at the incipient stage. For this reason we will be focusing on the attributes and impact of smoke
detectors.
Slide 15: Smoke Detectors
The two types of smoke detectors that are used effectively in data centers are:
1. Intelligent spot type detectors and
2. Air sampling smoke detectors
Slide 16: Intelligent Spot Type Detectors
Intelligent spot type smoke detectors are much more sensitive than a conventional smoke detector.
Intelligent spot type smoke detectors utilize a laser beam which scans particles that pass through the
detector. The laser beam is able to distinguish whether or not the particles are simply dust or actually a by-
product of combustion such as smoke. Furthermore, intelligent spot type smoke detectors are individually
addressable. This means that each detector has the ability to send information to a central control station and pinpoint
the exact location of the alarm. Another feature of intelligent spot type smoke detectors is that the sensitivity
of the detector can be increased or decreased during certain times of the day. For example, when workers
leave an area, the sensitivity can be increased. The intelligent spot type smoke detectors can also
compensate for a changing environment caused by environmental factors such as humidity or dirt accumulation.
Slide 17: Intelligent Spot Type Detectors
Intelligent spot type detectors are most commonly placed in the following areas:
• Below raised floors,
• On ceilings,
• Above drop-down ceilings, and
• In air handling ducts to detect possible fires within an HVAC system. By placing detectors near the
exhaust intake of the computer room air conditioners, detection can be accelerated.
Slide 18: Air Sampling Smoke Detection Systems
Air sampling smoke detection systems, sometimes referred to as “Very Early Smoke Detection” (VESD)
systems, are usually described as high-powered photoelectric detectors. These systems are comprised of
a network of pipes attached to a single detector, which continually draws air in and samples it. The pipes are
typically made of PVC but can also be CPVC, EMT or copper. Depending on the space being protected and
the configuration of multiple sensors, these systems can cover an area of 2,500 to 80,000 square feet or
232 to 7,432 square meters. This system also utilizes a laser beam, much more powerful than the one
contained in a common photoelectric detector, to detect by-products of combustion. As the particles pass
through the detector, the laser beam is able to distinguish them as dust or byproducts of combustion.
Slide 19: Signaling and Notification Devices
Now that we have talked about fire detection devices and systems, let’s take a look at the next objective of
fire protection, communication. All of the previously mentioned detection devices would be virtually useless
if they were not directly tied into an effective signaling and notification device. Signaling devices provide
audible alarms, such as horns, bells or sirens, and visual alarms, such as strobes, which warn building
occupants after a detection device has been activated. Signaling devices are also an effective way of
communicating danger to individuals who may be visually or hearing impaired. One of the most basic and
common signaling devices is the pull station. These images represent a typical pull station.
Slide 20: Control Systems
The next communication system we will be covering is the control system. Control systems are often
considered the “brains” of a fire protection system. The computer programs used by control systems allow
users to program and manage the system based on their individual requirements. The system can be
programmed with certain features such as time delays, thresholds, and passwords. Once the detector, pull
station or sensor activates the control system, the system has the ability to set its preprogrammed list of
rules into motion. Most importantly, the control system can provide valuable information to the authorities.
Slide 21: Emergency Power Off
One important safety feature to mention that is not directly related to communication or suppression
systems is the Emergency Power Off or (EPO). If a fire progresses to the point where all other means of
suppression have been exhausted, the authorities that arrive on site will have the option to utilize this
feature. The EPO is intended to power down equipment or an entire installation in an emergency to protect
personnel and equipment.
EPO is typically used either by fire fighting personnel or by equipment operators. When used by firefighters,
it is used to assure that equipment is de-energized during fire fighting so that firefighters are not subjected to
shock hazards. The secondary purpose is to facilitate fire fighting by eliminating electricity as a source of
energy feeding combustion. EPO may also be activated in case of a flood, electrocution, or other
emergency.
There is a high cost associated with abruptly shutting down a data center. Unfortunately, EPO tripping is
often the result of human error. Much debate has ensued over the use of EPO, and this debate may one day lead to the
elimination of EPO in data centers.
Slide 22: Fire Suppression Agents and Devices
The last goal of a data center fire protection system is suppression. The next section will review suppression
agents and devices that are often used in data centers or IT environments. Let’s start with the most
common suppression agents and devices, they are:
• Fire extinguishers and,
• Total flooding fire extinguishing systems
Slide 23: Fire Extinguishers
Fire extinguishers are one of the oldest yet most reliable forms of fire suppression. They are extremely
valuable in data centers because they are a quick solution for suppressing a fire. Fire extinguishers allow for
a potentially hazardous situation to be addressed before more drastic or costly measures need to be taken.
It is important to note that only specific types of gaseous agents can be used in data center fire
extinguishers. HFC-236fa is a gaseous agent, specific to fire extinguishers, that has been approved for use
in data centers. It is environmentally safe and can be discharged in occupied areas. Additionally, it exists as
a gas; therefore, it leaves no residue upon discharge. Simply put, it extinguishes fires by removing heat and
chemically preventing combustion.
Slide 24: Total Flooding Fire Extinguishing Systems
A more sophisticated form of fire extinguishing is the Total Flooding Fire Extinguishing System, sometimes
referred to as a clean agent fire suppression system.
Total Flooding Fire Extinguishing Systems are comprised of a series of cylinders or high pressure tanks
filled with an extinguishing or gaseous agent. A gaseous agent is a gaseous chemical compound that
extinguishes the fire by either removing heat or oxygen or both. Given a closed, well-sealed room, gaseous
agents are very effective at extinguishing a fire while leaving no residue.
When installing such a system, the total volume of the room and the amount of equipment being protected are
taken into consideration. The number of tanks or cylinders to be installed is dependent upon these
factors. It is important to note that the standard that guides Total Flooding Suppression Systems is NFPA
2001. The next slide features a live demonstration of a Total Flooding Fire Extinguishing System in action.
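As a hedged illustration of how room volume drives the quantity of agent, the sketch below applies the NFPA 2001 total flooding relationship for halocarbon agents, W = (V / s) x (C / (100 - C)); the room dimensions, specific vapor volume, and design concentration used here are placeholder values, and actual design figures must come from NFPA 2001 and the agent manufacturer's data:

```python
# Hedged sketch of the NFPA 2001 halocarbon total flooding equation:
#   W = (V / s) * (C / (100 - C))
# where V is the protected volume, s is the agent's specific vapor volume at the
# design temperature, and C is the design concentration in percent by volume.
# The constants below are illustrative placeholders, not design values.

def agent_mass_kg(volume_m3: float, specific_vapor_volume_m3_per_kg: float,
                  design_concentration_pct: float) -> float:
    """Approximate mass of clean agent needed to flood a well-sealed volume."""
    c = design_concentration_pct
    return (volume_m3 / specific_vapor_volume_m3_per_kg) * (c / (100.0 - c))

room_volume = 10.0 * 8.0 * 3.0   # assumed 10 m x 8 m x 3 m room, in cubic meters
s_assumed = 0.14                 # placeholder specific vapor volume, m^3/kg
c_assumed = 7.0                  # placeholder design concentration, percent

print(f"Approximate agent required: "
      f"{agent_mass_kg(room_volume, s_assumed, c_assumed):.0f} kg")
```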
Slide 25: Total Flooding Fire Extinguishing Systems
If a fire occurs and the system is activated, the gaseous agent discharges and fills the room in about 10
seconds. One of the best features of this system is that it is able to infiltrate hard to reach places, such as
equipment cabinets. This makes Total Flooding Fire Extinguishing Systems perfect for data centers.
Now that we have discussed Total Flooding Fire Extinguishing System, let’s start reviewing the agents that
such systems deploy.
Slide 26: Total Flooding Fire Extinguishing Systems
In the past, some agents were conductive and/or corrosive. Conductive and corrosive agents have negative
impacts on IT equipment. For example, conductive agents may cause short circuits between electronic
components within IT equipment and corrosive agents may “eat away” at electronic components within IT
equipment. The gaseous agents used in today’s data centers are non-conductive and non-corrosive. An
effective agent that is both non-conductive and non-corrosive and was widely used in data centers is Halon.
Unfortunately, it was discovered that Halon is detrimental to the ozone layer, and as of 1994, the production
of Halon is no longer permitted. This has led to the development of safer and cleaner gaseous agents.
Let’s review some of the more popular gaseous agents for data centers.
Slide 27: Most Commonly Used Gaseous Agents
Today, some of the most commonly used gaseous agents in data centers are Inert gases and Fluorine
Based Compounds. Let’s review the characteristics of each agent.
Slide 28: Inert Gases
The most widely accepted inert gases for fire suppression in data centers are:
• Pro-Inert, or IG-55, and Inergen, or IG-541
• Inert gases are composed of nitrogen, argon, and carbon dioxide, all of which are found naturally in
the atmosphere. Because of this, they have zero Ozone Depletion Potential, meaning that they
pose no threat to humans or to the environment. Inert gases can also be discharged in
occupied areas and are non-conductive.
• Inergen requires a large number of storage tanks for effective discharge. But because Inergen is
stored as a high pressure gas, it can be stored up to 300 feet or 91.44 meters away from the
discharge nozzles and still discharge effectively. Inergen is used successfully in telecommunication
offices and data centers.
Slide 29: Fluorine Based Compounds
Another suppression alternative for data centers is Fluorine Based Compounds. The Fluorine Based Compound
HFC-227ea is known under two commercial brands: FM-200 and FE-227. HFC-227ea has a zero ozone
depletion potential (ODP) and an acceptably low global warming potential. It is also odorless and colorless.
Slide 30: Fluorine Based Compounds
HFC-227ea is stored as liquefied compressed gas with a boiling point of 2.5 degrees F (-16.4 degrees C). It
is discharged as an electrically non-conductive gas that leaves no residue and will not harm occupants;
however, like in any other fire situation all occupants should evacuate the area as soon as an alarm sounds.
It can be used with ceiling heights up to 16 feet. HFC-227ea has one of the lowest storage space
requirements; the floor space required is only about 1.7 times that of a Halon 1301 system.
HFC-227ea chemically inhibits the combustion reaction by removing heat and can be discharged in 10
seconds or less. An advantage to this agent is that it can be retrofitted into an existing Halon 1301 system
but the pipe network must be replaced or an additional cylinder of nitrogen must be used to push the agent
through the original Halon pipe network. Some applications include data centers, switchgear rooms,
automotive, and battery rooms.
Slide 31: Fluorine Based Compounds
There is also HFC-125, which is known under two commercial brands: ECARO-25 and FE-25. HFC-125
has a zero ozone depletion potential (ODP) and an acceptably low global warming potential. It is
odorless and colorless and is stored as a liquefied compressed gas. This agent chemically inhibits the
combustion reaction by removing heat and can be discharged in 10 seconds or less as an electrically non-
conductive gas that leaves no residue and will not harm occupants. It can be used in occupied areas;
however, like in any other fire situation all occupants should be evacuated as soon as an alarm sounds. It
can be used with ceiling heights up to 16 feet or 4.8 meters. One of the main advantages of HFC-125 is that
it flows more like Halon than any other agent available today and can be used in the same pipe network
distribution as an original Halon system.
Slide 32: Other Methods of Fire Suppression
Other methods of fire suppression often found in data centers are:
1. Water Sprinkler Systems and
2. Water Mist Suppression Systems
Of the two options, Water Sprinklers are often present in many facilities due to national and/or local fire
codes.
Let’s review a few of the key elements of Water Sprinkler and Water Mist Suppression Systems.
Slide 33: Water Sprinkler System
Water sprinkler systems are designed specifically to protect the structure of a building. The system is
activated when the given environment reaches a designated temperature and the valve fuse opens. A valve
fuse is a solder or glass bulb that opens when it reaches a temperature of 165-175°F or 74-79°C.
Slide 34: Water Sprinkler System
There are currently three configurations of water sprinkler systems available: wet-pipe, dry-pipe, and pre-
action. Wet-pipe systems are the most commonly used and are usually found in insulated buildings. Dry-
pipe systems are charged with compressed air or nitrogen to prevent damage from freezing. Pre-action
systems prevent accidental water discharge by requiring a combination of sensors to activate before
allowing water to fill the sprinkler pipes. Because of this feature, pre-action systems are highly
recommended for data center environments. Lastly, it is important to note that water sprinklers are not
typically recommended for data centers, but depending on local fire codes they may be required.
Slide 35: Water Mist Suppression Systems
The last suppression system we will be discussing is the water mist suppression system. When the system
is activated it discharges a very fine mist of water onto a fire. The mist of water extinguishes the fire by
absorbing heat. By doing so, vapor is produced, causing a barrier between the flame and the oxygen
needed to sustain the fire. Remember the "fire triangle"? The mist system effectively takes away two of the
main components of fire, heat and oxygen. This makes the system highly effective. Additionally, because a
fine mist is used, less water is needed; therefore, the water mist system needs minimal storage space.
Water mist systems are gaining popularity due to their effectiveness in industrial environments. Because of
this, we may see an increase in the utilization of such systems in data centers.
Slide 36: Summary
In summary:
• The three system objectives of a data center fire protection system are:
1. To identify the presence of a fire
2. To communicate the threat to the authorities and occupants
3. To suppress the fire and limit any damage
• The two types of smoke detectors that are used effectively in data centers are:
1. Intelligent spot type smoke detectors
2. Air sampling smoke detectors
• Signaling devices provide:
o Audible alarms such as:
- Horns
- Bells or sirens
o Visual alarms such as:
- Strobes
Slide 37: Summary
• The most common fire suppression devices used in data centers are:
o Fire extinguishers
o Total Flooding Fire Extinguishing Systems
• The most commonly used gaseous agents in data centers are:
o Inert gases
o Fluorine Based Compounds
• Additional methods of fire suppression are:
o Water sprinkler systems
o Water mist suppression systems
Slide 38: Thank You!
Thank you for participating in this course.
Fundamental Cabling Strategies for Data Centers
Transcript
Slide 1
Welcome to Fundamental Cabling Strategies for Data Centers.
Slide 2
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click ATTACHMENTS to download important supplemental information for this
course. Click the Notes tab to read a transcript of the narration.
Slide 3
At the completion of this course, you will be able to:
• Discuss the evolution of cabling
• Classify different types of common data center cables
• Describe cabling installation practices
• Identify the strategies for selecting cabling topologies
• Utilize cable management techniques
• Recognize the challenges associated with cabling in the data center
Slide 4
From a cost perspective, building and operating a data center represents a significant piece of any
Information Technology (IT) budget. The key to the success of any data center is the proper design and
implementation of core critical infrastructure components. Cabling infrastructure, in particular, is an
important area to consider when designing and managing any data center.
The cabling infrastructure encompasses all data cables that are part of the data center, as well as all of the
power cables necessary to ensure power to all of the loads. It is important to note that cable trays and cable
management devices are critical to the support of IT infrastructure as they help to reduce the likelihood of
downtime due to human error and overheating.
Slide 5
This course will address the basics of cabling infrastructure and will discuss cabling installation practices,
cable management strategies and cable maintenance practices. We will take an in-depth look at both data
cabling and power cabling. Let’s begin with a look at the evolution of data center cabling.
Slide 6
Ethernet protocol has been a data communications standard for many years. Along with Ethernet, several
traditional data cabling practices continue to shape how data cables are deployed.
• High speed data cabling over copper is a cabling medium of choice
• Cable fed into patch panels and wall plates is common
• The RJ45 is the data cable connector of choice
The functionality within the data cables and associated hardware, however, has undergone dramatic change.
Increased data speeds have forced many physical changes. Every time a new, faster standard is ratified by
standardization bodies, the cable and supporting hardware have been redesigned to support it. New test
tools and procedures also follow each new change in speed. These changes have primarily been
required by the newer, faster versions of Ethernet, which are driven by customers’ need for more speed and
bandwidth. When discussing this, it is important to note the uses and differences of both fiber-optic cable,
and traditional copper cable. Let’s compare these two.
Slide 7
Copper cabling has been used for decades in office buildings, data centers and other installations to provide
connectivity. Copper is a reliable medium for transmitting information over shorter distances; but its
performance is only guaranteed up to 109.4 yards (100 meters) between devices. (This would include
structured cabling and patch cords on either end.)
Copper cabling that is used for data network connectivity contains four pairs of wires, which are twisted
along the length of the cable. The twist is crucial to the correct operation of the cable. If the wires unravel,
the cable becomes more susceptible to interference.
Copper cables come in two configurations:
• Solid cables provide better performance and are less susceptible to interference making them the
preferred choice for use in a server environment.
• Stranded cables are more flexible and less expensive, and typically are only used in patch cord
construction.
Copper cabling, patch cords, and connectors are classified based upon their performance characteristics
and for which applications they are typically used. These ratings, called categories, are spelled out in the
TIA/EIA 568 Commercial Building Telecommunications Wiring Standard.
Slide 8
Fiber-optic cable is another common medium for providing connectivity. Fiber cable consists of five
elements. The center portion of the cable, known as the core, is a hair thin strand of glass capable of
carrying light. This core is surrounded by a thin layer of slightly purer glass, called cladding, that contains
and refracts that light. Core and cladding glass are covered in a coating of plastic to protect them from dust
or scratches. Strengthening fibers are then added to protect the core during installation. Finally, all of these
materials are wrapped in plastic or other protective substance that serves as the cable’s jacket.
A light source, blinking billions of times per second, is used to transmit data along a fiber cable. Fiber-optic
components work by turning electronic signals into light signals and vice versa. Light travels down the
interior of the glass, refracting off of the cladding and continuing onward until it arrives at the other end of
the cable and is seen by receiving equipment.
When light passes from one transparent medium to another, like from air to water, or in this case, from the
glass core to the cladding material, the light bends. A fiber cable’s cladding consists of a different material
from the core — in technical terms, it has a different refraction index — that bends the light back toward the
core. This phenomenon, known as total internal reflection, keeps the light moving along a fiber-optic cable
for great distances, even if that cable is curved. Without the cladding, light would leak out.
Fiber cabling can handle connections over a much greater distance than copper cabling, 50 miles (80.5
kilometers) or more in some configurations. Because light is used to transmit the signal, the upper limit of
how far a signal can travel along a fiber cable is related not only to the properties of the cable but also to the
capabilities and relative location of transmitters.
Slide 9
Besides distance, fiber cabling has several other advantages over copper:
• Fiber provides faster connection speeds
• Fiber is not prone to electrical interference or vibration
• Fiber is thinner and light-weight, so more cabling can fit into the same size bundle or limited spaces
• Signal loss over distance is less along optical fiber than copper wire
Two varieties of fiber cable are available in the marketplace: multimode fiber and single mode fiber.
Multimode is commonly used to provide connectivity over moderate distances, such as those in most data
center environments, or among rooms within a single building. Single mode fiber is used for the longest
distances, such as among buildings on a large campus, or between sites.
Copper is generally the less expensive cabling solution over shorter distances (i.e. the length of data center
server rows), while fiber is less expensive for longer distances (i.e. connections among buildings on a
campus).
Slide 10
In the case of data center power cabling, however, historical changes have taken a different route.
In traditional data centers, designers and engineers were not too concerned with single points of failure.
Scheduled downtime was an accepted practice. Systems were periodically taken down to perform
maintenance, and to make changes. Data center operators would also perform infrared scans on power
cable connections prior to the shutdowns to determine problem areas. They would then locate the hot spots
that could indicate possible risk of short circuits and address them.
Traditional data centers, very often, had large transformers that would feed large uninterruptible power
supplies (UPSs) and distribution switchboards. From there, the cables would go to distribution panels that
would often be located on the columns or walls of the data center. Large UPSs, transformers, and
distribution switchgear were all located in the back room.
The incoming power was then stepped down to the correct voltage and distributed to the panels mounted in
the columns. Cables connected to loads, like mainframe computers, would be directly hardwired to the
hardware. In smaller server environments, the power cables would be routed to power strips underneath the
raised floor. The individual pieces of equipment would then plug into those power strips, using sleeve and
pin connectors, to keep the cords from coming apart.
Slide 11
Downtime is not as accepted as it once was in the data center. In many instances, it is no longer possible to
shut down equipment to perform maintenance. A fundamentally different philosophical approach is at work.
Instead of the large transformers of yesterday, smaller ones, called power distribution units (PDUs) are now
the norm. These PDUs have moved out of the back room, onto the raised floor, and in some cases, are
integrated into the racks. These PDUs feed the critical equipment. This change was the first step in a new
way of thinking, a trend that involved getting away from the large transformer and switchboard panel.
Modern data centers also have dual cord environments. Dual cord helps to minimize a single point of failure
scenario. One of the benefits of the dual cord method is that data center operators can perform
maintenance work on source A, while source B maintains the load. The server never has to be taken offline
while upstream maintenance is being performed.
This trend began approximately 10 years ago and it was clearly driven by the user. It became crucial for
data center managers to maintain operations 24 hours a day, 7 days per week. Some of the first businesses
to require such operations were the banks, who introduced ATMs, which demanded constant uptime. The
customer said, “We can no longer tolerate a shutdown.”
Now that we have painted a clear picture of the history of cabling infrastructure, we’ll discuss the concept of
modularity and its importance in the data center.
Slide 12
Modularity is an important concept in the contemporary data center. Modular, scalable Network Critical
Physical Infrastructure (NCPI) components have been shown to be more efficient and more cost effective.
The data cabling industry tackled the issue of modularity decades ago. Before the patch panel was
designed, many adds, moves and changes were made by simply running new cable. After years of this ‘run
a new cable’ mentality, wiring closets and ceilings were loaded with unused data cables. Many wiring
closets became cluttered and congested. The strain on ceilings and roofs from the weight of unused data
cables became a potential hazard. The congestion of data cables under the raised floor also impeded
proper cooling and greatly increased the potential for human error and downtime.
Slide 13
In the realm of data cabling, the introduction of the patch panel brought an end to the ‘run a new cable’
philosophy and introduced modularity to network cabling. The patch panel, located either on the data center
floor or in a wiring closet, is the demarcation point where end points of bulk cable converge. If a data center
manager were to trace a data cable from end to end, starting at the patch panel, he would probably find
himself ending at the wall plate. This span is known as the backbone.
The modularity of the system is in the use of patch cables. The user plugs his patch cable into a wall plate. If
he needs to move a computer, for example, he simply unplugs his patch cable and connects into a different
wall plate. The same is true on the other end, back at the patch panels. If a port on a hub or router
malfunctions, the network administrator can simply unplug it and connect it into another open port.
Data center backbone cabling is typically designed to be non-scalable. The data cabling backbone, 90% of
the time, is located behind the walls, not out in the open.
Typically a network backbone, when installed, especially in new construction scenarios, accounts for future
growth considerations. Adds, moves, and changes can be very costly once the walls are constructed, so in new construction it is best to wire as much of the building as possible with the latest cable standard. This reduces expenses down the road.
Now that we have discussed the concept of modularity, let’s overview the different types of data cables that
exist in a data center.
Slide 14
So, what are the different types of common data center specific data cables?
Category 5 (Cat 5) was originally designed for use with 100Base-T. Cat 5e supports 1 Gig Ethernet. Cat 6a supports 10 Gig Ethernet. It is important to note that a higher-rated cable can be used to support slower speeds, but the reverse is not true. For example, a Cat 5e installation will not support 10 Gig Ethernet, but Cat 6a cabling will support 100Base-T.
Cable assemblies can be defined as a length of bulk cable with a connector terminated onto each end.
Many of the assemblies used are patch cables of various lengths that match or exceed the cabling standard
of the backbone. A Cat 5e backbone requires Cat 5e or better patch cables.
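The rule that a higher-rated cable supports slower speeds, but not the reverse, can be captured in a small lookup. The sketch below simply restates the category-to-speed figures given above; any category not listed is outside the scope of this example.

```python
# Maximum Ethernet speed each category is rated for, in Mbps, per the text above.
MAX_SPEED_MBPS = {"cat5": 100, "cat5e": 1_000, "cat6a": 10_000}

def supports(category: str, required_mbps: int) -> bool:
    """A category supports a given speed if its rating meets or exceeds that speed."""
    return MAX_SPEED_MBPS[category.lower()] >= required_mbps

print(supports("cat5e", 10_000))  # False: Cat 5e will not support 10 Gig Ethernet
print(supports("cat6a", 100))     # True: Cat 6a will support 100Base-T
```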
Slide 15
Data center equipment can require both standard and custom cables. Some cables are specific to the
equipment manufacturer. One example of a common connection is a Cisco router with a 60-pin LFH connector connecting to a router with a V.35 interface, which requires an LFH60 to V.35 Male DTE cable.
An example of a less common connection would be a stand alone tape backup that may have a SCSI
interface. If the cable that came with the equipment does not match up to the SCSI card in a computer, the
data center manager will find himself looking for a custom SCSI cable.
A typical example of the diversity of cables required in the data center is a high speed serial router cable. In
a wide area network (WAN), routers are typically connected to modems, which are called DSU/CSUs.
Some router manufacturers feature unorthodox connectors on their routers. Depending on the interface that
the router and DSU/CSU use to communicate to one another, several connector possibilities exist.
Other devices used in a computer room can require any one of a myriad of cables. Common devices
besides the networking hardware are telco equipment, KVMs, mass storage, monitors, keyboards and mice, and terminal servers. Sometimes brand-name cables are expensive or unavailable. A large market of manufacturer-equivalent cables exists, from which the data center manager can choose.
Slide 16
When discussing data center power cabling, it is important to note that American Wire Gauge (AWG) copper
wire is the common medium for transporting power in the data center. This has been the case for many
years and it still holds true in modern data centers.
The formula for power is Amps x Volts = Power, and data center power cables are delineated by amperage.
The more power that needs to be delivered to the load, the higher the amperage has to be. (Note: The
voltage will not be high under the raised floor. It will be less than 480V; most servers are designed to handle
120 or 208V.)
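As a quick worked example of the Amps x Volts = Power relationship (the 20 A circuit below is an assumed value chosen only for illustration):

```python
def power_watts(amps: float, volts: float) -> float:
    """Power (watts) = current (amps) x voltage (volts)."""
    return amps * volts

print(power_watts(20, 208))  # 4160 W on an assumed 20 A circuit at 208 V
print(power_watts(20, 120))  # 2400 W for the same current at 120 V
# To deliver the same 4160 W at 120 V, the current would have to rise to about 34.7 A,
# which is why amperage, and therefore wire gauge, is the key sizing figure.
```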
For a given voltage, the amperage rises and falls with the level of power delivered. As the amperage increases or decreases, the gauge of the wire needs to be larger or smaller to accommodate the change in amperage.
AWG ratings organize copper wire into numerous recognizable and standard configurations.
A relatively new trend in the domain of data center power cabling is the invention of the whip. Whips are pre-
configured cables with a twist lock cap on one end and insulated copper on the other end. The insulated
copper end feeds a breaker in the main PDU; the twist lock end feeds the rack mounted PDU that supplies
the intelligent power strips in the rack. Server equipment then plugs directly into the power strip. With whips,
there is no need for wiring underneath the floor (with the possible exception of the feed to the main PDU
breakers). Thus, the expense of a raised floor can be avoided. Another benefit of whips is that a licensed
electrician is not required to plug the twist-lock connectors of the whip into the power strip's twist-lock receptacles.
Slide 17
Dual cord, dual power supply also introduced significant changes to the data center power cabling scheme.
In traditional data centers, computers had one feed from one transformer or panel board, and the earliest
PDUs still only had one feed to servers. Large mainframes required two feeds to keep systems consistently
available. Sometimes two different utilities were feeding power to the building.
Now, many servers are configured to support two power feeds, hence the dual cord power supply. Because
data center managers can now switch from one power source to another, this allows for maintenance on
infrastructure equipment without having to take servers offline.
It is important to understand that the power cabling requirements to support the dual cord power supply
configuration have doubled as a result. The same wire, the same copper, and the same sizes are required as in the past, but now data center designers need to account for double the power infrastructure cable, including power-related infrastructure that may be located in the equipment room that
supports the data center.
Now that we’ve talked about a basic overview of both power and data cabling, let’s take a look at some best
practices for cabling in the data center.
Slide 18
Some best practices for data cabling include:
• Overhead deployments
o Overhead cables that are in large bundles should run in cable trays or troughs. If the
manufacturer of the tray or trough offers devices that keep the cable bend radius in check then
they should be used as well. Do not overtighten tie wraps or other hanging devices; overtightening can interfere with the performance of the cable.
• Underfoot deployments
o Be cognizant of the cable’s bend radius specifications and adhere tightly to them. Do not overtighten tie wraps; this can interfere with the performance of the cable.
• Rack installations
o As mentioned previously, be cognizant of the cable’s bend radius specifications and adhere tightly to them. Do not overtighten tie wraps; this can interfere with the performance of the cable. Use vertical and/or horizontal cable management to take up any extra slack.
• Testing cables
o There are several manufacturers of test equipment designed specifically to test today’s high
speed networks. Make sure that the installer tests and certifies every link. A data center
manager can request a report that shows the test results.
Are there any common practices that should be avoided? When designing and installing the network’s backbone, care should be taken to route all Unshielded Twisted Pair (UTP, the U.S. standard) or Shielded Twisted Pair (STP, the European standard) cables away from possible sources of interference such as power lines, electric motors, or overhead lighting.
Slide 19
Power cabling best practices are described in the National Electrical Code (NEC).
When addressing best practices in power cabling, it is important that data center professionals understand the term “continuous load.” The continuous load is defined as any load left on for more than 3 hours, which is, in
effect, all equipment in a data center. Due to the requirements of the continuous load, data center operators
are forced to take all rules that apply to amperages and wire sizes and de-rate those figures by 20%. For
example, if a wire is rated for 100 amps, the best practice is not to run more than 80 amps through it. Let’s
discuss this further.
Over time, cables can get overheated. The de-rating approach helps avoid overheated wires that can lead
to shorts and fires. If the quantity of copper in the cable is insufficient for the amperages required, it will heat
to the point of melting the insulation. If insulation fails, the copper is exposed to anything metal or grounded
in its proximity. If it gets close enough, the electricity will jump or arc and could cause a fire to start.
Undersized power cables also stress the connections. If any connection is loose, the excess load
exacerbates the situation. The de-rating of the power cables takes these facts into account.
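A minimal sketch of the 20% de-rating rule described above (the ratings passed in are examples; the 100 A case matches the figure in the text):

```python
def max_continuous_amps(wire_rating_amps: float, derate_fraction: float = 0.20) -> float:
    """Best-practice continuous load: the wire's rating de-rated by 20%."""
    return wire_rating_amps * (1.0 - derate_fraction)

print(max_continuous_amps(100))  # 80.0 A, the example given in the text
print(max_continuous_amps(30))   # 24.0 A (an assumed rating, same rule applied)
```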
To further illustrate this example, let’s compare electricity to water. If too much water gets pushed into a pipe,
the force of the water will break the pipe if it is too small. Amperages are forcing electricity through the wire;
therefore, the wire is going to heat up if the wire is undersized.
The manufacturer, or supplier, of the cable provides the information regarding the circular mil area, or the cross-sectional area of the wires, inside the cable. The circular mil measurement does not take into account the wire insulation. The circular mil area determines how much amperage can pass through that piece of copper.
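For reference, the circular mil area of a round conductor is simply the square of its diameter measured in mils (thousandths of an inch). A tiny sketch, with an arbitrary example diameter:

```python
def circular_mils(diameter_mils: float) -> float:
    """Cross-sectional area in circular mils = (diameter in mils) squared."""
    return diameter_mils ** 2

print(circular_mils(100))  # 10000.0 circular mils for a conductor 0.1 inch in diameter
```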
Next, let’s compare overhead and under the floor installations.
Slide 20
The benefit of under the floor cabling is that the cable is not visible. Many changes can be made and the
wiring will not be seen. The disadvantage of under the floor cabling is the significant expense of constructing
a raised floor. Data center designers also need to take into account the danger of opening up a raised floor
and exposing other critical systems like the cooling air flow system, if the raised floor is used as a plenum.
With overhead cabling, data center designers can use cabling trays to guide the cables to the equipment.
They can also run conduit from the PDU directly to the equipment or computer load. The conduit is not
flexible, however, which is not good if constant change is expected.
A best practice is to use overhead cables which are all pre-configured in the factory and placed in the
troughs to the equipment. This standardization creates a more convenient, flexible environment for the data
center of today.
Slide 21
Where your power source is, where the load is, and what the grid is like, all affect the design and layout of
the cabling in the data center. When discussing overhead cabling, data center designers are tasked with
figuring out the proper placement of cables ahead of time. Then, they can decide if it would be best to have
the troughs directly over the equipment or in the aisle. Also designers have to take into account local codes
for distributing power. For example, there are established rules that require that sprinkler heads not be
blocked. If there is a 24 inch (60.96 cm) cable tray, designers could not run that tray any closer than 10
inches (25.4 cm) below the sprinkler head, so as not to cover up or obstruct the head. They would need to account for
this upfront in the design stage.
Now that we’ve touched upon best practices for installation, let’s discuss some strategies for selecting
cabling topologies.
Slide 22
Network Topology deals with the different ways computers (and network enabled peripherals) are arranged
on or connected to a network. The most common network topologies are:
• Star. All computers are connected to a central hub.
• Ring. Each computer is connected to two others, such that, starting at any one computer, the
connection can be traced through each computer on the ring back to the first.
• Bus. All computers are connected to a central cable, normally termed bus or backbone.
• Tree. A group of star networks are each connected to a linear backbone.
For data cabling, in IEEE 802.3, UTP/STP Ethernet scenarios, a star network topology is used. Star
topology implies that all computers are connected to a central hub. In its simplest form, a UTP/STP Ethernet Star topology has a hub at the center and devices (e.g., personal computers, printers) connected directly
to it. Small LANs fit this simple model. Larger installations can be much more complicated, with segments
connecting to other segments, but the basic Star topology remains intact.
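Conceptually, a star topology is nothing more than a mapping from the central hub to the devices plugged into it. The short sketch below illustrates this with made-up device names; it is not meant to model any particular product.

```python
# Hypothetical star: every device connects only to the central hub.
star = {"hub-1": ["server-01", "server-02", "printer-01", "workstation-07"]}

def path(device_a: str, device_b: str, topology: dict) -> list:
    """In a star topology, any two devices communicate through the central hub."""
    for hub, devices in topology.items():
        if device_a in devices and device_b in devices:
            return [device_a, hub, device_b]
    return []

print(path("server-01", "printer-01", star))  # ['server-01', 'hub-1', 'printer-01']
```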
Slide 23
Power cables can be laid out either overhead in troughs or below the raised floor. Many factors come into
play when deciding on a power distribution layout from the PDUs to the racks. The size of the data center,
the nature of the equipment being installed and budget are all variables. However, be aware that two
approaches are commonly utilized for distribution of power cables in the data center.
Slide 24
One approach is to run the power cables inside conduits from large wall mounted or floor mounted PDUs to
each cabinet location. This works moderately well for a small server environment with a limited number of
conduits. This does not work well for larger data centers when cabinet locations require multiple power
receptacles.
Slide 25
Another approach, more manageable for larger server environments, is the installation of electrical
substations at the end of each row in the form of circuit panels. Conduit is run from power distribution units
to the circuit panels and then to a subset of connections to the server cabinets.
This configuration uses shorter electrical conduit, which makes it easier to manage, less expensive to install,
and more resistant to a physical accident in the data center. For example, if a heavy object is dropped
through a raised floor, the damage it can cause is greatly reduced in a room with segmented power,
because fewer conduits overlap one another in a given area.
Even more efficient is to deploy PDUs in the racks themselves and to have whips feed the various racks in
the row.
Slide 26
What are the best practices for cable management and organization techniques? Some end users purchase
stranded bulk data cable and RJ45 connectors and manufacture their own patch cables on site. While
doing this assures a clean installation with no excess wire, it is time consuming and costly. Most companies
find it more prudent to inventory pre-made patch cables and use horizontal or vertical cable management to
take up any excess cable. Patch cables are readily available in many standard lengths and colors.
Are there any common practices that should be avoided? All of today’s high speed networks have minimum
bend radius specifications for the bulk cable. This is also true for the patch cables. Care should be taken not to bend patch cables more tightly than their minimum bend radius.
Slide 27
Proper labeling of power cables in the data center is a recommended best practice. A typical electrical panel
labeling scheme is based on a split bus (two buses in the panel) where the labels represent an odd
numbered side and an even numbered side. Instead of normal sequenced numbering, the breakers would
be numbered 1, 3, 5 on the left hand side and would be numbered 2, 4, 6 on the right side, for example.
When labeling a power cable or whip, the PDU designation from the circuit breaker would be a first identifier.
This identifier number indicates where the whip comes from. Identifying the source of the power cable can be complicated because the PDU that is physically closest to the rack may not be the one feeding the whip. In addition, data center staff may want to access the
“B” power source even though the “A” power source might be physically closer. This is why the power
cables need to be properly labeled at each end. The cable label needs to indicate the source PDU (i.e.
PDU1) and also identify the circuit (i.e. circuit B).
Ideally, on the other end of the cable, a label will indicate what load the cable is feeding (e.g., SAN device or Processor D23). To help clarify labeling, very large data centers are laid out in two-foot squares that match
the raised floor. They are usually addressed with east/west and numbered designations. For example, “2
west by 30 east” identifies the location of an exact square on the data center floor (which is supporting a
particular piece or pieces of equipment). Therefore the label identifies the load that is being supported by
the cable. Labeling of both ends of the cable in an organized, consistent manner allows data center
personnel to know the origin of the opposite end.
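The labeling convention described above lends itself to a simple, consistent format. The sketch below shows one possible way to compose such a label; the exact format and the example values are assumptions for illustration, not a prescribed standard.

```python
def cable_label(source_pdu: str, circuit: str, load: str, grid_square: str) -> str:
    """Compose a label identifying the source end and the load end of a power cable."""
    return f"{source_pdu}/{circuit} -> {load} @ {grid_square}"

# Example values echoing the text: source PDU1, circuit B, feeding a SAN device
# located at the floor-grid square "2 west by 30 east".
print(cable_label("PDU1", "circuit B", "SAN device", "2 west by 30 east"))
# PDU1/circuit B -> SAN device @ 2 west by 30 east
```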
Slide 28
With network data cabling, once the backbone is installed and tested it should be fairly stable. Infrequently,
a cable may become exposed or damaged and therefore need to be repaired or replaced. But once in place,
the backbone of a network should remain secure. Occasionally, patch cables can be jarred and damaged;
this occurs most commonly on the user end.
Since the backbone is fairly stable except for occasional repair, almost all changes are initiated simply by
disconnecting a patch cable and reconnecting it somewhere else. The modularity of a well designed cabling
system allows users to disconnect from one wall plate, connect to another and be back up and running
immediately. In the data center, adds, moves and changes should be as simple as connecting and
disconnecting patch cables.
So what are some of the challenges associated with cabling in the data center? We’ll talk about three of the
more common challenges.
Slide 29
The first challenge is associated with useful life.
The initial design and cabling choices can determine the useful life of a data cabling plant. One of the most
important decisions to make when designing a network is choosing the medium: copper, fiber or both?
Every few years newer-faster-better copper cables are introduced into the marketplace, but fiber seems to
remain relatively unchanged. If an organization chose to install FDDI grade 62.5/125 fiber 15 years ago, that
organization may still be using the same cable today, whereas if the same organization had installed Cat 5, it more than likely would have replaced it by now. In the early days, few large installations
were done in fiber because of the cost. The fiber was more expensive and so was the hardware that it
plugged into. Now the costs of fiber and copper are much closer. Fiber cabling is also starting to change.
The current state of the art is 50/125 laser optimized for 10 Gig Ethernet.
Next, there is airflow and cooling.
There are a few issues with cables and cabling in the data center that affect airflow and cooling. Cables
inside of an enclosed cabinet need to be managed so that they allow for maximum airflow, which helps
reduce heat. When cooling is provided through a raised floor, it is best to keep that space as cable-free as possible. For this reason, expect to see more and more cables being run across the tops of cabinets as
opposed to at the bottom or underneath the raised floor.
Finally, there is management and labeling.
Many manufacturers offer labeling products for wall plates, patch panels, and cables. Software packages also exist that help keep track of cabling. In a large installation, these tools can be
invaluable.
Let’s take a look at some expenses associated with cabling in the data center.
Slide 30
For data cabling, the initial installation of a cabling plant, and the future replacement of that plant, are the
two greatest expenses. Beyond installation and replacement costs, the only other expense is adding patch
cables as the network grows. The cost of patch cables is minimal considering the other costs in an IT
budget. Cabling costs are, for the most part, up front costs.
Regarding power cables, the design of the data center, and the location of the PDUs, will have a significant
impact on costs. Dual cord power supplies are driving up the cost because double the power cabling is
required. Design decisions are critical. Where will the loads be located? How far from the power distribution?
What if PDUs are fed from different circuits? If not planned properly, unnecessarily long power cable runs
will be required and will drive up overall data center infrastructure costs.
Next, let’s look at cabling maintenance.
Slide 31
How are cables replaced?
Patch cables are replaced by simply unplugging both ends and connecting the new one. However, cables
do not normally wear out. Most often, if a cable shorts, it is due to misuse or abuse. Cable assemblies have
a lifetime far beyond the equipment to which they are connected.
How are cables rerouted?
If the cable that needs to be rerouted is a patch cable then it can simply be unplugged on one or both ends
and rerouted. If the cable that needs to be rerouted is one of the backbone cables run through the walls,
ceilings, or in cable troughs, it could be difficult to access. The backbone of a cabling installation should be
worry-free, but if problems come up they can sometimes be difficult to address. It depends on what the
issue is, where it is, and what the best solution is. Sometimes re-running a new link is the best solution.
Slide 32
The equipment changes quite frequently in the data center; on average, a server changes every 2-3
years. It is important to note that power cabling only fails at the termination points. The maintenance occurs
at the connections. Data center managers need to scan those connections and look for hot spots. It is also
prudent to scan the large PDU and its connection off the bus for hot spots.
Heat indicates that there is either a loose connection or an overload. By doing the infrared scan, data center
operators can sense that failure before it happens. In the dual cord environment, it becomes easy to switch
to the alternate source, unplug the connector, and check its connection.
Slide 33
Every few feet, the power cable is labeled with a voltage rating, an amperage rating, and the number of
conductors that can be found inside the cable. This information is stamped on the cable. American Wire Gauge (AWG) is a rating of the size of the copper conductor, and the stamped marking also identifies the number of conductors in the cable.
Inside a whip there are a minimum of 3 wires: one hot, one neutral, and one ground. It is also possible to have 5 wires (3 hot, 1 neutral, 1 ground) inside the whip. Feeder cables, which feed the Uninterruptible
Power Supply (UPS) and feed the PDU are thicker, heavier cables. Single conductor cables (insulated
cables with multiple strands of uninsulated copper wires inside) are usually placed in groups within metal
conduit to feed power hungry data center infrastructure components such as large UPSs and Computer
Room Air Conditioners (CRACs). Multiple conductor cables (cables inside a larger insulated jacket that are each separately insulated) are most often found on the load side of the PDU. Single conductors are
most often placed within conduit, while multiple conductor cables are generally distributed outside of the
conduit. Whips are multiple conductor cables.
Slide 34
To summarize, let’s review some of the information that we have covered throughout the course.
• A modular, scalable approach to data center cabling is more energy efficient and cost effective
• Copper and fiber data cables running over Ethernet networks are considered the standard for data
centers
• American Wire Gauge copper cable is a common means of transporting power in the data center
• Cabling to support dual cord power supplies helps to minimize single points of failure in a data
center
• To minimize cabling costs, it is important for data center managers to take a proactive approach to
the design, build, and operation of the data center of today
Slide 35
Thank you for participating in this course.
 
Fundamentals of Cooling I Transcript
Slide 1: The Fundamentals of Cooling I
Welcome to The Fundamentals of Cooling I.
Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls
allow you to navigate through the eLearning experience. Using your browser controls may disrupt the
normal play of the course. Click the Notes tab to read a transcript of the narration.
Slide 3: Objectives
At the completion of this course, you will be able to:
• Explain why cooling in the data center is so critical to high availability
• Distinguish between Precision and Comfort Cooling
• Recognize how heat is generated and transferred
• Define basic terms like Pressure, Volume and Temperature as well as their units of measurement
• Describe how these terms are related to the Refrigeration Cycle
• Describe the Refrigeration Cycle and its components
Slide 4: Introduction
Every Information Technology professional who is involved with the operation of computing equipment
needs to understand the function of air conditioning in the data center or network room. This course
explains the function of basic components of an air conditioning system for a computer room.
Slide 5: Introduction
Whenever electrical power is being consumed in an Information Technology (IT) room or data center,
heat is being generated. We will talk more about how heat is generated a little later in this course. In
the Data Center Environment, heat has the potential to create significant downtime, and therefore must
be removed from the space. Data Center and IT room heat removal is one of the most essential yet
least understood of all critical IT environment processes. Improper or inadequate cooling significantly
detracts from the lifespan and availability of IT equipment. A general understanding of the fundamental
principles of air conditioning and the basic arrangement of precision cooling systems facilitates more
precise communication among IT and cooling professionals when specifying, operating, or maintaining
a cooling solution. The purpose of precision air-conditioning equipment is the precise control of both
temperature and humidity.
Slide 6: Evolution
Despite revolutionary changes in IT technology and products over the past decades, the design of cooling
infrastructure for data centers has changed very little since 1965. Although IT equipment has always
required cooling, the requirements of today’s IT systems, combined with the way that those IT systems are
deployed, have created the need for new cooling-related systems and strategies which were not foreseen
 
when the cooling principles for the modern data center were developed over 30 years ago.
Slide 7: Comfort vs. Precision Cooling
Today's technology rooms require precise, stable environments in order for sensitive electronics to
operate at their peak. IT hardware produces an unusual, concentrated heat load, and at the same
time, is very sensitive to changes in temperature or humidity. Most buildings are equipped with
Comfort Air Conditioning units, which are designed for the comfort of people. When compared to
computer room air conditioning systems, comfort systems typically remove an unacceptable amount of
moisture from the space and generally do not have the capability to maintain the temperature and
humidity parameters specified for IT rooms and data centers. Precision air systems are designed for
close temperature and humidity control. They provide year-round operation, with the ease of service,
system flexibility, and redundancy necessary to keep the technology room up and running.
As damaging as the wrong ambient conditions can be, rapid temperature swings can also have a
negative effect on hardware operation. This is one of the reasons hardware is left powered up, even
when not processing data. According to ASHRAE, the recommended upper limit temperature for data
center environments is 81°F (27.22°C). Precision air conditioning is designed to constantly maintain
temperature within 1°F (0.56°C). In contrast, comfort systems are unable to provide such precise
temperature and humidity controls.
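Because data center temperature guidance is quoted in both Fahrenheit and Celsius, the small conversion sketch below may be helpful; the 81-degree figure simply repeats the ASHRAE value quoted above.

```python
def f_to_c(temp_f: float) -> float:
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32) * 5.0 / 9.0

def f_delta_to_c_delta(delta_f: float) -> float:
    """Convert a temperature *difference*; no 32-degree offset applies."""
    return delta_f * 5.0 / 9.0

print(round(f_to_c(81), 2))             # 27.22 C, the recommended upper limit quoted above
print(round(f_delta_to_c_delta(1), 2))  # 0.56 C, the 1 F control band of precision cooling
```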
Slide 8: The Case for Data Center Cooling
A poorly maintained technology room environment will have a negative impact on data processing and
storage operations. A high or low ambient temperature or rapid temperature swings can corrupt data
processing and shut down an entire system. Temperature variations can alter the electrical and physical
characteristics of electronic chips and other board components, causing faulty operation or failure. These
problems may be transient or may last for days. Transient problems can be very hard to diagnose.
Slide 9: The Case for Data Center Cooling
High Humidity – High humidity can result in tape and surface deterioration, condensation, corrosion, paper
handling problems, and gold and silver migration leading to component and board failure.
Low Humidity – Low humidity increases the possibility of static electric discharges. Such static discharges
can corrupt data and damage hardware.
Slide 10: The Physics of Cooling
Now that we know that heat threatens availability of IT equipment, it’s important to understand the physics
of cooling, and define some basic terminology.
First of all, what is Heat?
Heat is simply a form of energy that is transferred by a difference in temperature. It exists in all matter on
earth, in varied quantities and intensities. Heat energy can be measured relative to any reference
 
temperature, body or environment.
What is Temperature?
Temperature is most commonly thought of as how hot or cold something is. It is a measure of heat intensity, commonly expressed on three different scales: Celsius, Fahrenheit, and Kelvin.
What is Pressure?
Pressure is a basic physical property of a gas. It is measured as the force exerted by the gas per unit area
on surroundings.
What is Volume?
Volume is the amount of space taken up by matter. The example of a balloon illustrates the relationship
between pressure and volume. As the pressure inside the balloon becomes greater than the pressure outside of the balloon, the balloon will get larger. Therefore, as the pressure inside increases relative to the outside, the volume increases.
We will talk more about the relationship between pressure, volume and temperature a little later in this
course.
Slide 11: Three Properties of Heat Energy
Now that we know the key terms related to the physics of cooling, we can now explore the 3 properties of
heat energy. A unique property of heat energy is that it can only flow in one direction, from hot to cold. For
example if an ice cube is placed on a hot surface, it cannot drop in temperature; it can only gain heat
energy and rise in temperature, thereby causing it to melt.
A second property of heat transfer is that heat energy cannot be destroyed. The third property is that heat
energy can be transferred from one object to another object. In considering the ice cube placed on a hot
surface again, the heat from the surface is not destroyed, rather it is transferred to the ice cube which
causes it to melt.
Slide 12: Heat Transfer Methods
There are three methods of heat transfer: conduction, convection, and radiation.
Conduction is the process of transferring heat through a solid material. Some substances conduct heat
more easily than others. Solids are better conductors than liquids and liquids are better conductors than
gases. Metals are very good conductors of heat, while air is a very poor conductor of heat.
Slide 13: Heat Transfer Methods
Convection is the result of transferring heat through the movement of a liquid or gas.
Radiation, as it relates to heat transfer, is the process of transferring heat by means of electromagnetic waves,
emitted due to the temperature difference between two objects.
 
Slide 14: Heat Transfer Methods
For example, blacktop pavement gets hot from radiant heat from the sun’s rays. The light that warms the
blacktop from the Sun is a form of electromagnetic radiation. Radiation is a method of heat transfer that
does not rely on any contact between the heat source and the heated object. If you step barefoot on the
pavement, the pavement feels hot. This feeling is due to the warmth of the pavement being transferred to
your cold feet by means of conduction. The conduction occurs when two objects at different temperatures
are in contact with each other. Heat flows from the warmer to the cooler object until they are both the same
temperature. Finally, if you look down a road of paved blacktop, in the distance, you may see wavy lines
emanating up from the road, much like a mirage. This visible form of convection is caused by the transfer
of heat from the surface of the blacktop to the cooler air above. Convection occurs when warmer areas of a
liquid or gas rise to cooler areas in the liquid or gas. As this happens, cooler liquid or gas takes the place of
the warmer areas which have risen higher. This cycle results in a continuous circulation pattern and heat is
transferred to cooler areas. "Hot air rises and cool air falls to take its place" - this is a description of
convection in our atmosphere.
Slide 15: Air Flow in IT Spaces
As mentioned earlier, heat energy can only flow from hot to cold. For this reason, we have air conditioners
and refrigerators. They use electrical or mechanical energy to pump heat energy from one place to another,
and are even capable of pumping heat from a cooler space to a warmer space. The ability to pump heat to
the outdoors, even when it is hotter outside than it is in the data center, is a critical function that allows high-
power computing equipment to operate in an enclosed space. Understanding how this is possible is a
foundation to understanding the design and operation of cooling systems for IT installations.
 
 
Slide 16: Heat Generation
Whenever electrical power is being consumed in an Information Technology (IT) room or data center, heat
is being generated that needs to be removed from the space. This heat generation occurs at various levels
throughout the data center, including the chip level, server level, rack level and room level. With few
exceptions, over 99% of the electricity used to power IT equipment is converted into heat. Unless the
excess heat energy is removed, the room temperature will rise until IT equipment shuts down or potentially
even fails.
 
 
Slide 17: Heat Generation
Let’s take a closer look at heat generation at the server level. Approximately 50% of the heat energy
released by servers originates in the microprocessor. A fan moves a stream of cold air across the chip
assembly. The server or rack-mounted blade assembly containing the microprocessors usually draws cold
air into the front of the chassis and exhausts it out of the rear. The amount of heat generated by servers is
on a rising trend. A single blade server chassis can release 4 Kilowatts (kW) or more of heat energy into
the IT room or data center. Such a heat output is equivalent to the heat released by forty 100-Watt light
bulbs and is actually more heat energy than the capacity of the heating element in many residential cooking
ovens.
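The comparison above is simple arithmetic, restated in the sketch below. The 3.412 BTU/hr-per-watt conversion factor is a standard figure, not one taken from this course.

```python
BTU_PER_HOUR_PER_WATT = 3.412  # standard conversion factor

def equivalent_100w_bulbs(heat_watts: float) -> float:
    """Number of 100 W light bulbs releasing the same amount of heat."""
    return heat_watts / 100.0

blade_chassis_watts = 4_000  # the 4 kW blade chassis example from the text
print(equivalent_100w_bulbs(blade_chassis_watts))   # 40.0 bulbs
print(blade_chassis_watts * BTU_PER_HOUR_PER_WATT)  # 13648.0 BTU/hr that must be removed
```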
Now that we have learned about the physics and properties of heat, we will talk next about the Ideal Gas
Law.
Slide 18: The Ideal Gas Law
Previously, we defined pressure, temperature, and volume. Further, it is imperative to the understanding
of data center cooling to recognize how these terms relate to each other.
The relation between pressure (P), volume (V), and temperature (T) is known as the Ideal Gas Law.
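The standard statement of the law is shown below, where n is the amount of gas (in moles) and R is the universal gas constant (approximately 8.314 J/(mol·K)).

```latex
\[
  P\,V = n\,R\,T
\]
% P: absolute pressure, V: volume, T: absolute temperature (in Kelvin)
% n: amount of gas in moles, R: universal gas constant, approximately 8.314 J/(mol K)
```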
7 deadly sins of backup and recovery
 
SRE in Startup
SRE in StartupSRE in Startup
SRE in Startup
 
General presentation 29jun11.ppt
General presentation 29jun11.pptGeneral presentation 29jun11.ppt
General presentation 29jun11.ppt
 
Fulcrum Group- Layer Your DR/BC
Fulcrum Group- Layer Your DR/BCFulcrum Group- Layer Your DR/BC
Fulcrum Group- Layer Your DR/BC
 
Rem NetApp Champion Case Study
Rem NetApp Champion Case StudyRem NetApp Champion Case Study
Rem NetApp Champion Case Study
 
Doing Too Much PM Report
Doing Too Much PM ReportDoing Too Much PM Report
Doing Too Much PM Report
 

Similar to Data Center Associate Certification Exam Study Guide Transcript

An Investigation of Fault Tolerance Techniques in Cloud Computing
An Investigation of Fault Tolerance Techniques in Cloud ComputingAn Investigation of Fault Tolerance Techniques in Cloud Computing
An Investigation of Fault Tolerance Techniques in Cloud Computingijtsrd
 
Insider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection ImperativeInsider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection ImperativeDataCore Software
 
Successful_BC_Strategy.pdf
Successful_BC_Strategy.pdfSuccessful_BC_Strategy.pdf
Successful_BC_Strategy.pdfmykovalenko1
 
Module 4 disaster recovery student slides ver 1.0
Module 4 disaster recovery   student slides ver 1.0Module 4 disaster recovery   student slides ver 1.0
Module 4 disaster recovery student slides ver 1.0Aladdin Dandis
 
Will You Be Prepared When The Next Disaster Strikes - Whitepaper
Will You Be Prepared When The Next Disaster Strikes - WhitepaperWill You Be Prepared When The Next Disaster Strikes - Whitepaper
Will You Be Prepared When The Next Disaster Strikes - WhitepaperChristian Caracciolo
 
Top7ReasonsPreventativeMaintenanceCity
Top7ReasonsPreventativeMaintenanceCityTop7ReasonsPreventativeMaintenanceCity
Top7ReasonsPreventativeMaintenanceCityAlecia Flahiff
 
Resiliency vs High Availability vs Fault Tolerance vs Reliability
Resiliency vs High Availability vs Fault Tolerance vs  ReliabilityResiliency vs High Availability vs Fault Tolerance vs  Reliability
Resiliency vs High Availability vs Fault Tolerance vs Reliabilityjeetendra mandal
 
How network downtime affects associations
How network downtime affects associationsHow network downtime affects associations
How network downtime affects associationsOSIbeyond
 
Fluke Connect Condition Based Maintenance
Fluke Connect Condition Based MaintenanceFluke Connect Condition Based Maintenance
Fluke Connect Condition Based MaintenanceFrederic Baudart, CMRP
 
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...The Return on Invest in the Internet of Things. Mastering the Digital Transfo...
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...Capgemini
 
7 deadly data centre sins: how to recognise them
7 deadly data centre sins: how to recognise them7 deadly data centre sins: how to recognise them
7 deadly data centre sins: how to recognise themKatieirelandSSE
 
Iaetsd design and implementation of secure cloud systems using
Iaetsd design and implementation of secure cloud systems usingIaetsd design and implementation of secure cloud systems using
Iaetsd design and implementation of secure cloud systems usingIaetsd Iaetsd
 
Celera Networks on Cloud Computing
Celera Networks on Cloud Computing Celera Networks on Cloud Computing
Celera Networks on Cloud Computing CeleraNetworks
 
V mware quick start guide to disaster recovery
V mware   quick start guide to disaster recoveryV mware   quick start guide to disaster recovery
V mware quick start guide to disaster recoveryVMware_EMEA
 
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...Global_Technology_Services - Technical_Support_Services_White_Paper_External_...
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...Jim Mason
 
Enterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankEnterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankDonna Perlstein
 
Enterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankEnterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankCloudEndure
 
Approach To It Simplification PowerPoint Presentation Slides
Approach To It Simplification PowerPoint Presentation SlidesApproach To It Simplification PowerPoint Presentation Slides
Approach To It Simplification PowerPoint Presentation SlidesSlideTeam
 

Similar to Data Center Associate Certification Exam Study Guide Transcript (20)

An Investigation of Fault Tolerance Techniques in Cloud Computing
An Investigation of Fault Tolerance Techniques in Cloud ComputingAn Investigation of Fault Tolerance Techniques in Cloud Computing
An Investigation of Fault Tolerance Techniques in Cloud Computing
 
Insider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection ImperativeInsider's Guide- The Data Protection Imperative
Insider's Guide- The Data Protection Imperative
 
Successful_BC_Strategy.pdf
Successful_BC_Strategy.pdfSuccessful_BC_Strategy.pdf
Successful_BC_Strategy.pdf
 
Module 4 disaster recovery student slides ver 1.0
Module 4 disaster recovery   student slides ver 1.0Module 4 disaster recovery   student slides ver 1.0
Module 4 disaster recovery student slides ver 1.0
 
Will You Be Prepared When The Next Disaster Strikes - Whitepaper
Will You Be Prepared When The Next Disaster Strikes - WhitepaperWill You Be Prepared When The Next Disaster Strikes - Whitepaper
Will You Be Prepared When The Next Disaster Strikes - Whitepaper
 
Top7ReasonsPreventativeMaintenanceCity
Top7ReasonsPreventativeMaintenanceCityTop7ReasonsPreventativeMaintenanceCity
Top7ReasonsPreventativeMaintenanceCity
 
Resiliency vs High Availability vs Fault Tolerance vs Reliability
Resiliency vs High Availability vs Fault Tolerance vs  ReliabilityResiliency vs High Availability vs Fault Tolerance vs  Reliability
Resiliency vs High Availability vs Fault Tolerance vs Reliability
 
How network downtime affects associations
How network downtime affects associationsHow network downtime affects associations
How network downtime affects associations
 
Neville Fuller
Neville FullerNeville Fuller
Neville Fuller
 
Fluke Connect Condition Based Maintenance
Fluke Connect Condition Based MaintenanceFluke Connect Condition Based Maintenance
Fluke Connect Condition Based Maintenance
 
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...The Return on Invest in the Internet of Things. Mastering the Digital Transfo...
The Return on Invest in the Internet of Things. Mastering the Digital Transfo...
 
CLOUD TESTING MODEL – BENEFITS, LIMITATIONS AND CHALLENGES
CLOUD TESTING MODEL – BENEFITS, LIMITATIONS AND CHALLENGESCLOUD TESTING MODEL – BENEFITS, LIMITATIONS AND CHALLENGES
CLOUD TESTING MODEL – BENEFITS, LIMITATIONS AND CHALLENGES
 
7 deadly data centre sins: how to recognise them
7 deadly data centre sins: how to recognise them7 deadly data centre sins: how to recognise them
7 deadly data centre sins: how to recognise them
 
Iaetsd design and implementation of secure cloud systems using
Iaetsd design and implementation of secure cloud systems usingIaetsd design and implementation of secure cloud systems using
Iaetsd design and implementation of secure cloud systems using
 
Celera Networks on Cloud Computing
Celera Networks on Cloud Computing Celera Networks on Cloud Computing
Celera Networks on Cloud Computing
 
V mware quick start guide to disaster recovery
V mware   quick start guide to disaster recoveryV mware   quick start guide to disaster recovery
V mware quick start guide to disaster recovery
 
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...Global_Technology_Services - Technical_Support_Services_White_Paper_External_...
Global_Technology_Services - Technical_Support_Services_White_Paper_External_...
 
Enterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankEnterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the Bank
 
Enterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the BankEnterprise-Grade Disaster Recovery Without Breaking the Bank
Enterprise-Grade Disaster Recovery Without Breaking the Bank
 
Approach To It Simplification PowerPoint Presentation Slides
Approach To It Simplification PowerPoint Presentation SlidesApproach To It Simplification PowerPoint Presentation Slides
Approach To It Simplification PowerPoint Presentation Slides
 

Recently uploaded

Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashik
Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service NashikRussian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashik
Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashikranjana rawat
 
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashik
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service NashikRussian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashik
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashikranjana rawat
 
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130Suhani Kapoor
 
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service Bikaner
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service BikanerVIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service Bikaner
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service BikanerSuhani Kapoor
 
HIGH PRESSURE PROCESSING ( HPP ) .pptx
HIGH PRESSURE  PROCESSING ( HPP )  .pptxHIGH PRESSURE  PROCESSING ( HPP )  .pptx
HIGH PRESSURE PROCESSING ( HPP ) .pptxparvin6647
 
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile ServiceJp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile ServiceHigh Profile Call Girls
 
VIP Kolkata Call Girl Jadavpur 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jadavpur 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jadavpur 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jadavpur 👉 8250192130 Available With Roomdivyansh0kumar0
 
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...dollysharma2066
 
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptx
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptxChocolate Milk Flavorful Indulgence to RD UHT Innovations.pptx
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptxRD Food
 
Assessment on SITXINV007 Purchase goods.pdf
Assessment on SITXINV007 Purchase goods.pdfAssessment on SITXINV007 Purchase goods.pdf
Assessment on SITXINV007 Purchase goods.pdfUMER979507
 
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一Fi sss
 
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...srsj9000
 
BPP NC II Lesson 3 - Pastry Products.pptx
BPP NC II Lesson 3 - Pastry Products.pptxBPP NC II Lesson 3 - Pastry Products.pptx
BPP NC II Lesson 3 - Pastry Products.pptxmaricel769799
 
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...Suhani Kapoor
 
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashik
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service NashikLow Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashik
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashikranjana rawat
 
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashikranjana rawat
 
Irradiation preservation of food advancements
Irradiation preservation of food advancementsIrradiation preservation of food advancements
Irradiation preservation of food advancementsDeepika Sugumar
 

Recently uploaded (20)

Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashik
Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service NashikRussian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashik
Russian Call Girls in Nashik Riya 7001305949 Independent Escort Service Nashik
 
young Whatsapp Call Girls in Jamuna Vihar 🔝 9953056974 🔝 escort service
young Whatsapp Call Girls in Jamuna Vihar 🔝 9953056974 🔝 escort serviceyoung Whatsapp Call Girls in Jamuna Vihar 🔝 9953056974 🔝 escort service
young Whatsapp Call Girls in Jamuna Vihar 🔝 9953056974 🔝 escort service
 
young Whatsapp Call Girls in Vivek Vihar 🔝 9953056974 🔝 escort service
young Whatsapp Call Girls in Vivek Vihar 🔝 9953056974 🔝 escort serviceyoung Whatsapp Call Girls in Vivek Vihar 🔝 9953056974 🔝 escort service
young Whatsapp Call Girls in Vivek Vihar 🔝 9953056974 🔝 escort service
 
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashik
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service NashikRussian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashik
Russian Call Girls Nashik Riddhi 7001305949 Independent Escort Service Nashik
 
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130
VIP Call Girls Service Secunderabad Hyderabad Call +91-8250192130
 
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service Bikaner
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service BikanerVIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service Bikaner
VIP Call Girl Bikaner Aashi 8250192130 Independent Escort Service Bikaner
 
HIGH PRESSURE PROCESSING ( HPP ) .pptx
HIGH PRESSURE  PROCESSING ( HPP )  .pptxHIGH PRESSURE  PROCESSING ( HPP )  .pptx
HIGH PRESSURE PROCESSING ( HPP ) .pptx
 
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile ServiceJp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile Service
Jp Nagar Call Girls Bangalore WhatsApp 8250192130 High Profile Service
 
VIP Kolkata Call Girl Jadavpur 👉 8250192130 Available With Room
VIP Kolkata Call Girl Jadavpur 👉 8250192130  Available With RoomVIP Kolkata Call Girl Jadavpur 👉 8250192130  Available With Room
VIP Kolkata Call Girl Jadavpur 👉 8250192130 Available With Room
 
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...
Russian Escorts DELHI - Russian Call Girls in Delhi Greater Kailash TELL-NO. ...
 
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Ghitorni Delhi 💯Call Us 🔝8264348440🔝
 
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptx
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptxChocolate Milk Flavorful Indulgence to RD UHT Innovations.pptx
Chocolate Milk Flavorful Indulgence to RD UHT Innovations.pptx
 
Assessment on SITXINV007 Purchase goods.pdf
Assessment on SITXINV007 Purchase goods.pdfAssessment on SITXINV007 Purchase goods.pdf
Assessment on SITXINV007 Purchase goods.pdf
 
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一
(办理学位证)加州大学圣塔芭芭拉分校毕业证成绩单原版一比一
 
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...
Best Connaught Place Call Girls Service WhatsApp -> 9999965857 Available 24x7...
 
BPP NC II Lesson 3 - Pastry Products.pptx
BPP NC II Lesson 3 - Pastry Products.pptxBPP NC II Lesson 3 - Pastry Products.pptx
BPP NC II Lesson 3 - Pastry Products.pptx
 
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...
VIP Russian Call Girls in Noida Deepika 8250192130 Independent Escort Service...
 
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashik
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service NashikLow Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashik
Low Rate Call Girls Nashik Mahima 7001305949 Independent Escort Service Nashik
 
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service NashikHigh Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashik
High Class Call Girls Nashik Priya 7001305949 Independent Escort Service Nashik
 
Irradiation preservation of food advancements
Irradiation preservation of food advancementsIrradiation preservation of food advancements
Irradiation preservation of food advancements
 

Data Center Associate Certification Exam Study Guide Transcript

Fundamentals of Availability (continued)

When systems are down, revenue streams are slowed or stopped, business costs and expenses are incurred, and assets are underutilized or underproductive. Therefore, the more efficient the strategy is in reducing downtime from any cause, the more value it has to the business in meeting all three core objectives (revenue, cost, and assets).

Slide 7: Measuring Business Value
Historically, assessment of Physical Infrastructure business value was based on two core criteria: availability and upfront cost. Increasing the availability (uptime) of the Physical Infrastructure system, and ultimately of the business processes, allows a business to continue to bring in revenue and to better optimize the use (or productivity) of its assets. Imagine a credit card processing company whose systems are unavailable: credit card purchases cannot be processed, halting the revenue stream for the duration of the downtime, and employees cannot be productive without their systems online. Minimizing the upfront cost of the Physical Infrastructure results in a greater return on that investment; if the Physical Infrastructure cost is low and the risk and cost of downtime are high, the business case becomes easier to justify. While these arguments still hold true, today's rapidly changing IT environments dictate an additional criterion for assessing Physical Infrastructure business value: agility. Business plans must be agile to deal with changing market conditions, opportunities, and environmental factors. Investments that lock up resources limit the ability to respond in a flexible manner, and when that flexibility or agility is not present, lost opportunity is the predictable result.

Slide 8: Five 9's of Availability
A term commonly used when discussing availability is "five nines." Five nines refers to a network that is accessible 99.999% of the time. Although widely used, the term is frequently misunderstood and can be misleading; we'll explain why a little later in the course.

Slide 9: Key Terms
There are many additional terms associated with availability, business continuity, and disaster recovery. Before we go any further, let's define some of these terms.
Reliability is the ability of a system or component to perform its required functions under stated conditions for a specified period of time. Availability, on the other hand, is the degree to which a system or component is operational and accessible when required for use. It can be viewed as the likelihood that the system or component is in a state to perform its required function under given conditions at a given instant in time. Availability is determined by a system's reliability as well as its recovery time when a failure does occur. When systems have long continuous operating times, failures are inevitable; once a failure occurs, the critical variable becomes how quickly the system can be recovered. In the data center, having a reliable system design is the most critical variable, but when a failure occurs, the most important consideration must be getting the IT equipment and business processes up and running as fast as possible to keep downtime to a minimum.

Slide 10: Key Terms
When considering any availability or reliability value, one should always ask for a definition of failure. Moving forward without a clear definition of failure is like advertising the fuel efficiency of an automobile in "miles per tank" without defining the capacity of the tank in liters or gallons. To address this ambiguity, the IEC (International Electrotechnical Commission) offers two basic definitions of a failure:
1. The termination of the ability of the product as a whole to perform its required function.
2. The termination of the ability of any individual component to perform its required function, but not the termination of the ability of the product as a whole to perform.

Slide 11: Key Terms
MTBF, or Mean Time Between Failures, is a basic measure of a system's reliability. It is typically expressed in hours; the higher the MTBF, the higher the reliability of the product.
MTTR, or Mean Time to Recover (or Repair), is the expected time to recover a system from a failure. This may include the time it takes to diagnose the problem, the time it takes to get a repair technician onsite, and the time it takes to physically repair the system. Like MTBF, MTTR is expressed in hours. MTTR impacts availability, not reliability: the longer it takes to recover a system from a failure, the lower the system's availability. As MTBF goes up, availability goes up; as MTTR goes up, availability goes down.

Slide 12: The Limitations of 99.999%
As mentioned before, five nines is a misleading term because its use has become diluted. Five nines has been used to refer to the amount of time that the data center is powered up and available; in other words, a data center that has achieved five nines is powered up 99.999% of the time. However, loss of power is only one part of the equation. The other part of the availability equation is reliability. Take, for example, two data centers that are both considered 99.999% available. In one year, Data Center A lost power once, but the outage lasted a full 5 minutes. Data Center B lost power 10 times, but for only 30 seconds each time. Both data centers were without power for a total of 5 minutes. The missing detail is the recovery time. Any time systems lose power, there is a recovery period in which servers must be rebooted, data must be recovered, and corrupted systems must be repaired. That recovery process could take minutes, hours, days, or even weeks. If you consider the two data centers again, Data Center B, with its 10 power outages, will actually accumulate a much longer duration of downtime than the data center that had only one occurrence, because it incurs the recovery time 10 times over. It is because of this dynamic that reliability is equally important to the discussion of availability. Reliability speaks to the frequency of failures in a given time frame (for a fixed failure rate, the longer the period considered, the lower the reliability), whereas availability is only a percentage of uptime over a given duration.
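To make the relationship between MTBF, MTTR, and availability concrete, here is a minimal Python sketch. It is not part of the original course, and the failure and repair figures in it are purely illustrative; it computes steady-state availability as MTBF / (MTBF + MTTR) and converts an availability figure into expected downtime per year.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to recover (MTTR), both in hours."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


def downtime_per_year(availability_fraction: float) -> float:
    """Expected downtime in minutes per year for a given availability."""
    minutes_per_year = 365.25 * 24 * 60
    return (1.0 - availability_fraction) * minutes_per_year


# Illustrative (made-up) figures: a system that fails on average once every
# 50,000 hours and takes 4 hours to recover.
a = availability(50_000, 4)
print(f"Availability: {a:.4%}")                          # ~99.9920%
print(f"Downtime: {downtime_per_year(a):.1f} min/year")  # ~42 min/year

# 'Five nines' (99.999%) corresponds to roughly 5.3 minutes of downtime per year.
print(f"Five nines: {downtime_per_year(0.99999):.1f} min/year")
```

Note that, when MTTR is small relative to MTBF, halving the recovery time improves availability roughly as much as doubling the time between failures, which is why recovery speed deserves as much attention as failure prevention.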
Slide 13: Factors that Affect Availability and Reliability
There are numerous factors that affect data center availability and reliability. These include AC power conditions, lack of adequate cooling in the data center, equipment failure, natural and artificial disasters, and human error.

Slide 14: AC Power Conditions
Let's look first at AC power conditions. Power quality anomalies are organized into seven categories based on wave shape:
1. Transients
2. Interruptions
3. Sag / undervoltage
4. Swell / overvoltage
5. Waveform distortion
6. Voltage fluctuations
7. Frequency variations

Slide 15: Inadequate Cooling
Another factor that poses a significant threat to availability is a lack of cooling in the IT environment. Whenever electrical power is consumed, heat is generated. In the data center environment, where a large quantity of heat is generated, the potential exists for significant downtime unless this heat is removed from the space.

Slide 16: Inadequate Cooling
Often, cooling systems may be in place in the data center; however, if the cooling is not distributed properly, hot spots can occur.

Slide 17: Inadequate Cooling
Hot spots within the data center further threaten availability. In addition, inadequate cooling significantly detracts from the lifespan and availability of IT equipment. It is recommended that a hot aisle/cold aisle configuration be used when designing the data center layout. Hot spots can also be alleviated by the use of properly sized cooling systems and supplemental spot coolers and air distribution units.

Slide 18: Equipment Failures
The health of IT equipment is an important factor in ensuring a highly available system, as equipment failures pose a significant threat to availability. Failures can occur for a variety of reasons, including damage caused by prolonged improper utility power, prolonged exposure to elevated or decreased temperatures or humidity, component failure, and equipment age.

Slide 19: Natural and Artificial Disasters
Disasters also pose a significant threat to availability. Hurricanes, tornadoes, floods, and the blackouts that often follow these disasters all create tremendous opportunity for downtime. In many of these cases, downtime is prolonged due to damage sustained by the power grid or by the physical site of the data center itself.

Slide 20: Human Error
According to Gartner Group, the largest single cause of downtime is human error or personnel issues. One of the most common causes of intermittent downtime in the data center is poor training. Data center staff and contractors should be trained on procedures for application failures and hangs, system updates and upgrades, and other tasks that can create problems if not done correctly.

Slide 21: Human Error
Another problem is poor documentation. As staff sizes have shrunk, and with all the changes in the data center driven by rapid product cycles, it is harder and harder to keep documentation current. Patches can go awry when incorrect software versions are updated, and hardware fixes can fail if the wrong parts are used.

Slide 22: Human Error
Another area of potential downtime is the management of systems. System management has fragmented from a single point of control to vendors, partners, ASPs, outsource suppliers, and even a number of internal groups. With a variety of vendors, contractors, and technicians freely accessing the IT equipment, errors are inevitable.

Slide 23: Cost of Downtime
It is important to understand the cost of downtime to a business and, specifically, how that cost changes as a function of outage duration. Lost revenue is often the most visible and easily identified cost of downtime, but it is only the tip of the iceberg when discussing the real costs to the organization. In many cases, the cost of downtime per hour remains constant: a business that loses at a rate of 100 dollars per hour in the first minute of downtime will still be losing at 100 dollars per hour after an hour of downtime. An example of a company that might experience this type of profile is a retail store, where a constant revenue stream is present; when the systems are down, there is a relatively constant rate of loss.

Slide 24: Cost of Downtime
Some businesses, however, lose the most money within the first moments of downtime, even the first 500 milliseconds, and then lose very little thereafter. For example, a semiconductor fabrication plant loses the most money at the start of an outage because, when the process is interrupted, the silicon wafers that were in production can no longer be used and must be scrapped.
Slide 25: Cost of Downtime
Still other businesses may lose at a lower rate for a short outage (since revenue is not lost but simply delayed), while as the duration lengthens there is an increasing likelihood that the revenue will never be recovered. Regarding customer satisfaction, a short outage may often be acceptable, but as the duration increases, more customers become increasingly upset. An example of this might be a car dealership, where customers are willing to delay a transaction for a day. With significant outages, however, public knowledge often results in damaged brand perception and inquiries into company operations. All of these effects produce a downtime cost that begins to accelerate quickly as the duration grows.
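The three cost profiles just described (constant, front-loaded, and accelerating) can be made concrete with a rough model of cost as a function of outage duration. The Python sketch below is not from the course; the rates, the scrap loss, and the growth exponent are hypothetical placeholders chosen only to show the shape of each curve.

```python
def constant_cost(hours: float, rate_per_hour: float = 100.0) -> float:
    # Retail-style profile: loss accumulates at a roughly constant hourly rate.
    return rate_per_hour * hours


def front_loaded_cost(hours: float, scrap_loss: float = 250_000.0,
                      trickle_rate: float = 50.0) -> float:
    # Fab-style profile: most of the loss (scrapped work in process) is
    # incurred the moment the outage begins, with little added afterwards.
    return (scrap_loss if hours > 0 else 0.0) + trickle_rate * hours


def accelerating_cost(hours: float, base_rate: float = 20.0,
                      growth: float = 1.5) -> float:
    # Dealership-style profile: short outages mostly delay revenue, but the
    # loss grows faster than linearly as the outage drags on.
    return base_rate * hours ** growth


for h in (0.5, 4.0, 24.0):
    print(h, constant_cost(h), front_loaded_cost(h), round(accelerating_cost(h), 2))
```

Plotting these three functions over, say, 0 to 24 hours reproduces the qualitative behavior described above: a straight line, a step followed by a shallow slope, and a curve that bends upward.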
Slide 26: Cost of Downtime
Costs associated with downtime can be classified as direct and indirect. Direct costs are easily identified and measured in terms of hard dollars. Examples include:
1. Wages and costs of employees that are idled due to the unavailability of the network. Although some employees will be idle, their salaries and wages continue to be paid. Other employees may still do some work, but their output will likely be diminished.
2. Lost revenues, the most obvious cost of downtime, because if you cannot process customers, you cannot conduct business. Electronic commerce magnifies the problem, as eCommerce sales are entirely dependent on system availability.
3. Wage and cost increases due to induced overtime or time spent checking and fixing systems. The same employees that were idled by the system failure are probably the same employees that will go back to work and recover the system via data entry. They not only have to do their 'day job' of processing current data, but they must also re-enter any data that was lost due to the system crash, or enter new data that was handwritten during the system outage. This means additional hours of work, most often on an overtime basis.
4. Depending on the nature of the affected systems, the legal costs associated with downtime can be significant. For example, if downtime problems result in a significant drop in share price, shareholders may initiate a class-action suit if they believe that management and the board were negligent in protecting vital assets. In another example, if two companies form a business partnership in which one company's ability to conduct business is dependent on the availability of the other company's systems, then, depending on the legal structure of the partnership, the first company may be liable to the second for profits lost during any significant downtime event.

Indirect costs are not easily measured, but impact the business just the same. In 2000, Gartner Group estimated that 80% of all companies calculating downtime were including indirect costs in their calculations for the first time. Examples include reduced customer satisfaction, lost opportunity from customers that may have gone to direct competitors during the downtime event, damaged brand perception, and negative public relations.

Slide 27: Cost of Downtime by Industry Sector
A business's downtime costs are directly related to its industry sector. For example, Energy and Telecommunications organizations may experience lost revenues on the order of 2 to 3 million dollars an hour, while Manufacturing, Financial Institutions, Information Technology, Insurance, Retail, and Pharmaceuticals all stand to lose over 1 million dollars an hour.
Slide 28: Calculating Cost of Downtime
There are many ways to calculate the cost of downtime for an organization. For example, one way to estimate the revenue lost due to a downtime event is to look at normal hourly sales and then multiply that figure by the number of hours of downtime. Remember, however, that this is only one component of a larger equation and, by itself, seriously underestimates the true loss. Another example is loss of productivity. The most common way to calculate the cost of lost productivity is to first take an average of the hourly salary, benefits, and overhead costs for the affected group, and then multiply that figure by the number of hours of downtime. Because companies are in business to earn profits, the value employees contribute is usually greater than the cost of employing them. Therefore, this method provides only a very conservative estimate of the labor cost of downtime.
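As a worked illustration of the two estimating methods just described, here is a short Python sketch. It is not part of the original course, and every figure in the example call is hypothetical; it simply multiplies normal hourly sales and fully loaded hourly labor cost by the outage duration, as the slide suggests.

```python
def estimate_downtime_cost(outage_hours: float,
                           hourly_sales: float,
                           affected_headcount: int,
                           avg_hourly_labor_cost: float) -> dict:
    """Conservative direct-cost estimate: lost revenue plus idled labor.

    avg_hourly_labor_cost should include salary, benefits, and overhead.
    Indirect costs (customer churn, brand damage, legal exposure) are
    deliberately excluded, so the real loss is likely higher.
    """
    lost_revenue = hourly_sales * outage_hours
    lost_productivity = affected_headcount * avg_hourly_labor_cost * outage_hours
    return {
        "lost_revenue": lost_revenue,
        "lost_productivity": lost_productivity,
        "total_direct_estimate": lost_revenue + lost_productivity,
    }


# Hypothetical example: a 4-hour outage at a business doing $50,000/hour in
# sales, idling 120 employees at a fully loaded cost of $65/hour each.
print(estimate_downtime_cost(4, 50_000, 120, 65.0))
# {'lost_revenue': 200000, 'lost_productivity': 31200.0, 'total_direct_estimate': 231200.0}
```

As the transcript notes, this deliberately conservative figure omits indirect costs such as customer churn and brand damage, so it should be treated as a floor rather than an estimate of total impact.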
Slide 29: Summary
• To stay competitive in today's global marketplace, businesses must strive to achieve high levels of availability and reliability; 99.999% availability is the ideal operating condition for most businesses.
• Power outages, inadequate cooling, natural and artificial disasters, and human error pose significant barriers to high availability.
• The direct and indirect costs of downtime in many business sectors can be exorbitant, and are often enough to bankrupt an organization.
• It is therefore critical for businesses today to calculate their level of availability in order to reduce risk and increase overall reliability and availability.

Slide 30: Thank You!
Thank you for participating in this course.
Examining Fire Protection Methods in the Data Center
Transcript

Slide 1
Welcome to the Data Center University™ course on Examining Fire Protection Methods in the Data Center.

Slide 2: Welcome
For best viewing results, we recommend that you maximize your browser window now. The screen controls allow you to navigate through the eLearning experience. Using your browser controls may disrupt the normal play of the course. Click the attachments link to download supplemental information for this course. Click the Notes tab to read a transcript of the narration.

Slide 3: Learning Objectives
At the completion of this course, you will be able to:
• Explain the importance of fire protection for data centers
• Identify the main goals of a data center fire protection system
• Explain the basic theory of fire suppression
• Differentiate the classes of fire and the stages of combustion
• Recognize the different methods of fire detection, fire communication and fire suppression
• Identify the different types of fire suppression agents and devices appropriate for data centers

Slide 4: Introduction
Throughout history, fire has systematically wreaked havoc on industry. Today's data centers and network rooms are under enormous pressure to maintain seamless operations, and some companies risk losing millions of dollars with one data center catastrophe.

Slide 5: Introduction
In fact, industry studies tell us that 43% of businesses that close due to fire never reopen, and 29% of those that do reopen fail within 3 years. With these statistics in mind, it is imperative that all businesses prepare themselves for unforeseen disasters. The good news is that the most effective method of fire protection is fire prevention. At the completion of this course you will be one step closer to understanding the industry safeguarding methods that are used to protect a data center's hottest commodity: information.

Slide 6: Introduction
This course will discuss the prevention, theory, detection, communication, and suppression of fire specific to data centers.

Slide 7: National Fire Protection Association
Let us start by discussing the National Fire Protection Association, or NFPA. The NFPA is a worldwide organization that was established in 1896 to protect the public against the dangers of fire and electricity. The NFPA's mission is to "reduce the worldwide burden of fire and other hazards on the quality of life by developing and advocating scientifically based consensus codes and standards, research, training, and education". The NFPA is responsible for creating fire protection standards, one of them being NFPA 75, the standard for the protection of computer and data processing equipment. One notable addition to NFPA 75, made in 1999, allows data centers to continue to power electronic equipment upon activation of a Gaseous Agent Total Flooding System, which we will discuss later in detail. This exception was made for data centers that meet the following risk considerations:
• Economic loss that could result from:
  • Loss of function or loss of records
  • Loss of equipment value
  • Loss of life
• The risk of fire threat to the installation, to occupants, or to exposed property within that installation
It is important to note that the NFPA continually updates its standards to accommodate the ever-changing data center environment. Please note that although the NFPA sets the worldwide standards for fire protection, in most cases the Authority Having Jurisdiction (AHJ) has final say in what can or cannot be used for fire protection in a facility. Now that we have identified the standards and guidelines of fire protection for a data center, let's get started with some facts about fire protection.

Slide 8: Prevention
Fire prevention provides more protection than any type of fire detection device or fire suppression equipment available. In general, if the data center is incapable of breeding fire, there will be no threat of fire damage to the facility. To promote prevention within a data center environment it is important to eliminate as many fire-causing factors as possible. A few examples to help achieve this:
• When building a new data center, ensure that it is built far from any other buildings that may pose a fire threat to the data center
• Enforce a strict no-smoking policy in IT and control rooms
• Keep the data center free of trash receptacles
• Construct all office furniture in the data center of metal (chairs may have seat cushions)
• Avoid acoustical materials such as foam, fabric, or any other sound-absorbing material in the data center
Even if a data center is considered fireproof, it is important to safeguard against downtime in the event that a fire does occur. Fire protection then becomes the priority.

Slide 9: System Objectives of Data Center Fire Protection
The main goal of a data center fire protection system is to contain a fire without threatening the lives of personnel and to minimize downtime. With this in mind, if a fire were to break out there are three system objectives that must be met. The first objective is to detect the presence of a fire. The second is to communicate the threat to both the authorities and the occupants. The last objective is to suppress the fire and limit any damage. Being familiar with the common technologies associated with fire detection, communication, and suppression allows IT managers to better specify a fire protection strategy for their data center. Prior to the selection of a detection, communication, or suppression system, a design engineer must assess the potential hazards and issues associated with the given data center.

Slide 10: Tutorial on Fire
When discussing fire protection it's important that we first understand the basic theory behind fire. This section will provide a tutorial on fire, covering the following topics:
• The fire triangle
• The classes of fire, and
• Fire's stages of combustion

Slide 11: The Fire Triangle
The "fire triangle" represents the three elements that must interact in order for fire to exist. These elements are heat, oxygen, and fuel. Fuel is defined as a material used to produce heat or power by burning. When considering fire in a data center, fuel is anything that has the capability to catch fire, such as servers, cables, or flooring. When any one of these three elements is taken away, the fire can no longer exist. This is the basic theory behind fire suppression.

Slide 12: Classes of Fire
Fire can be categorized into five classes: Class A, B, C, D, and K. Class A represents fires involving ordinary combustible materials such as paper, wood, cloth, and some plastics.
Class B fires involve flammable liquids and gases such as oil, paint, lacquer, petroleum, and gasoline. Class C fires involve live electrical equipment; they are usually Class A or Class B fires that have electricity present. Class D fires involve combustible metals or combustible metal alloys such as magnesium, sodium, and potassium. The last class, Class K, covers fires involving cooking appliances that use cooking agents such as vegetable or animal oils and fats. Generally, Class A, B, and C fires are the most common classes of fire that one may encounter in a data center. All of these classes of fire can be extinguished successfully with a basic fire extinguisher. Later in the course, we will discuss several types of extinguishing agents used in data centers.

Slide 13: Stages of Combustion
The next step in categorizing a fire is to determine what stage of combustion it is in. The four stages of combustion are:
1. The incipient (pre-combustion) stage,
2. The visible smoke stage,
3. The flaming fire stage, and lastly,
4. The intense heat stage.
As these stages progress, the risk of property damage and the risk to life increase drastically. All of these categories play an important role in fire protection, specifically in data centers. By studying the classes of fire and the stages of combustion, it is easy to determine what type of fire protection system will best suit the needs of a data center.

Slide 14: Fire Detection Devices
Now that we have completed our tutorial on fire, let us look at some fire detection devices. There are three main types of fire detection devices:
1. Smoke detectors
2. Heat detectors, and
3. Flame detectors
For the purposes of protecting a data center, smoke detectors are the most effective. Heat detectors and flame detectors are not recommended for use in data centers, as they do not provide detection in the incipient stage of a fire and therefore do not provide early warning for the protection of high-value assets. Smoke detectors are far more effective forms of protection in data centers simply because they are able to detect a fire at the incipient stage. For this reason we will be focusing on the attributes and impact of smoke detectors.
Slide 15: Smoke Detectors
The two types of smoke detectors that are used effectively in data centers are:
1. Intelligent spot-type detectors, and
2. Air sampling smoke detectors

Slide 16: Intelligent Spot Type Detectors
Intelligent spot-type smoke detectors are much more sensitive than conventional smoke detectors. They utilize a laser beam which scans particles that pass through the detector; the laser beam is able to distinguish whether the particles are simply dust or actually a by-product of combustion such as smoke. Furthermore, intelligent spot-type smoke detectors are individually addressable, which means each detector has the ability to send information to a central control station and pinpoint the exact location of the alarm. Another feature is that the sensitivity of the detector can be increased or decreased during certain times of the day; for example, when workers leave an area, the sensitivity can be increased. Intelligent spot-type smoke detectors can also compensate for changing environmental conditions such as humidity or dirt accumulation.

Slide 17: Intelligent Spot Type Detectors
Intelligent spot-type detectors are most commonly placed in the following areas:
• Below raised floors,
• On ceilings,
• Above drop-down ceilings, and
• In air handling ducts, to detect possible fires within an HVAC system.
By placing detectors near the exhaust intake of the computer room air conditioners, detection can be accelerated.

Slide 18: Air Sampling Smoke Detection Systems
Air sampling smoke detection systems, sometimes referred to as Very Early Smoke Detection (VESD) systems, are usually described as high-powered photoelectric detectors. These systems are comprised of a network of pipes attached to a single detector, which continually draws air in and samples it. The pipes are typically made of PVC but can also be CPVC, EMT, or copper. Depending on the space being protected and the configuration of multiple sensors, these systems can cover an area of 2,500 to 80,000 square feet (232 to 7,432 square meters). This system also utilizes a laser beam, much more powerful than the one contained in a common photoelectric detector, to detect by-products of combustion. As the particles pass through the detector, the laser beam is able to distinguish them as dust or by-products of combustion.

Slide 19: Signaling and Notification Devices
Now that we have talked about fire detection devices and systems, let's take a look at the next objective of fire protection: communication. All of the previously mentioned detection devices would be virtually useless if they were not directly tied into an effective signaling and notification device. Signaling devices provide audible alarms such as horns, bells, or sirens, or visual alarms such as strobes, which warn building occupants once a detection device has been activated. Signaling devices are also an effective way of communicating danger to individuals who may be visually or hearing impaired. One of the most basic and common signaling devices is the pull station.

Slide 20: Control Systems
The next communication system we will be covering is the control system. Control systems are often considered the "brains" of a fire protection system. The computer programs used by control systems allow users to program and manage the system based on their individual requirements; the system can be programmed with features such as time delays, thresholds, and passwords. Once a detector, pull station, or sensor activates the control system, the system sets its preprogrammed list of rules into motion. Most importantly, the control system can provide valuable information to the authorities.

Slide 21: Emergency Power Off
One important safety feature to mention that is not directly related to communication or suppression systems is the Emergency Power Off (EPO). If a fire progresses to the point where all other means of suppression have been exhausted, the authorities that arrive on site will have the option to utilize this feature. The EPO is intended to power down equipment, or an entire installation, in an emergency to protect personnel and equipment. EPO is typically used either by firefighting personnel or by equipment operators. When used by firefighters, it assures that equipment is de-energized during firefighting so that firefighters are not subjected to shock hazards; a secondary purpose is to facilitate firefighting by eliminating electricity as a source of energy feeding combustion. EPO may also be activated in case of a flood, electrocution, or other emergency. There is a high cost associated with abruptly shutting down a data center, and unfortunately, EPO tripping is often the result of human error. Much debate has ensued over the use of EPO, and it may one day lead to the elimination of EPO in data centers.

Slide 22: Fire Suppression Agents and Devices
The last goal of a data center fire protection system is suppression. The next section will review suppression agents and devices that are often used in data centers and IT environments. Let's start with the most common suppression agents and devices:
• Fire extinguishers, and
• Total flooding fire extinguishing systems

Slide 23: Fire Extinguishers
Fire extinguishers are one of the oldest yet most reliable forms of fire suppression. They are extremely valuable in data centers because they are a quick solution for suppressing a fire, allowing a potentially hazardous situation to be addressed before more drastic or costly measures need to be taken. It is important to note that only specific types of gaseous agents can be used in data center fire extinguishers. HFC-236fa is a gaseous agent, specific to fire extinguishers, that has been approved for use in data centers. It is environmentally safe and can be discharged in occupied areas. Additionally, because it exists as a gas, it leaves no residue upon discharge. Simply put, it extinguishes fires by removing heat and chemically preventing combustion.

Slide 24: Total Flooding Fire Extinguishing Systems
A more sophisticated form of fire extinguishing is the Total Flooding Fire Extinguishing System, sometimes referred to as a clean agent fire suppression system. Total Flooding Fire Extinguishing Systems are comprised of a series of cylinders or high-pressure tanks filled with an extinguishing or gaseous agent. A gaseous agent is a gaseous chemical compound that extinguishes the fire by removing heat, oxygen, or both. Given a closed, well-sealed room, gaseous agents are very effective at extinguishing a fire while leaving no residue. When installing such a system, the total volume of the room and how much equipment is being protected are taken into consideration; the number of tanks or cylinders to be installed depends on these factors. It is important to note that the standard that guides total flooding suppression systems is NFPA 2001. The next slide features a live demonstration of a Total Flooding Fire Extinguishing System in action.

Slide 25: Total Flooding Fire Extinguishing Systems
If a fire occurs and the system is activated, the gaseous agent discharges and fills the room in about 10 seconds. One of the best features of this system is that it is able to infiltrate hard-to-reach places, such as equipment cabinets. This makes Total Flooding Fire Extinguishing Systems well suited to data centers. Now that we have discussed Total Flooding Fire Extinguishing Systems, let's start reviewing the agents that such systems deploy.

Slide 26: Total Flooding Fire Extinguishing Systems
In the past, some agents were conductive and/or corrosive. Conductive and corrosive agents have negative impacts on IT equipment; for example, conductive agents may cause short circuits between electronic
• 25. Examining Fire Protection Methods in the Data Center P a g e | 11
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
components within IT equipment and corrosive agents may “eat away” at electronic components within IT equipment. The gaseous agents used in today’s data centers are non-conductive and non-corrosive.
An effective agent that is both non-conductive and non-corrosive and was widely used in data centers is Halon. Unfortunately, it was discovered that Halon is detrimental to the ozone layer and, as of 1994, the production of Halon is no longer permitted. This has led to the development of safer and cleaner gaseous agents. Let’s review some of the more popular gaseous agents for data centers.
Slide 27: Most Commonly Used Gaseous Agents
Today, some of the most commonly used gaseous agents in data centers are Inert gases and Fluorine Based Compounds. Let’s review the characteristics of each agent.
Slide 28: Inert Gases
The most widely accepted inert gases for fire suppression in data centers are:
• Pro-Inert or IG-55, and Inergen or IG-541
• Inert gases are composed of nitrogen and argon, and in some blends carbon dioxide, all of which are found naturally in the atmosphere. Because of this, they have zero Ozone Depletion Potential, meaning that they pose no threat to humans or to the environment. Inert gases can also be discharged in occupied areas and are non-conductive.
• Inergen requires a large number of storage tanks for effective discharge. But because Inergen is stored as a high pressure gas, it can be stored up to 300 feet or 91.44 meters away from the discharge nozzles and still discharge effectively. Inergen is used successfully in telecommunication offices and data centers.
Slide 29: Fluorine Based Compounds
• 26. Examining Fire Protection Methods in the Data Center P a g e | 12
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
Another suppression alternative for data centers is Fluorine Based Compounds. The Fluorine Based Compound HFC-227ea is known under two commercial brands: FM-200 and FE-227. HFC-227ea has a zero ozone depletion potential (ODP) and an acceptably low global warming potential. It is also an odorless, colorless gas.
Slide 30: Fluorine Based Compounds
HFC-227ea is stored as a liquefied compressed gas with a boiling point of 2.5 degrees F (-16.4 degrees C). It is discharged as an electrically non-conductive gas that leaves no residue and will not harm occupants; however, as in any other fire situation, all occupants should evacuate the area as soon as an alarm sounds. It can be used with ceiling heights up to 16 feet (4.9 meters). HFC-227ea has one of the lowest storage space requirements; the floor space required is only about 1.7 times that of a Halon 1301 system. HFC-227ea chemically inhibits the combustion reaction by removing heat and can be discharged in 10 seconds or less. An advantage of this agent is that it can be retrofitted into an existing Halon 1301 system, but the pipe network must be replaced or an additional cylinder of nitrogen must be used to push the agent through the original Halon pipe network. Some applications include data centers, switchgear rooms, automotive applications, and battery rooms.
Slide 31: Fluorine Based Compounds
There is also HFC-125. HFC-125 is known under two commercial brands: ECARO-25 and FE-25. HFC-125 has a zero ozone depletion potential (ODP) and an acceptably low global warming potential. It is an odorless, colorless agent that is stored as a liquefied compressed gas. This agent chemically inhibits the combustion reaction by removing heat and can be discharged in 10 seconds or less as an electrically non-conductive gas that leaves no residue and will not harm occupants. It can be used in occupied areas; however, as in any other fire situation, all occupants should be evacuated as soon as an alarm sounds. It can be used with ceiling heights up to 16 feet (4.9 meters). One of the main advantages of HFC-125 is that it flows more like Halon than any other agent available today and can be used in the same pipe network distribution as an original Halon system.
Slide 32: Other Methods of Fire Suppression
  • 27. Examining Fire Protection Methods in the Data Center P a g e | 13 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Other methods of fire suppression often found in data centers are: 1. Water Sprinklers Systems and 2. Water Mist Suppression Systems Of the two options, Water Sprinklers are often present in many facilities due to national and/or local fire codes. Let‘s review a few of the key elements of Water Sprinklers and Water Mist Suppression Systems. Slide 33: Water Sprinkler System Water sprinkler systems are designed specifically to protect the structure of a building. The system is activated when the given environment reaches a designated temperature and the valve fuse opens. A valve fuse is a solder or glass bulb that opens when it reaches a temperature of 165-175°F or 74-79°C. Slide 34: Water Sprinkler System There are currently three configurations of water sprinkler systems available: wet-pipe, dry-pipe, and pre- action. Wet-pipe systems are the most commonly used and are usually found in insulated buildings. Dry- pipe systems are charged with compressed air or nitrogen to prevent damage from freezing. Pre-action systems prevent accidental water discharge by requiring a combination of sensors to activate before allowing water to fill the sprinkler pipes. Because of this feature, pre-action systems are highly recommended for data center environments. Lastly, it is important to note that water sprinklers are not typically recommended for data centers, but depending on local fire codes they may be required. Slide 35: Water Mist Suppression Systems The last suppression system we will be discussing is the water mist suppression system. When the system is activated it discharges a very fine mist of water onto a fire. The mist of water extinguishes the fire by absorbing heat. By doing so, vapor is produced, causing a barrier between the flame and the oxygen needed to sustain the fire. Remember the "fire triangle"? The mist system effectively takes away two of the main components of fire, heat and oxygen. This makes the system highly effective. Additionally, because a
• 28. Examining Fire Protection Methods in the Data Center P a g e | 14
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
fine mist is used, less water is needed; therefore, the water mist system needs minimal storage space. Water mist systems are gaining popularity due to their effectiveness in industrial environments. Because of this, we may see an increase in the utilization of such systems in data centers.
Slide 36: Summary
In summary:
• The three objectives of a data center fire protection system are:
1. To identify the presence of a fire
2. To communicate the threat to the authorities and occupants
3. To suppress the fire and limit any damage
• The two types of smoke detectors that are used effectively in data centers are:
1. Intelligent spot type smoke detectors
2. Air sampling smoke detectors
• Signaling devices provide:
o Audible alarms such as horns, bells or sirens
o Visual alarms such as strobes
Slide 37: Summary
• The most common fire suppression devices used in data centers are:
o Fire extinguishers
o Total Flooding Fire Extinguishing Systems
• The most commonly used gaseous agents in data centers are:
o Inert gases
o Fluorine Based Compounds
• Additional methods of fire suppression are:
o Water sprinkler
  • 29. Examining Fire Protection Methods in the Data Center P a g e | 15 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. o Water mist suppression systems Slide 38: Thank You! Thank you for participating in this course.
  • 30. Fundamental Cabling Strategies for Data Centers P a g e | 1 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Fundamental Cabling Strategies for Data Centers Transcript Slide 1 Welcome to Fundamental Cabling Strategies for Data Centers. Slide 2 For best viewing results, we recommend that you maximize your browser window now. The screen controls allow you to navigate through the eLearning experience. Using your browser controls may disrupt the normal play of the course. Click ATTACHMENTS to download important supplemental information for this course. Click the Notes tab to read a transcript of the narration. Slide 3 At the completion of this course, you will be able to: • Discuss the evolution of cabling • Classify different types of common data center cables • Describe cabling installation practices • Identify the strategies for selecting cabling topologies • Utilize cable management techniques • Recognize the challenges associated with cabling in the data center Slide 4 From a cost perspective, building and operating a data center represents a significant piece of any Information Technology (IT) budget. The key to the success of any data center is the proper design and implementation of core critical infrastructure components. Cabling infrastructure, in particular, is an important area to consider when designing and managing any data center. The cabling infrastructure encompasses all data cables that are part of the data center, as well as all of the power cables necessary to ensure power to all of the loads. It is important to note that cable trays and cable
  • 31. Fundamental Cabling Strategies for Data Centers P a g e | 2 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. management devices are critical to the support of IT infrastructure as they help to reduce the likelihood of downtime due to human error and overheating. Slide 5 This course will address the basics of cabling infrastructure and will discuss cabling installation practices, cable management strategies and cable maintenance practices. We will take an in-depth look at both data cabling and power cabling. Let’s begin with a look at the evolution of data center cabling. Slide 6 Ethernet protocol has been a data communications standard for many years. Along with Ethernet, several traditional data cabling practices continue to shape how data cables are deployed. • High speed data cabling over copper is a cabling medium of choice • Cable fed into patch panels and wall plates is common • The RJ45 is the data cable connector of choice The functionality within the data cables and associated hardware, however, has undergone dramatic change. Increased data speeds have forced many physical changes. Every time a new, faster standard is ratified by standardization bodies, the cable and supporting hardware have been redesigned to support it. New test tools and procedures also follow each new change in speed. These changes have primarily all been required by the newer, faster versions of Ethernet, which are driven by customers’ needs of more speed and bandwidth. When discussing this, it is important to note the uses and differences of both fiber-optic cable, and traditional copper cable. Let’s compare these two. Slide 7 Copper cabling has been used for decades in office buildings, data centers and other installations to provide connectivity. Copper is a reliable medium for transmitting information over shorter distances; but its performance is only guaranteed up to 109.4 yards (100 meters) between devices. (This would include structured cabling and patch cords on either end.)
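The 100-meter figure above is a channel budget, so a quick arithmetic check is useful when planning copper runs. The following short Python sketch is not part of the original course material; it simply verifies that a proposed structured-cabling run plus the patch cords on both ends stays within the 100 m limit stated above.

    def copper_channel_ok(horizontal_run_m, patch_a_m, patch_b_m, budget_m=100.0):
        # Total copper channel = structured cabling run + patch cords on both ends.
        total = horizontal_run_m + patch_a_m + patch_b_m
        return total <= budget_m

    # Example: a 92 m run with two 5 m patch cords totals 102 m and exceeds the budget.
    print(copper_channel_ok(92, 5, 5))   # False
    print(copper_channel_ok(85, 5, 5))   # True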
• 32. Fundamental Cabling Strategies for Data Centers P a g e | 3
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
Copper cabling that is used for data network connectivity contains four pairs of wires, which are twisted along the length of the cable. The twist is crucial to the correct operation of the cable. If the wires unravel, the cable becomes more susceptible to interference.
Copper cables come in two configurations:
• Solid cables provide better performance and are less susceptible to interference, making them the preferred choice for use in a server environment.
• Stranded cables are more flexible and less expensive, and typically are only used in patch cord construction.
Copper cabling, patch cords, and connectors are classified based upon their performance characteristics and the applications for which they are typically used. These ratings, called categories, are spelled out in the TIA/EIA-568 Commercial Building Telecommunications Wiring Standard.
Slide 8
Fiber-optic cable is another common medium for providing connectivity. Fiber cable consists of five elements. The center portion of the cable, known as the core, is a hair-thin strand of glass capable of carrying light. This core is surrounded by a thin layer of slightly purer glass, called cladding, that contains and refracts that light. Core and cladding glass are covered in a coating of plastic to protect them from dust or scratches. Strengthening fibers are then added to protect the core during installation. Finally, all of these materials are wrapped in plastic or another protective substance that serves as the cable’s jacket.
A light source, blinking billions of times per second, is used to transmit data along a fiber cable. Fiber-optic components work by turning electronic signals into light signals and vice versa. Light travels down the interior of the glass, refracting off of the cladding and continuing onward until it arrives at the other end of the cable and is seen by receiving equipment.
When light passes from one transparent medium to another, like from air to water, or in this case, from the glass core to the cladding material, the light bends. A fiber cable’s cladding consists of a different material from the core — in technical terms, it has a different refractive index — that bends the light back toward the
  • 33. Fundamental Cabling Strategies for Data Centers P a g e | 4 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. core. This phenomenon, known as total internal reflection, keeps the light moving along a fiber-optic cable for great distances, even if that cable is curved. Without the cladding, light would leak out. Fiber cabling can handle connections over a much greater distance than copper cabling, 50 miles (80.5 kilometers) or more in some configurations. Because light is used to transmit the signal, the upper limits of how far a signal can travel along a fiber cable is related not only to the properties of the cable but also to the capabilities and relative location of transmitters. Slide 9 Besides distance, fiber cabling has several other advantages over copper: • Fiber provides faster connection speeds • Fiber is not prone to electrical interference or vibration • Fiber is thinner and light-weight, so more cabling can fit into the same size bundle or limited spaces • Signal loss over distance is less along optical fiber than copper wire Two varieties of fiber cable are available in the marketplace: multimode fiber and single mode fiber. Multimode is commonly used to provide connectivity over moderate distances, such as those in most data center environments, or among rooms within a single building. Single mode fiber is used for the longest distances, such as among buildings on a large campus, or between sites. Copper is generally the less expensive cabling solution over shorter distances (i.e. the length of data center server rows), while fiber is less expensive for longer distances (i.e. connections among buildings on a campus). Slide 10 In the case of data center power cabling, however, historical changes have taken a different route.
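Before turning to power cabling, the copper-versus-fiber distance guidance above can be summarized in a small decision helper. This Python sketch is illustrative only; the multimode cutover distance varies with the optics and fiber grade, so the 500 m figure used here is simply a placeholder assumption, and real designs also weigh cost, speed, and pathways.

    def suggest_medium(link_distance_m, copper_limit_m=100, multimode_limit_m=500):
        # Rough rule of thumb based on the distance guidance above.
        if link_distance_m <= copper_limit_m:
            return "copper"
        elif link_distance_m <= multimode_limit_m:
            return "multimode fiber"
        else:
            return "single mode fiber"

    print(suggest_medium(30))      # copper: within a server row
    print(suggest_medium(250))     # multimode fiber: between rooms in a building
    print(suggest_medium(2000))    # single mode fiber: between buildings on a campus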
  • 34. Fundamental Cabling Strategies for Data Centers P a g e | 5 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. In traditional data centers, designers and engineers were not too concerned with single points of failure. Scheduled downtime was an accepted practice. Systems were periodically taken down to perform maintenance, and to make changes. Data center operators would also perform infrared scans on power cable connections prior to the shutdowns to determine problem areas. They would then locate the hot spots that could indicate possible risk of short circuits and address them. Traditional data centers, very often, had large transformers that would feed large uninterruptible power supplies (UPSs) and distribution switchboards. From there, the cables would go to distribution panels that would often be located on the columns or walls of the data center. Large UPSs, transformers, and distribution switchgear were all located in the back room. The incoming power was then stepped down to the correct voltage and distributed to the panels mounted in the columns. Cables connected to loads, like mainframe computers, would be directly hardwired to the hardware. In smaller server environments, the power cables would be routed to power strips underneath the raised floor. The individual pieces of equipment would then plug into those power strips, using sleeve and pin connectors, to keep the cords from coming apart. Slide 11 Downtime is not as accepted as it once was in the data center. In many instances, it is no longer possible to shut down equipment to perform maintenance. A fundamentally different philosophical approach is at work. Instead of the large transformers of yesterday, smaller ones, called power distribution units (PDUs) are now the norm. These PDUs have moved out of the back room, onto the raised floor, and in some cases, are integrated into the racks. These PDUs feed the critical equipment. This change was the first step in a new way of thinking, a trend that involved getting away from the large transformer and switchboard panel. Modern data centers also have dual cord environments. Dual cord helps to minimize a single point of failure scenario. One of the benefits of the dual cord method is that data center operators can perform
  • 35. Fundamental Cabling Strategies for Data Centers P a g e | 6 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. maintenance work on source A, while source B maintains the load. The server never has to be taken offline while upstream maintenance is being performed. This trend began approximately 10 years ago and it was clearly driven by the user. It became crucial for data center managers to maintain operations 24 hours a day, 7 days per week. Some of the first businesses to require such operations were the banks, who introduced ATMs, which demanded constant uptime. The customer said “We can no longer tolerate a shutdown”. Now that we have painted a clear picture of the history of cabling infrastructure, we’ll discuss the concept of modularity and its importance in the data center. Slide 12 Modularity is an important concept in the contemporary data center. Modular, scalable Network Critical Physical Infrastructure (NCPI) components have been shown to be more efficient and more cost effective. The data cabling industry tackled the issue of modularity decades ago. Before the patch panel was designed, many adds, moves and changes were made by simply running new cable. After years of this ‘run a new cable’ mentality, wiring closets and ceilings were loaded with unused data cables. Many wiring closets became cluttered and congested. The strain on ceilings and roofs from the weight of unused data cables became a potential hazard. The congestion of data cables under the raised floor also impeded proper cooling and exponentially increased the potential for human error and downtime. Slide 13 In the realm of data cabling, the introduction of the patch panel brought an end to the ‘run a new cable’ philosophy and introduced modularity to network cabling. The patch panel, located either on the data center floor or in a wiring closet, is the demarcation point where end points of bulk cable converge. If a data center manager were to trace a data cable from end to end, starting at the patch panel, he would probably find himself ending at the wall plate. This span is known as the backbone.
  • 36. Fundamental Cabling Strategies for Data Centers P a g e | 7 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. The modularity of the system is in the use of patch cables. The user plugs his patch cable into a wall plate. If he needs to move a computer, for example, he simply unplugs his patch cable and connects into a different wall plate. The same is true on the other end, back at the patch panels. If a port on a hub or router malfunctions, the network administrator can simply unplug it and connect it into another open port. Data center backbone cabling is typically designed to be non-scalable. The data cabling backbone, 90% of the time, is located behind the walls, not out in the open. Typically a network backbone, when installed, especially in new construction scenarios, accounts for future growth considerations. Adds, moves and changes can be very costly once the walls are constructed. In new construction it is best to wire as much of the building as possible, with the latest cable standard. This reduces expenses once the walls are constructed. Now that we have discussed the concept of modularity, let’s overview the different types of data cables that exist in a data center. Slide 14 So, what are the different types of common data center specific data cables? Category 5 (Cat 5) was originally designed for use with 100 Base-T. Cat 5e supports 1 Gig Ethernet. Cat 6a supports 10 Gig Ethernet. It is important to note that a higher rated cable can be used to support slower speeds, but the reverse is not true. For example, a Cat 5e installation will not support 10 Gig Ethernet, but Cat 6a cabling will support 100 Base-T. Cable assemblies can be defined as a length of bulk cable with a connector terminated onto both ends. Many of the assemblies used are patch cables of various lengths that match or exceed the cabling standard of the backbone. A Cat 5e backbone requires Cat 5e or better patch cables. Slide 15
• 37. Fundamental Cabling Strategies for Data Centers P a g e | 8
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
Data center equipment can require both standard and custom cables. Some cables are specific to the equipment manufacturer. One example of a common connection is a Cisco router with a 60-pin LFH connector connecting to a router with a V.35 interface, which requires an LFH60 to V.35 Male DTE cable. An example of a less common connection would be a stand-alone tape backup that may have a SCSI interface. If the cable that came with the equipment does not match up to the SCSI card in a computer, the data center manager will find himself looking for a custom SCSI cable.
A typical example of the diversity of cables required in the data center is a high speed serial router cable. In a wide area network (WAN), routers are typically connected to modems, which are called DSU/CSUs. Some router manufacturers feature unorthodox connectors on their routers. Depending on the interface that the router and DSU/CSU use to communicate with one another, several connector possibilities exist.
Other devices used in a computer room can require any one of a myriad of cables. Common devices besides the networking hardware are telco equipment, KVMs, mass storage, monitors, keyboards and mice, and terminal servers. Sometimes brand-name cables are expensive or unavailable. A large market of manufacturer-equivalent cables exists, from which the data center manager can choose.
Slide 16
When discussing data center power cabling, it is important to note that American Wire Gauge (AWG) copper wire is the common medium for transporting power in the data center. This has been the case for many years and it still holds true in modern data centers.
The formula for power is Amps x Volts = Power (in Watts), and data center power cables are delineated by amperage. The more power that needs to be delivered to the load, the higher the amperage has to be. (Note: The voltage will not be high under the raised floor. It will be less than 480V; most servers are designed to handle 120 or 208V.)
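As a quick illustration of the Amps x Volts = Power relationship described above, the short Python sketch below is not from the course itself; the 4,992 W rack load is a made-up example and power factor is ignored. It shows why the same load draws fewer amps at a higher voltage.

    def required_amps(load_watts, voltage):
        # Current the circuit must carry to deliver the load at the given voltage.
        return load_watts / voltage

    # A hypothetical 4,992 W rack load:
    print(round(required_amps(4992, 120), 1))   # 41.6 A at 120 V
    print(round(required_amps(4992, 208), 1))   # 24.0 A at 208 V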
• 38. Fundamental Cabling Strategies for Data Centers P a g e | 9
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
At a given voltage, if the power level stays the same, the amperage stays the same. As the amperage increases or decreases, the gauge of the wire needs to be larger or smaller to accommodate the change in amperage. AWG ratings organize copper wire into numerous recognizable and standard configurations.
A relatively new trend in the domain of data center power cabling is the invention of the whip. Whips are pre-configured cables with a twist-lock cap on one end and insulated copper on the other end. The insulated copper end feeds a breaker in the main PDU; the twist-lock end feeds the rack-mounted PDU that supplies the intelligent power strips in the rack. Server equipment then plugs directly into the power strip. With whips, there is no need for wiring underneath the floor (with the possible exception of the feed to the main PDU breakers). Thus, the expense of a raised floor can be avoided. Another benefit of whips is that a licensed electrician is not required to plug the twist-lock connectors of the whip into the power strip's twist-lock receptacles.
Slide 17
Dual cord, dual power supply also introduced significant changes to the data center power cabling scheme. In traditional data centers, computers had one feed from one transformer or panel board, and the earliest PDUs still only had one feed to servers. Large mainframes required two feeds to keep systems consistently available. Sometimes two different utilities were feeding power to the building. Now, many servers are configured to support two power feeds, hence the dual cord power supply. Because data center managers can now switch from one power source to another, this allows for maintenance on infrastructure equipment without having to take servers offline.
It is important to understand that the power cabling requirements to support the dual cord power supply configuration have doubled as a result. The same wire, the same copper, and the same sizes are required as in the past, but now data center designers need to account for double the power infrastructure cable, including power-related infrastructure that may be located in the equipment room that supports the data center.
  • 39. Fundamental Cabling Strategies for Data Centers P a g e | 10 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Now that we’ve talked about a basic overview of both power and data cabling, let’s take a look at some best practices for cabling in the data center. Slide 18 Some best practices for data cabling include: • Overhead deployments o Overhead cables that are in large bundles should run in cable trays or troughs. If the manufacturer of the tray or trough offers devices that keep the cable bend radius in check then they should be used as well. Do not over tighten tie wraps or other hanging devices. It can interfere with the performance of the cable. • Underfoot deployments o Be cognizant of the cable’s bend radius specifications and adhere tightly to them. Do not over tighten tie wraps. This can interfere with the performance of the cable. • Rack installations o As talked about previously, be cognizant of the cable’s bend radius specifications and adhere tightly to them. Don’t over tighten tie wraps. This can interfere with the performance of the cable. Use vertical and/or horizontal cable management to take up any extra slack. • Testing cables o There are several manufacturers of test equipment designed specifically to test today’s high speed networks. Make sure that the installer tests and certifies every link. A data center manager can request a report that shows the test results. Are there any common practices that should be avoided? When designing and installing the network’s backbone care should be taken to route all Unshielded Twisted Pair (UTP is the U.S. standard) or Shielded Twisted Pair (STP is the European standard) cables away from possible sources of interference such as power lines, electric motors or overhead lighting. Slide 19 Power cabling best practices are described in the National Electric Code (NEC).
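One way to make the data-cabling bend-radius guidance above concrete is a simple check against the manufacturer's specification. In the Python sketch below, the four-times-cable-diameter default is only a commonly cited UTP guideline used as a placeholder assumption; the figure on the vendor's data sheet always takes precedence.

    def bend_radius_ok(actual_radius_mm, cable_diameter_mm, multiplier=4.0):
        # Bends tighter than the specified minimum radius can degrade performance.
        return actual_radius_mm >= multiplier * cable_diameter_mm

    print(bend_radius_ok(30, 6))   # True: 30 mm is at least 4 x 6 mm
    print(bend_radius_ok(20, 6))   # False: tighter than the assumed minimum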
• 40. Fundamental Cabling Strategies for Data Centers P a g e | 11
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
When addressing best practices in power cabling, it is important that data center professionals use the term “continuous load.” A continuous load is defined as any load left on for more than 3 hours, which is, in effect, all equipment in a data center. Due to the requirements of the continuous load, data center operators are forced to take all rules that apply to amperages and wire sizes and de-rate those figures by 20%. For example, if a wire is rated for 100 amps, the best practice is not to run more than 80 amps through it (a short worked sketch of this 80% rule appears below). Let’s discuss this further.
Over time, cables can get overheated. The de-rating approach helps avoid overheated wires that can lead to shorts and fires. If the quantity of copper in the cable is insufficient for the amperages required, it will heat to the point of melting the insulation. If insulation fails, the copper is exposed to anything metal or grounded in its proximity. If it gets close enough, the electricity will jump or arc and could cause a fire to start. Undersized power cables also stress the connections. If any connection is loose, the excess load exacerbates the situation. The de-rating of the power cables takes these facts into account.
To further illustrate this example, let’s compare electricity to water. If too much water gets pushed into a pipe, the force of the water will break the pipe if it is too small. Amperage forces electricity through the wire; therefore, the wire is going to heat up if the wire is undersized.
The manufacturer, or supplier, of the cable provides the information regarding the circular mil area, that is, the cross-sectional area of the wires inside the cable. The circular mil area does not take into account the wire insulation. The circular mil area determines how much amperage can pass through that piece of copper.
Next, let’s compare overhead and under the floor installations.
Slide 20
The benefit of under the floor cabling is that the cable is not visible. Many changes can be made and the wiring will not be seen. The disadvantage of under the floor cabling is the significant expense of constructing a raised floor. Data center designers also need to take into account the danger of opening up a raised floor and exposing other critical systems, like the cooling air flow system, if the raised floor is used as a plenum.
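Here is the short worked sketch of the 80% continuous-load rule referenced above. It is illustrative Python only, mirroring the 100 A example from the course; always defer to the NEC and local codes for actual conductor sizing.

    def usable_continuous_amps(conductor_rating_amps, derate_fraction=0.20):
        # De-rate the conductor's rating by 20% for loads that run continuously.
        return conductor_rating_amps * (1.0 - derate_fraction)

    print(usable_continuous_amps(100))   # 80.0 A of continuous load on a wire rated for 100 A
    print(usable_continuous_amps(30))    # 24.0 A of continuous load on a 30 A rated circuit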
  • 41. Fundamental Cabling Strategies for Data Centers P a g e | 12 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. With overhead cabling, data center designers can use cabling trays to guide the cables to the equipment. They can also run conduit from the PDU directly to the equipment or computer load. The conduit is not flexible, however, which is not good if constant change is expected. A best practice is to use overhead cables which are all pre-configured in the factory and placed in the troughs to the equipment. This standardization creates a more convenient, flexible environment for the data center of today. Slide 21 Where your power source is, where the load is, and what the grid is like, all affect the design and layout of the cabling in the data center. When discussing overhead cabling, data centers designers are tasked with figuring out the proper placement of cables ahead of time. Then, they can decide if it would be best to have the troughs directly over the equipment or in the aisle. Also designers have to take into account local codes for distributing power. For example, there are established rules that require that sprinkler heads not be blocked. If there is a 24 inch (60.96 cm) cable tray, designers could not run that tray any closer than 10 inches (25.4 cm) below the sprinkler head to cover up or obstruct the head. They would need to account for this upfront in the design stage. Now that we’ve touched upon best practices for installation, let’s discuss some strategies for selecting cabling topologies. Slide 22 Network Topology deals with the different ways computers (and network enabled peripherals) are arranged on or connected to a network. The most common network topologies are: • Star. All computers are connected to a central hub. • Ring. Each computer is connected to two others, such that, starting at any one computer, the connection can be traced through each computer on the ring back to the first. • Bus. All computers are connected to a central cable, normally termed bus or backbone.
• 42. Fundamental Cabling Strategies for Data Centers P a g e | 13
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
• Tree. A group of star networks is connected to a linear backbone.
For data cabling, in IEEE 802.3, UTP/STP Ethernet scenarios, a star network topology is used. Star topology implies that all computers are connected to a central hub. In its simplest form, a UTP/STP Ethernet star topology has a hub at the center and devices (i.e., personal computers, printers, etc.) connected directly to it. Small LANs fit this simple model. Larger installations can be much more complicated, with segments connecting to other segments, but the basic star topology remains intact.
Slide 23
Power cables can be laid out either overhead in troughs or below the raised floor. Many factors come into play when deciding on a power distribution layout from the PDUs to the racks. The size of the data center, the nature of the equipment being installed and the budget are all variables. However, be aware that two approaches are commonly utilized for distribution of power cables in the data center.
Slide 24
One approach is to run the power cables inside conduits from large wall-mounted or floor-mounted PDUs to each cabinet location. This works moderately well for a small server environment with a limited number of conduits. This does not work well for larger data centers when cabinet locations require multiple power receptacles.
Slide 25
Another approach, more manageable for larger server environments, is the installation of electrical substations at the end of each row in the form of circuit panels. Conduit is run from power distribution units to the circuit panels and then to a subset of connections to the server cabinets. This configuration uses shorter electrical conduit, which makes it easier to manage, less expensive to install, and more resistant to a physical accident in the data center. For example, if a heavy object is dropped
• 43. Fundamental Cabling Strategies for Data Centers P a g e | 14
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
through a raised floor, the damage it can cause is greatly reduced in a room with segmented power, because fewer conduits overlap one another in a given area. Even more efficient is to deploy PDUs in the racks themselves and to have whips feed the various racks in the row.
Slide 26
What are the best practices for cable management and organization techniques? Some end users purchase stranded bulk data cable and RJ45 connectors and manufacture their own patch cables on site. While doing this assures a clean installation with no excess wire, it is time-consuming and costly. Most companies find it more prudent to inventory pre-made patch cables and use horizontal or vertical cable management to take up any excess cable. Patch cables are readily available in many standard lengths and colors.
Are there any common practices that should be avoided? All of today’s high speed networks have minimum bend radius specifications for the bulk cable. This is also true for the patch cables. Care should be taken not to violate the minimum bend radius on the patch cables.
Slide 27
Proper labeling of power cables in the data center is a recommended best practice. A typical electrical panel labeling scheme is based on a split bus (two buses in the panel) where the labels represent an odd-numbered side and an even-numbered side. Instead of normal sequenced numbering, the breakers would be numbered 1, 3, 5 on the left-hand side and 2, 4, 6 on the right side, for example.
When labeling a power cable or whip, the PDU designation from the circuit breaker would be the first identifier. This identifier indicates where the whip comes from. Identifying the source of the power cable can be complicated because the power may not be supplied from the PDU that is physically the closest to the rack, and the closest PDU may not be the one that is feeding the whip. In addition, data center staff may want to access the “B” power source even though the “A” power source might be physically closer. This is why the power cables need to be properly labeled at each end. The cable label needs to indicate the source PDU (i.e. PDU1) and also identify the circuit (i.e. circuit B).
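To make the labeling scheme above concrete, the Python sketch below composes a source-end label from the PDU and circuit identifiers just described. The "PDU1-B" format and the optional breaker number are hypothetical illustrations, not a prescribed standard.

    def source_label(pdu, circuit, breaker=None):
        # Combine the source PDU and circuit (and optionally the breaker position).
        parts = [pdu, circuit]
        if breaker is not None:
            parts.append("CB{:02d}".format(breaker))
        return "-".join(parts)

    print(source_label("PDU1", "B"))        # PDU1-B
    print(source_label("PDU1", "B", 13))    # PDU1-B-CB13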
  • 44. Fundamental Cabling Strategies for Data Centers P a g e | 15 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Ideally on the other end of the cable, a label will indicate what load the cable is feeding (i.e. SAN device, or Processer D23). To help clarify labeling, very large data centers are laid out in two foot squares that match the raised floor. They are usually addressed with east/west and numbered designations. For example, “2 west by 30 east” identifies the location of an exact square on the data center floor (which is supporting a particular piece or pieces of equipment). Therefore the label identifies the load that is being supported by the cable. Labeling of both ends of the cable in an organized, consistent manner allows data center personnel to know the origin of the opposite end. Slide 28 With network data cabling, once the backbone is installed and tested it should be fairly stable. Infrequently, a cable may become exposed, damaged, and therefore needs to be repaired or replaced. But once in place, the backbone of a network should remain secure. Occasionally, patch cables can be jarred and damaged; this occurs most commonly on the user end. Since the backbone is fairly stable except for occasional repair, almost all changes are initiated simply by disconnecting a patch cable and reconnecting it somewhere else. The modularity of a well designed cabling system allows users to disconnect from one wall plate, connect to another and be back up and running immediately. In the data center, adds, moves and changes should be as simple as connecting and disconnecting patch cables. So what are some of the challenges associated with cabling in the data center? We’ll talk about three of the more common challenges. Slide 29 The first challenge is associated with useful life. The initial design and cabling choices can determine the useful life of a data cabling plant. One of the most important decisions to make when designing a network is choosing the medium: copper, fiber or both?
• 45. Fundamental Cabling Strategies for Data Centers P a g e | 16
© 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners.
Every few years newer, faster and better copper cables are introduced into the marketplace, but fiber seems to remain relatively unchanged. If an organization chose to install FDDI grade 62.5/125 fiber 15 years ago, that organization may still be using the same cable today. If the same organization had installed Cat 5, however, it more than likely would have replaced it by now. In the early days few large installations were done in fiber because of the cost. The fiber was more expensive and so was the hardware that it plugged into. Now the costs of fiber and copper are much closer. Fiber cabling is also starting to change. The current state of the art is 50/125 fiber, laser optimized for 10 Gig Ethernet.
Next, there is airflow and cooling. There are a few issues with cables and cabling in the data center that affect airflow and cooling. Cables inside of an enclosed cabinet need to be managed so that they allow for maximum airflow, which helps reduce heat. When cooling is provided through a raised floor, it is best to keep that space as cable-free as possible. For this reason, expect to see more and more cables being run across the tops of cabinets as opposed to at the bottom or underneath the raised floor.
Finally, there is management and labeling. Many manufacturers offer labeling products for wall plates, patch panels and cables. Software packages also exist that help keep track of cable management. In a large installation, these tools can be invaluable.
Let’s take a look at some expenses associated with cabling in the data center.
Slide 30
For data cabling, the initial installation of a cabling plant, and the future replacement of that plant, are the two greatest expenses. Beyond installation and replacement costs, the only other expense is adding patch cables as the network grows. The cost of patch cables is minimal considering the other costs in an IT budget. Cabling costs are, for the most part, up-front costs.
  • 46. Fundamental Cabling Strategies for Data Centers P a g e | 17 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Regarding power cables, the design of the data center, and the location of the PDUs, will have a significant impact on costs. Dual cord power supplies are driving up the cost because double the power cabling is required. Design decisions are critical. Where will the loads be located? How far from the power distribution? What if PDUs are fed from different circuits? If not planned properly, unnecessarily long power cable runs will be required and will drive up overall data center infrastructure costs. Next, let’s look at cabling maintenance. Slide 31 How are cables replaced? Patch cables are replaced by simply unplugging both ends and connecting the new one. However, cables do not normally wear out. Most often, if a cable shorts, it is due to misuse or abuse. Cable assemblies have a lifetime far beyond the equipment to which they are connected. How are cables rerouted? If the cable that needs to be rerouted is a patch cable then it can simply be unplugged on one or both ends and rerouted. If the cable that needs to be rerouted is one of the backbone cables run through the walls, ceilings, or in cable troughs, it could be difficult to access. The backbone of a cabling installation should be worry-free, but if problems come up they can sometimes be difficult to address. It depends on what the issue is, where it is, and what the best solution is. Sometimes re-running a new link is the best solution. Slide 32 The equipment changes quite frequently in the data center; on the average a server changes every 2-3 years. It is important to note that power cabling only fails at the termination points. The maintenance occurs at the connections. Data center managers need to scan those connections and look for hot spots. It is also prudent to scan the large PDU and its connection off the bus for hot spots. Heat indicates that there is either a loose connection or an overload. By doing the infrared scan, data center operators can sense that failure before it happens. In the dual cord environment, it becomes easy to switch to the alternate source, unplug the connector, and check it’s connection.
  • 47. Fundamental Cabling Strategies for Data Centers P a g e | 18 © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Slide 33 Every few feet, the power cable is labeled with a voltage rating, an amperage rating, and the number of conductors that can be found inside the cable. This information is stamped on the cable. American Wire Gauge (AWG) is a rating of the size of the copper, and identifies the number of conductors in the cable. Inside a whip there are a minimum of 3 wires, one hot, one neutral, and one ground. It is also possible, to have 5 wires (3 hot, 1 neutral, 1 ground) inside the whip. Feeder cables which feed the Uninterruptible Power Supply (UPS) and feed the PDU are thicker, heavier cables. Single conductor cables (insulated cables with multiple strands of uninsulated copper wires inside) are usually placed in groups within metal conduit to feed power hungry data center infrastructure components such as large UPSs and Computer Room Air Conditioners (CRACs). Multiple conductor cables, (cables inside the larger insulated cable that are each separately insulated) are most often found on the load side of the PDU. Single conductors are most often placed within conduit, while multiple conductor cables are generally distributed outside of the conduit. Whips are multiple conductor cables. Slide 34 To summarize, let’s review some of the information that we have covered throughout the course. • A modular, scalable approach to data center cabling is more energy efficient and cost effective • Copper and fiber data cables running over Ethernet networks are considered the standard for data centers • American Wire Gauge copper cable is a common means of transporting power in the data center • Cabling to support dual cord power supplies helps to minimize single points of failure in a data center • To minimize cabling costs, it important for data center managers to take a proactive approach to the design, build, and operation of the data center of today Slide 35 Thank you for participating in this course.
  • 48.   © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. Fundamentals of Cooling I Transcript Slide 1: The Fundamentals of Cooling I Welcome to The Fundamentals of Cooling I. Slide 2: Welcome For best viewing results, we recommend that you maximize your browser window now. The screen controls allow you to navigate through the eLearning experience. Using your browser controls may disrupt the normal play of the course. Click the Notes tab to read a transcript of the narration. Slide 3: Objectives At the completion of this course, you will be able to: • Explain why cooling in the data center is so critical to high availability • Distinguish between Precision and Comfort Cooling • Recognize how heat is generated and transferred • Define basic terms like Pressure, Volume and Temperature as well as their units of measurement • Describe how these terms are related to the Refrigeration Cycle • Describe the Refrigeration Cycle and its components Slide 4: Introduction Every Information Technology professional who is involved with the operation of computing equipment needs to understand the function of air conditioning in the data center or network room. This course explains the function of basic components of an air conditioning system for a computer room. Slide 5: Introduction Whenever electrical power is being consumed in an Information Technology (IT) room or data center, heat is being generated. We will talk more about how heat is generated a little later in this course. In the Data Center Environment, heat has the potential to create significant downtime, and therefore must be removed from the space. Data Center and IT room heat removal is one of the most essential yet least understood of all critical IT environment processes. Improper or inadequate cooling significantly detracts from the lifespan and availability of IT equipment. A general understanding of the fundamental principles of air conditioning and the basic arrangement of precision cooling systems facilitates more precise communication among IT and cooling professionals when specifying, operating, or maintaining a cooling solution. The purpose of precision air-conditioning equipment is the precise control of both temperature and humidity. Slide 6: Evolution Despite revolutionary changes in IT technology and products over the past decades, the design of cooling infrastructure for data centers had changed very little since 1965. Although IT equipment has always required cooling, the requirements of today’s IT systems, combined with the way that those IT systems are deployed, has created the need for new cooling-related systems and strategies which were not foreseen
  • 49.   © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. when the cooling principles for the modern data center were developed over 30 years ago. Slide 7: Comfort vs. Precision Cooling Today's technology rooms require precise, stable environments in order for sensitive electronics to operate at their peak. IT hardware produces an unusual, concentrated heat load, and at the same time, is very sensitive to changes in temperature or humidity. Most buildings are equipped with Comfort Air Conditioning units, which are designed for the comfort of people. When compared to computer room air conditioning systems, comfort systems typically remove an unacceptable amount of moisture from the space and generally do not have the capability to maintain the temperature and humidity parameters specified for IT rooms and data centers. Precision air systems are designed for close temperature and humidity control. They provide year-round operation, with the ease of service, system flexibility, and redundancy necessary to keep the technology room up and running. As damaging as the wrong ambient conditions can be, rapid temperature swings can also have a negative effect on hardware operation. This is one of the reasons hardware is left powered up, even when not processing data. According to ASHRAE, the recommended upper limit temperature for data center environments is 81°F (27.22°C). Precision air conditioning is designed to constantly maintain temperature within 1°F (0.56°C). In contrast, comfort systems are unable to provide such precise temperature and humidity controls. Slide 8: The Case for Data Center Cooling A poorly maintained technology room environment will have a negative impact on data processing and storage operations. A high or low ambient temperature or rapid temperature swings can corrupt data processing and shut down an entire system. Temperature variations can alter the electrical and physical characteristics of electronic chips and other board components, causing faulty operation or failure. These problems may be transient or may last for days. Transient problems can be very hard to diagnose. Slide 9: The Case for Data Center Cooling High Humidity – High humidity can result in tape and surface deterioration, condensation, corrosion, paper handling problems, and gold and silver migration leading to component and board failure. Low Humidity – Low humidity increases the possibility of static electric discharges. Such static discharges can corrupt data and damage hardware. Slide 10: The Physics of Cooling Now that we know that heat threatens availability of IT equipment, it’s important to understand the physics of cooling, and define some basic terminology. First of all, what is Heat? Heat is simply a form of energy that is transferred by a difference in temperature. It exists in all matter on earth, in varied quantities and intensities. Heat energy can be measured relative to any reference
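A trivial way to apply the ASHRAE figure quoted above is to compare measured inlet temperatures against the recommended upper limit. The Python sketch below is illustrative only; the 81°F (27.22°C) limit comes from the text above, and real monitoring systems track the full recommended envelope and rates of change as well.

    ASHRAE_UPPER_LIMIT_F = 81.0   # recommended upper limit quoted above

    def inlet_temp_ok(temp_f, limit_f=ASHRAE_UPPER_LIMIT_F):
        # Flag readings above the recommended upper limit.
        return temp_f <= limit_f

    print(inlet_temp_ok(75.0))   # True
    print(inlet_temp_ok(84.0))   # False: above the recommended upper limit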
  • 50.   © 2013 Schneider Electric. All rights reserved. All trademarks provided are the property of their respective owners. temperature, body or environment. What is Temperature? Temperature is most commonly thought of as how hot or cold something is. It is a measure of heat intensity based on three different scales: Celsius, Fahrenheit and Kelvin. What is Pressure? Pressure is a basic physical property of a gas. It is measured as the force exerted by the gas per unit area on surroundings. What is Volume? Volume is the amount of space taken up by matter. The example of a balloon illustrates the relationship between pressure and volume. As the pressure inside the balloon gets greater than the pressure outside of the balloon, the balloon will get larger. Therefore, as the pressure increases, the volume increases. We will talk more about the relationship between pressure, volume and temperature a little later in this course. Slide 11: Three Properties of Heat Energy Now that we know the key terms related to the physics of cooling, we can now explore the 3 properties of heat energy. A unique property of heat energy is that it can only flow in one direction, from hot to cold. For example if an ice cube is placed on a hot surface, it cannot drop in temperature; it can only gain heat energy and rise in temperature, thereby causing it to melt. A second property of heat transfer is that Heat energy cannot be destroyed. The third property is that heat energy can be transferred from one object to another object. In considering the ice cube placed on a hot surface again, the heat from the surface is not destroyed, rather it is transferred to the ice cube which causes it to melt. Slide 12: Heat Transfer Methods There are three methods of heat transfer: conduction convection and radiation. Conduction is the process of transferring heat through a solid material. Some substances conduct heat more easily than others. Solids are better conductors than liquids and liquids are better conductors than gases. Metals are very good conductors of heat, while air is very poor conductor of heat. Slide 13: Heat Transfer Methods Convection is the result of transferring heat through the movement of a liquid or gas. Radiation related to heat transfer is the process of transferring heat by means of electromagnetic waves, emitted due to the temperature difference between two objects.
Slide 14: Heat Transfer Methods
For example, blacktop pavement gets hot from the radiant heat of the sun's rays. The light that warms the blacktop is a form of electromagnetic radiation. Radiation is a method of heat transfer that does not rely on any contact between the heat source and the heated object. If you step barefoot on the pavement, the pavement feels hot. This sensation is the warmth of the pavement being transferred to your cooler feet by means of conduction. Conduction occurs when two objects at different temperatures are in contact with each other; heat flows from the warmer to the cooler object until they are both the same temperature. Finally, if you look down a road of paved blacktop, you may see wavy lines in the distance emanating up from the road, much like a mirage. This visible effect of convection is caused by the transfer of heat from the surface of the blacktop to the cooler air above. Convection occurs when warmer areas of a liquid or gas rise toward cooler areas, and cooler liquid or gas moves in to take the place of the warmer areas that have risen. This cycle results in a continuous circulation pattern, and heat is transferred to the cooler areas. "Hot air rises and cool air falls to take its place" is a description of convection in our atmosphere.

Slide 15: Air Flow in IT Spaces
As mentioned earlier, heat energy can only flow from hot to cold. For this reason, we have air conditioners and refrigerators. They use electrical or mechanical energy to pump heat energy from one place to another, and are even capable of pumping heat from a cooler space to a warmer space. The ability to pump heat to the outdoors, even when it is hotter outside than it is in the data center, is a critical function that allows high-power computing equipment to operate in an enclosed space. Understanding how this is possible is a foundation for understanding the design and operation of cooling systems for IT installations.
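One practical consequence of moving heat with air is that the amount of heat an airstream can carry depends on its flow rate and the temperature rise it is allowed to pick up. A common rule of thumb for air at standard (sea-level) density is airflow (CFM) ≈ heat load (BTU/hr) / (1.08 × ΔT°F). The sketch below applies that rule of thumb; it is an approximation under the standard-air assumption, not a method specified in this course.

# Rule-of-thumb airflow needed to carry away a given heat load with air at
# standard density: CFM ~= BTU/hr / (1.08 * delta_T_F). Approximation only.

BTU_PER_HR_PER_KW = 3412.0  # conversion: 1 kW of heat is about 3412 BTU/hr

def required_airflow_cfm(heat_load_kw: float, delta_t_f: float) -> float:
    """Approximate airflow (CFM) needed to remove heat_load_kw when the air
    warms by delta_t_f between equipment inlet and exhaust."""
    btu_per_hr = heat_load_kw * BTU_PER_HR_PER_KW
    return btu_per_hr / (1.08 * delta_t_f)

# Example: a 4 kW heat load with a 20°F inlet-to-exhaust rise needs
# roughly 630 CFM of cooling air.
print(round(required_airflow_cfm(4.0, 20.0)))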
Slide 16: Heat Generation
Whenever electrical power is being consumed in an Information Technology (IT) room or data center, heat is being generated that needs to be removed from the space. This heat generation occurs at various levels throughout the data center, including the chip level, server level, rack level, and room level. With few exceptions, over 99% of the electricity used to power IT equipment is converted into heat. Unless the excess heat energy is removed, the room temperature will rise until IT equipment shuts down or potentially even fails.
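Because nearly all of the electrical power drawn by IT equipment ends up as heat, a first-pass cooling load estimate can simply treat the IT power draw as the heat output and convert it to whatever unit the cooling equipment is rated in. A minimal sketch under that simplifying assumption (it ignores other contributors such as lighting, people, and power-path losses):

# First-pass heat load estimate: treat the IT power draw as the heat output.
# Ignores lighting, people, UPS and distribution losses, etc. Illustrative only.

BTU_PER_HR_PER_KW = 3412.0

def it_heat_load(it_power_kw: float) -> dict:
    """Approximate heat output of IT equipment from its electrical draw."""
    return {
        "heat_kw": it_power_kw,  # ~100% of consumed power becomes heat
        "heat_btu_per_hr": it_power_kw * BTU_PER_HR_PER_KW,
    }

# Example: a row of racks drawing 60 kW emits roughly 60 kW (~204,720 BTU/hr) of heat.
print(it_heat_load(60.0))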
Slide 17: Heat Generation
Let's take a closer look at heat generation at the server level. Approximately 50% of the heat energy released by servers originates in the microprocessor. A fan moves a stream of cold air across the chip assembly. The server or rack-mounted blade assembly containing the microprocessors usually draws cold air into the front of the chassis and exhausts it out of the rear. The amount of heat generated by servers is on a rising trend. A single blade server chassis can release 4 kilowatts (kW) or more of heat energy into the IT room or data center. Such a heat output is equivalent to the heat released by forty 100-watt light bulbs, and is actually more heat energy than the capacity of the heating element in many residential cooking ovens. Now that we have learned about the physics and properties of heat, we will talk next about the Ideal Gas Law.

Slide 18: The Ideal Gas Law
Previously, we defined pressure, temperature, and volume. To understand data center cooling, it is also important to recognize how these terms relate to one another. The relation between pressure (P), volume (V), and temperature (T) is known as the Ideal Gas Law,