DATA CENTER
Migrating Your Data Center to
Become Energy Efficient
Providing your agency with a self-funded roadmap
to energy efficiency.
CONTENTS
Introduction
Executive Summary
Government Drivers For Striving For Energy Efficiency
    Where and how the energy is being consumed
    The Cascading Effect of a Single Watt of Consumption
    What you are replacing and why
    Modern data center architecture: Ethernet fabrics
    Choose best practices or state-of-the-art
        Best practices
Best Practices to Manage Data Center Energy Consumption
    Hot Aisle Containment
    Cold Aisle Containment
    Increasing the ambient temperature of the data center
    Virtualization of the data center
    Examples of application and server migration and their savings potential
        Migrating 5 full racks to 1/5th of a rack
        Migrating 18 full racks to a single rack
        Migrating 88 full racks to 5 racks
    Modeling the data center consolidation example
        Consolidation and migration of 20 sites to a primary site with backup
Virtualization Savings
Network Savings
Assumptions Associated With Reduced Costs In Electricity
Summary
INTRODUCTION
Intel Corporation estimates that global server energy consumption is responsible for $27 billion U.S. dollars (USD) in energy costs annually. This cost makes virtualization an attractive endeavor for any large organization.
Large enterprises that have not begun implementing server virtualization may be struggling to make the case to do so because measuring, or even estimating, existing consumption is time consuming. This document
attempts to present a realistic expectation for agencies that are in the process of performing due diligence in
this area.
EXECUTIVE SUMMARY
Brocade strives to show government clients who want the benefits of an energy efficient data center that they
can develop a self-funded solution. The primary questions that a government official would want answered are:
•	Can we use the energy savings from the migration of our current application computing platforms to finance
an energy-efficient data center model?
•	Can we achieve the consumption reductions called for by Presidential Executive Orders 13423 and 13514?
•	Do we have to rely upon the private sector to manage and deliver the entire solution, or can an existing site be prepared now to achieve the same results as leading energy-efficient data center operators do?
•	Who has done this before, and what were the results? What can we expect? My organization has nearly 820,000 end-user computers. How much would we save by reducing energy in the data center?
Note:
1. Executive Order (EO) 13423, “Strengthening Federal Environmental, Energy, and Transportation Management”
2. Executive Order (EO) 13514, “Federal Leadership in Environmental, Energy, and Economic Performance”
The answers to these questions are straightforward ones. The government can test, evaluate, and prepare
deployable virtualized applications and the supporting network to begin saving energy while benefiting from
lowered operational costs. The EPA estimates that data centers are 100 to 200 times more energy intensive
than standard office buildings. The potential energy savings provides the rationale to prioritize the migration of
the data center to a more energy-efficient posture. Even further, similar strategies can be deployed for standard
office buildings to achieve a smaller consumption footprint, which helps the agency to achieve the goals of
the Presidential Directives. The government can manage the entire process, or use a combination of energy services firms (ESCOs, Energy Savings Companies), and develop its own best practices from the testing phase.
These best practices maximize application performance and energy efficiency to achieve the resulting savings
in infrastructure costs. Other government agencies have made significant strides in several areas to gain
world-class energy efficiencies in their data centers. You can review and replicate their best practices for your
government agency.
When you migrate your data centers to an energy-efficient solution, your agency can save over
$38 million (USD) per year in energy consumption costs alone.
GOVERNMENT DRIVERS FOR STRIVING FOR ENERGY EFFICIENCY
Executive Order (EO) 13423 (2007) and EO 13514 (2009) are two directives that require agencies to work
toward several measurable government-wide green initiatives.
EO 13423 contains these important tenets that enterprise IT planners and executives who manage
infrastructure may address directly:
1.	 Energy Efficiency: Reduce energy intensity 30 percent by 2015, compared to an FY 2003 baseline.
2.	 Greenhouse Gases: Reduce greenhouse gas emissions through reduction of energy intensity 30 percent by
2015, compared to an FY 2003 baseline.
EO 13514 mandates that at least 15% of existing federal buildings (and leases) meet Energy Efficiency Guideline principles by 2015. EO 13514 also mandates annual progress toward 100 percent conformance for all federal buildings, with the goal that all new federal buildings achieve zero net energy by 2030.
The Department of Energy (DoE) is a leader in the delivery and the development of best practices for energy
consumption and carbon dioxide emissions. The DoE has successfully converted their data centers to more
efficient profiles, and they have shared their results. Table 1, from the DoE, offers an educated look into the
practical net results of data center modernization.
Note: This information comes from this Department of Energy document:
Department of Energy: Leadership in Green IT (Brochure), Department of Energy Laboratories (DOE), S. Grant NREL.
Table 1. Sample of actual facilities and their respective power usage effectiveness (PUE). (The DoE and the EPA
have several metrics that demonstrate that the best practices and current technology allow agencies to achieve
the desired results. These agencies have measured and published their results. To see data center closures by
Department, see the Federal Data Center initiative at http://explore.data.gov.)
Sample Data Center | PUE 2012 | Comments/Context
DoE Savannah River | 2.77 | Previously 4.0
DoE NREL (National Renewable Energy Laboratory) | 1.15 | Previously 3.3
EPA Denver | 1.50 | Previously 3.2
DoE NERSC (National Energy Research Scientific Computing Center) | 1.15 | Previously 1.35
DoE Lawrence Livermore National Laboratory (451) | 1.67 | 2.5 (37-year-old building)
SLAC National Accelerator Laboratory at Stanford | 1.30 | New
INL Idaho National Laboratory High Performance Computing | 1.10 | 1.4 (weather dependent)
DoE Terra Scale Simulation Facility | 1.32 to 1.34 | New
Google data center weighted average (March 2013) | 1.13 | 1.09 lowest site reported
DoE PPPL Princeton Plasma Physics Laboratory | 1.04 | New
Microsoft | 1.22 | New Chicago facility
World-class PUE | 1.30 | Below 2.0 before 2006
Brocade Data Center (Building 2, San Jose HQ) | 1.30 | Brocade corporate data center consolidation project
Standard PUE | 2.00 | 3.0 before 2006
Note: Power Usage Effectiveness (PUE) is the ratio of the total power delivered to a data center facility to the electrical power used by the IT equipment (servers, storage, and network).
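Expressed as a quick calculation, the metric looks like the following Python sketch. The wattages are illustrative only, not measurements from any facility in Table 1; they are chosen so the result lands on the circa-2006 value of 2.84 used later in this brief.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative only: a site that draws 568 kW overall to run 200 kW of IT gear.
print(round(pue(568.0, 200.0), 2))  # 2.84, the circa-2006 figure cited in this brief
```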
Where and how the energy is being consumed
Many studies discuss the consumption of energy and demonstrate the cost to house, power, and cool IT
communications gear in the data center and the typical commercial building. The U.S. Energy Information Administration (EIA) has estimated that data centers, and specifically the government data center population (estimated to number more than 3,000 sites), consume approximately 4% of the electricity in the United States annually. The EIA assessed total U.S. energy consumption at 29.601 quadrillion watt-hours in 2007 (29.601 x 10^15 Wh), and overall energy usage is expected to increase at least 2% per year. The data center share of roughly 4% may seem small; however, it is a small percentage of a very large number. The number is so large that it makes sense to examine the causes and costs that are associated with the electricity consumption of the government enterprise data center.
Note: EIA 2010 Energy Use All Sources. http://www.eia.gov/state/seds/seds-data-complete.cfm.
In 2006, the server consumption was measured at approximately 25% of total consumption. The Heating,
Ventilation, and Air Conditioning (HVAC) levels necessary to cool the center also equaled 25%, which totaled
50%. Many government agencies have data centers in varying sizes. To illustrate the source of consumption,
look at a sample 5,000 square foot data center. The site would be approximately 100 feet long by 50 feet
wide. The site would likely have servers, uninterruptible power and backup services, building switchgear, power
distribution units, and other support systems, such as HVAC. Figure 1 illustrates the approximate consumption
percentage of a typical data center of this size. The actual breakout of the server total consumption is provided
(40% total).
Note: The U.S. government has roughly 500,000 buildings, which means 15% is ~75,000 buildings. If one large data center is made
energy efficient, it is like making 100-200 buildings more energy efficient.
Figure 1. Breakout of support/demand consumption of a typical 5000 square foot data center. (This figure
demonstrates the consumption categories and their respective share of the total consumption for the facility.
The server consumption is broken into three parts: processor, power supply (inefficiency), and other server
components. The communication equipment is broken into two parts: storage and networking.)
Figure 2. Typical 5,000 square foot data center consumption breakout by infrastructure category. (This figure demonstrates the
consumption categories broken out by kilowatt hours consumed.)
The Cascading Effect of a Single Watt of Consumption
When a single watt of electricity is delivered to IT gear in a data center or a standard commercial building, a ripple
effect occurs. The total energy intensity of the data center can be 100 to 200 times that of a standard building
with network and some local IT services. However, the net result of delivering power to each site type is similar.
In 2006, the PUE of the standard data center was 2.84, which means that 2.84 watts were expended to support
1 watt of IT application, storage, and network gear.
Figure 3. Cascade effect of using a single watt of power. (Making a poor decision on IT consumption for network and storage has the same effect as a poor decision for computing platforms. The Brocade® network fabric can consume up to 28% less power than competing architectures.)
The cost of a watt is not simply a fraction of a cent. In the data center, an IT element stays powered on until it is replaced, which can be a long period of time. A single watt, constantly drawn, can cost nearly $24.35 over a 10-year period in a campus LAN, which is the typical lifespan of a legacy core switch. Similarly, every additional watt drawn in a data center would have cost $20.34 if deployed during the period of 2006-2012.
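The per-watt figures above can be approximated with a simple model: a constantly drawn watt runs for 8,760 hours per year, and each IT watt drags along the facility overhead described in Figure 3. This is a hedged sketch, not the exact model behind the figures above; the cascade factor and electricity rate are assumptions taken from elsewhere in this brief.

```python
HOURS_PER_YEAR = 8760  # a constantly drawn watt runs around the clock

def cost_of_one_watt(years: float, rate_per_kwh: float, cascade_factor: float) -> float:
    """Approximate cost of one watt of IT draw plus the facility overhead it drags along."""
    kwh_at_device = (1.0 / 1000.0) * HOURS_PER_YEAR * years   # energy at the device itself
    return kwh_at_device * cascade_factor * rate_per_kwh      # scaled by the cascade (PUE-like) factor

# With the 2.84 cascade factor and the 9.46 cent rate cited in this brief, ten years of a
# single constantly drawn watt costs roughly $23-25, consistent with the figures above.
print(round(cost_of_one_watt(10, 0.0946, 2.84), 2))
```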
Note: The Energy Star Program of the U.S. Environmental Protection Agency estimates that servers and data centers alone use
approximately 100 billion kWh of electricity, which represents an annual cost of about $7.4 billion. The EPA also estimates that without
the implementation of sustainability measures in data centers, the United States may need to add 10 additional power plants to the
grid just to keep up with the energy demands of these facilities.
What you are replacing and why
Several reasons drive the enterprise to replace the current data center IT infrastructure. Multiple advancements in
different areas can enable a more efficient means of application service delivery. For example, energy utilization in
all facets of IT products has been addressed via lower consumption, higher density, higher throughput, and smaller
footprint. Another outcome of rethinking the energy efficiency of the data center is that outdated protocols, such
as Spanning Tree Protocol (STP), are being removed. STP requires inactive links and has outlived its usefulness
and relevance in the network. (See Figure 4.)
Figure 4. Sample legacy architecture (circa 2006). (The legacy architecture is inflexible because it is deployed as three tiers optimized for legacy client/server applications. STP makes it inherently inefficient, the additional protocol layers make it complex to operate, and individual switch management makes it expensive.)
The data center architecture of the 2006 era was inflexible. This architecture was typically deployed as three tiers
and optimized for legacy client/server applications. This architecture was inefficient because it is dependent upon
STP, which disables links to prevent loops and limits network utilization. This architecture was complex because
additional protocol layers were needed to enable it to scale. This architecture was expensive to deploy, operate,
and maintain. STP often caused network designers to provide duplicate systems, ports, risers (trunks), VLANs,
bandwidth, optics, and engineering effort. The added expense of all this equipment and support was necessary
to increase network availability and utilize risers that were empty due to STP blocking. This expense was overcome
to a degree by delivering Layer 3 (routing) to the edge and aggregation layers. Brocade has selected the use of
Transparent Interconnection of Lots of Links (TRILL), an IETF standard, to overcome this problem completely.
In addition, STP caused several effects on the use of the single server access to the network. STP caused
port blocking, physical port oversubscription at the access, edge, and aggregation layers, slower convergence,
and wasted CPU wait cycles. STP also lacked deterministic treatment of network payload to the core. This
architecture also required separate network and engineering resources to accommodate real-time traffic models.
The networking bottleneck issues were made worse by the additional backend switched storage tier that
connected the server layer with mounted disk arrays and network storage. This tier supported substantial traffic
flows in the data center between network and storage servers. Also, more efficient fabric technologies are
increasingly being implemented to accommodate new data transmission patterns. The data center traffic model has evolved from a mostly north-south model to an 80/20 east-west traffic model,
which means that 80% of server traffic can be attributed to server-to-server application traffic flows.
Note: Information about the evolution to an east-to-west traffic model comes from this Gartner document: “Use Top-of-Rack Switching
for I/O Virtualization and Convergence; the 80/20 Benefits Rule Applies”.
Modern data center architecture: Ethernet fabrics
Brocade has designed a data center fabric architecture that resolves many of the problems of the legacy
architecture. The Brocade Ethernet fabric architecture eliminates STP, which enables all server access ports to
operate at the access layer and enables all fabric uplinks to the core switching platform to remain active. The
Ethernet fabric architecture allows for a two-tier design that improves server access at the network edge from a 50% blocking model at n × 1 Gigabit Ethernet (GbE) to a 1:1 access model at n × 1 GbE or n × 10 GbE.
Risers from the edge can now increase their utilization from the typical 6–15% use of interconnects to much
greater rates of utilization (50 to even >90%). When utilization is increased, end user to application wait times
are reduced or eliminated. This architecture enables Ethernet connectivity and Fibre Channel over Ethernet
(FCoE) storage access by applications, thus collapsing the backend storage network into the Ethernet fabric.
The Ethernet fabric data center switching architecture eliminates unneeded duplication and enables all ports to
pass traffic to the data center switching platforms. Ports can pass traffic northbound to the core or east-west
bound to the storage layer. The combination of the virtual network layer delivered by the fabric and the virtual
server layer in the application computing layer delivers a highly utilized, highly scaled solution that decreases
complexity, capital outlays, and operational costs.
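The effect of removing STP blocking on per-server bandwidth can be sketched with the figures used in the five-rack example later in this brief (20 x 10 GbE risers serving 196 servers). The riser and server counts are taken from that example; treat the sketch as an approximation of the architectural point rather than a measurement.

```python
def usable_per_server_gbps(risers: int, riser_gbps: float, blocked_fraction: float, servers: int) -> float:
    """Per-server bandwidth once blocked uplinks are discounted."""
    usable_gbps = risers * riser_gbps * (1.0 - blocked_fraction)
    return usable_gbps / servers

# Circa-2006 three-tier design: 20 x 10 GbE risers for 196 servers, half blocked by STP.
print(round(usable_per_server_gbps(20, 10, 0.5, 196), 2))  # ~0.51 Gbps per server

# Ethernet fabric: the same risers, all active; the server ports (2 x 10 GbE) are no longer the choke point.
print(round(usable_per_server_gbps(20, 10, 0.0, 196), 2))  # ~1.02 Gbps per server at the risers
```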
Figure 5. Efficient Ethernet Fabric Architecture for the Data Center. (Ethernet fabric architecture topologies are
optimized for east-west traffic patterns and virtualized applications. These architectures are efficient because
all links in the fabric are active with Layer 1/2/3 multipathing. These architectures are scalable because they
are flat to the edge. Customers receive the benefit of converged Ethernet and storage. The architectures create
simplicity because the entire fabric behaves like a logical switch.)
Choose best practices or state-of-the-art
If you adopt best practices, you could achieve your energy usage reduction goals with a realistic initial investment.
If you adopt state-of-the-art technologies, the initial investment to achieve the desired results may be much higher.
In 2006, the Green Grid examined the potential of these energy-saving strategies. They estimated that by using best practices or state-of-the-art technology, overall consumption would measurably drop across all commercial
sectors. Historically, energy costs increase by 2-3% per year, and energy use increases 2% per year, so quantifiable
action must be taken. The study showed that using better operational procedures and policies, using
state-of-the-art technology, and using industry best practices contribute to an overall drop in consumption.
Note: This information about PUE estimation and calculation comes from the Green Grid document at this link:
http://www.thegreengrid.org/~/media/WhitePapers/WP49-PUE%20A%20Comprehensive%20Examination%20of%20the%20Metric_v6.pdf?lang=en
Figure 6. Using best practices and state-of-the-art technology controls consumption. (Best practices, such as
HAC/CAC, high-efficiency CRAH units, and low-consumption servers with high-density CPU cores make a PUE of 1.6 achievable.
achievable. Solid State Drives on servers and storage arrays could provide a lower return on investment
depending upon current acquisition cost. Agencies should carefully weigh the benefits of some state-of-the-art
options. For more information about best practices and state-of-the-art technology, see www.thegreengrid.org.)
Notes:
Hot Aisle Containment (HAC). Cold Aisle Containment (CAC).
The terms Computer Room Air Handler (CRAH) unit, Computer Room Air Conditioning (CRAC) unit, and Air-Handling Unit (AHU) are used
interchangeably.
For more information about PUE estimation and calculation, search for Green Grid White Paper #49 at this site:
http://www.thegreengrid.org/en/library-and-tools.aspx?category=All&range=Entire%20Archive&type=White%20Paper&lang=en&paging=All
Best practices
With respect to enterprise application delivery, strategies that merely lower or reshape the consumption curve will be outpaced as demand continues to climb. Many options are available, with varying results. Many
data center experts recommend that a CAC system be utilized to provide efficient cooling to the server farm
and network gear within the racks of a data center. On the other hand, an HAC (Hot Aisle Containment) system
typically achieves a 15% higher efficiency level with a cooler ambient environment for data center workers. Hot
air return plenums direct the hot air back through the cooling system, which could include ground cooling, chilled
water systems, or systems that use refrigerant.
Note: This information about a HAC system comes from this article from Schneider Electric: “Impact of Hot and Cold Aisle
Containment on Data Center Temperature and Efficiency R2”, J.Niemann, K. Brown, V. Avelar
Examples of best practices:
•	High-efficiency Computer Room Air Handler. If you upgrade the fan system of a CRAH unit to one with a variable-speed fan, energy costs can be reduced by 16% to 27% under otherwise identical conditions.
•	Use mid- to high-density Virtual Machines (VMs). Per studies offered by the Green Grid, the optimal power supply load level is typically in the mid-range of its performance curve, around 40% to 60%. Typical servers use power very inefficiently at low loads (below 30%) and slightly less efficiently at the high loads (above 60%) that can result from very high VM densities. (A sizing sketch follows this list.)
•	Higher-performing, lower-consumption servers. Current server technology includes many efficiency features,
such as large drives that require less consumption, solid-state technology, energy-efficient CPUs, high-speed
internal buses, and shared power units with high-efficiency power supplies under load.
•	Higher-performing, lower-consumption network gear. With the full-scale adoption of 10 GbE and 40 GbE edge
interconnects and n x 100 GbE switching from the edge to the core, network fabrics in data centers are
poised to unlock the bottleneck that previously existed in the server farm.
•	Low-tech solutions. Install blank plates to maximize the control of air flow.
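The sizing sketch referenced in the VM-density bullet above follows. It is a hedged illustration of keeping a power supply in the 40-60% band the Green Grid describes; the 1,200 W supply and ~50 W-per-VM figures are invented inputs, not measurements from this brief.

```python
def vm_count_for_target_load(host_capacity_w: float, per_vm_w: float,
                             low: float = 0.40, high: float = 0.60) -> range:
    """Range of VM counts that keeps a host's power-supply load inside the efficient band."""
    min_vms = -(-int(host_capacity_w * low) // int(per_vm_w))   # ceiling division
    max_vms = int(host_capacity_w * high) // int(per_vm_w)
    return range(min_vms, max_vms + 1)

# Hypothetical blade with a 1,200 W supply and VMs that add roughly 50 W apiece under load.
print(list(vm_count_for_target_load(1200, 50)))  # 10 through 14 VMs keep the supply at 40-60% load
```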
Examples of state-of-the-art:
•	High-speed and more efficient storage links. Brocade Generation 5 Fibre Channel, at rates such as 16 Gbps, is the current performance-leading solution in the industry.
•	Semiconductor manufacturing processes. In 2006, the typical data center was outfitted with devices that
utilized 130 nm, 90 nm, or 65 nm technology at best. The semiconductor chips that were embedded within
the Common Off the Shelf (COTS) systems, such as switches or servers, required more power to operate.
Now that 45 nm and 32 nm chipsets have been introduced into the manufacturing process, a lower energy
footprint can be achieved by adopting state-of-the-art servers, CPUs, and network and storage equipment.
With the advent of 22 nm (2012), the servers of the future will operate with even lower footprint CPUs and
interface circuits. Intel estimates that it can achieve performance gains at a consistent load while using half the power of 32 nm chipsets. (“Intel’s Revolutionary 22 nm Transistor Technology,” M. Bohr, K. Mistry)
•	Solid State Drives (SSDs) for servers. Whether the choice is to use a high–end, enterprise-class Hard Disk
Drive (HDD), or the latest, best-performing SSD, the challenge is to achieve a balance between performance,
consumption, and density.
BEST PRACTICES TO MANAGE DATA CENTER ENERGY CONSUMPTION
Hot Aisle Containment
Hot Aisle Containment (HAC) ensures that the cool air passes through the front of the IT equipment rack from a
cooling source that consists of ambient air at lower temperatures or cool air through perforated tiles at the face
of the cabinet or rack.
The air is forced through the IT gear as it is pulled across the face of the equipment by the internal fans. The
fans direct the air across the motherboard and internal components, and the air exits the rear of the rack to
a contained area between the aisles. This area captures the higher-temperature air and directs it to a plenum
return. The net result is that the ambient air in the data center can be kept at a higher temperature and the
thermostat on the CRAH unit can be kept at a level that ensures the unit is not forced on unnecessarily. The key to successful implementation of HAC is to select IT components that can withstand higher temperatures, thereby saving energy while continuing to operate normally.
Figure 7. Hot Aisle Containment. (HAC that complies with OSHA Standards can reduce PUE by reducing chiller
consumption, for example, via increased cold water supply. The room can still be maintained at 75 degrees
Fahrenheit and the hot aisle could be up to 100 degrees. In some instances, the heat can rise to 117 degrees F.)
Cold Aisle Containment
Another method of air management uses Cold Aisle Containment (CAC). In CAC, cold air is brought into the room
through the perforated floor tiles across the air exhaust side of the rack and mixed with ambient air, which is
pulled through the chillers that are mounted above each rack. In this implementation, the cold air is contained
in between the rack aisles and it is pulled through the front of the IT equipment and run across the internal
components.
The air exits the rear of the racked equipment, while the cold air is contained in the cold aisle by doors or heavy plastic curtains that prevent it from escaping at the ends of the aisle. The air exiting the rear of the IT rack intermixes with the ambient air and the chilled air coming up from the floor tiles, which keeps the room temperature within OSHA standards. The ambient room temperature may be kept at up to 79 degrees Wet-Bulb Globe
Temperature (WBGT) (26 degrees Celsius). The chillers turn on more often within the room as a result of the higher
temperature ambient air, and the PUE of the data center profile is raised about 15% higher than that of the HAC.
Figure 8. Cold Aisle Containment. (A WBGT index of greater than 26 degrees Celsius (79 degrees F) is
considered a “hot environment.” If the WBGT measures less than 26 degrees Celsius everywhere and at all
times, then the workplace is relatively safe for most workers. The room may still be maintained at 75–79
degrees F and the cold aisle would be as low as 64–65 degrees F.)
Increasing the ambient temperature of the data center
Recent studies suggest that additional savings can be achieved if the ambient temperature of the data center is allowed to rise somewhat by cooling less aggressively. The network and server equipment that is selected must be specified to operate normally under these warmer conditions.
Note: This information comes from the Intel document “How High Temperature Data Centers and Intel Technologies Decrease Operating
Costs” at this link: http://www.intel.com/content/www/us/en/data-center-efficiency/data-center-efficiency-gitong-case-study.html
Virtualization of the data center
Many benefits occur when you introduce virtual servers into your application data center and software service
delivery strategy. For government purposes, applications typically are in one of two categories, mission (real-
time or near real-time) and services (non-real-time). For this discussion, it is recommended that the platform
migration targets non-real-time applications, such as Exchange®, SharePoint®, or other applications that do not require real-time performance.
Additionally, many mission-oriented applications, for example unified communications, may not be supported by the OEM or their subcontracted software suppliers when run in a high-density virtual server environment. Much of this has to do with the testing phases their products go through to achieve general availability.
When you move non-real-time services to a virtual environment, you can achieve benefits like these:
•	Lower power and distribution load
•	Smaller Uninterruptible Power Supplies (UPSs)
•	Faster OS and application patching
•	Controlled upgrade processes
•	More efficient Information Assurance (IA) activity
•	Optimized server utilization
•	Reduced weight (as well as lower shipping costs)
•	Increased mobility
•	Lower ongoing operational and maintenance costs
Based upon the enterprise mission and the expected performance of an application, physical servers can typically be consolidated onto virtual platforms at ratios ranging from 1:1 up to 14:1. The U.S. Department of Energy uses a 29:1 ratio for
non-mission applications. Brocade has determined that a 14:1 density of virtual servers per real server is
a reasonable number to use to illustrate the energy efficiency of using virtual servers. The platform used to
demonstrate this savings benefits was also limited by the number of virtual media access control addresses
(MACs) that the system could support (14) in each blade center chassis and the embedded compute blades.
Figure 9. Virtual servers and CPU cores. (The server can be virtualized with specific processor assignment to
address application performance, which enables real-time and non-real-time applications for virtualization. In
this example, 14 servers were modeled to 2 quad-core CPUs. Three servers consume 100% of a single core and
other applications receive a partial core. Though 14 nm technology promises higher density cores to CPU in the
near future, this document explores only what was currently available at a competitive cost.)
In the illustrated examples, Brocade determined that the compute space required for 196 virtual servers
running on 14 separate physical blades would reduce the typical footprint of 5 full racks to between 7 and
9 Rack Units (RUs) of space. A typical server runs for most of the 168 hours in a week at less than 20% utilization. Many factors limit performance, such as the network access speed of the system interface and the buses between CPU cores. Also, many dual- and quad-CPU designs place the cores on the same die, which eliminates bus delay. Most blade center systems allow the administrator to view all the CPUs in the system and
assign CPU cycles to applications to gain the desired work density. When you assign the work cycles for the CPU
to applications, some applications can be given a higher computational share of the processor pool while others
have only reduced access to CPU cycles. One application may require the assignment of only one half of a single
processor core embedded within the blade, while another may require 1.5 times the number of cycles that a
3.2 GHz core would provide. As a result, the application is assigned one and a half processor cores from the pool.
Note: For standard 19” racks, the holes on the mounting flange are grouped in threes. This three-hole group is defined as a Rack Unit
(RU) or sometimes a “U”. 1U occupies 1.75” (44.45 mm) of vertical space.
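The core-assignment idea in the paragraph above can be sketched as a simple bookkeeping exercise against the 112-core pool (14 blades x 2 quad-core CPUs). The application names and core fractions below are invented for illustration; only the pool size comes from the example in this brief.

```python
CORE_POOL = 14 * 2 * 4   # 14 blades x 2 quad-core CPUs = 112 cores, as in the example above

# Invented workload mix: (application, cores requested). Fractions model applications that
# need only part of a 3.2 GHz core; 1.5 models an application that needs one and a half cores.
demands = [("mail-frontend", 0.5), ("collaboration-index", 1.5),
           ("file-share", 0.25), ("monitoring", 1.0)]

def assign_cores(demands, pool=CORE_POOL):
    """Greedy check that the requested core fractions fit within the blade-center pool."""
    plan, used = {}, 0.0
    for app, cores in demands:
        if used + cores > pool:
            raise RuntimeError(f"core pool exhausted before placing {app}")
        plan[app] = cores
        used += cores
    return plan, pool - used

plan, spare_cores = assign_cores(demands)
print(plan, "spare cores:", spare_cores)
```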
Enabling the virtual server solves a part of the overall problem. Additional steps to increase the workload to
the server require that you eliminate the bottleneck to the server. In the circa 2006 data center, the workload
is limited because one of the server's two ports is blocked and held in standby, so the server in this example effectively runs on a single active Gigabit Ethernet port. To alleviate this bottleneck, increase the network interface speed of the server from n × 1 GbE to n × 10 GbE. The port speed increases tenfold, and the port blocking introduced by STP is eliminated by the network fabric. With the interface speed increased to 10 GbE and all network ports active, each server has up to 20 gigabits of available bandwidth.
When the virtual server density is increased, the physical server density is decreased. Due to the savings of
the physical hardware and the eventual energy savings of the high-density platform, it is recommended that the
original application servers be turned off after the application is moved. As a result, the power consumption
of the original 196 servers that occupy several racks is reduced from an average draw of 200 kW to between 9 kW (with a Brocade fabric) and 14 kW (with a competing fabric). (The range of consumption on the new
solution depends on the workload of the system and the choice of network fabric solution.)
Examples of application and server migration and their savings potential
Figure 10. Savings Potential of Server Migration. (This figure shows that five racks of servers and top of rack
switches can be condensed to a footprint that is less than one third of a 42 RU rack. The 196 servers deployed
in the circa 2006 footprint of five racks consume 206 kW under load (measured server consumption, not the regulatory power rating of the unit). The same model was used to estimate the consumption of the IT gear in the proposed virtualized deployment at between 8 and 14 kW.)
Migrating 5 full racks to 1/5th of a rack
In the original implementation, five full racks of IT computing and networking equipment are shown. The original
circa 2006 equipment has 2 × 1 GbE network links per server, typically to 2 Top-of-Rack (ToR) units such as Cisco
Catalyst 3750E-48 TD GbE switches connected via 10 GbE risers to two Cisco Catalyst 6509 switches running
4 × 16-port 10 GbE cards with 2 × 6000 W AC power supply units. The network consumption for these units is estimated at ~19 kW. The CPU density of the rack included 2 CPUs per server, with an average of 39 to 40
servers per rack. The rack also includes 4 × 10 GbE risers to the aggregation layer network per rack (200 Gbps
total with 100 Gbps blocked due to STP). This original configuration represents a 4 to 1 bottleneck, or an average
of only 500 Mbps per server bandwidth, without excessive over engineering of VLANs or using Layer 3 protocols.
The IBM 2-port Model 5437 10 GbE Converged Network Adaptor (OEM by Brocade) is embedded within the blade
center chassis for FCoE capability. The Converged Network Adaptor enables the Ethernet fabric and direct
access to the storage area network, which flattens the architecture further and eliminates a tier within the
standard data center.
Note: A Top-of-Rack (ToR) switch is a small port count switch that sits at or near the top of a Telco rack in data centers or co-location
facilities.
The EIA reports that the U.S. average cost of electricity when this solution was implemented was 9.46 cents per
kWh. The annual cost in circa 2006 U.S. dollars (USD) for electricity to operate this solution is approximately
$171,282. The average consumption per server is estimated at 275W per unit, and the two network ToR
switches are estimated to draw 161W each when all cables are plugged in with 10 GbE risers in operation.
For this solution, the annual computing platform energy cost is $44,667, the network energy cost is $15,644,
and the facility and infrastructure costs to support it are $56,039/yr. The HVAC cost to cool this solution is
estimated to be $77,872. These numbers are consistent with a facility that has a PUE of 2.84.
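The annual figures above follow from a simple cost model: IT draw, multiplied by the PUE, the hours in a year, and the electricity rate. The sketch below is an approximation that reproduces the compute, network, and facility totals above to within roughly one percent; the 275 W per server and ~19 kW network figures are the estimates stated earlier in this section.

```python
HOURS_PER_YEAR = 8760
RATE_2006 = 0.0946   # EIA national average rate, $/kWh, as cited above
PUE_2006 = 2.84

def annual_cost(it_kw: float, pue: float = PUE_2006, rate: float = RATE_2006) -> float:
    """Annual electricity cost for an IT load once facility overhead (PUE) is applied."""
    return it_kw * pue * HOURS_PER_YEAR * rate

server_kw = 196 * 275 / 1000   # 196 servers at ~275 W under load
network_kw = 19.0              # ToR and riser share, as estimated above

print(round(annual_cost(server_kw, pue=1.0)))       # compute-only cost, ~$44,700
print(round(annual_cost(network_kw, pue=1.0)))      # network-only cost, ~$15,700
print(round(annual_cost(server_kw + network_kw)))   # facility total at PUE 2.84, ~$171,500
```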
Using available technology from Brocade, standard IT blade center technology, and the implementation of virtual
servers, the associated footprint is reduced to approximately one-fifth of a rack. This new solution would include
14 physical blade servers within a chassis solution with 2 × 10 GbE links per server, 196 virtual servers using
112 CPU cores (2 × quad core CPUs per blade), and 280 Gbps of direct network fabric access. The average
bandwidth that is accessible by the virtual server application is between 1 GbE to 20 GbE of network bandwidth
(2:1 to 40:1 increase over the aforementioned circa 2006 solution). The network fabric solution from Brocade
utilizes 28% less energy (cost) than the competing solution today. Using a CAC model of 1.98 PUE, or a HAC
model of 1.68, the total cost of this solution including the blade center is approximately $9,930/year (HAC) to $11,714/year (CAC). If this solution is delivered in a state-of-the-art container, a PUE of 1.15 or lower would reduce the cost to approximately $6,803/year.
Note: The information about the solution that achieves a PUE of 1.15 is from this Google document:
Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html
Figure 11. Migration of a small-medium data center with 18 racks of IT into a 42 U rack. (Using the same methodology as in Figure 10, the consumption of the 18 racks of servers in the 2006 architecture is estimated to be approximately 650 kW. The replacement architecture is expected to draw between 15 kW and 27 kW while increasing the performance from the access to the core of the network.)
Migrating 18 full racks to a single rack
In this migration model, it was determined that up to 18 full racks of circa 2006 computing and network
data center equipment would be migrated into less than a single rack. The network and blade center slots
remain available for nominal expansion. Here, we are moving the capability of 700 physical servers and the
corresponding 1,400 network interfaces connected to the access layer to a footprint that takes up only one
full rack of space.
The original 18-rack deployment had 2 x Cisco Catalyst 3750E-48 TD switches at the top of each rack, aggregating risers to 2 x Cisco Catalyst 6509 switches with 7 x 16-port 10 GbE modules using 2 x 6000 W AC power supply units. The 18 racks also housed 700 physical servers, an average of 39 servers per physical rack. The resulting IT load of the original servers was 192 kW, and the load of the top-of-rack, aggregation, and core switches was 36 kW. The facility infrastructure (76 kW), HVAC (245 kW), and other support systems added approximately 421 kW of overhead, for a total site draw of roughly 650 kW and an annualized cost of $539,044 per year. A PUE of 2.84 was used to reflect the typical PUE achieved in the 2006 time period. In 2006, the national average cost per kWh was 9.46 cents, which is reflected in this calculation.
which is reflected in this calculation.
The migration platform used for this comparison was a rack with four typical blade center chassis with up to
14 blades per system, each equipped with 2 x 3.2 GHz quad-core CPUs. The compute platform in the circa 2006 solution drew 192 kW, versus a new solution draw of approximately 10.2 kW. The ToR switches of the original data center are replaced with a pair of Brocade VDX® 6720-60 Switches (which are ToR fabric switches) that are connected to a pair of Brocade VDX 8770 Switches (which are fabric chassis-based switches) that are connected to a Brocade MLXe Core Router. This upgrade reduced network consumption from 36.5 kW to 4.8 kW, which reduced energy costs from $30,280 in 2006 to $4,362 at 2012 rates.
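The network cost reduction in this paragraph is the same arithmetic applied to the stated draws and rates (roughly 36.5 kW at the 2006 rate versus roughly 4.8 kW at the 2012 rate); the sketch below simply restates it, with no PUE overhead applied.

```python
HOURS_PER_YEAR = 8760

def annual_energy_cost(draw_kw: float, rate_per_kwh: float) -> float:
    """Direct annual electricity cost of a constant draw (no PUE overhead applied)."""
    return draw_kw * HOURS_PER_YEAR * rate_per_kwh

legacy_network_kw, fabric_network_kw = 36.54, 4.826
print(round(annual_energy_cost(legacy_network_kw, 0.0946)))   # ~$30,280 at the 2006 rate
print(round(annual_energy_cost(fabric_network_kw, 0.1032)))   # ~$4,360 at the 2012 rate
```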
By migrating the physical servers and legacy network products to a currently available virtualized environment,
the annual costs are reduced to between $22,882 (HAC model) and $26,968 (CAC model) when calculated at the EIA 2012 commercial cost of 10.32 cents per kWh. When the Brocade Data Center Fabric is compared to the circa 2006 network, the savings are approximately 32 kW of draw, or ~$26,000 per year (USD). The total savings in energy
alone for the virtual network and virtual computing platform is greater than $500,000 per year. When comparing
the Brocade fabric solution to currently available competing products, the savings in the network costs is 27%.
What is also significant is that the 18 full racks would be reduced to a single rack footprint. This scenario would
collapse an entire small data center into a single rack row in a new location. In order to gain the economy of the
HAC or CAC model, this IT computing and network rack would benefit from being migrated into a larger data
center setting where several other racks of gear are already in place and operating in a CAC/HAC environment.
Figure 12. Migration of medium-large data center with 88 racks of IT into 5 x 42 U racks. (Using the same methodology as in Figures 10 and 11, the consumption of the 88 racks of servers in the circa 2006 solution is estimated to be approximately 2.9 MW, while its replacement architecture is expected to draw between 65 and 112 kW while providing increases in performance from the access to the core of the network.)
Migrating 88 full racks to 5 racks
A migration from a medium-sized data center solution of 11 rack rows with 8 racks per row would yield similar
results, yet on a greater scale. To be effective in a HAC or CAC environment, two data centers of this size would
need to be migrated into a single site, just to create the two rack rows needed for a CAC/HAC environment.
The resulting 5-rack density is more ideal for a migration to a containerized solution, where PUEs have been as
low as 1.11 to 1.2. Before migration, the 88 racks of IT draw approximately 1 MW directly, including network and computing. At a PUE of 2.94, the legacy facility consumes about 2.9 MW, which results in an annual cost of $2.4M per year using the EIA 2006 rate of 9.46 cents per kWh.
After the 88 racks are migrated to a properly engineered HAC, CAC, or containerized facility, the energy cost
of the solution would be between $66,275 USD to $112,386 USD per year, at 2012 commercial rates. This
migration scenario would result in nearly $2.3M per year in cost savings due to energy reduction.
Note: Using the methodology shown in Figures 10-12, data centers with 350 and 875 racks are consolidated into 20 and 50 rack
footprints respectively. The results are included as part of the 20 site consolidation and migration, which is discussed later in
this document.
Modeling the data center consolidation example
To model an agency- or department-level data center migration, Brocade estimated the size of the data center
in terms of space, and the size of the customer base. Brocade also estimated the IT equipment that was
needed to service the disparately located applications across the agency. To do this, Brocade reviewed studies
that offered a reasonable estimate of servers in use per desktop, minimum room size estimates for various
sized data centers, as well as population densities of the agency being modeled.
Number and minimum size per site: Brocade has estimated that a very large enterprise would have up to
20 sites with varying levels and density of application computing deployed. In the migration model, Brocade
used a sample of 12 sites with 5 full racks of data center application IT equipment (60 racks total), 4 sites
with 18 full racks (72 racks total), 2 sites with 88 racks (176 racks total), a single large site with up to 350
racks, and a single site with 875 racks. These sites contain a total of 1,533 racks that occupy up to
18,000 square feet of space.
Number of desktops: Brocade used the example of a military department that consisted of 335,000 active
duty personnel, 185,000 civilian personnel, 72,000 reserve personnel, 230,000 contractor personnel, which
totaled approximately 822,000 end users. Brocade used this population total and a 1:1 desktop ratio to
derive a raw estimate for server counts. A study performed by the Census Bureau estimates that there are
variances by verticals, such as education, finance, healthcare, utilities, transportation, as well as services. With
approximately 822,000 end users, approximately 41,100 servers would support primary data center operations.
However, purpose-built military applications and programs may push this figure even higher. The migration
example used accounts for 67% mission servers (41,100), 17% growth servers (10,275), 8% mirrored critical
applications servers (4,889 secondary servers), and 8% disaster recovery servers (4,888).
Server to desktop ratio: The ratio to determine how many servers exist per desktop computer depends upon
the vertical being studied. The U.S. Census Bureau estimates that the government desktop-to-employee ratio is
1:1.48 employees. The net of the statistics in the study offered is that there is approximately 1 server for every
20 desktops. Of the 4.7 million non-retail firms responding to the study (6 million firms total), there were 43 million desktops and 2.1 million servers to support operations. This count results in roughly a 20:1 desktop-to-server ratio.
Note: The Census Bureau determined that the average ratio of PCs per server was approximately 20:1.
With approximately 822,000 end users, and factoring in primary applications (67%), growth (17%), mirrored
backup (8%), and Disaster Recovery and Continuity of Operations (DR/CooP) (8%), the virtualized server
environment was estimated at 61,000 servers.
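The server estimate above can be reproduced approximately from the figures in this section: 822,000 end users at one desktop apiece, the 20:1 desktop-to-server ratio, and the 67/17/8/8 split. The sketch applies the split proportionally, so it lands near the 61,000-server figure rather than matching the individual counts exactly.

```python
END_USERS = 822_000
DESKTOPS_PER_SERVER = 20      # Census Bureau ratio cited above
DESKTOP_RATIO = 1.0           # one desktop assumed per end user, per the text

primary = END_USERS * DESKTOP_RATIO / DESKTOPS_PER_SERVER   # ~41,100 mission servers (67%)
growth = primary * (17 / 67)                                # growth servers, kept in proportion
mirrored = primary * (8 / 67)                               # mirrored critical-application servers
disaster_recovery = primary * (8 / 67)                      # DR/CooP servers

total_servers = primary + growth + mirrored + disaster_recovery
print(round(total_servers))   # ~61,000 servers in the virtualized environment
```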
Consolidation and migration of 20 sites to a primary site with backup
Figure 13 depicts the 20 sites that would be migrated to a single primary data center. The originating sites
would account for approximately 60,000 square feet of space. The Post Migration Footprint (PMF) of 87 racks
would occupy 2 condensed spaces at up to 45’ by 22’ each, or approximately 8000 square feet per primary and
backup center. These numbers represent a 15:1 reduction.
To reduce risk, Brocade recommends that you model the migration process at a test site. A low-risk fielding
approach should be used. Brocade recommends that a virtual server environment, coupled with the virtual network fabric architecture, be created as a consolidated test bed. The test bed would enable an
organization to begin modeling the application migration of selected non-real-time applications from their current
physical servers to the virtual server environment. This test bed could function as the primary gating process for
applications, with respect to fit, cost, performance, and overall feasibility.
The goal of the test bed would be to organize a methodology for deploying, upgrading, and patching specific
application types to work out any issues prior to their implementation in the target data center environment. As
applications are tested and approved for deployment, the condensed data center could be constructed, or the
target site could be retrofitted with the applicable technology to support a lower energy consumption posture.
The first step to migration would be to test the applications that reside at the 12 small sites (#1 in the Data
Center Consolidation Diagram) with an average of 5 racks of servers per site. When a successful migration
procedural environment is achieved, continue to migrate the applications at the remaining sites to the target
data center.
Figure 13. Migration of 20 sites to one modern data center configuration. (This example shows the difference in
the pre and post migration footprints. The 2006 solution uses 1,533 racks, which are transferred to a footprint
of 87 racks. A secondary data center that mirrors the functions of the first is also shown.)
VIRTUALIZATION SAVINGS
The energy reduction and consumption savings are enormous when viewed in contrast to their costs in 2006 USD. The draw of each of the 12 small sites was reduced from 206 kW to less than 9.9 kW; in aggregate, these 12 sites dropped from roughly 2.4 MW to less than 120 kW. The 4 sites with 18 full racks per site were reduced from 650 kW each to less than 22 kW per site, for a total of 2.6 MW that dropped to 88 kW. The two 88-rack sites, which consumed 2.9 MW per site, were reduced to a little over 95 kW each, for a change of ~5.8 MW to <200 kW. Consumption for the site with 350 racks was reduced from 11.3 MW to about 370 kW. Consumption for the site with 875 racks was reduced from 28 MW to approximately 863 kW.
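Rolling these per-site figures up gives the program-level totals that Table 2 reports next. The sketch below sums the draws stated in this section; rounding in the per-site numbers makes the results approximate, but they land close to the ~50 MW legacy and ~1.6 MW modern totals.

```python
# (site count, legacy draw in kW per site, post-migration draw in kW per site), from the text above
site_classes = [
    (12, 206, 9.9),     # small 5-rack sites
    (4, 650, 22),       # 18-rack sites
    (2, 2900, 95),      # 88-rack sites
    (1, 11300, 370),    # 350-rack site
    (1, 28000, 863),    # 875-rack site
]

legacy_kw = sum(count * before for count, before, _ in site_classes)
modern_kw = sum(count * after for count, _, after in site_classes)
print(round(legacy_kw / 1000, 1), "MW legacy vs", round(modern_kw / 1000, 2), "MW modern")
print("reduction ratio ~", round(legacy_kw / modern_kw), ": 1")   # roughly the 30:1 shown in Table 2
```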
Table 2. The data center modernization from the circa 2006 solution to the new model results in the
following reductions:
Category | Previous | New | Reduction Ratio
Total Consumption | 50 MW | 1.63 MW (HAC) | 30:1 reduction
Top-of-Rack Switches | 1,533 ToR switches | 87 ToR fabric switches | 17:1 reduction
Total Footprint | 60,000 sq. ft. | 4,000 sq. ft. | 15:1 reduction
Server Footprint | 60,000 physical servers | 4,350 physical servers | 14:1 reduction
Virtualization achieved a 14:1 virtual-server-to-physical-server consolidation. This ratio falls between the aggressive virtualization ratios used for non-real-time applications, as high as 29:1, and the 1:1 ratio used for mission applications.
Virtualization of the servers and their physical connectivity requirements, elimination of unnecessary protocols,
and the use of better density at lower consumption allowed the network to gain a 10:1 increase in access speed, with that bandwidth available to all resident applications on the blade.
Advancement in network technology for ToR and end-of-rack solutions enabled the fabric to provide for an
upgrade to the core, from 10 GbE to 40 GbE and eventually 100 GbE. The circa 2006 aggregation layer was
eliminated in its entirety and replaced by a data center fabric from the access to the core. The circa 2006 core
routing and switching platform was upgraded from n x 10 GbE to n x 100 GbE.
Consolidation reduced a once-acceptable 8:1 network access oversubscription rate to an industry-leading
1:1 ratio. Virtual services necessitate that the bottleneck in the data center be eliminated. The virtual server
and network fabric consolidation with 10 GbE at the edge reduced consumption from an average of 147W per
10 G interface to ~5.5W in this migration model.
Figure 14. Migration of the circa 2006 data center network architecture elements. (The 20 sites that have been
migrated to the Brocade fabric solution include 1,533 ToR switches, 104 aggregation layer switches, and 20
core switches. The total consumption of these network switches was based upon the actual draw under load, not
the regulatory-rated power of the unit. The result is that this architecture consumes 1.317 MW in direct draw. Considering the typical PUE of 2.84 for this period of deployment, the resulting total consumption is 3.7 MW.)
Figure 15. Architecture elements of the resulting Brocade Data Center Fabric. (The resulting migration uses one
fabric with two Brocade VDX 8770 Switches, along with 87 Brocade VDX ToR fabric switches, with two Brocade
MLXe-16 Routers (core). This network architecture consumes 90 kW of electricity, which factors in an achievable
PUE of 1.68. This example assumes the use of an HAC model.)
NETWORK SAVINGS
In terms of scale, the reduction in the network footprint is just as significant. The 1,533 ToR switches were reduced to 87 ToR units (17:1). Also, the bandwidth available to each application increased. The 104 aggregation switches of the circa 2006 network are converted to end-of-row Brocade VDX 8770 fabric switches. In addition, the elimination of the 20 sites allowed for the consolidation of 20 underperforming core units to 4 easily scalable and expandable core switches (Brocade MLX®).
Network energy consumption was reduced from 3.7 MW (including the circa 2006 PUE of 2.84) to 90 kW with a PUE of 1.68. This reduction of 37:1 coincides with an order of magnitude increase in network capacity.
The network layer of the new virtualized data center effectively experiences a higher level of traffic density and link
utilization. Depending upon the initial application physical connectivity, the new fabric allows for application access
from the server to the user to be increased from 1, 2, or 4 active interface connections to 2, 4, or 8 connections,
all active.
ASSUMPTIONS ASSOCIATED WITH REDUCED COSTS IN ELECTRICITY
Several reasonable assumptions were made in determining the size and scope of the migration, as well as the
calculation of consumption, on a per system basis. Brocade has also assumed that the typical server, network
switches, protocols used, and typical computing density assigned to the legacy (circa 2006) data center would
be the systems that were generally available from 2004-2006 during the design and implementation phases of
the migration candidate sites.
Other assumptions include:
•	HAC cooling loads are typically 39% of the IT load, versus the 2006 ratio of 1.07 watts of cooling per 1 W of IT equipment. (1)
•	The new data center facility consumption estimates were based upon a 29% load factor of the IT load vs.
a 77% load factor, which was typical of the circa 2006 legacy data center. (2)
•	The UPS models and their typical consumption were assumed to remain constant.
•	The compute platforms that were migrated to virtual servers used a 14:1 ratio. The mission-oriented solutions
typically required a 1:1 application to core ratio. A ratio of 29:1 was used for services compute platforms,
which is in line with practices from the U.S. Department of Energy (3) methodologies for one of their labs with
high-density virtual servers. By taking an average of the extremes, 1:1 and 29:1, the 14:1 ratio was adopted.
•	The Brocade migration model assumed the use of 8 cores per blade (2 x quad-core CPU at 3.2 GHz), as
well as the limit of 14 virtual MACs per blade, which results in an average of 1 MAC per virtual interface
to the fabric.
•	Mission-critical compute applications remain on existing hardware until a CPU core mapping per application
determination is made, or were migrated to a 1:1 core per application ratio.
•	Brocade adopted the 2006 PUE of 2.84 (1) versus an achievable 2013 PUE of 1.68 (2). Brocade found in
several studies that industry seemed to agree that a standard PUE in 2013 was 2.0, while a standard PUE
in the 2006 timeframe was around 3.0. Therefore, Brocade utilized the published consumption breakdown (1) as a
viable model for the legacy data center.
•	Brocade determined that not all enterprises would opt for the cost associated with state-of-the-art solutions
that provide for PUEs as low as 1.15. Even though some companies have since experienced even lower PUEs,
for example Microsoft and Google (4), Brocade took a conservative approach to the final data center model.
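As a sanity check on the assumptions above, the sketch below treats PUE as 1 (the IT load itself) plus the cooling load factor plus the facility load factor from the first two bullets. This decomposition is an inferred interpretation for illustration rather than a formula stated in the sources, but it reproduces the 2.84 and 1.68 PUE values used throughout this brief.

```python
# Minimal sketch: rebuild the legacy and HAC PUE values from the stated load factors.
# Assumed interpretation: PUE = 1.0 (IT) + cooling load factor + facility load factor.

def pue_from_load_factors(cooling_per_watt_it: float, facility_per_watt_it: float) -> float:
    """PUE implied by cooling and facility overheads expressed per watt of IT load."""
    return 1.0 + cooling_per_watt_it + facility_per_watt_it

# Circa 2006 legacy model: 1.07 W of cooling and 0.77 W of facility load per 1 W of IT gear.
legacy_pue = pue_from_load_factors(cooling_per_watt_it=1.07, facility_per_watt_it=0.77)

# HAC model: cooling at 39% of the IT load and facility overhead at 29% of the IT load.
hac_pue = pue_from_load_factors(cooling_per_watt_it=0.39, facility_per_watt_it=0.29)

print(f"Legacy PUE : {legacy_pue:.2f}")  # 2.84, matching the circa 2006 figure
print(f"HAC PUE    : {hac_pue:.2f}")     # 1.68, matching the achievable 2013 figure
```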
Notes: Information in this list came from these documents:
1. Emerson Electric, Energy Logic: "Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems."
2. Schneider Electric: "Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency R2," J. Niemann, K. Brown, V. Avelar.
3. Department of Energy: Leadership in Green IT (Brochure), Department of Energy Laboratories (DOE), S. Grant, NREL.
4. Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html
SUMMARY
For an enterprise of 822,000 users, the virtualized architecture with the Brocade Data Center Fabric can save
$36 million per year in energy consumption. This savings occurs when legacy data centers are migrated to
the modern data center architecture and its backup data center. (Without a backup center, the savings would be
$38 million per year.) Brocade delivers a world-class solution that integrates the network and storage into a
network fabric providing access to the virtual servers. All links to the fabric are operational, access links
achieve higher density and utilization, and fabric links and core switching are capable of switching or routing at wire speed.
Table 3. Costs for individual sites pre- and post-migration, in annual USD.* (The table shows that the circa 2006 data
center model would cost $40M per year using EIA national average kWh costs published in 2006. In the post-
virtualization deployment, coupled with a Brocade fabric network architecture upgrade, the consumption costs
vary between $1.8M (CAC model), $1.5M (HAC model), and $1M (state-of-the-art model), calculated at the 2013
national average costs per EIA.)
Migration Examples | Original Costs, Circa 2006 (PUE 2.84) | Hot Aisle Containment, 2013 (PUE 1.68) | Cold Aisle Containment, 2013 (PUE 1.98) | State of the Art, 2013 (PUE 1.15)
12 Sites at 5 Racks to 1/5th of a Rack | 171,295 | 9,951 | 11,726 | 6,815
4 Sites at 18 Racks to 1 Rack | 2,156,179 | 91,527 | 107,872 | 62,653
4 Sites at 88 Racks to 1 Rack | 4,840,286 | 190,715 | 224,772 | 130,549
1 Site at 350 Racks to 20 Racks | 9,418,877 | 370,423 | 436,570 | 253,563
4 Sites at 875 Racks to 20 Racks | 23,509,779 | 863,553 | 1,017,759 | 591,123
Totals, Each Option | 40,096,417* | 1,526,170 | 1,798,698 | 1,044,704
Notes:
The EIA advised that the national average cost per kWh in 2006 was 9.46 cents. At 2012 rates, the circa 2006 total would be more
than $45M.
EIA 2010 Energy Use All Sources: http://www.eia.gov/state/seds/seds-data-complete.cfm.
Brocade solutions can lower network consumption by up to 29% in comparison to competing solutions available today. In addition,
Brocade can reduce legacy network consumption with industry-leading virtualized solutions by a factor of
19 to 1. Using this model, a large enterprise could consolidate its legacy IT and legacy network solutions into a far smaller site
and save over $38 million USD in energy consumption each year.
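Because the three 2013 columns in Table 3 differ only in the assumed PUE, any one of them can be derived from another by scaling with the PUE ratio. The sketch below reproduces the CAC and state-of-the-art totals from the HAC total on that basis and computes the headline annual savings versus the circa 2006 baseline. The scaling relationship is inferred from the table itself rather than stated in the brief, and the function name is illustrative.

```python
# Minimal sketch: the 2013 cost columns in Table 3 scale with PUE, since the
# underlying IT load and electricity rate are held constant across the three models.

hac_total_usd = 1_526_170          # Hot Aisle Containment total, PUE 1.68 (Table 3)
circa_2006_total_usd = 40_096_417  # original costs total, PUE 2.84 (Table 3)

PUE_HAC, PUE_CAC, PUE_SOA = 1.68, 1.98, 1.15

def scale_by_pue(cost_usd: float, pue_from: float, pue_to: float) -> float:
    """Rescale an annual cost to a different PUE, holding IT load and rate fixed."""
    return cost_usd * pue_to / pue_from

cac_total = scale_by_pue(hac_total_usd, PUE_HAC, PUE_CAC)  # ~1,798,700 (table: 1,798,698)
soa_total = scale_by_pue(hac_total_usd, PUE_HAC, PUE_SOA)  # ~1,044,700 (table: 1,044,704)

annual_savings_hac = circa_2006_total_usd - hac_total_usd  # ~38.6M, the "over $38M" claim
print(f"CAC total (derived)          : ${cac_total:,.0f}")
print(f"State-of-the-art (derived)   : ${soa_total:,.0f}")
print(f"Annual savings, HAC vs. 2006 : ${annual_savings_hac:,.0f}")
```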
© 2013 Brocade Communications Systems, Inc. All Rights Reserved. 08/13 GA-TB-483-00
ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade,
OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless
Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems,
Inc., in the United States and/or in other countries. Other brands, products, or service names
mentioned may be trademarks of their respective owners.
Notice: This document is for informational purposes only and does not set forth any warranty,
expressed or implied, concerning any equipment, equipment feature, or service offered or to be
offered by Brocade. Brocade reserves the right to make changes to this document at any time,
without notice, and assumes no responsibility for its use. This informational document describes
features that may not be currently available. Contact a Brocade sales office for information on
feature and product availability. Export of technical data contained in this document may require an
export license from the United States government.
More Related Content

What's hot

White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_ReleaseWhite_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
ADITYA KUMAR
 
Impact of Dispersed Generation on Optimization of Power Exports
Impact of Dispersed Generation on Optimization of Power ExportsImpact of Dispersed Generation on Optimization of Power Exports
Impact of Dispersed Generation on Optimization of Power Exports
IJERA Editor
 
Grow a Greener Data Center
Grow a Greener Data CenterGrow a Greener Data Center
Grow a Greener Data Center
Jamie Shoup
 
Achieving Energy Proportionality In Server Clusters
Achieving Energy Proportionality In Server ClustersAchieving Energy Proportionality In Server Clusters
Achieving Energy Proportionality In Server Clusters
CSCJournals
 
Adding Psychological Factor in the Model of Electricity Consumption in Office...
Adding Psychological Factor in the Model of Electricity Consumption in Office...Adding Psychological Factor in the Model of Electricity Consumption in Office...
Adding Psychological Factor in the Model of Electricity Consumption in Office...
IJECEIAES
 
IRJET- Planning Issues in Grid Connected Solar Power System: An Overview
IRJET- Planning Issues in Grid Connected Solar Power System: An OverviewIRJET- Planning Issues in Grid Connected Solar Power System: An Overview
IRJET- Planning Issues in Grid Connected Solar Power System: An Overview
IRJET Journal
 
Economic Dispatch using Quantum Evolutionary Algorithm in Electrical Power S...
Economic Dispatch  using Quantum Evolutionary Algorithm in Electrical Power S...Economic Dispatch  using Quantum Evolutionary Algorithm in Electrical Power S...
Economic Dispatch using Quantum Evolutionary Algorithm in Electrical Power S...
IJECEIAES
 
greendatacenter
greendatacentergreendatacenter
greendatacenter
korzay
 

What's hot (8)

White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_ReleaseWhite_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
White_Paper_Green_BTS_NEC_MAIN_PAPER_150115_Release
 
Impact of Dispersed Generation on Optimization of Power Exports
Impact of Dispersed Generation on Optimization of Power ExportsImpact of Dispersed Generation on Optimization of Power Exports
Impact of Dispersed Generation on Optimization of Power Exports
 
Grow a Greener Data Center
Grow a Greener Data CenterGrow a Greener Data Center
Grow a Greener Data Center
 
Achieving Energy Proportionality In Server Clusters
Achieving Energy Proportionality In Server ClustersAchieving Energy Proportionality In Server Clusters
Achieving Energy Proportionality In Server Clusters
 
Adding Psychological Factor in the Model of Electricity Consumption in Office...
Adding Psychological Factor in the Model of Electricity Consumption in Office...Adding Psychological Factor in the Model of Electricity Consumption in Office...
Adding Psychological Factor in the Model of Electricity Consumption in Office...
 
IRJET- Planning Issues in Grid Connected Solar Power System: An Overview
IRJET- Planning Issues in Grid Connected Solar Power System: An OverviewIRJET- Planning Issues in Grid Connected Solar Power System: An Overview
IRJET- Planning Issues in Grid Connected Solar Power System: An Overview
 
Economic Dispatch using Quantum Evolutionary Algorithm in Electrical Power S...
Economic Dispatch  using Quantum Evolutionary Algorithm in Electrical Power S...Economic Dispatch  using Quantum Evolutionary Algorithm in Electrical Power S...
Economic Dispatch using Quantum Evolutionary Algorithm in Electrical Power S...
 
greendatacenter
greendatacentergreendatacenter
greendatacenter
 

Viewers also liked

SEACOTEX Factory Profile Updated
SEACOTEX Factory Profile UpdatedSEACOTEX Factory Profile Updated
SEACOTEX Factory Profile Updated
Manjur Murshid Masum
 
Pelmenit
Pelmenit Pelmenit
Pelmenit
jklkoti
 
Renrollment a fiduciary imperative
Renrollment a fiduciary imperativeRenrollment a fiduciary imperative
Renrollment a fiduciary imperative
Richard Davies
 
Clarissa Niederauer Leote da Silva
Clarissa Niederauer Leote da SilvaClarissa Niederauer Leote da Silva
Clarissa Niederauer Leote da Silva
Clarissa E Fabiano
 
Get started with dropbox
Get started with dropboxGet started with dropbox
Get started with dropbox
Ville Gjerulff
 
Dengue.fever
Dengue.feverDengue.fever
Dengue.fever
Ruhul Amin
 
irshadali_CV
irshadali_CVirshadali_CV
irshadali_CV
Irshad Ali
 
2015 Supervisor's Final Evaluation
2015 Supervisor's Final Evaluation2015 Supervisor's Final Evaluation
2015 Supervisor's Final Evaluation
Angela Li
 
Resume_Sneha Bhatia
Resume_Sneha BhatiaResume_Sneha Bhatia
Resume_Sneha Bhatia
Sneha Bhatia
 
book cover-10 laws
book cover-10 lawsbook cover-10 laws
book cover-10 laws
Doreen Hillier
 
Alok_ResumeHclJul
Alok_ResumeHclJulAlok_ResumeHclJul
Alok_ResumeHclJul
Alok Behera
 
CV Gerrit Fourie
CV Gerrit FourieCV Gerrit Fourie
CV Gerrit Fourie
Gerrit Fourie
 

Viewers also liked (13)

SEACOTEX Factory Profile Updated
SEACOTEX Factory Profile UpdatedSEACOTEX Factory Profile Updated
SEACOTEX Factory Profile Updated
 
Pelmenit
Pelmenit Pelmenit
Pelmenit
 
Renrollment a fiduciary imperative
Renrollment a fiduciary imperativeRenrollment a fiduciary imperative
Renrollment a fiduciary imperative
 
Curriculum Vitae
Curriculum VitaeCurriculum Vitae
Curriculum Vitae
 
Clarissa Niederauer Leote da Silva
Clarissa Niederauer Leote da SilvaClarissa Niederauer Leote da Silva
Clarissa Niederauer Leote da Silva
 
Get started with dropbox
Get started with dropboxGet started with dropbox
Get started with dropbox
 
Dengue.fever
Dengue.feverDengue.fever
Dengue.fever
 
irshadali_CV
irshadali_CVirshadali_CV
irshadali_CV
 
2015 Supervisor's Final Evaluation
2015 Supervisor's Final Evaluation2015 Supervisor's Final Evaluation
2015 Supervisor's Final Evaluation
 
Resume_Sneha Bhatia
Resume_Sneha BhatiaResume_Sneha Bhatia
Resume_Sneha Bhatia
 
book cover-10 laws
book cover-10 lawsbook cover-10 laws
book cover-10 laws
 
Alok_ResumeHclJul
Alok_ResumeHclJulAlok_ResumeHclJul
Alok_ResumeHclJul
 
CV Gerrit Fourie
CV Gerrit FourieCV Gerrit Fourie
CV Gerrit Fourie
 

Similar to FFM - Technical Brief - Migrating Your Data Center to Become Energy Efficient-with notes

Improve sustainability through energy insights
Improve sustainability through energy insights Improve sustainability through energy insights
Improve sustainability through energy insights
Principled Technologies
 
Case Studies in Highly-Energy Efficient Datacenters
Case Studies in Highly-Energy Efficient DatacentersCase Studies in Highly-Energy Efficient Datacenters
Case Studies in Highly-Energy Efficient Datacenters
Michael Searles
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul ppt
RAHUL KAUSHAL
 
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationCompu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
Dan Ephraim
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
Daniel Bodenski
 
Improve sustainability through energy insights - Infographic
Improve sustainability through energy insights - InfographicImprove sustainability through energy insights - Infographic
Improve sustainability through energy insights - Infographic
Principled Technologies
 
Comparison between SAM and RETScreen
Comparison between SAM and RETScreenComparison between SAM and RETScreen
Comparison between SAM and RETScreen
Ankit Thiranh
 
A Journey to Power Intelligent IT - Big Data Employed
A Journey to Power Intelligent IT - Big Data EmployedA Journey to Power Intelligent IT - Big Data Employed
A Journey to Power Intelligent IT - Big Data Employed
Mohamed Sohail
 
Green Computing
Green ComputingGreen Computing
Green Computing
Harsh Tamakuwala
 
Co mn-bus-data-center-study-template
Co mn-bus-data-center-study-templateCo mn-bus-data-center-study-template
Co mn-bus-data-center-study-template
server computer room cleaning
 
Southern Energy Efficiency Center Final Report
Southern Energy Efficiency Center Final ReportSouthern Energy Efficiency Center Final Report
Southern Energy Efficiency Center Final Report
Flanna489y
 
451 Dct Dc Sustainability Exec Overview
451 Dct Dc Sustainability Exec Overview451 Dct Dc Sustainability Exec Overview
451 Dct Dc Sustainability Exec Overview
Michael Searles
 
FY 2013 R&D REPORT January 6 2014 - Department of Energy
FY 2013 R&D REPORT January 6 2014 - Department of EnergyFY 2013 R&D REPORT January 6 2014 - Department of Energy
FY 2013 R&D REPORT January 6 2014 - Department of Energy
Lyle Birkey
 
Sumerian datacentre news
Sumerian datacentre newsSumerian datacentre news
Sumerian datacentre news
Sumerian
 
Sumerian datacentre news
Sumerian datacentre newsSumerian datacentre news
Sumerian datacentre news
Mark O'Donnell
 
Bloom Engergy
Bloom EngergyBloom Engergy
Bloom Engergy
Yi (Lisa) Lu
 
Facility Optimization
Facility OptimizationFacility Optimization
Facility Optimization
cwoodson
 
Datacenter ISO50001 and CoC
Datacenter ISO50001 and CoCDatacenter ISO50001 and CoC
Datacenter ISO50001 and CoC
Didier Monestes
 
The energy manager's guide to real time submetering data 1.16.14
The energy manager's guide to real time submetering data 1.16.14The energy manager's guide to real time submetering data 1.16.14
The energy manager's guide to real time submetering data 1.16.14
GridPoint
 
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICESENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
Charles Xu
 

Similar to FFM - Technical Brief - Migrating Your Data Center to Become Energy Efficient-with notes (20)

Improve sustainability through energy insights
Improve sustainability through energy insights Improve sustainability through energy insights
Improve sustainability through energy insights
 
Case Studies in Highly-Energy Efficient Datacenters
Case Studies in Highly-Energy Efficient DatacentersCase Studies in Highly-Energy Efficient Datacenters
Case Studies in Highly-Energy Efficient Datacenters
 
Green data center_rahul ppt
Green data center_rahul pptGreen data center_rahul ppt
Green data center_rahul ppt
 
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationCompu Dynamics White Paper - Essential Elements for Data Center Optimization
Compu Dynamics White Paper - Essential Elements for Data Center Optimization
 
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMSBLOG-POST_DATA CENTER INCENTIVE PROGRAMS
BLOG-POST_DATA CENTER INCENTIVE PROGRAMS
 
Improve sustainability through energy insights - Infographic
Improve sustainability through energy insights - InfographicImprove sustainability through energy insights - Infographic
Improve sustainability through energy insights - Infographic
 
Comparison between SAM and RETScreen
Comparison between SAM and RETScreenComparison between SAM and RETScreen
Comparison between SAM and RETScreen
 
A Journey to Power Intelligent IT - Big Data Employed
A Journey to Power Intelligent IT - Big Data EmployedA Journey to Power Intelligent IT - Big Data Employed
A Journey to Power Intelligent IT - Big Data Employed
 
Green Computing
Green ComputingGreen Computing
Green Computing
 
Co mn-bus-data-center-study-template
Co mn-bus-data-center-study-templateCo mn-bus-data-center-study-template
Co mn-bus-data-center-study-template
 
Southern Energy Efficiency Center Final Report
Southern Energy Efficiency Center Final ReportSouthern Energy Efficiency Center Final Report
Southern Energy Efficiency Center Final Report
 
451 Dct Dc Sustainability Exec Overview
451 Dct Dc Sustainability Exec Overview451 Dct Dc Sustainability Exec Overview
451 Dct Dc Sustainability Exec Overview
 
FY 2013 R&D REPORT January 6 2014 - Department of Energy
FY 2013 R&D REPORT January 6 2014 - Department of EnergyFY 2013 R&D REPORT January 6 2014 - Department of Energy
FY 2013 R&D REPORT January 6 2014 - Department of Energy
 
Sumerian datacentre news
Sumerian datacentre newsSumerian datacentre news
Sumerian datacentre news
 
Sumerian datacentre news
Sumerian datacentre newsSumerian datacentre news
Sumerian datacentre news
 
Bloom Engergy
Bloom EngergyBloom Engergy
Bloom Engergy
 
Facility Optimization
Facility OptimizationFacility Optimization
Facility Optimization
 
Datacenter ISO50001 and CoC
Datacenter ISO50001 and CoCDatacenter ISO50001 and CoC
Datacenter ISO50001 and CoC
 
The energy manager's guide to real time submetering data 1.16.14
The energy manager's guide to real time submetering data 1.16.14The energy manager's guide to real time submetering data 1.16.14
The energy manager's guide to real time submetering data 1.16.14
 
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICESENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
 

FFM - Technical Brief - Migrating Your Data Center to Become Energy Efficient-with notes

  • 1. DATA CENTER Migrating Your Data Center to Become Energy Efficient Providing your agency with a self-funded roadmap to energy efficiency.
  • 2. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 2 of 23 CONTENTS Introduction................................................................................................................................................................................3 Executive Summary..................................................................................................................................................................3 Government Drivers For Striving For Energy Efficiency........................................................................................................4 Where and how the energy is being consumed........................................................................................ 5 The Cascading Effect of a Single Watt of Consumption............................................................................ 6 What you are replacing and why............................................................................................................. 7 Modern data center architecture: Ethernet fabrics............................................................................. 8 Choose best practices or state-of-the-art ............................................................................................... 9 Best practices.............................................................................................................................. 10 Best Practices to Manage Data Center Energy Consumption.......................................................................................... 11 Hot Aisle Containment.................................................................................................................. 11 Cold Aisle Containment................................................................................................................. 11 Increasing the ambient temperature of the data center.................................................................... 12 Virtualization of the data center........................................................................................................... 12 Examples of application and server migration and their savings potential................................................ 14 Migrating 5 full racks to 1/5th of a rack......................................................................................... 14 Migrating 18 full racks to a single rack........................................................................................... 15 Migrating 88 full racks to 5 racks.................................................................................................. 17 Modeling the data center consolidation example................................................................................... 17 Consolidation and migration of 20 sites to a primary site with backup.............................................. 18 Virtualization Savings............................................................................................................................................................ 19 Network Savings.................................................................................................................................................................... 21 Assumptions Associated With Reduced Costs In Electricity............................................................................................ 
21 Summary................................................................................................................................................................................ 22
  • 3. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 3 of 23 INTRODUCTION Intel Corporation estimates that global server consumption is responsible for $27 billion U.S. dollars (USD) of energy consumption annually. This cost makes virtualization an attractive endeavor for any large organization. Large enterprises that have not begun implementing server virtualization may be struggling to make the case to do so because of a time-consuming effort to measure or simply to estimate existing consumption. This document attempts to present a realistic expectation for agencies that are in the process of performing due diligence in this area. EXECUTIVE SUMMARY Brocade strives to show government clients who want the benefits of an energy efficient data center that they can develop a self-funded solution. The primary questions that a government official would want answered are: • Can we use the energy savings from the migration of our current application computing platforms to finance an energy-efficient data center model? • Can we achieve the type of consumption reductions as called out by Presidential Executive Orders 13423 and 13514? • Do we have to rely upon private sector to manage and deliver the entire solution, or can an existing site be prepared now to achieve the same results as leading energy-efficient data center operators do? • Who has done this before, and what were the results? What can we expect? My organization has nearly 820,000 end computers. How much would we save by reducing energy in the data center? Note: 1. Executive Order (EO) 13423, “Strengthening Federal Environmental, Energy, and Transportation Management” 2. Executive Order (EO) 13514, “Federal Leadership in Environmental, Energy, and Economic Performance” The answers to these questions are straightforward ones. The government can test, evaluate, and prepare deployable virtualized applications and the supporting network to begin saving energy while benefiting from lowered operational costs. The EPA estimates that data centers are 100 to 200 times more energy intensive than standard office buildings. The potential energy savings provides the rationale to prioritize the migration of the data center to a more energy-efficient posture. Even further, similar strategies can be deployed for standard office buildings to achieve a smaller consumption footprint, which helps the agency to achieve the goals of the Presidential Directives. The government can manage the entire process, or use a combination of public sector firms (ESCO’s-energy Savings Companies), and develop its own best practices from the testing phase. These best practices maximize application performance and energy efficiency to achieve the resulting savings in infrastructure costs. Other government agencies have made significant strides in several areas to gain world-class energy efficiencies in their data centers. You can review and replicate their best practices for your government agency. When you migrate your data centers to an energy-efficient solution, your agency can save over $38 million (USD) per year in energy consumption costs alone.
  • 4. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 4 of 23 GOVERNMENT DRIVERS FOR STRIVING FOR ENERGY EFFICIENCY Executive Order (EO) 13423 (2007) and EO 13514 (2009) are two directives that require agencies to work toward several measurable government-wide green initiatives. EO 13423 contains these important tenets that enterprise IT planners and executives who manage infrastructure may address directly: 1. Energy Efficiency: Reduce energy intensity 30 percent by 2015, compared to an FY 2003 baseline. 2. Greenhouse Gases: Reduce greenhouse gas emissions through reduction of energy intensity 30 percent by 2015, compared to an FY 2003 baseline. EO 13514 mandates that at least 15% of existing federal buildings (and leases) meet Energy Efficiency Guideline principles by 2015. EO 13514 also mandates an annual progress being made toward 100 percent conformance of all federal buildings, with a goal that 100% of all new federal buildings achieve zero-net-energy by 2030. The Department of Energy (DoE) is a leader in the delivery and the development of best practices for energy consumption and carbon dioxide emissions. The DoE has successfully converted their data centers to more efficient profiles, and they have shared their results. Table 1, from the DoE, offers an educated look into the practical net results of data center modernization. Note: This information comes from this Department of Energy document: Department of Energy: Leadership in Green IT (Brochure), Department of Energy Laboratories (DOE), S. Grant NREL. Table 1. Sample of actual facilities and their respective power usage effectiveness (PUE). (The DoE and the EPA have several metrics that demonstrate that the best practices and current technology allow agencies to achieve the desired results. These agencies have measured and published their results. To see data center closures by Department, see the Federal Data Center initiative at http://explore.data.gov.) Sample Data Center PUE 2012 Comments/Context DoE Savannah River 2.77 4.0 Previously DoE NREL (National Renewable Energy Laboratory) 1.15 3.3 Previously EPA Denver 1.50 3.2 Previously DoE NERSC (National Energy Research Scientific Computing Center) 1.15 Previously 1.35 DoE Lawrence Livermore National Laboratory (451) 1.67 2.5 (37 year old building) SLAC National Accelerator Laboratory at Stanford 1.30 New INL Idaho National Laboratory High Performance Computing 1.10 1.4 (Weather Dependent) DoE Terra Scale Simulation Facility 1.32 to 1.34 New Google Data enter weighted average (March 2013) 1.13 1.09 Lowest site reported DoE PPPL Princeton Plasma Physics Laboratory 1.04 New Microsoft 1.22 New Chicago Facility World Class PUE 1.30 Below 2.0 before 2006 Brocade Data Center (Building 2, San Jose HQ) 1.30 Brocade Corporate Data Center Consolidation project Standard PUE 2.00 3.0 before 2006 Note: Power Usage Effectiveness (PUE) is the ratio, in a data center, of electrical power used by the servers and network (IT) in contrast to the total power delivered to the facility.
  • 5. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 5 of 23 Where and how the energy is being consumed Many studies discuss the consumption of energy and demonstrate the cost to house, power, and cool IT communications gear in the data center and the typical commercial building. The Energy Information Agency (EIA) has estimated that data centers, specifically the government data center population (estimated to number >3000 sites), consume approximately 4% of the electricity in the United States annually. The EIA has assessed that the total use of energy consumption in the United States was 29.601 quadrillion watts in 2007 (29.601 x 1015). Energy usage is expected to increase at least 2% per year overall. The data center consumption of 4% may seem like a low percentile; however, it is a low percentage of a very large number. The number is so large that it makes sense to examine the causes and costs that are associated with the electricity consumption of the government enterprise data center. Note: EIA 2010 Energy Use All Sources. http://www.eia.gov/state/seds/seds-data-complete.cfm. In 2006, the server consumption was measured at approximately 25% of total consumption. The Heating, Ventilation, and Air Conditioning (HVAC) levels necessary to cool the center also equaled 25%, which totaled 50%. Many government agencies have data centers in varying sizes. To illustrate the source of consumption, look at a sample 5,000 square foot data center. The site would be approximately 100 feet long by 50 feet wide. The site would likely have servers, uninterruptible power and backup services, building switchgear, power distribution units, and other support systems, such as HVAC. Figure 1 illustrates the approximate consumption percentage of a typical data center of this size. The actual breakout of the server total consumption is provided (40% total). Note: The U.S. government has roughly 500,000 buildings, which means 15% is ~75,000 buildings. If one large data center is made energy efficient, it is like making 100-200 buildings more energy efficient. Figure 1. Breakout of support/demand consumption of a typical 5000 square foot data center. (This figure demonstrates the consumption categories and their respective share of the total consumption for the facility. The server consumption is broken into three parts: processor, power supply (inefficiency), and other server components. The communication equipment is broken into two parts: storage and networking.)
  • 6. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 6 of 23 Figure 2. Typical 5,000 square consumption breakout by infrastructure category. (This figure demonstrates the consumption categories broken out by kilowatt hours consumed.) The Cascading Effect of a Single Watt of Consumption When a single watt of electricity is delivered to IT gear in a data center or a standard commercial building, a ripple effect occurs. The total energy intensity of the data center can be 100 to 200 times that of a standard building with network and some local IT services. However, the net result of delivering power to each site type is similar. In 2006, the PUE of the standard data center was 2.84, which means that 2.84 watts were expended to support 1 watt of IT application, storage, and network gear. Figure 3. Cascade effect of using a single watt of power. (Making a poor decision on IT consumption for network and storage has the same effect as a poor decision for computing platforms. The Brocade® network fabric can consume up to 28% less than the competing architectures.)
  • 7. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 7 of 23 The cost of a watt is not simply a fraction of a cent. In the data center, an IT element is powered on until it is replaced, which can be a long period of time. A single watt, constantly drawn, can cost nearly $24.35 over a 10-year period in a campus LAN, which is the typical lifespan of a legacy core switch. Similarly, every additional watt that is drawn in a data center since 2006 would have cost $20.34 if deployed during the period of 2006-2012. Note: The Energy Star Program of the U.S. Environmental Protection Agency estimates that servers and data centers alone use approximately 100 billion kWh of electricity, which represents an annual cost of about $7.4 billion. The EPA also estimates that without the implementation of sustainability measures in data centers, the United States may need to add 10 additional power plants to the grid just to keep up with the energy demands of these facilities. What you are replacing and why Several reasons drive the enterprise to replace the current data center IT infrastructure. Multiple advancements in different areas can enable a more efficient means of application service delivery. For example, energy utilization in all facets of IT products has been addressed via lower consumption, higher density, higher throughput, and smaller footprint. Another outcome of rethinking the energy efficiency of the data center is that outdated protocols, such as Spanning Tree Protocol (STP), are being removed. STP requires inactive links and has outlived its usefulness and relevance in the network. (See Figure 4.) Core Server Rack ISLs Inactive links AccessAggregation TrafficflowBlocking by STP Legacy Architecture Figure 4. Sample legacy architecture (circa 2006). (The legacy architecture is inflexible because it is deployed as three tiers, optimized for legacy for client/server applications. STP ensures its inherent inefficiency, which makes operating it complex and individual switch management makes it expensive.) The data center architecture of the 2006 era was inflexible. This architecture was typically deployed as three tiers and optimized for legacy client/server applications. This architecture was inefficient because it is dependent upon STP, which disables links to prevent loops and limits network utilization. This architecture was complex because additional protocol layers were needed to enable it to scale. This architecture was expensive to deploy, operate, and maintain. STP often caused network designers to provide duplicate systems, ports, risers (trunks), VLANs, bandwidth, optics, and engineering effort. The added expense of all this equipment and support were necessary to increase network availability and utilize risers that were empty due to STP blocking. This expense was overcome to a degree by delivering Layer 3 (routing) to the edge and aggregation layers. Brocade has selected the use of Transparent Interconnection of Lots of Links (TRILL), an IETF standard, to overcome this problem completely.
  • 8. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 8 of 23 In addition, STP caused several effects on the use of the single server access to the network. STP caused port blocking, physical port oversubscription at the access, edge, and aggregation layers, slower convergence, and wasted CPU wait cycles. STP also lacked deterministic treatment of network payload to the core. This architecture also caused separate network and engineering resources to accommodate real-time traffic models. The networking bottleneck issues were made worse by the additional backend switched storage tier that connected the server layer with mounted disk arrays and network storage. This tier supported substantial traffic flows in the data center between network and storage servers. Also, more efficient fabric technologies are increasingly being implemented to accomodate the new trends of data transmission behavior. The data center traffic model has evolved from a mostly northbound-southbound model to an 80/20 east-west traffic model, which means that 80% of server traffic can be attributed to server-to-server application traffic flows. Note: Information about the evolution to an east-to-west traffic model comes from this Gartner document: “Use Top-of-Rack Switching for I/O Virtualization and Convergence; the 80/20 Benefits Rule Applies”. Modern data center architecture: Ethernet fabrics Brocade has designed a data center fabric architecture that resolves many of the problems of the legacy architecture. The Brocade Ethernet fabric architecture eliminates STP, which enables all server access ports to operate at the access layer and enables all fabric uplinks to the core switching platform to remain active. The Ethernet fabric architecture allows for a two-tier architecture that increases the access to network edge from a 50% blocking model at n × 1 Gigabit Ethernet (GbE), to a 1:1 access model at n × 1 GbE or n × 10 GbE. Risers from the edge can now increase their utilization from the typical 6–15% use of interconnects to much greater rates of utilization (50 to even >90%). When utilization is increased, end user to application wait times are reduced or eliminated. This architecture enables Ethernet connectivity and Fibre Channel over Ethernet (FCoE) storage access by applications, thus collapsing the backend storage network into the Ethernet fabric. The Ethernet fabric data center switching architecture eliminates unneeded duplication and enables all ports to pass traffic to the data center switching platforms. Ports can pass traffic northbound to the core or east-west bound to the storage layer. The combination of the virtual network layer delivered by the fabric and the virtual server layer in the application computing layer delivers a highly utilized, highly scaled solution that decreases complexity, capital outlays, and operational costs. Figure 5. Efficient Ethernet Fabric Architecture for the Data Center. (Ethernet fabric architecture topologies are optimized for east-west traffic patterns and virtualized applications. These architectures are efficient because all links in the fabric are active with Layer 1/2/3 multipathing. These architectures are scalable because they are flat to the edge. Customers receive the benefit of converged Ethernet and storage. The architectures create simplicity because the entire fabric behaves like a logical switch.)
  • 9. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 9 of 23 Choose best practices or state-of-the-art If you adopt best practices, you could achieve your energy usage reduction goals with a realistic initial investment. If you adopt state-of-the-art technologies, the initial investment to achieve the desired results may be much higher. In 2006, the Green Grid looked at these potential of energy saving strategies. They estimated that by using best practices or state-of-the-art technology, that overall consumption would measurably drop across all commercial sectors. Historically, energy costs increase by 2-3% per year, and energy use increases 2% per year, so quantifiable action must be taken. The study showed that using better operational procedures and policies, using state-of-the-art technology, and using industry best practices contribute to an overall drop in consumption. Note: This information about PUE estimation and calculation comes from the Green Grid document at this link: http://www.thegreengrid.org/~/media/WhitePapers/WP49-PUE%20A%20Comprehensive%20Examination%20of%20the%20Metric_v6.pdf?lang=en Figure 6. Using best practices and state-of-the-art technology controls consumption. (Best practices, such as HAC/CAC, high-efficiency CRAH units, low-consumption servers with high-density cores in CPUs make 1.6 PUEs achievable. Solid State Drives on servers and storage arrays could provide a lower return on investment depending upon current acquisition cost. Agencies should carefully weigh the benefits of some state-of-the-art options. For more information about best practices and state-of-the-art technology, see www.thegreengrid.org.) Notes: Hot Aisle Containment (HAC). Cold Aisle Containment (CAC). The terms Computer Room Air Handler (CRAH) unit, Computer Room Air Conditioning (CRAC) unit, and Air-Handling Unit (AHU) are used inter-changeably. For more information about PUE estimation and calculation, search for Green Grid White Paper #49 at this site: http://www.thegreengrid.org/en/library-and-tools.aspx?category=All&range=Entire%20Archive&type=White%20Paper&lang=en&paging=All
  • 10. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 10 of 23 Best practices With respect to enterprise application delivery, the strategy of lowering or changing the consumption curve will become outdated while demand continues to climb. Many options are available with varying results. Many data center experts recommend that a CAC system be utilized to provide efficient cooling to the server farm and network gear within the racks of a data center. On the other hand, an HAC (Hot Aisle Containment) system typically achieves a 15% higher efficiency level with a cooler ambient environment for data center workers. Hot air return plenums direct the hot air back through the cooling system, which could include ground cooling, chilled water systems, or systems that use refrigerant. Note: This information about a HAC system comes from this article from Schneider Electric: “Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency R2”, J.Niemann, K. Brown, V. Avelar Examples of best practices: • High-efficiency Computer Room Air Handler. If you upgrade the fan system of a CRAH unit to one with a variable-speed fan, energy costs can be reduced by up to 16% to 27%, under identical conditions. • Use mid- to high-density Virtual Machines (VMs). Per studies offered by the Green Grid, the optimal power supply load level is typically in the mid-range of its performance curve: around 40% to 60%. Typical server utilization shows highly inefficient use of power at low levels (below 30% load), and slightly less efficiency when operating at high capacity loads as a result of using high-density VMs (above 60% load). • Higher-performing, lower-consumption servers. Current server technology includes many efficiency features, such as large drives that require less consumption, solid-state technology, energy-efficient CPUs, high-speed internal buses, and shared power units with high-efficiency power supplies under load. • Higher-performing, lower-consumption network gear. With the full-scale adoption of 10 GbE and 40 GbE edge interconnects and n x 100 GbE switching from the edge to the core, network fabrics in data centers are poised to unlock the bottleneck that previously existed in the server farm. • Low-tech solutions. Install blank plates to maximize the control of air flow. Examples of state-of-the-art: • High-speed and more efficient storage link. Brocade Generation 5 Fibre Channel at rates such as 16 G are the current performance-leading solutions in the industry. • Semiconductor manufacturing processes. In 2006, the typical data center was outfitted with devices that utilized 130 nm, 90 nm, or 65 nm technology at best. The semiconductor chips that were embedded within the Common Off the Shelf (COTS) systems, such as switches or servers, required more power to operate. Now that 45 nm and 32 nm chipsets have been introduced into the manufacturing process, a lower energy footprint can be achieved by adopting state-of-the-art servers, CPUs, and network and storage equipment. With the advent of 22 nm (2012), the servers of the future will operate with even lower footprint CPUs and interface circuits. Intel estimates that they can achieve performance gains with consistent load and do it at half the power of 32 nm chip sets. (“Intel’s Revolutionary 22 nm Transistor Technology” M. Bohr, K. Mistry,) • Solid State Drives (SSDs) for servers. 
Whether the choice is to use a high–end, enterprise-class Hard Disk Drive (HDD), or the latest, best-performing SSD, the challenge is to achieve a balance between performance, consumption, and density.
  • 11. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 11 of 23 BEST PRACTICES TO MANAGE DATA CENTER ENERGY CONSUMPTION Hot Aisle Containment Hot Aisle Containment (HAC) ensures that the cool air passes through the front of the IT equipment rack from a cooling source that consists of ambient air at lower temperatures or cool air through perforated tiles at the face of the cabinet or rack. The air is forced through the IT gear as it is pulled across the face of the equipment by the internal fans. The fans direct the air across the motherboard and internal components, and the air exits the rear of the rack to a contained area between the aisles. This area captures the higher-temperature air and directs it to a plenum return. The net result is that the ambient air in the data center can be kept at a higher temperature and the thermostat on the CRAH unit can be kept at a level that ensures that the unit does not get forced on. The key to successful implementation of HAC is to select IT components the can withstand higher temperatures, therefore saving energy, while continuing to operate normally. Figure 7. Hot Aisle Containment. (HAC that complies with OSHA Standards can reduce PUE by reducing chiller consumption, for example, via increased cold water supply. The room can still be maintained at 75 degrees Fahrenheit and the hot aisle could be up to 100 degrees. In some instances, the heat can rise to 117 degrees F.) Cold Aisle Containment Another method of air management uses Cold Aisle Containment (CAC). In CAC, cold air is brought into the room through the perforated floor tiles across the air exhaust side of the rack and mixed with ambient air, which is pulled through the chillers that are mounted above each rack. In this implementation, the cold air is contained in between the rack aisles and it is pulled through the front of the IT equipment and run across the internal components. The air exits the rear of the racked equipment. It is contained in the cold aisle by doors or heavy plastic curtains to prevent escape of the cold air to the side of the aisle. The exiting air from the rear of the IT rack is intermixed with the ambient air and the chilled air coming up from the floor tiles and lowers the room temperature to within OSHA standards. The ambient room temperature may be kept at temperatures up to 79 degrees Wet-Bulb Globe Temperature (WBGT) (26 degrees Celsius). The chillers turn on more often within the room as a result of the higher temperature ambient air, and the PUE of the data center profile is raised about 15% higher than that of the HAC.
  • 12. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 12 of 23 Figure 8. Cold Aisle Containment. (A WBGT index of greater than 26 degrees Celsius (79 degrees F) is considered a “hot environment.” If the WBGT measures less than 26 degrees Celsius everywhere and at all times, then the workplace is relatively safe for most workers. The room may still be maintained at 75–79 degrees F and the cold aisle would be as low as 64–65 degrees F.) Increasing the ambient temperature of the data center Recent studies suggest that if the ambient temperature of the data center is allowed to increase somewhat because the cooling is not so intense, additional savings can be achieved. The network and server equipment that is selected must meet the specifications of running normally under more extreme conditions. Note: This information comes from the Intel document “How High Temperature Data Centers and Intel Technologies Decrease Operating Costs” at this link: http://www.intel.com/content/www/us/en/data-center-efficiency/data-center-efficiency-gitong-case-study.html Virtualization of the data center Many benefits occur when you introduce virtual servers into your application data center and software service delivery strategy. For government purposes, applications typically are in one of two categories, mission (real- time or near real-time) and services (non-real-time). For this discussion, it is recommended that the platform migration targets non-real-time applications, such as Exchange® , SharePoint® , or other applications that do not require real-time performance. Additionally, many mission-oriented applications, for example unified communications, must not be supported on virtual platforms in a high–density, virtual server environment by the OEM or their subcontracted software suppliers. Much of this has more to do with the testing phases their products go through to achieve general availability. When you move non-real-time services to a virtual environment, you can achieve benefits like these: • Lower power and distribution load • Smaller Uninterrupted Power Supplies (UPSs) • Faster OS and application patching • Controlled upgrade processes • More efficient Information Assurance (IA) activity • Optimized server utilization • Reduced weight (as well as lower shipping costs) • Increased mobility • Lower ongoing operational and maintenance costs
  • 13. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 13 of 23 Based upon the enterprise mission and the expected performance of an application, the typical compute ratio for virtual servers can be reduced from 1:1 to 14:1. The U.S. Department of Energy has a 29:1 ratio for non-mission applications. Brocade has determined that a 14:1 density of virtual servers per real server is a reasonable number to use to illustrate the energy efficiency of using virtual servers. The platform used to demonstrate this savings benefits was also limited by the number of virtual media access control addresses (MACs) that the system could support (14) in each blade center chassis and the embedded compute blades. Figure 9. Virtual servers and CPU cores. (The server can be virtualized with specific processor assignment to address application performance, which enables real-time and non-real-time applications for virtualization. In this example, 14 servers were modeled to 2 quad-core CPUs. Three servers consume 100% of a single core and other applications receive a partial core. Though 14 nm technology promises higher density cores to CPU in the near future, this document explores only what was currently available at a competitive cost.) In the illustrated examples, Brocade determined that the compute space required for 196 virtual servers running on 14 separate physical blades would reduce the typical footprint of 5 full racks to between 7 and 9 Rack Units (RUs) of space. In a typical server, the unit is running most of the 168 hours in a week at less than 20% utilization. Many factors reduce speed, such as network access speed of the system interface and buses between CPU cores. Also, many dual or quad CPUs have the cores that are connected to the same die, which eliminates bus delay. Most blade center systems allow the administrator to view all the CPUs in the system and assign CPU cycles to applications to gain the desired work density. When you assign the work cycles for the CPU to applications, some applications can be given a higher computational share of the processor pool while others have only reduced access to CPU cycles. One application may require the assignment of only one half of a single processor core embedded within the blade, while another may require 1.5 times the number of cycles that a 3.2 GHz core would provide. As a result, the application is assigned one and a half processor cores from the pool. Note: For standard 19” racks, the holes on the mounting flange are grouped in threes. This three-hole group is defined as a Rack Unit (RU) or sometimes a “U”. 1U occupies 1.75” (44.45 mm) of vertical space. Enabling the virtual server solves a part of the overall problem. Additional steps to increase the workload to the server require that you eliminate the bottleneck to the server. In the circa 2006 data center, the workload is limited because one of the two ports is blocked and in standby mode. The server configuration, used in this example, is also running a single Gigabit Ethernet port. To alleviate this bottleneck issue, increase the network interface speed of a server from n × 1 GbE to n × 10 GbE. The port speeds increase tenfold, and the port blocking introduced by STP is now eliminated by use of a network fabric. When we increase this speed of the interface to 10 GbE, with all the network ports that are available, we provide up to 20 gigabits of bandwidth.
  • 14. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 14 of 23 When the virtual server density is increased, the physical server density is decreased. Due to the savings of the physical hardware and the eventual energy savings of the high-density platform, it is recommended that the original application servers be turned off after the application is moved. As a result, the power consumption of the original 196 servers that occupy several racks is reduced from an average consumption of 200 kWh to between 9 kWh (a Brocade Fabric) and 14 kWh (competing fabric). (The range of consumption on the new solution depends on the workload of the system and the choice of network fabric solution.) Examples of application and server migration and their savings potential Figure 10. Savings Potential of Server Migration. (This figure shows that five racks of servers and top of rack switches can be condensed to a footprint that is less than one third of a 42 RU rack. The 196 servers deployed in the circa 2006 footprint of five racks consume 206 kWh of energy using the server consumption under load measurement, not the regulatory power of the unit itself. The same model was used to estimate the consumption of the IT gear in the proposed virtualized deployment estimated to be between 8 and 14 kWh.) Migrating 5 full racks to 1/5th of a rack In the original implementation, five full racks of IT computing and networking equipment are shown. The original circa 2006 equipment has 2 × 1 GbE network links per server, typically to 2 Top-of-Rack (ToR) units such as Cisco Catalyst 3750E-48 TD GbE switches connected via 10 GbE risers to two Cisco Catalyst 6509 switches running 4 × 16 port 10 GbE cards with 2 × 6000 WAC power supply units. The network consumption for these units is estimated as ~19 kWh. The CPU density of the rack included 2 CPUs per server, with an average of 39 to 40 servers per rack. The rack also includes 4 × 10 GbE risers to the aggregation layer network per rack (200 Gbps total with 100 Gbps blocked due to STP). This original configuration represents a 4 to 1 bottleneck, or an average of only 500 Mbps per server bandwidth, without excessive over engineering of VLANs or using Layer 3 protocols. The IBM 2-port Model 5437 10 GbE Converged Network Adaptor (OEM by Brocade) is embedded within the blade center chassis for FCoE capability. The Converged Network Adaptor enables the Ethernet fabric and direct access to the storage area network, which flattens the architecture further and eliminates a tier within the standard data center. Note: A Top-of-Rack (ToR) switch is a small port count switch that sits at or near the top of a Telco rack in data centers or co-location facilities. The EIA reports that the U.S. average cost of electricity when this solution was implemented was 9.46 cents per kWh. The annual cost in circa 2006 U.S. dollars (USD) for electricity to operate this solution is approximately $171,282. The average consumption per server is estimated at 275W per unit, and the two network ToR switches are estimated to draw 161W each when all cables are plugged in with 10 GbE risers in operation.
  • 15. DATA CENTER TECHNICAL BRIEF Migrating Your Data Center to Become Energy Efficient 15 of 23 For this solution, the annual computing platform energy cost is $44,667, the network energy cost is $15,644, and the facility and infrastructure costs to support it are $56,039/yr. The HVAC cost to cool this solution is estimated to be $77,872. These numbers are consistent with a facility that has a PUE of 2.84. Using available technology from Brocade, standard IT blade center technology, and the implementation of virtual servers, the associated footprint is reduced to approximately one-fifth of a rack. This new solution would include 14 physical blade servers within a chassis solution with 2 × 10 GbE links per server, 196 virtual servers using 112 CPU cores (2 × quad core CPUs per blade), and 280 Gbps of direct network fabric access. The average bandwidth that is accessible by the virtual server application is between 1 GbE to 20 GbE of network bandwidth (2:1 to 40:1 increase over the aforementioned circa 2006 solution). The network fabric solution from Brocade utilizes 28% less energy (cost) than the competing solution today. Using a CAC model of 1.98 PUE, or a HAC model of 1.68, the total cost of this solution including the blade center is approximately $9,930/year HAC to $11,714/year CAC. If this solution is delivered in a state-of-the-art container, a PUE of 1.15 4 or lower would cost approximately $6,803/year. Note: The information about the solution that achieves a PUE of 1.15 is from this Google document: Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html Figure 11. Migration of a small-medium data center with 18 racks of IT into a 42 U rack. (Using the same methodology as in Figure 11, the consumption of the 18 racks of servers in the 2006 architecture is estimated to be approximately 650 kWh. The replacement architecture is expected to consume between 15 kWh and 27 kWh while increasing the performance from the access to the core of the network.) Migrating 18 full racks to a single rack In this migration model, it was determined that up to 18 full racks of circa 2006 computing and network data center equipment would be migrated into less than a single rack. The network and blade center slots remain available for nominal expansion. Here, we are moving the capability of 700 physicals servers and the corresponding 1,400 network interfaces connected to the access layer to a footprint that takes up only one full rack of space.
Figure 11. Migration of a small-medium data center with 18 racks of IT into a 42 U rack. (Using the same methodology as in Figure 10, the consumption of the 18 racks of servers in the 2006 architecture is estimated to be approximately 650 kWh. The replacement architecture is expected to consume between 15 kWh and 27 kWh while increasing performance from the access layer to the core of the network.)

Migrating 18 full racks to a single rack

In this migration model, up to 18 full racks of circa 2006 computing and network data center equipment are migrated into less than a single rack, with network and blade center slots remaining available for nominal expansion. The capability of 700 physical servers, and the corresponding 1,400 network interfaces connected to the access layer, is moved to a footprint that occupies only one full rack of space.

The original 18-rack deployment had 2 × Cisco Catalyst 3750E-48TD switches at the top of each rack, aggregating risers to 2 × Cisco Catalyst 6509 switches with 7 × 16-port 10 GbE modules using 2 × 6,000 W AC power supply units. The 18 racks housed 700 physical servers, an average of 39 servers per physical rack. The resulting IT load of the original servers was 192 kWh, and the load of the top-of-rack, aggregation, and core switches was 36 kWh. The infrastructure load of the facility (176 kWh) and the HVAC load (245 kWh) added another 421 kWh, bringing the total to approximately 650 kWh with an annualized cost of $539,044 per year. A PUE of 2.84 was used to reflect the typical PUE achieved in the 2006 time period, and the 2006 national average cost of 9.46 cents per kWh is reflected in this calculation.

The migration platform used for this comparison was a rack with four typical blade center chassis with up to 14 blades per system, each blade equipped with 2 × 3.2 GHz quad-core CPUs. The compute platform in the circa 2006 solution drew 192 kWh, versus approximately 10.2 kWh (10,240 W) for the new solution. The ToR switches of the original data center are replaced with a pair of Brocade VDX® 6720-60 Switches (ToR fabric switches) connected to a pair of Brocade VDX 8770 Switches (fabric chassis-based switches), which in turn connect to a Brocade MLXe Core Router. This upgrade reduced the network draw from 36,540 W (about 36.5 kWh) to 4,826 W (about 4.8 kWh), which reduced annual energy costs from $30,280 at 2006 rates to $4,362 at 2012 rates.

By migrating the physical servers and legacy network products to a currently available virtualized environment, the annual costs are reduced to between $22,882 (HAC model) and $26,968 (CAC model) when calculated at the EIA 2012 commercial rate of 10.32 cents per kWh. When the Brocade Data Center Fabric is compared to the circa 2006 solution, the reduction in network draw is roughly 32 kWh, or ~$26,000 per year (USD). The total savings in energy alone for the virtual network and virtual computing platform is greater than $500,000 per year. When the Brocade fabric solution is compared to currently available competing products, the savings in network costs is 27 percent.

Also significant is that the 18 full racks are reduced to a single-rack footprint. This scenario collapses an entire small data center into a single rack at a new location. To gain the economy of the HAC or CAC model, this IT computing and network rack would benefit from being migrated into a larger data center setting where several other racks of gear are already in place and operating in a CAC/HAC environment.
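The network cost figures above can be cross-checked with a short calculation. This sketch assumes each dollar figure is simply the stated draw multiplied by the hours in a year and the EIA rate for the respective year, with no PUE multiplier applied to these particular network numbers.

```python
HOURS_PER_YEAR = 8760

def annual_cost(draw_watts, rate_usd_per_kwh):
    """Yearly electricity cost for a constant draw, before any PUE overhead."""
    return (draw_watts / 1000.0) * HOURS_PER_YEAR * rate_usd_per_kwh

legacy_network_w = 36_540  # circa 2006 ToR, aggregation, and core switches
fabric_network_w = 4_826   # Brocade VDX/MLXe replacement

print(f"Legacy network: ${annual_cost(legacy_network_w, 0.0946):,.0f}/yr")  # ~$30,281 at 2006 rates
print(f"Fabric network: ${annual_cost(fabric_network_w, 0.1032):,.0f}/yr")  # ~$4,363 at 2012 rates
```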
Figure 12. Migration of a medium-large data center with 88 racks of IT into 5 × 42 U racks. (Using the same methodology as in Figures 10 and 11, the consumption of the 88 racks of servers in the circa 2006 solution is estimated to be approximately 2.9 MWh, while the replacement architecture is expected to consume between 65 and 112 kWh while providing increases in performance from the access layer to the core of the network.)

Migrating 88 full racks to 5 racks

A migration from a medium-sized data center of 11 rack rows with 8 racks per row would yield similar results, on a greater scale. To be effective in an HAC or CAC environment, two data centers of this size would need to be migrated into a single site just to create the two rack rows required for a CAC/HAC configuration. The resulting 5-rack density is better suited to a migration into a containerized solution, where PUEs have been as low as 1.11 to 1.2.

In this scenario, the original 88 racks of IT draw roughly 1 MWh directly, including network and computing. With a PUE of 2.94, the total facility consumption is approximately 2.9 MWh, which results in an annual cost of $2.4M per year using the EIA 2006 rate of 9.46 cents per kWh. After the 88 racks are migrated to a properly engineered HAC, CAC, or containerized facility, the energy cost of the solution would be between $66,275 and $112,386 USD per year at 2012 commercial rates. This migration scenario would result in nearly $2.3M per year in cost savings due to energy reduction.

Note: Using the methodology shown in Figures 10 through 12, data centers with 350 and 875 racks are consolidated into 20- and 50-rack footprints, respectively. The results are included as part of the 20-site consolidation and migration, which is discussed later in this document.

Modeling the data center consolidation example

To model an agency- or department-level data center migration, Brocade estimated the size of the data centers in terms of space and the size of the customer base, as well as the IT equipment needed to service the disparately located applications across the agency. To do this, Brocade reviewed studies that offer reasonable estimates of servers in use per desktop, minimum room sizes for various sized data centers, and the population density of the agency being modeled.

Number and minimum size per site: Brocade estimates that a very large enterprise would have up to 20 sites with varying levels and densities of application computing deployed. In the migration model, Brocade used a sample of 12 sites with 5 full racks of data center application IT equipment (60 racks total), 4 sites with 18 full racks (72 racks total), 2 sites with 88 racks (176 racks total), a single large site with up to 350 racks, and a single site with 875 racks. These sites contain a total of 1,533 racks that occupy up to 18,000 square feet of space, as tallied in the sketch below.
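As a quick sanity check on the site mix described above, the rack counts can be tallied; the per-rack floor space used at the end is an assumed nominal figure, not a number stated in the brief.

```python
# Rack inventory across the 20 candidate sites, as described above: (site type, sites, racks per site).
site_mix = [
    ("5-rack sites",   12,   5),
    ("18-rack sites",   4,  18),
    ("88-rack sites",   2,  88),
    ("350-rack site",   1, 350),
    ("875-rack site",   1, 875),
]

total_sites = sum(sites for _, sites, _ in site_mix)
total_racks = sum(sites * racks for _, sites, racks in site_mix)
print(f"{total_sites} sites, {total_racks:,} racks")  # 20 sites, 1,533 racks

# At a nominal ~12 sq ft per rack (an assumed figure, not stated in the brief),
# 1,533 racks work out to roughly 18,000 sq ft, in line with the figure quoted above.
print(f"~{total_racks * 12:,} sq ft of rack footprint")
```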
Number of desktops: Brocade used the example of a military department consisting of 335,000 active duty personnel, 185,000 civilian personnel, 72,000 reserve personnel, and 230,000 contractor personnel, for a total of approximately 822,000 end users. Brocade used this population total, and a 1:1 desktop-to-user ratio, to derive a raw estimate for server counts. A study performed by the Census Bureau indicates that the ratio varies by vertical, such as education, finance, healthcare, utilities, transportation, and services. With approximately 822,000 end users, approximately 41,100 servers would support primary data center operations; purpose-built military applications and programs may push this figure even higher. The migration example accounts for 67% mission servers (41,100), 17% growth servers (10,275), 8% mirrored critical application servers (4,889 secondary servers), and 8% disaster recovery servers (4,888).

Server to desktop ratio: The number of servers per desktop computer depends upon the vertical being studied. The U.S. Census Bureau estimates that the government desktop-to-employee ratio is approximately 1:1.48. The net of the statistics in the study is that there is approximately 1 server for every 20 desktops: of the 4.7 million non-retail firms responding to the study (out of roughly 6 million firms), there were 43 million desktops and 2.1 million servers supporting operations, which works out to a roughly 20:1 desktop-to-server ratio.

Note: The Census Bureau determined that the average ratio of PCs per server was approximately 20:1.

With approximately 822,000 end users, and factoring in primary applications (67%), growth (17%), mirrored backup (8%), and Disaster Recovery and Continuity of Operations (DR/CooP) (8%), the virtualized server environment was estimated at 61,000 servers.
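A minimal sketch of this server estimate, using only the figures quoted above. The grossing-up step, dividing the mission-server count by its 67 percent share, is an assumption about how the category counts were derived, and rounding accounts for the small differences from the brief's numbers.

```python
end_users = 335_000 + 185_000 + 72_000 + 230_000  # ~822,000 users, assumed 1:1 with desktops
desktops_per_server = 20                          # Census Bureau ratio cited in the text

mission_servers = end_users // desktops_per_server  # ~41,100 primary mission servers
total_servers = round(mission_servers / 0.67)       # mission servers are 67% of the overall estimate

shares = {"mission": 0.67, "growth": 0.17, "mirrored": 0.08, "DR/CooP": 0.08}
for category, share in shares.items():
    print(f"{category:>9}: {round(total_servers * share):,}")
print(f"    total: {total_servers:,}")              # ~61,000 servers in the virtualized estimate
```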
Consolidation and migration of 20 sites to a primary site with backup

Figure 13 depicts the 20 sites that would be migrated to a single primary data center. The originating sites account for approximately 60,000 square feet of space. The Post Migration Footprint (PMF) of 87 racks would occupy two condensed spaces of up to 45 feet by 22 feet each, or approximately 2,000 square feet for each of the primary and backup centers (about 4,000 square feet in total). These numbers represent a 15:1 reduction in floor space.

To reduce risk, Brocade recommends modeling the migration process at a test site and using a low-risk fielding approach: create a consolidated test bed that couples a virtual server environment with the virtual network fabric architecture. The test bed enables an organization to begin modeling the migration of selected non-real-time applications from their current physical servers to the virtual server environment, and it can function as the primary gating process for applications with respect to fit, cost, performance, and overall feasibility. The goal of the test bed is to establish a methodology for deploying, upgrading, and patching specific application types, so that any issues are worked out prior to implementation in the target data center environment.

As applications are tested and approved for deployment, the condensed data center can be constructed, or the target site can be retrofitted with the applicable technology to support a lower energy consumption posture. The first step of the migration is to test the applications that reside at the 12 small sites (#1 in the Data Center Consolidation Diagram), which average 5 racks of servers per site. Once a successful migration procedure has been established, continue migrating the applications at the remaining sites to the target data center.

Figure 13. Migration of 20 sites to one modern data center configuration. (This example shows the difference between the pre- and post-migration footprints. The 2006 solution uses 1,533 racks, which are consolidated into a footprint of 87 racks. A secondary data center that mirrors the functions of the first is also shown.)
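A minimal sketch of the floor-space arithmetic, assuming each of the primary and backup centers uses two of the 45-foot by 22-foot spaces described above; the result is consistent with the 15:1 footprint reduction shown in Table 2.

```python
original_sq_ft = 60_000                 # combined floor space of the 20 originating sites

room_sq_ft = 45 * 22                    # one condensed space: 990 sq ft
per_center_sq_ft = 2 * room_sq_ft       # two spaces per center: roughly 2,000 sq ft
total_new_sq_ft = 2 * per_center_sq_ft  # primary plus mirrored backup: roughly 4,000 sq ft

print(f"New footprint: ~{total_new_sq_ft:,} sq ft")                  # ~3,960 sq ft
print(f"Reduction:     ~{original_sq_ft / total_new_sq_ft:.0f}:1")   # ~15:1, matching Table 2
```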
VIRTUALIZATION SAVINGS

The energy and consumption savings are enormous when viewed against the 2006 baseline costs (in 2006 USD). The energy use of the 12 small sites was reduced from 206 kWh to less than 9.9 kWh per site; in aggregate, these 12 sites dropped from roughly 2.4 MWh to less than 120 kWh. The 4 sites with 18 full racks each were reduced from 650 kWh to less than 22 kWh per site, for a total of 2.6 MWh that dropped to 88 kWh. The two 88-rack sites, which consumed 2.9 MWh per site, were reduced to a little over 95 kWh each, a change from ~5.8 MWh to less than 200 kWh. Energy consumption for the site with 350 racks was reduced from 11.3 MWh to about 370 kWh, and consumption for the site with 875 racks was reduced from 28 MWh to approximately 863 kWh.

Table 2. The data center modernization from the circa 2006 solution to the new model results in the following reductions:

  Category             Previous                  New                       Reduction Ratio
  Total Consumption    50 MWh                    1.63 MWh (HAC)            30:1
  Total Rackspace      1,533 ToR switches        87 ToR fabric switches    17:1
  Total Footprint      60,000 sq. ft.            4,000 sq. ft.             15:1
  Server Footprint     60,000 physical servers   4,350 physical servers    14:1

Virtualization achieved an overall 14:1 virtual-to-physical server consolidation ratio. This blends aggressive virtualization ratios, as high as 29:1 for non-real-time applications, with a 1:1 ratio for mission applications. Virtualizing the servers and their physical connectivity requirements, eliminating unnecessary protocols, and using higher-density hardware with lower consumption allowed the network to gain a 10:1 increase in access speed, with that bandwidth available to all applications resident on each blade.

Advances in ToR and end-of-row network technology enable the fabric to support an upgrade of the core from 10 GbE to 40 GbE and eventually 100 GbE. The circa 2006 aggregation layer was eliminated in its entirety and replaced by a data center fabric from access to core, and the circa 2006 core routing and switching platform was upgraded from n × 10 GbE to n × 100 GbE. Consolidation reduced a once-acceptable 8:1 network access oversubscription rate to an industry-leading 1:1 ratio; virtual services require that this bottleneck in the data center be eliminated. The virtual server and network fabric consolidation, with 10 GbE at the edge, reduced consumption from an average of 147 W per 10 GbE interface to approximately 5.5 W in this migration model.
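The per-site figures in this section roll up to the totals shown in Table 2. A minimal sketch of that tally, using the approximate per-site draws quoted above.

```python
# (sites, old draw per site in kWh, new draw per site in kWh), as quoted in this section
site_types = [
    (12,    206,   9.9),  # small 5-rack sites
    (4,     650,  22.0),  # 18-rack sites
    (2,   2_900,  95.0),  # 88-rack sites
    (1,  11_300, 370.0),  # 350-rack site
    (1,  28_000, 863.0),  # 875-rack site
]

old_total = sum(n * old for n, old, _ in site_types)
new_total = sum(n * new for n, _, new in site_types)

print(f"Old: ~{old_total / 1000:.1f} MWh, New: ~{new_total / 1000:.2f} MWh, "
      f"Ratio: ~{old_total / new_total:.1f}:1")  # ~50.2 MWh, ~1.63 MWh, ~30.8:1 (Table 2 rounds to 30:1)
```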
Figure 14. Migration of the circa 2006 data center network architecture elements. (The 20 sites migrated to the Brocade fabric solution originally contained 1,533 ToR switches, 104 aggregation layer switches, and 20 core switches. The total consumption of these network switches is based upon the actual draw under load, not the regulatory-rated power of the units. The result is that this architecture consumes 1.317 MWh in direct draw; with the typical PUE of 2.84 for this period of deployment, the resulting total consumption is 3.7 MWh.)

Figure 15. Architecture elements of the resulting Brocade Data Center Fabric. (The resulting design uses one fabric built from two Brocade VDX 8770 Switches and 87 Brocade VDX ToR fabric switches, with two Brocade MLXe-16 Routers at the core. This network architecture consumes 90 kWh of electricity, a figure that factors in an achievable PUE of 1.68 and assumes the use of an HAC model.)
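The two captions above rely on the same PUE arithmetic used elsewhere in this brief. A minimal sketch, assuming total consumption is simply the direct network draw multiplied by the facility PUE.

```python
def facility_draw(direct_kw, pue):
    """Total facility-level draw attributable to the network: direct draw scaled by PUE."""
    return direct_kw * pue

legacy_total = facility_draw(1_317, 2.84)  # circa 2006: 1,317 kWh of direct network draw
print(f"Legacy network total: ~{legacy_total / 1000:.1f} MWh")  # ~3.7 MWh, as in Figure 14

# Figure 15 quotes 90 kWh including a PUE of 1.68, so the implied direct fabric draw is:
print(f"Fabric direct draw:   ~{90 / 1.68:.0f} kWh")            # ~54 kWh for the new fabric
```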
NETWORK SAVINGS

In terms of scale, the reduction in the network footprint is just as significant. The 1,533 ToR switches were reduced to 87 ToR units (17:1), while the bandwidth available to each application increased. The 104 aggregation switches of the circa 2006 network are converted to end-of-row Brocade VDX 8770 fabric switches. In addition, the consolidation of the 20 sites allowed 20 underperforming core units to be collapsed into 4 easily scalable and expandable core switches (Brocade MLX®). Network energy consumption was reduced from 3.7 MWh (including the circa 2006 PUE of 2.84) to 90 kWh at a PUE of 1.68. This 37:1 reduction coincides with an order-of-magnitude increase in network capacity.

The network layer of the new virtualized data center effectively carries a higher level of traffic density and link utilization. Depending upon the initial physical connectivity of an application, the new fabric increases application access from the server to the user from 1, 2, or 4 active interface connections to 2, 4, or 8 connections, all active.

ASSUMPTIONS ASSOCIATED WITH REDUCED COSTS IN ELECTRICITY

Several reasonable assumptions were made in determining the size and scope of the migration, as well as in calculating consumption on a per-system basis. Brocade assumed that the typical servers, network switches, protocols, and computing density assigned to the legacy (circa 2006) data center would be the systems that were generally available from 2004-2006, during the design and implementation phases of the migration candidate sites. Other assumptions include:

• HAC implementations typically add a cooling load of 39% of the IT load, versus the 2006 ratio of 1.07 watts of cooling per 1 W of IT equipment. (1)

• The new data center facility consumption estimates were based upon a 29% load factor of the IT load, versus the 77% load factor that was typical of the circa 2006 legacy data center. (2)

• The UPS models and their typical consumption are assumed to remain constant.

• The compute platforms that were migrated to virtual servers used a 14:1 ratio. Mission-oriented solutions typically required a 1:1 application-to-core ratio, while a ratio of 29:1 was used for services compute platforms, in line with U.S. Department of Energy (3) methodologies for one of its labs with high-density virtual servers. By taking roughly the average of the two extremes, 1:1 and 29:1, the 14:1 ratio was adopted.

• The Brocade migration model assumed the use of 8 cores per blade (2 × quad-core CPUs at 3.2 GHz), as well as a limit of 14 virtual MACs per blade, which results in an average of 1 MAC per virtual interface to the fabric.

• Mission-critical compute applications remain on existing hardware until a per-application CPU core mapping is determined, or are migrated at a 1:1 core-per-application ratio.

• Brocade adopted the 2006 PUE of 2.84 (1) versus an achievable 2013 PUE of 1.68 (2); these PUE values follow directly from the cooling and facility load factors above, as illustrated in the sketch after the notes below. Brocade found in several studies that industry seemed to agree that a standard PUE in 2013 was 2.0, while a standard PUE in the 2006 timeframe was around 3.0. Therefore, Brocade utilized the available breakdown as a viable model for the legacy data center. (1)

• Brocade determined that not all enterprises would opt for the cost associated with state-of-the-art solutions that provide PUEs as low as 1.15. Even though some companies, for example Microsoft and Google (4), have since achieved even lower PUEs, Brocade took a conservative approach to the final data center model.
Notes: Information in this list came from the following documents:
1. Emerson Electric, Energy Logic: "Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems."
2. Schneider Electric, "Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency R2," J. Niemann, K. Brown, and V. Avelar.
3. U.S. Department of Energy, "Leadership in Green IT" (brochure), Department of Energy Laboratories (DOE), S. Grant, NREL.
4. Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html
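As noted in the assumptions list, the cooling and facility load factors line up with the PUE values used throughout this brief. A minimal sketch of that relationship, assuming PUE is 1 (the IT load itself) plus the cooling and facility overheads expressed as fractions of the IT load.

```python
def pue_from_load_factors(cooling_factor, facility_factor):
    """PUE as 1 (the IT load itself) plus cooling and facility overheads, each a fraction of IT load."""
    return 1.0 + cooling_factor + facility_factor

legacy_pue = pue_from_load_factors(1.07, 0.77)  # circa 2006: 1.07 W cooling and 0.77 W facility per W of IT
modern_pue = pue_from_load_factors(0.39, 0.29)  # HAC design: 39% cooling and 29% facility overhead

print(f"{legacy_pue:.2f} {modern_pue:.2f}")     # 2.84 and 1.68, the PUE values used in this brief
```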
SUMMARY

For an enterprise of 822,000 users, the virtualized architecture with the Brocade Data Center Fabric can save approximately $36 million per year in energy costs. This savings occurs when the legacy data centers are migrated to the modern data center architecture together with its backup data center. (Without a backup center, the savings would be approximately $38 million per year.) Brocade delivers a world-class solution that integrates networking and storage into a network fabric providing access to the virtual servers: all links to the fabric are active, access links achieve higher density and utilization, and the fabric links and core switches are capable of switching or routing at wire speed.

Table 3. Costs for individual sites pre- and post-migration.* (The table shows that the circa 2006 data center model would cost roughly $40M per year using the EIA national average kWh cost published in 2006. In the post-virtualization deployment, coupled with a Brocade fabric network architecture upgrade, the consumption costs range from approximately $1.8M (CAC model) to $1.5M (HAC model) to $1M (state-of-the-art model), calculated at the 2013 EIA national average cost per kWh.)

  Migration Example                   Original, Circa 2006 (PUE 2.84)   HAC, 2013 (PUE 1.68)   CAC, 2013 (PUE 1.98)   State of the Art, 2013 (PUE 1.15)
  12 Sites at 5 Racks to 1/5th Rack   171,295                           9,951                  11,726                 6,815
  4 Sites at 18 Racks to 1 Rack       2,156,179                         91,527                 107,872                62,653
  2 Sites at 88 Racks to 5 Racks      4,840,286                         190,715                224,772                130,549
  1 Site at 350 Racks to 20 Racks     9,418,877                         370,423                436,570                253,563
  1 Site at 875 Racks to 50 Racks     23,509,779                        863,553                1,017,759              591,123
  Totals for Each Option              40,096,417*                       1,526,170              1,798,698              1,044,704

Notes: The EIA reports that the national average cost per kWh in 2006 was 9.46 cents; at 2012 rates, the circa 2006 total would be more than $45M. Source: EIA 2010 Energy Use, All Sources: http://www.eia.gov/state/seds/seds-data-complete.cfm

Brocade solutions can lower network consumption by up to 29 percent compared with competing solutions available today, and they can reduce legacy network consumption with industry-leading virtualized solutions by a factor of 19 to 1. Using this model, a large enterprise could consolidate its legacy IT and network solutions into a far smaller site and save over $38 million USD in energy costs each year.
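A minimal sketch that derives the headline savings from the Table 3 totals. Doubling the post-migration cost to account for the mirrored backup center is an assumption about how the with-backup figure was obtained; the results land in the $36M to $38M range cited in the summary.

```python
legacy_total = 40_096_417  # circa 2006 annual cost from Table 3

new_totals = {"HAC": 1_526_170, "CAC": 1_798_698, "state-of-the-art": 1_044_704}

for model, cost in new_totals.items():
    no_backup = legacy_total - cost        # consolidate to a single modern primary site
    with_backup = legacy_total - 2 * cost  # assume a mirrored backup site doubles the new cost
    print(f"{model:>17}: ~${no_backup / 1e6:.1f}M/yr savings (no backup), "
          f"~${with_backup / 1e6:.1f}M/yr (with backup)")
```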
© 2013 Brocade Communications Systems, Inc. All Rights Reserved. 08/13 GA-TB-483-00

ADX, AnyIO, Brocade, Brocade Assurance, the B-wing symbol, DCX, Fabric OS, ICX, MLX, MyBrocade, OpenScript, VCS, VDX, and Vyatta are registered trademarks, and HyperEdge, The Effortless Network, and The On-Demand Data Center are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned may be trademarks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.