This document discusses how government agencies can migrate their data centers to more energy-efficient solutions to cut costs. It states that by migrating to energy-efficient data centers, agencies can save over $38 million per year in energy consumption costs alone. It provides examples of how virtualization and consolidation of servers and applications can significantly reduce energy usage and lower PUE. Government drivers for energy efficiency, such as Executive Orders 13423 and 13514, are also summarized.
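Consolidation gains like these are usually tracked through PUE (Power Usage Effectiveness), the ratio of total facility energy to IT equipment energy. A minimal sketch of the calculation, with illustrative numbers that are not taken from the document:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers (not from the document): consolidation shrinks the
# cooling/power-conversion overhead while the IT load stays the same.
legacy = pue(2_000_000, 1_000_000)        # 2.0: half the energy never reaches IT
consolidated = pue(1_300_000, 1_000_000)  # 1.3 after efficiency measures
```

Lowering PUE toward 1.0 means a larger share of every purchased kWh reaches the IT equipment rather than being lost to cooling and power conversion.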
This document discusses the Federal Data Center Consolidation Initiative and describes areas where greater energy efficiency and carbon reductions can be achieved using a number of quick wins.
This document discusses the transition to an integrated grid that can accommodate high levels of distributed energy resources (DER) like solar and storage. As DER deployment increases, the traditional electric grid needs to be modernized and operations changed to integrate DER while maintaining reliability. Germany's experience integrating high amounts of solar and wind shows this is challenging without coordination. The document proposes collaboration on interconnection standards, advanced distribution technologies, planning processes that include DER, and policies that enable grid modernization and ensure costs are allocated fairly. EPRI will further study frameworks for assessing the costs and benefits of grid modernization options through an initial concept paper and later framework development project.
Forecasting and Scheduling of Wind and Solar Power Generation in India (Das A. K.)
This document discusses forecasting and scheduling of wind and solar power generation in India. It notes that wind and solar energy are major sources of renewable energy in India, but their variability poses challenges for grid integration. Accurate forecasting of wind and solar power is needed to effectively integrate renewable energy into the grid. The document examines regulations in India requiring day-ahead forecasting and scheduling of wind and solar power and explores methods for improving forecast accuracy to minimize penalties for deviations from forecasts.
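The penalty exposure described above is essentially band arithmetic: the further actual generation deviates from the day-ahead schedule, the steeper the charge on the deviating energy. A minimal sketch; the thresholds and rates below are hypothetical placeholders, since the real CERC/state slabs vary by state and regulatory revision:

```python
def deviation_penalty(scheduled_mwh, actual_mwh,
                      bands=((0.15, 0.0), (0.25, 0.5), (1.00, 1.0))):
    """Charge the energy falling in each error slab at that slab's rate.
    Thresholds are absolute-error fractions of the schedule; rates are Rs/kWh.
    These slab values are hypothetical placeholders, NOT the actual
    CERC/state regulations, which differ by state and revision."""
    if scheduled_mwh == 0:
        return 0.0
    error = abs(actual_mwh - scheduled_mwh) / scheduled_mwh
    penalty, prev = 0.0, 0.0
    for threshold, rate in bands:
        slab = min(error, threshold) - prev  # fraction of schedule in this slab
        if slab <= 0:
            break
        penalty += slab * scheduled_mwh * 1000 * rate  # MWh -> kWh in this slab
        prev = threshold
    return penalty
```

With a 100 MWh schedule and 80 MWh actual (20% error), only the portion of the error beyond the free band is charged, which is why even modest forecast-accuracy improvements can shrink penalties sharply.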
This article describes a multi-objective optimization approach to determine the optimal sizing and placement of distributed generation (DG) units in a distribution system. The objectives are to minimize total real power losses and total DG installation cost. A weighted sum method is used to combine the objectives into a single scalar function. Constraints include power flow equations and limits on voltage, generation capacity, and line flows. The problem is formulated as a non-linear program and solved using sequential quadratic programming. The method provides a set of Pareto optimal solutions, from which a compromise solution can be selected using fuzzy decision making. The approach is demonstrated on a 15-bus test system.
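The weighted-sum scalarization and fuzzy compromise selection can be sketched in a few lines. The candidate plans and normalization constants below are made-up illustrations, not data from the paper (which solves the full non-linear program with sequential quadratic programming):

```python
# Hypothetical candidate DG plans: (real power loss in kW, installation cost in $k)
candidates = [(120.0, 50.0), (95.0, 80.0), (80.0, 120.0), (70.0, 200.0)]

def weighted_sum(loss, cost, w, loss_max=120.0, cost_max=200.0):
    """Combine the two normalized objectives into one scalar; weight w in [0, 1]."""
    return w * (loss / loss_max) + (1 - w) * (cost / cost_max)

# Sweeping w traces out a discrete approximation of the Pareto front
pareto = {min(candidates, key=lambda c: weighted_sum(*c, w=i / 10)) for i in range(11)}

def membership(val, vmin, vmax):
    """Fuzzy satisfaction: 1 at the best objective value, 0 at the worst."""
    return (vmax - val) / (vmax - vmin)

def compromise(front):
    """Max-min fuzzy decision: pick the solution whose worst membership is best."""
    losses = [l for l, _ in front]
    costs = [c for _, c in front]
    return max(front, key=lambda c: min(membership(c[0], min(losses), max(losses)),
                                        membership(c[1], min(costs), max(costs))))
```

Each weight setting yields one Pareto-optimal plan; the fuzzy max-min step then selects the plan that balances both objectives rather than excelling at only one.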
The Total Economic Impact Of NetApp Solutions For Cloud Service Providers (NetApp)
This document is a Forrester Consulting study on the total economic impact of NetApp solutions for cloud service providers. It analyzes the potential financial benefits for a composite service provider organization that develops cloud-based storage-as-a-service using NetApp's platform. The study finds that the composite organization was able to successfully build and profitably grow its cloud services business using NetApp. It achieved benefits such as scalability, data security, storage efficiencies, and partnerships that helped reduce costs and increase revenues over three years.
This document discusses using statistical analysis of outage data to plan asset maintenance in electric power distribution networks. It describes collecting outage data from distribution components like lines, cables, breakers and transformers. The data is processed and analyzed using statistical tests to identify critical components affecting system reliability. The results show maintenance decisions should be based on analyzed outage data to identify weak components for targeted maintenance. This improves reliability and reduces costs compared to preventative or reactive maintenance.
Role of Alternative Energy Sources: Natural Gas Technology Assessment (Marcellus Drilling News)
This document analyzes the role of natural gas power in the United States. It discusses natural gas power plant performance characteristics, the natural gas resource base and supply/demand outlook, and environmental and cost analyses of natural gas power generation. Specifically, it examines the efficiency and emissions of natural gas combined cycle and simple cycle plants. It also evaluates the domestic natural gas supply from conventional and unconventional sources like shale gas, and projects growing natural gas demand and a balanced supply/demand outlook through 2035. Environmental impacts like greenhouse gas, water, and air emissions across the natural gas life cycle are quantified. Finally, the document conducts a life cycle cost analysis of natural gas power generation technologies.
This document provides a summary of a meta-review of 57 studies on the impact of advanced metering initiatives and residential feedback programs on household electricity savings. Some key findings from the meta-review include:
1) Feedback programs have achieved average electricity savings of 4-12% across different countries and program types. Well-designed national programs could save up to 6% of total US residential electricity by 2030.
2) Daily/weekly feedback and real-time-plus feedback achieved the highest median savings, 11% and 14% respectively; however, these results come from small, short studies.
3) Motivational elements like goal setting, commitments, and social norms can significantly enhance savings, yet few programs currently incorporate them.
This document discusses using predictive analysis to optimize energy management systems. It proposes integrating predictive analytics with energy management systems (EMS) to improve optimization of energy source selection and usage. Today, an EMS typically selects among sources such as grid, diesel, solar, and batteries using simple priority rules. Integrating predictive analytics can help an EMS better forecast power outages and optimize cost and emissions by deciding which sources to use and in what proportion, based on machine learning over past and present energy and environmental data. This could raise source-selection optimization from the 40-50% achieved by a traditional EMS to 80-90%. The document uses telecom tower energy usage as a case study.
Impact of Dispersed Generation on Optimization of Power Exports (IJERA Editor)
Dispersed generation (DG) is defined as any source of electrical energy of limited size connected directly to the distribution system of a power network; it is also called decentralized generation, embedded generation, or distributed generation. Dispersed generation is any modular generation located at or near the load center. It can take the form of renewable sources such as mini-hydro, solar, wind, and photovoltaic systems, or of fuel-based systems such as fuel cells and micro-turbines. This paper presents the impact of dispersed generation on the optimization of power exports. Computer simulation was carried out using the hourly loads of selected distribution feeders on the Kaduna distribution system as input parameters for computing the line loss reduction ratio index (LLRI). The results showed that line losses fell from 163.56 MW to 144.61 MW when DG was introduced, indicating a reduction in line losses with the installation of DG at the various feeders of the distribution system. Across all feeders where DG is integrated, the average line loss reduction index is 0.8754, which, being less than 1, indicates a reduction in electrical line losses with the introduction of DG. The line loss reduction index confirms that integrating DG into the distribution system reduces distribution losses and achieves optimization of power exports. The results of this paper form a basis for establishing that proper location of distributed generation units has a significant impact on their effective capacity.
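A common way to define such an index is the ratio of total line losses with DG to total losses without it, so that values below 1 signal improvement. A minimal sketch using the system-level figures quoted above; note the paper's reported 0.8754 is an average of per-feeder indices, which is why it differs slightly from the system-level ratio computed here:

```python
def llri(loss_with_dg_mw: float, loss_without_dg_mw: float) -> float:
    """Line Loss Reduction Index: a value below 1 means DG reduced losses."""
    return loss_with_dg_mw / loss_without_dg_mw

# System-level loss figures reported in the paper
system_index = llri(144.61, 163.56)  # about 0.884, i.e. an ~11.6% loss reduction
```

The same function applied feeder by feeder, then averaged, reproduces the kind of per-feeder index the paper reports.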
This chapter defines green data centers and discusses the drivers for companies to build them. It outlines the benefits, including monetary savings. Green data centers use resources more efficiently and have less environmental impact. The demand for data center power is growing rapidly but resources are limited, so greening data centers can help maximize the use of available power capacity. Implementing energy efficiency measures can significantly reduce long-term operational costs, with some studies finding a 10x return on the initial investment within 20 years.
Achieving Energy Proportionality In Server Clusters (CSCJournals)
Energy proportionality has attracted a great deal of interest in the past few years. It is the principle that energy consumption should be proportional to system workload, and energy-proportional design can effectively improve the energy efficiency of computing systems. In this paper, an energy-proportional model based on queueing theory and service differentiation in server clusters is proposed, providing controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. Transition overhead is studied further, and a corresponding strategy is proposed to compensate for the performance degradation it causes. The model is evaluated via extensive simulations and validated against a real workload trace. The results show that the model achieves satisfactory service performance while preserving energy efficiency.
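The core idea, keeping powered-on capacity proportional to offered load, can be sketched with a simple sizing rule and power model. All constants below are illustrative assumptions, not the paper's model, which uses a more detailed queueing formulation with service differentiation:

```python
import math

def servers_needed(arrival_rate: float, service_rate: float,
                   target_util: float = 0.7) -> int:
    """Size the active pool so per-server utilization stays near the target
    (a crude M/M/n-style rule; the paper's queueing model is more detailed)."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

def cluster_power(n_active: int, p_idle: float = 100.0,
                  p_peak: float = 250.0, util: float = 0.7) -> float:
    """Active servers draw idle plus utilization-scaled dynamic power;
    servers outside the active pool are assumed asleep at ~0 W."""
    return n_active * (p_idle + (p_peak - p_idle) * util)

n = servers_needed(arrival_rate=140.0, service_rate=10.0)  # -> 20 servers
watts = cluster_power(n)  # the rest of the cluster sleeps
```

Because idle servers still draw substantial power (the `p_idle` term), putting excess servers to sleep is what makes total draw track the workload; the transition overhead the paper compensates for is the cost of waking them back up.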
Adding Psychological Factor in the Model of Electricity Consumption in Office... (IJECEIAES)
This document describes an agent-based model that examines the impact of different approaches for providing energy consumption feedback and data apportionment on changing staff behavior to reduce energy use in an office building. The model divides staff into four stereotypes based on motivation levels and analyzes how factors like feedback, data apportionment at individual or group levels, sanctions, and anonymity impact motivation levels and overall energy consumption. Simulation results indicate greater potential for energy savings when data is apportioned at the group level compared to the individual level. Staff with low and medium motivation levels showed the most significant reductions in energy use.
IRJET- Planning Issues in Grid Connected Solar Power System: An Overview (IRJET Journal)
The document discusses various planning issues related to integrating solar power systems into electric grids, including long term load forecasting techniques, concentrating solar power systems, interconnection requirements, incorporating solar into comprehensive and subarea planning, performing power audits, panel interconnections, and grid-tie considerations. It also covers power system operation planning, power system augmentation planning, and the benefits of comprehensive planning approaches for integrating variable renewable energy sources like solar power.
Economic Dispatch using Quantum Evolutionary Algorithm in Electrical Power S... (IJECEIAES)
An unpredictable increase in power demand will overload supply subsystems, and insufficiently powered systems suffer instabilities in which voltages drop below acceptable levels. Additional power sources are needed to satisfy the demand, and small-capacity distributed generators (DGs) serve this purpose well. One advantage of DGs is that they can be installed close to loads so as to minimise losses. Optimal placement and sizing of DGs are critical to raising system voltages and reducing losses, which in turn increases overall system efficiency. This work applies the Quantum Evolutionary Algorithm (QEA) to the placement and sizing problem, optimising for the cheapest generation cost. QEA is an evolutionary algorithm built on quantum-computing concepts, working with qubits and superposition of states; an evolutionary algorithm with qubit representation maintains better population diversity than classical approaches, since each individual can represent a superposition of states.
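A minimal sketch of the Q-bit machinery QEA relies on, applied to a toy bit-counting objective rather than the paper's dispatch cost (a generation-cost model would replace `fitness`); all parameters here are illustrative assumptions:

```python
import math
import random

random.seed(1)
N_BITS, POP, GENS, DTHETA = 12, 8, 40, 0.05 * math.pi

def observe(q):
    """Collapse each qubit of an individual: P(bit = 1) = beta ** 2."""
    return [1 if random.random() < b * b else 0 for _, b in q]

def fitness(bits):
    """Toy objective (count of ones); a dispatch cost model would go here."""
    return sum(bits)

# each qubit starts in equal superposition: alpha = beta = 1/sqrt(2)
pop = [[[1 / math.sqrt(2), 1 / math.sqrt(2)] for _ in range(N_BITS)]
       for _ in range(POP)]
best = max((observe(q) for q in pop), key=fitness)

for _ in range(GENS):
    for q in pop:
        bits = observe(q)
        if fitness(bits) > fitness(best):
            best = bits
        # rotate each qubit's amplitudes toward the current best solution
        for i, (a, b) in enumerate(q):
            if not 0.01 < b * b < 0.99:
                continue  # amplitude saturated; stop rotating this qubit
            theta = DTHETA if best[i] == 1 else -DTHETA
            q[i] = [a * math.cos(theta) - b * math.sin(theta),
                    a * math.sin(theta) + b * math.cos(theta)]
```

Each rotation nudges the observation probabilities toward the best solution found so far while the superposition keeps exploration alive, which is the diversity property the abstract highlights.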
Gartner has developed new metrics to measure data center energy efficiency beyond PUE. The new metrics are Idle Energy (IE), the energy consumed while equipment sits idle, and Computational Energy (CE), the energy used for useful computation; IE + CE = Total Energy, so reducing IE improves efficiency. An example shows how cutting IE from 56.43% to 49.37% of total energy through power capping increased annual useful energy by over 525,000 kWh, saving $55,000. The document recommends using kWh rather than ratios to evaluate efficiency, reducing IE, and using CE to quantify the energy behind IT services.
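The IE/CE arithmetic is straightforward in kWh terms. A minimal sketch; the annual total below is a hypothetical figure chosen only so the result lands near the document's ~525,000 kWh example, not a number stated in the document:

```python
def useful_energy(total_kwh: float, idle_fraction: float) -> float:
    """Computational Energy = Total Energy minus Idle Energy (IE + CE = Total)."""
    return total_kwh * (1.0 - idle_fraction)

# Hypothetical annual total draw, chosen to illustrate the document's example;
# the document does not state this figure.
TOTAL_KWH = 7_500_000.0

gain = useful_energy(TOTAL_KWH, 0.4937) - useful_energy(TOTAL_KWH, 0.5643)
# gain = TOTAL_KWH * (0.5643 - 0.4937): roughly 529,500 kWh of extra useful
# energy per year from the same total draw
```

The point of working in kWh is visible here: the same percentage-point drop in IE translates into a concrete, billable quantity of recovered useful energy.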
Seacotex Fabrics Ltd. and Seamens Garments Ltd. are pioneering knit-dye-garments companies in Bangladesh that offer high quality, fast production at competitive rates. Established in 1992 and 2005 respectively, they have extensive experience serving major international buyers. Both companies have modern facilities with over 150,000 square feet of production space and thousands of employees. Their equipment and capacities allow them to produce over 40,000 garments per day.
1) The document discusses how defined contribution plan re-enrollment can help guide all plan participants to better asset allocations by defaulting them into target date funds if they do not make active selections during the re-enrollment period.
2) It addresses some potential roadblocks to re-enrollment like beliefs about participant decision making and collective bargaining, and suggests overcommunicating the benefits of target date funds to help with implementation.
3) The case study describes how one plan re-enrollment resulted in 75% of participants being in age-appropriate allocations compared to 29% before, dramatically improving portfolio construction.
The document summarizes three learning theories: 1) Ausubel's Theory of Meaningful Learning focuses on the relationship between new and prior knowledge; 2) Piaget's Constructivist Learning Theory describes stages of development and learning through action; 3) Vygotsky's Interactionist Learning Theory emphasizes learning through social and cultural interactions.
1. The document provides instructions for using Dropbox to store and share files across devices. It explains how to install Dropbox on computers and mobile devices, upload and access files from any device, and share files and folders with links.
2. The instructions also cover how to collaborate on documents by setting up shared folders that allow multiple users to work on files simultaneously and see edits in real-time.
3. Additional tips are provided for recovering file versions, managing file storage, and strengthening security settings for the Dropbox account.
Irshadali Iftekhar Attar is seeking a position in a reputable organization where he can contribute his 2 years and 9 months of experience in radio network optimization, drive testing, and transmission in the telecom sector. He is currently working as a radio network optimization engineer for Huawei Telecommunication Ltd. His experience also includes working as a DT & RF engineer for Ericsson and LinkQuest Telecom. He is looking to utilize his skills in planning, managing projects, technical resources, and streamlining operations. He holds a Bachelor's degree in Electronics and Telecommunication from the University of Pune.
The supervisor provides a glowing evaluation of student Angela Li's internship performance. Angela receives the highest possible rating of 5 out of 5 in nearly all evaluation categories, including ability to learn, reading/writing skills, listening/oral communication, creative/problem-solving skills, professional development, interpersonal/teamwork skills, work habits, and character. The supervisor notes Angela quickly learned new applications and completed all assigned tasks on time and above expectations. Additionally, the supervisor would consider Angela for a permanent position and recommends her as an asset to any organization.
Sneha Bhatia is seeking a career opportunity in human resources. She has over 5 years of experience in HR roles across various industries. Currently, she works as a Research Associate for Asia Pulp and Paper India Private Limited. Previously, she was an Assistant Manager of Human Resources for Lingual Consultancy Services Private Limited. She holds an MA in Psychology from IGNOU and an MBA in Human Resources from Graphic Era University. Her skills include recruitment, compensation and benefits, talent acquisition, and employee engagement.
The document describes Doreen Hillier, a businesswoman and leadership expert. She has authored a book on the 10 Laws of Leadership and runs a company called Training for Greatness where she gives workshops on leadership, success, budgeting, and business planning. Hillier has started corporations and non-profits and draws from her experiences in her presentations and workshops to teach others how to improve their leadership and life skills.
This document contains a summary of a job applicant's experience and qualifications. The applicant has over 9 years of experience in software development using languages like C, C++, and Pro*C on UNIX/Linux platforms. They have worked with databases like Oracle, Sybase, and MySQL. Notable projects include developing various reports for Axis Bank using Summit APIs, implementing segregation of duties in the Summit system for HSBC, and automating FX/IR rate resets for Sydney and Hong Kong trades. The applicant has received several performance awards and has expertise in Summit FT and Control M.
This curriculum vitae is for Gerrit Fourie, a 29-year-old South African man living in Newcastle. He received his high school education from Newcastle High School from 2003 to 2007. Gerrit has worked in several sales and management positions over the past decade, including at Canon Office Automation, Discovery IC, Old Mutual, Virgin Active, On Tap, and G4E Irrigation. He enjoys an active lifestyle, spending time with his family, and networking through his involvement with the Newcastle High School Old Boys Club.
The Dell OpenManage Enterprise Power Manager 3.0 plug-in provides sustainability insights through automated data collection and reporting. It tracks power usage metrics, identifies idle servers, estimates carbon emissions, and detects devices exceeding power thresholds, helping optimize infrastructure utilization and reduce energy costs and carbon footprint. The plug-in presents all of this through a single console for improved visibility and workload management.
The supervisor provides a glowing evaluation of student Angela Li's internship performance. Angela receives the highest possible rating of 5 out of 5 in nearly all evaluation categories, including ability to learn, reading/writing skills, listening/oral communication, creative/problem-solving skills, professional development, interpersonal/teamwork skills, work habits, and character. The supervisor notes Angela quickly learned new applications and completed all assigned tasks on time and above expectations. Additionally, the supervisor would consider Angela for a permanent position and recommends her as an asset to any organization.
Sneha Bhatia is seeking a career opportunity in human resources. She has over 5 years of experience in HR roles across various industries. Currently, she works as a Research Associate for Asia Pulp and Paper India Private Limited. Previously, she was an Assistant Manager of Human Resources for Lingual Consultancy Services Private Limited. She holds an MA in Psychology from IGNOU and an MBA in Human Resources from Graphic Era University. Her skills include recruitment, compensation and benefits, talent acquisition, and employee engagement.
The document describes Doreen Hillier, a businesswoman and leadership expert. She has authored a book on the 10 Laws of Leadership and runs a company called Training for Greatness where she gives workshops on leadership, success, budgeting, and business planning. Hillier has started corporations and non-profits and draws from her experiences in her presentations and workshops to teach others how to improve their leadership and life skills.
This document contains a summary of a job applicant's experience and qualifications. The applicant has over 9 years of experience in software development using languages like C, C++, and Pro*C on UNIX/Linux platforms. They have worked with databases like Oracle, Sybase, and MySQL. Notable projects include developing various reports for Axis Bank using Summit APIs, implementing segregation of duties in the Summit system for HSBC, and automating FX/IR rate resets for Sydney and Hong Kong trades. The applicant has received several performance awards and has expertise in Summit FT and Control M.
This curriculum vitae is for Gerrit Fourie, a 29-year-old South African man living in Newcastle. He received his high school education from Newcastle High School from 2003 to 2007. Gerrit has worked in several sales and management positions over the past decade, including at Canon Office Automation, Discovery IC, Old Mutual, Virgin Active, On Tap, and G4E Irrigation. He enjoys an active lifestyle, spending time with his family, and networking through his involvement with the Newcastle High School Old Boys Club.
The Dell OpenManage Enterprise Power Manager 3.0 plug-in provides sustainability insights through automated data collection and reporting. It tracks power usage metrics, identifies idle servers, estimates carbon emissions, and detects devices exceeding power thresholds - helping optimize infrastructure utilization and reduce energy costs and carbon footprint. The plug-in interfaces all functionality through a single console for improved visibility and workload management.
Case Studies in Highly-Energy Efficient DatacentersMichael Searles
New tools, designs and services have emerged to help datacenter operators improve the energy efficiency of IT and facilties. This report examines the use of these technologies and techniques in real deployments.
This document discusses the need for green data centers and provides strategies for making data centers more energy efficient. It notes that while many organizations say they are green, few have specific targets or programs to reduce their carbon footprint. As data center electricity consumption and costs rise, running out of power capacity, cooling capacity, and physical space are major concerns. The document then provides questions to assess a data center's energy efficiency in terms of facilities, IT equipment, and utilization rates. It recommends strategies like optimizing infrastructure utilization and choosing more efficient hardware and cooling options. The goal is to improve the data center infrastructure efficiency metric and lower costs by reducing redundant, underutilized resources.
Compu Dynamics White Paper - Essential Elements for Data Center OptimizationDan Ephraim
This white paper discusses essential elements for optimizing data center operations, including airflow management, data center infrastructure management (DCIM) tools, power management, and operational best practices. It focuses on recent government initiatives like the Data Center Optimization Initiative (DCOI) that mandate increased energy efficiency in federal data centers through metrics like power usage effectiveness (PUE). The paper examines strategies like hot/cold aisle containment and cloud migration that can help data centers improve optimization and meet new efficiency requirements.
This document discusses how utility incentive programs can affect equipment upgrade decisions for data centers. It notes that equipment efficiency and energy costs are top priorities for data centers to meet growing capacity needs. Older equipment operates less efficiently and incentive programs can help offset upgrade costs, with some utilities covering up to $1M for efficiency projects. Partnering with maintenance providers gives access to engineering expertise, utility program insights, and opportunities to improve efficiency and reduce operating costs through upgrades.
Dell OpenManage Enterprise Power Manager 3.0 provides data center managers with end-to-end insights into their data centers that enable them to improve power usage and track greenhouse gas emissions. It allows automated collection of server utilization metrics, location information, and power consumption data to identify resources using high energy or underutilized power. The plugin also features carbon emission tracking and an energy cost calculator to help guide infrastructure optimization and consolidation efforts to reduce costs while meeting sustainability goals.
A Journey to Power Intelligent IT - Big Data EmployedMohamed Sohail
Sustainability has become a hot topic as a result of significant concerns about the unintended social, environmental, and economic consequences of rapid population growth, economic growth, and consumption of our natural resources. For the IT industry in particular, a highly important consideration that affects the decisions of IT managers is data center power consumption and carbon emission.
This document discusses the history and development of green computing. It originated in the early 1990s with programs like Energy Star that promoted energy efficiency. Since then, government regulations and industry initiatives have further advanced green computing aims like attaining economic viability and improving sustainability in areas like manufacturing, design, use and disposal of computing devices. The document outlines several approaches to green computing like optimizing software/algorithms, virtualization, power management, and reducing data center energy usage. It provides examples of various industry and government programs/standards that have promoted green computing goals.
We have been offering expert level data center cleaning services for over a decade. Our professional cleaning services help to prevent your critical environment and data centre equipment from overheating and potential failure caused by contaminations.
Southern Energy Efficiency Center Final ReportFlanna489y
The Southern Energy Efficiency Center (SEEC) final report summarizes the organization's activities from 2009-2010. The SEEC worked with partners in 12 southern states to increase the deployment of high-performance buildings. Key accomplishments included developing an online resource center, producing educational materials on efficient building techniques, hosting conferences, and delivering training to over 1,000 attendees. Moving forward, the SEEC recommends expanding these outreach and education efforts to further realize energy savings in the region.
As huge energy consumers, datacenters find their environmental performance under intense scrutiny. This report provides an overview of current environmental issues most relevant to the datacenter industry and its suppliers, including legislation, standards, metrics and other topics.
FY 2013 R&D REPORT January 6 2014 - Department of EnergyLyle Birkey
The document summarizes federal funding for environmental research and development from the Department of Energy (DOE) in fiscal years 2011-2013. It finds that DOE provides the largest amount of federal funding for environmental R&D of any federal agency, totaling $1.994 billion in FY2013. Much of this funding supports research at DOE national laboratories and is directed towards energy efficiency and renewable energy, fossil fuels like coal, and carbon capture and storage technologies. Specific areas of research focus on areas like energy efficient buildings, electric vehicles, advanced manufacturing, and improving the efficiency of power plants while enabling affordable carbon capture.
Data centre consolidation involves integrating multiple data centres into fewer physical sites to improve efficiency and reduce costs. It is a complex process that requires careful planning and facilitation to realize savings without compromising operations. Advanced analytics can help by establishing a baseline of current infrastructure usage and capacity, and modeling different consolidation scenarios to identify the optimal approach. This enables organizations to consolidate aggressively while maintaining performance and choosing technologies that maximize savings through a right-sized data centre footprint.
Data centre consolidation involves integrating multiple data centres into fewer physical sites to improve efficiency and reduce costs. It is a complex process that requires careful planning and facilitation to realize savings without compromising operations. Advanced analytics can help by establishing a baseline of current infrastructure usage and capacity, and modeling different consolidation scenarios to identify the optimal approach. This enables organizations to consolidate aggressively while maintaining performance and choosing technologies that maximize savings through a right-sized data centre footprint.
Bloom Energy produces solid oxide fuel cells and servers that generate clean electricity on-site. Between 2010-2013, Bloom's revenues grew from $102.7M to $600M while losses decreased. Its main product, the Bloom Energy Server, provides 200kW of power and retails between $700k-$800k. Bloom offers capital purchases or paying only for generated electricity. Competitors FuelCell Energy and Plug Power also produce fuel cells but focus on combined heat and power applications or material handling equipment, respectively. For long-term success, Bloom aims to reduce costs to $0.06-$0.08/kWh through manufacturing improvements and targeting a variety of industry customers across the U.S
The document discusses optimizing facility efficiency in federal mission-critical environments. It recommends taking a long-term approach to planning by understanding organizational goals and bridging IT and facilities. Key steps include assessing existing facilities, selecting efficient equipment, right-sizing capacity, and establishing monitoring, maintenance, and benchmarking programs to ensure optimization over time. Regular maintenance is emphasized as critical for sustained efficiency gains and reliability.
The energy manager's guide to real time submetering data 1.16.14GridPoint
This document discusses the benefits of real-time, asset-level submetering for energy managers. It explains that submetering provides visibility into energy usage at both the total site level and individual asset level in real-time. This allows energy managers to (1) optimize equipment performance, (2) implement smarter alarms, and (3) monitor sustainable energy sources. A variety of industry organizations and standards promote the use of submetering for compliance and increased energy efficiency. Access to granular submetering data through an energy management system enables seven hidden benefits including optimized equipment, smarter alarms, integrated sustainability, and dynamic control strategies.
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICESCharles Xu
This document provides an overview of key concepts and best practices for energy efficiency program evaluation. It is divided into two parts. Part 1 covers general concepts such as types of evaluations, impact evaluation techniques, data sources, and the evaluation process. Part 2 focuses on specific techniques, including logic modeling, statistical billing analysis, engineering computation, sampling, measurement and verification, and determining net savings. The document aims to comprehensively yet concisely describe established theories and methodologies for evaluating the impact and effectiveness of energy efficiency programs.
Similar to FFM - Technical Brief - Migrating Your Data Center to Become Energy Efficient-with notes (20)
ENERGY EFFICIENCY EVALUAITON_CONCEPTS & BEST PRACTICES
FFM - Technical Brief - Migrating Your Data Center to Become Energy Efficient-with notes
1. DATA CENTER
Migrating Your Data Center to
Become Energy Efficient
Providing your agency with a self-funded roadmap
to energy efficiency.
2. DATA CENTER TECHNICAL BRIEF
Migrating Your Data Center to Become Energy Efficient 2 of 23
CONTENTS
Introduction................................................................................................................................................................................3
Executive Summary..................................................................................................................................................................3
Government Drivers For Striving For Energy Efficiency........................................................................................................4
Where and how the energy is being consumed........................................................................................ 5
The Cascading Effect of a Single Watt of Consumption............................................................................ 6
What you are replacing and why............................................................................................................. 7
Modern data center architecture: Ethernet fabrics............................................................................. 8
Choose best practices or state-of-the-art ............................................................................................... 9
Best practices.............................................................................................................................. 10
Best Practices to Manage Data Center Energy Consumption.......................................................................................... 11
Hot Aisle Containment.................................................................................................................. 11
Cold Aisle Containment................................................................................................................. 11
Increasing the ambient temperature of the data center.................................................................... 12
Virtualization of the data center........................................................................................................... 12
Examples of application and server migration and their savings potential................................................ 14
Migrating 5 full racks to 1/5th of a rack......................................................................................... 14
Migrating 18 full racks to a single rack........................................................................................... 15
Migrating 88 full racks to 5 racks.................................................................................................. 17
Modeling the data center consolidation example................................................................................... 17
Consolidation and migration of 20 sites to a primary site with backup.............................................. 18
Virtualization Savings............................................................................................................................................................ 19
Network Savings.................................................................................................................................................................... 21
Assumptions Associated With Reduced Costs In Electricity............................................................................................ 21
Summary................................................................................................................................................................................ 22
INTRODUCTION
Intel Corporation estimates that global server energy consumption costs $27 billion (USD) annually. This cost
makes virtualization an attractive endeavor for any large organization. Large enterprises that have not yet begun
implementing server virtualization may be struggling to make the case for it because measuring, or even
estimating, existing consumption is a time-consuming effort. This document attempts to present a realistic
expectation for agencies that are performing due diligence in this area.
EXECUTIVE SUMMARY
Brocade strives to show government clients who want the benefits of an energy efficient data center that they
can develop a self-funded solution. The primary questions that a government official would want answered are:
• Can we use the energy savings from the migration of our current application computing platforms to finance
an energy-efficient data center model?
• Can we achieve the consumption reductions called out by Presidential Executive Orders 13423
and 13514?
• Do we have to rely upon the private sector to manage and deliver the entire solution, or can an existing site be
prepared now to achieve the same results as leading energy-efficient data center operators do?
• Who has done this before, and what were the results? What can we expect? My organization has nearly
820,000 end computers. How much would we save by reducing energy in the data center?
Note:
1. Executive Order (EO) 13423, “Strengthening Federal Environmental, Energy, and Transportation Management”
2. Executive Order (EO) 13514, “Federal Leadership in Environmental, Energy, and Economic Performance”
The answers to these questions are straightforward ones. The government can test, evaluate, and prepare
deployable virtualized applications and the supporting network to begin saving energy while benefiting from
lowered operational costs. The EPA estimates that data centers are 100 to 200 times more energy intensive
than standard office buildings. The potential energy savings provides the rationale to prioritize the migration of
the data center to a more energy-efficient posture. Even further, similar strategies can be deployed for standard
office buildings to achieve a smaller consumption footprint, which helps the agency to achieve the goals of
the Presidential Directives. The government can manage the entire process, or use a combination of private-sector
firms (ESCOs, Energy Service Companies), and develop its own best practices from the testing phase.
These best practices maximize application performance and energy efficiency to achieve the resulting savings
in infrastructure costs. Other government agencies have made significant strides in several areas to gain
world-class energy efficiencies in their data centers. You can review and replicate their best practices for your
government agency.
When you migrate your data centers to an energy-efficient solution, your agency can save over
$38 million (USD) per year in energy consumption costs alone.
GOVERNMENT DRIVERS FOR STRIVING FOR ENERGY EFFICIENCY
Executive Order (EO) 13423 (2007) and EO 13514 (2009) are two directives that require agencies to work
toward several measurable government-wide green initiatives.
EO 13423 contains these important tenets that enterprise IT planners and executives who manage
infrastructure may address directly:
1. Energy Efficiency: Reduce energy intensity 30 percent by 2015, compared to an FY 2003 baseline.
2. Greenhouse Gases: Reduce greenhouse gas emissions through reduction of energy intensity 30 percent by
2015, compared to an FY 2003 baseline.
EO 13514 mandates that at least 15 percent of existing federal buildings (and leases) meet Energy Efficiency
Guideline principles by 2015. EO 13514 also mandates annual progress toward 100 percent conformance of all
federal buildings, with the goal that all new federal buildings achieve zero net energy by 2030.
The Department of Energy (DoE) is a leader in the development and delivery of best practices for energy
consumption and carbon dioxide emissions. The DoE has successfully converted its data centers to more
efficient profiles and has shared the results. Table 1, from the DoE, offers an educated look at the
practical net results of data center modernization.
Note: This information comes from this Department of Energy document:
Department of Energy: Leadership in Green IT (Brochure), Department of Energy Laboratories (DOE), S. Grant NREL.
Table 1. Sample of actual facilities and their respective power usage effectiveness (PUE). (The DoE and the EPA
have several metrics that demonstrate that the best practices and current technology allow agencies to achieve
the desired results. These agencies have measured and published their results. To see data center closures by
Department, see the Federal Data Center initiative at http://explore.data.gov.)
Sample Data Center                                                 PUE 2012       Comments/Context
DoE Savannah River                                                 2.77           4.0 previously
DoE NREL (National Renewable Energy Laboratory)                    1.15           3.3 previously
EPA Denver                                                         1.50           3.2 previously
DoE NERSC (National Energy Research Scientific Computing Center)   1.15           Previously 1.35
DoE Lawrence Livermore National Laboratory (451)                   1.67           2.5 (37-year-old building)
SLAC National Accelerator Laboratory at Stanford                   1.30           New
INL Idaho National Laboratory High Performance Computing           1.10           1.4 (weather dependent)
DoE Terra Scale Simulation Facility                                1.32 to 1.34   New
Google data center weighted average (March 2013)                   1.13           1.09 lowest site reported
DoE PPPL Princeton Plasma Physics Laboratory                       1.04           New
Microsoft                                                          1.22           New Chicago facility
World-class PUE                                                    1.30           Below 2.0 before 2006
Brocade Data Center (Building 2, San Jose HQ)                      1.30           Brocade corporate data center consolidation project
Standard PUE                                                       2.00           3.0 before 2006
Note: Power Usage Effectiveness (PUE) is the ratio, in a data center, of the total power delivered to the facility to the
electrical power used by the IT equipment (servers, storage, and network); lower values indicate greater efficiency.
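As a minimal illustration of the metric (the facility numbers below are hypothetical, not figures from this brief):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Hypothetical site: 500 kW of IT load plus 335 kW of cooling,
# power distribution, and lighting overhead.
print(pue(500 + 335, 500))  # 1.67
```

A PUE of 1.0 would mean every watt entering the facility reaches IT equipment; the standard PUE of roughly 2.0 in Table 1 means half the delivered power is overhead.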
Where and how the energy is being consumed
Many studies discuss energy consumption and demonstrate the cost to house, power, and cool IT and
communications gear in the data center and the typical commercial building. The Energy Information
Administration (EIA) has estimated that data centers, specifically the government data center population
(estimated to number more than 3,000 sites), consume approximately 4% of the electricity used in the United
States annually. The EIA has assessed that total energy consumption in the United States was 29.601 quadrillion
watt-hours in 2007 (29.601 × 10^15 Wh), and overall energy usage is expected to increase at least 2% per year.
The data center share of 4% may seem low; however, it is a small percentage of a very large number, which makes
it worth examining the causes and costs associated with the electricity consumption of the government
enterprise data center.
Note: EIA 2010 Energy Use All Sources. http://www.eia.gov/state/seds/seds-data-complete.cfm.
In 2006, servers accounted for approximately 25% of total data center energy consumption, and the Heating,
Ventilation, and Air Conditioning (HVAC) systems needed to cool them accounted for another 25%, for a
combined 50%. Many government agencies have data centers of varying sizes. To illustrate the sources of
consumption, consider a sample 5,000 square foot data center, approximately 100 feet long by 50 feet wide.
The site would likely have servers, uninterruptible power and backup services, building switchgear, power
distribution units, and other support systems, such as HVAC. Figure 1 illustrates the approximate consumption
percentages for a typical data center of this size, with the servers' share of total consumption broken out in
detail (40% in total).
Note: The U.S. government has roughly 500,000 buildings, which means 15% is ~75,000 buildings. If one large data center is made
energy efficient, it is like making 100-200 buildings more energy efficient.
Figure 1. Breakout of support/demand consumption of a typical 5000 square foot data center. (This figure
demonstrates the consumption categories and their respective share of the total consumption for the facility.
The server consumption is broken into three parts: processor, power supply (inefficiency), and other server
components. The communication equipment is broken into two parts: storage and networking.)
Figure 2. Typical 5,000 square foot data center consumption breakout by infrastructure category. (This figure
demonstrates the consumption categories broken out by kilowatt hours consumed.)
The Cascading Effect of a Single Watt of Consumption
When a single watt of electricity is delivered to IT gear in a data center or a standard commercial building, a ripple
effect occurs. The total energy intensity of the data center can be 100 to 200 times that of a standard building
with network and some local IT services. However, the net result of delivering power to each site type is similar.
In 2006, the PUE of the standard data center was 2.84, which means that 2.84 watts were expended to support
1 watt of IT application, storage, and network gear.
Figure 3. Cascade effect of using a single watt of power. (Making a poor decision on IT consumption for network
and storage has the same effect as a poor decision for computing platforms. The Brocade® network fabric can
consume up to 28% less than competing architectures.)
The cost of a watt is not simply a fraction of a cent. In the data center, an IT element is powered on until it is
replaced, which can be a long period of time. A single watt, constantly drawn, can cost nearly $24.35 over a
10-year period in a campus LAN, which is the typical lifespan of a legacy core switch. Similarly, every additional watt
that is drawn in a data center since 2006 would have cost $20.34 if deployed during the period of 2006-2012.
Note: The Energy Star Program of the U.S. Environmental Protection Agency estimates that servers and data centers alone use
approximately 100 billion kWh of electricity, which represents an annual cost of about $7.4 billion. The EPA also estimates that without
the implementation of sustainability measures in data centers, the United States may need to add 10 additional power plants to the
grid just to keep up with the energy demands of these facilities.
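The per-watt figures above can be sanity-checked with simple arithmetic. The sketch below assumes the 2006-era PUE of 2.84 cited in this brief and an electricity rate of roughly $0.098 per kWh; the rate is an assumption for illustration, not a figure from the document:

```python
def ten_year_cost_per_watt(pue: float = 2.84,
                           rate_usd_per_kwh: float = 0.098,  # assumed rate
                           years: int = 10) -> float:
    """Facility-level cost of one watt of IT load drawn continuously."""
    it_kwh = 1 * 24 * 365 * years / 1000   # 87.6 kWh of IT energy over 10 years
    facility_kwh = it_kwh * pue            # add cooling/distribution overhead
    return facility_kwh * rate_usd_per_kwh

print(round(ten_year_cost_per_watt(), 2))  # 24.38, close to the $24.35 cited
```

This is one plausible way the cited numbers could arise; the actual assumptions behind the brief's figures are not stated.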
What you are replacing and why
Several reasons drive the enterprise to replace the current data center IT infrastructure. Multiple advancements in
different areas can enable a more efficient means of application service delivery. For example, energy utilization in
all facets of IT products has been addressed via lower consumption, higher density, higher throughput, and smaller
footprint. Another outcome of rethinking the energy efficiency of the data center is that outdated protocols, such
as Spanning Tree Protocol (STP), are being removed. STP requires inactive links and has outlived its usefulness
and relevance in the network. (See Figure 4.)
[Figure 4 diagram: three-tier legacy architecture showing core, aggregation, and access layers, server racks and
ISLs, with traffic flows and the inactive links blocked by STP.]
Figure 4. Sample legacy architecture (circa 2006). (The legacy architecture is inflexible because it is deployed as
three tiers optimized for legacy client/server applications. STP makes it inherently inefficient, its extra protocol
layers make it complex to operate, and individual switch management makes it expensive.)
The data center architecture of the 2006 era was inflexible. This architecture was typically deployed as three tiers
and optimized for legacy client/server applications. This architecture was inefficient because it is dependent upon
STP, which disables links to prevent loops and limits network utilization. This architecture was complex because
additional protocol layers were needed to enable it to scale. This architecture was expensive to deploy, operate,
and maintain. STP often caused network designers to provision duplicate systems, ports, risers (trunks), VLANs,
bandwidth, optics, and engineering effort. The added expense of all this equipment and support was necessary
to increase network availability and to utilize risers that sat empty due to STP blocking. This expense was overcome
to a degree by delivering Layer 3 (routing) to the edge and aggregation layers. Brocade has selected the use of
Transparent Interconnection of Lots of Links (TRILL), an IETF standard, to overcome this problem completely.
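The capacity penalty that STP imposes, and the gain from an active-active fabric such as TRILL, can be sketched with simple arithmetic. This is an illustrative calculation only; the link counts and speeds below are assumptions, not measurements from this brief.

```python
# Usable uplink capacity with STP blocking versus an active-active fabric.
# Link counts and speeds below are illustrative assumptions.

def usable_capacity_gbps(links: int, speed_gbps: float, blocked_fraction: float) -> float:
    """Aggregate bandwidth actually available for forwarding."""
    active_links = links * (1 - blocked_fraction)
    return active_links * speed_gbps

# Classic STP riser pair: one of two links is blocked to prevent loops.
stp_capacity = usable_capacity_gbps(links=2, speed_gbps=10, blocked_fraction=0.5)

# TRILL-style fabric: all links forward traffic via equal-cost multipathing.
fabric_capacity = usable_capacity_gbps(links=2, speed_gbps=10, blocked_fraction=0.0)

print(stp_capacity, fabric_capacity)  # 10.0 20.0
```

With the same physical plant, removing blocking doubles the usable riser capacity, which is the basis for the utilization gains discussed in the next section.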
8. DATA CENTER TECHNICAL BRIEF
Migrating Your Data Center to Become Energy Efficient 8 of 23
In addition, STP had several effects on single-server access to the network: port blocking, physical port
oversubscription at the access, edge, and aggregation layers, slower convergence, and wasted CPU wait cycles.
STP also lacked deterministic treatment of network payload to the core. This architecture also required
separate network and engineering resources to accommodate real-time traffic models.
The networking bottleneck issues were made worse by the additional backend switched storage tier that
connected the server layer with mounted disk arrays and network storage. This tier supported substantial traffic
flows in the data center between network and storage servers. Also, more efficient fabric technologies are
increasingly being implemented to accommodate new trends in data transmission behavior. The data center
traffic model has evolved from a mostly north-south model to an 80/20 east-west traffic model,
which means that 80% of server traffic can be attributed to server-to-server application traffic flows.
Note: Information about the evolution to an east-to-west traffic model comes from this Gartner document: “Use Top-of-Rack Switching
for I/O Virtualization and Convergence; the 80/20 Benefits Rule Applies”.
Modern data center architecture: Ethernet fabrics
Brocade has designed a data center fabric architecture that resolves many of the problems of the legacy
architecture. The Brocade Ethernet fabric architecture eliminates STP, which enables all server access ports to
operate at the access layer and enables all fabric uplinks to the core switching platform to remain active. The
Ethernet fabric architecture allows for a two-tier design that improves access at the network edge from
a 50% blocking model at n × 1 Gigabit Ethernet (GbE) to a 1:1 access model at n × 1 GbE or n × 10 GbE.
Risers from the edge can now increase their utilization from the typical 6–15% use of interconnects to much
greater rates of utilization (50 to even >90%). When utilization is increased, end user to application wait times
are reduced or eliminated. This architecture enables Ethernet connectivity and Fibre Channel over Ethernet
(FCoE) storage access by applications, thus collapsing the backend storage network into the Ethernet fabric.
The Ethernet fabric data center switching architecture eliminates unneeded duplication and enables all ports to
pass traffic to the data center switching platforms. Ports can pass traffic northbound to the core or east-west
bound to the storage layer. The combination of the virtual network layer delivered by the fabric and the virtual
server layer in the application computing layer delivers a highly utilized, highly scaled solution that decreases
complexity, capital outlays, and operational costs.
Figure 5. Efficient Ethernet Fabric Architecture for the Data Center. (Ethernet fabric architecture topologies are
optimized for east-west traffic patterns and virtualized applications. These architectures are efficient because
all links in the fabric are active with Layer 1/2/3 multipathing. These architectures are scalable because they
are flat to the edge. Customers receive the benefit of converged Ethernet and storage. The architectures create
simplicity because the entire fabric behaves like a logical switch.)
Choose best practices or state-of-the-art
If you adopt best practices, you could achieve your energy usage reduction goals with a realistic initial investment.
If you adopt state-of-the-art technologies, the initial investment to achieve the desired results may be much higher.
In 2006, the Green Grid examined the potential of these energy-saving strategies. They estimated that by using best
practices or state-of-the-art technology, overall consumption would measurably drop across all commercial
sectors. Historically, energy costs increase by 2-3% per year and energy use increases 2% per year, so quantifiable
action must be taken. The study showed that better operational procedures and policies,
state-of-the-art technology, and industry best practices all contribute to an overall drop in consumption.
Note: This information about PUE estimation and calculation comes from the Green Grid document at this link:
http://www.thegreengrid.org/~/media/WhitePapers/WP49-PUE%20A%20Comprehensive%20Examination%20of%20the%20Metric_v6.pdf?lang=en
Figure 6. Using best practices and state-of-the-art technology controls consumption. (Best practices, such as
HAC/CAC, high-efficiency CRAH units, low-consumption servers with high-density cores in CPUs make 1.6 PUEs
achievable. Solid State Drives on servers and storage arrays could provide a lower return on investment
depending upon current acquisition cost. Agencies should carefully weigh the benefits of some state-of-the-art
options. For more information about best practices and state-of-the-art technology, see www.thegreengrid.org.)
Notes:
Hot Aisle Containment (HAC). Cold Aisle Containment (CAC).
The terms Computer Room Air Handler (CRAH) unit, Computer Room Air Conditioning (CRAC) unit, and Air-Handling Unit (AHU) are used
interchangeably.
For more information about PUE estimation and calculation, search for Green Grid White Paper #49 at this site:
http://www.thegreengrid.org/en/library-and-tools.aspx?category=All&range=Entire%20Archive&type=White%20Paper&lang=en&paging=All
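As a quick reference, the PUE metric central to these comparisons is simply the ratio of total facility energy to IT equipment energy. A minimal sketch follows; the kWh values are placeholders for illustration, not figures from this document.

```python
# PUE per the Green Grid definition: total facility energy / IT equipment energy.
# The energy figures below are placeholders for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(284_000, 100_000))  # 2.84 -- typical of a circa-2006 facility
print(pue(160_000, 100_000))  # 1.6  -- achievable with best practices
```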
Best practices
With respect to enterprise application delivery, any single strategy for lowering or flattening the consumption
curve risks becoming outdated while demand continues to climb. Many options are available, with varying results. Many
data center experts recommend that a CAC system be utilized to provide efficient cooling to the server farm
and network gear within the racks of a data center. On the other hand, an HAC (Hot Aisle Containment) system
typically achieves a 15% higher efficiency level with a cooler ambient environment for data center workers. Hot
air return plenums direct the hot air back through the cooling system, which could include ground cooling, chilled
water systems, or systems that use refrigerant.
Note: This information about a HAC system comes from this article from Schneider Electric: “Impact of Hot and Cold Aisle
Containment on Data Center Temperature and Efficiency R2”, J.Niemann, K. Brown, V. Avelar
Examples of best practices:
• High-efficiency Computer Room Air Handler. If you upgrade the fan system of a CRAH unit to one with a
variable-speed fan, energy costs can be reduced by 16% to 27% under otherwise identical conditions.
• Use mid- to high-density Virtual Machines (VMs). Per studies offered by the Green Grid, the optimal power
supply load level is typically in the mid-range of its performance curve: around 40% to 60%. Typical server
utilization shows highly inefficient use of power at low levels (below 30% load), and slightly less efficiency
when operating at high capacity loads as a result of using high-density VMs (above 60% load).
• Higher-performing, lower-consumption servers. Current server technology includes many efficiency features,
such as large drives that require less consumption, solid-state technology, energy-efficient CPUs, high-speed
internal buses, and shared power units with high-efficiency power supplies under load.
• Higher-performing, lower-consumption network gear. With the full-scale adoption of 10 GbE and 40 GbE edge
interconnects and n x 100 GbE switching from the edge to the core, network fabrics in data centers are
poised to unlock the bottleneck that previously existed in the server farm.
• Low-tech solutions. Install blank plates to maximize the control of air flow.
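The savings from variable-speed CRAH fans in the first bullet follow from the fan affinity laws, a standard HVAC rule of thumb (not a figure from this brief): fan power scales roughly with the cube of fan speed, so modest speed reductions yield double-digit energy reductions.

```python
# Fan affinity law sketch: power varies roughly as the cube of fan speed.
# This is a standard engineering approximation, not a measured result.

def fan_power_fraction(speed_fraction: float) -> float:
    return speed_fraction ** 3

# A fan at 90% speed draws roughly 73% of full power; at 80%, roughly 51%.
print(round(fan_power_fraction(0.9), 3))  # 0.729
print(round(fan_power_fraction(0.8), 3))  # 0.512
```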
Examples of state-of-the-art:
• High-speed and more efficient storage links. Brocade Generation 5 Fibre Channel, at rates such as 16 Gbps, is a
current performance-leading solution in the industry.
• Semiconductor manufacturing processes. In 2006, the typical data center was outfitted with devices that
utilized 130 nm, 90 nm, or 65 nm technology at best. The semiconductor chips that were embedded within
the Common Off the Shelf (COTS) systems, such as switches or servers, required more power to operate.
Now that 45 nm and 32 nm chipsets have been introduced into the manufacturing process, a lower energy
footprint can be achieved by adopting state-of-the-art servers, CPUs, and network and storage equipment.
With the advent of 22 nm (2012), the servers of the future will operate with even lower footprint CPUs and
interface circuits. Intel estimates that they can achieve performance gains at a consistent load at
half the power of 32 nm chipsets. (“Intel’s Revolutionary 22 nm Transistor Technology,” M. Bohr and K. Mistry)
• Solid State Drives (SSDs) for servers. Whether the choice is to use a high–end, enterprise-class Hard Disk
Drive (HDD), or the latest, best-performing SSD, the challenge is to achieve a balance between performance,
consumption, and density.
BEST PRACTICES TO MANAGE DATA CENTER ENERGY CONSUMPTION
Hot Aisle Containment
Hot Aisle Containment (HAC) ensures that the cool air passes through the front of the IT equipment rack from a
cooling source that consists of ambient air at lower temperatures or cool air through perforated tiles at the face
of the cabinet or rack.
The air is forced through the IT gear as it is pulled across the face of the equipment by the internal fans. The
fans direct the air across the motherboard and internal components, and the air exits the rear of the rack to
a contained area between the aisles. This area captures the higher-temperature air and directs it to a plenum
return. The net result is that the ambient air in the data center can be kept at a higher temperature and the
thermostat on the CRAH unit can be kept at a level that ensures the unit is not forced on. The key
to successful implementation of HAC is to select IT components that can withstand higher temperatures,
thereby saving energy, while continuing to operate normally.
Figure 7. Hot Aisle Containment. (HAC that complies with OSHA Standards can reduce PUE by reducing chiller
consumption, for example, via increased cold water supply. The room can still be maintained at 75 degrees
Fahrenheit and the hot aisle could be up to 100 degrees. In some instances, the heat can rise to 117 degrees F.)
Cold Aisle Containment
Another method of air management uses Cold Aisle Containment (CAC). In CAC, cold air is brought into the room
through the perforated floor tiles across the air exhaust side of the rack and mixed with ambient air, which is
pulled through the chillers that are mounted above each rack. In this implementation, the cold air is contained
in between the rack aisles and it is pulled through the front of the IT equipment and run across the internal
components.
The air exits the rear of the racked equipment. The cold aisle itself is enclosed by doors or heavy plastic curtains
to prevent the cold air from escaping to the sides of the aisle. The air exiting the rear of the IT rack intermixes
with the ambient air and the chilled air coming up from the floor tiles, keeping the room temperature within
OSHA standards. The ambient room temperature may be kept at temperatures up to 79 degrees Wet-Bulb Globe
Temperature (WBGT) (26 degrees Celsius). The chillers turn on more often within the room as a result of the higher
temperature ambient air, and the PUE of the data center profile is raised about 15% higher than that of the HAC.
Figure 8. Cold Aisle Containment. (A WBGT index of greater than 26 degrees Celsius (79 degrees F) is
considered a “hot environment.” If the WBGT measures less than 26 degrees Celsius everywhere and at all
times, then the workplace is relatively safe for most workers. The room may still be maintained at 75–79
degrees F and the cold aisle would be as low as 64–65 degrees F.)
Increasing the ambient temperature of the data center
Recent studies suggest that if the ambient temperature of the data center is allowed to increase somewhat
because the cooling is not so intense, additional savings can be achieved. The network and server equipment
that is selected must meet the specifications of running normally under more extreme conditions.
Note: This information comes from the Intel document “How High Temperature Data Centers and Intel Technologies Decrease Operating
Costs” at this link: http://www.intel.com/content/www/us/en/data-center-efficiency/data-center-efficiency-gitong-case-study.html
Virtualization of the data center
Many benefits occur when you introduce virtual servers into your application data center and software service
delivery strategy. For government purposes, applications typically fall into one of two categories: mission
(real-time or near real-time) and services (non-real-time). For this discussion, it is recommended that the platform
migration target non-real-time applications, such as Exchange®, SharePoint®, or other applications that do not
require real-time performance.
Additionally, many mission-oriented applications, for example unified communications, are not supported
on virtual platforms in a high-density virtual server environment by the OEM or their subcontracted software
suppliers. Much of this has more to do with the testing phases their products go through to achieve general
availability.
When you move non-real-time services to a virtual environment, you can achieve benefits like these:
• Lower power and distribution load
• Smaller Uninterruptible Power Supplies (UPSs)
• Faster OS and application patching
• Controlled upgrade processes
• More efficient Information Assurance (IA) activity
• Optimized server utilization
• Reduced weight (as well as lower shipping costs)
• Increased mobility
• Lower ongoing operational and maintenance costs
Based upon the enterprise mission and the expected performance of an application, the typical compute
ratio for virtual servers can be increased from 1:1 to 14:1. The U.S. Department of Energy uses a 29:1 ratio for
non-mission applications. Brocade has determined that a density of 14 virtual servers per physical server is
a reasonable number to use to illustrate the energy efficiency of virtual servers. The platform used to
demonstrate these savings was also limited by the number of virtual media access control (MAC) addresses
that the system could support (14) in each blade center chassis and its embedded compute blades.
Figure 9. Virtual servers and CPU cores. (The server can be virtualized with specific processor assignment to
address application performance, which enables real-time and non-real-time applications for virtualization. In
this example, 14 servers were modeled to 2 quad-core CPUs. Three servers consume 100% of a single core and
other applications receive a partial core. Though 14 nm technology promises higher density cores to CPU in the
near future, this document explores only what was currently available at a competitive cost.)
In the illustrated examples, Brocade determined that the compute space required for 196 virtual servers
running on 14 separate physical blades would reduce the typical footprint of 5 full racks to between 7 and
9 Rack Units (RUs) of space. A typical server runs at less than 20% utilization for most of the 168 hours in a
week. Many factors reduce speed, such as the network access speed of the system interface and the buses
between CPU cores. Also, many dual- or quad-CPU systems have cores connected on the same die, which
eliminates bus delay. Most blade center systems allow the administrator to view all the CPUs in the system and
assign CPU cycles to applications to gain the desired work density. When you assign the work cycles for the CPU
to applications, some applications can be given a higher computational share of the processor pool while others
have only reduced access to CPU cycles. One application may require the assignment of only one half of a single
processor core embedded within the blade, while another may require 1.5 times the number of cycles that a
3.2 GHz core would provide. As a result, the application is assigned one and a half processor cores from the pool.
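The fractional core assignment described above can be sketched as a simple bookkeeping exercise. The application names and core shares here are hypothetical, chosen only to mirror the half-core and 1.5-core examples in the text.

```python
# Assigning fractional CPU cores from a blade's processor pool to applications.
# App names and core shares are hypothetical illustrations.

TOTAL_CORES = 8  # e.g., 2 quad-core CPUs on one blade

def allocate(core_shares: dict, total_cores: float) -> float:
    """Validate the assignments and return the cores left in the pool."""
    requested = sum(core_shares.values())
    if requested > total_cores:
        raise ValueError(f"pool oversubscribed: {requested} > {total_cores}")
    return total_cores - requested

apps = {
    "light-service": 0.5,   # half of a single core
    "heavy-service": 1.5,   # 1.5x what one 3.2 GHz core provides
}

print(allocate(apps, TOTAL_CORES))  # 6.0 cores remain for other workloads
```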
Note: For standard 19” racks, the holes on the mounting flange are grouped in threes. This three-hole group is defined as a Rack Unit
(RU) or sometimes a “U”. 1U occupies 1.75” (44.45 mm) of vertical space.
Enabling the virtual server solves a part of the overall problem. Additional steps to increase the workload to
the server require that you eliminate the bottleneck to the server. In the circa 2006 data center, the workload
is limited because one of the two ports is blocked and in standby mode. The server configuration, used in this
example, is also running a single Gigabit Ethernet port. To alleviate this bottleneck, increase the network
interface speed of the server from n × 1 GbE to n × 10 GbE. The port speed increases tenfold, and the port
blocking introduced by STP is eliminated by the use of a network fabric. With the interface speed increased to
10 GbE and all network ports active, the server has up to 20 gigabits of bandwidth.
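The per-server bandwidth uplift described in this paragraph works out as follows, using the port counts given in the text:

```python
# Per-server bandwidth before and after the migration: circa 2006 used
# 2 x 1 GbE ports with one blocked by STP; the fabric uses 2 x 10 GbE, both active.

def server_bandwidth_gbps(ports: int, speed_gbps: float, blocked_ports: int) -> float:
    return (ports - blocked_ports) * speed_gbps

before = server_bandwidth_gbps(ports=2, speed_gbps=1, blocked_ports=1)
after = server_bandwidth_gbps(ports=2, speed_gbps=10, blocked_ports=0)

print(before, after, after / before)  # 1 Gbps -> 20 Gbps, a 20x increase
```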
When the virtual server density is increased, the physical server density is decreased. Due to the savings of
the physical hardware and the eventual energy savings of the high-density platform, it is recommended that the
original application servers be turned off after the application is moved. As a result, the power consumption
of the original 196 servers that occupy several racks is reduced from an average consumption of 200 kWh
to between 9 kWh (a Brocade Fabric) and 14 kWh (competing fabric). (The range of consumption on the new
solution depends on the workload of the system and the choice of network fabric solution.)
Examples of application and server migration and their savings potential
Figure 10. Savings Potential of Server Migration. (This figure shows that five racks of servers and top of rack
switches can be condensed to a footprint that is less than one third of a 42 RU rack. The 196 servers deployed
in the circa 2006 footprint of five racks consume 206 kWh of energy, using the measured server consumption
under load rather than the rated power of the unit itself. The same model was used to estimate the
consumption of the IT gear in the proposed virtualized deployment, estimated to be between 8 and 14 kWh.)
Migrating 5 full racks to 1/5th of a rack
In the original implementation, five full racks of IT computing and networking equipment are shown. The original
circa 2006 equipment has 2 × 1 GbE network links per server, typically to 2 Top-of-Rack (ToR) units such as Cisco
Catalyst 3750E-48 TD GbE switches connected via 10 GbE risers to two Cisco Catalyst 6509 switches running
4 × 16 port 10 GbE cards with 2 × 6000 WAC power supply units. The network consumption for these units is
estimated as ~19 kWh. The CPU density of the rack included 2 CPUs per server, with an average of 39 to 40
servers per rack. The rack also includes 4 × 10 GbE risers to the aggregation layer network per rack (200 Gbps
total with 100 Gbps blocked due to STP). This original configuration represents a 4-to-1 bottleneck, or an average
of only 500 Mbps of bandwidth per server, without excessive over-engineering of VLANs or using Layer 3 protocols.
The IBM 2-port Model 5437 10 GbE Converged Network Adaptor (OEM by Brocade) is embedded within the blade
center chassis for FCoE capability. The Converged Network Adaptor enables the Ethernet fabric and direct
access to the storage area network, which flattens the architecture further and eliminates a tier within the
standard data center.
Note: A Top-of-Rack (ToR) switch is a small port count switch that sits at or near the top of a Telco rack in data centers or co-location
facilities.
The EIA reports that the U.S. average cost of electricity when this solution was implemented was 9.46 cents per
kWh. The annual cost in circa 2006 U.S. dollars (USD) for electricity to operate this solution is approximately
$171,282. The average consumption per server is estimated at 275W per unit, and the two network ToR
switches are estimated to draw 161W each when all cables are plugged in with 10 GbE risers in operation.
For this solution, the annual computing platform energy cost is $44,667, the network energy cost is $15,644,
and the facility and infrastructure costs to support it are $56,039/yr. The HVAC cost to cool this solution is
estimated to be $77,872. These numbers are consistent with a facility that has a PUE of 2.84.
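These annual figures can be reconstructed, to within the source's rounding, from the per-unit draws quoted earlier: annual cost = power draw (kW) × 8,760 hours × rate ($/kWh), with the facility total scaled by PUE. The 19 kW network draw is taken from the text; small differences from the quoted totals reflect rounding in the source.

```python
# Reconstructing the circa-2006 annual energy cost for the 5-rack solution.
# Inputs are the figures quoted in the text; minor rounding differences remain.

RATE_2006 = 0.0946     # EIA U.S. average, $/kWh
HOURS_PER_YEAR = 8760
PUE = 2.84

server_kw = 196 * 0.275   # 196 servers at ~275 W each = ~53.9 kW
network_kw = 19           # ToR plus aggregation switching, per the text

server_cost = server_kw * HOURS_PER_YEAR * RATE_2006
network_cost = network_kw * HOURS_PER_YEAR * RATE_2006
total_cost = (server_cost + network_cost) * PUE

print(round(server_cost))  # 44667 -- matches the $44,667 quoted above
print(round(total_cost))   # ~171570 -- within ~0.2% of the quoted $171,282
```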
Using available technology from Brocade, standard IT blade center technology, and the implementation of virtual
servers, the associated footprint is reduced to approximately one-fifth of a rack. This new solution would include
14 physical blade servers within a chassis solution with 2 × 10 GbE links per server, 196 virtual servers using
112 CPU cores (2 × quad core CPUs per blade), and 280 Gbps of direct network fabric access. The average
bandwidth that is accessible by the virtual server application is between 1 GbE to 20 GbE of network bandwidth
(2:1 to 40:1 increase over the aforementioned circa 2006 solution). The network fabric solution from Brocade
utilizes 28% less energy (cost) than the competing solution today. Using a CAC model of 1.98 PUE, or a HAC
model of 1.68, the total cost of this solution including the blade center is approximately $9,930/year HAC to
$11,714/year CAC. If this solution is delivered in a state-of-the-art container, a PUE of 1.15 or lower would
cost approximately $6,803/year.
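The three cost figures above differ only by PUE, so each can be derived from the others by simple proportion. Backing the IT-only cost out of the HAC figure (an assumption of this sketch) reproduces the other two to within the source's rounding:

```python
# Annual cost of the migrated solution under different containment strategies.
# The IT-only cost is backed out from the quoted HAC figure ($9,930/yr at PUE 1.68).

it_only_cost = 9930 / 1.68  # ~$5,911/yr for the IT load alone

for label, pue in [("HAC", 1.68), ("CAC", 1.98), ("container", 1.15)]:
    print(label, round(it_only_cost * pue))
# HAC 9930; CAC ~11703 (text: $11,714); container ~6797 (text: $6,803)
```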
Note: The information about the solution that achieves a PUE of 1.15 is from this Google document:
Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html
Figure 11. Migration of a small-medium data center with 18 racks of IT into a 42 U rack. (Using the same
methodology as in Figure 10, the consumption of the 18 racks of servers in the 2006 architecture is estimated
to be approximately 650 kWh. The replacement architecture is expected to consume between 15 kWh and
27 kWh while increasing the performance from the access to the core of the network.)
Migrating 18 full racks to a single rack
In this migration model, it was determined that up to 18 full racks of circa 2006 computing and network
data center equipment would be migrated into less than a single rack. The network and blade center slots
remain available for nominal expansion. Here, we are moving the capability of 700 physical servers and the
corresponding 1,400 network interfaces connected to the access layer to a footprint that takes up only one
full rack of space.
The original 18 Full Rack deployment had 2 x Cisco Catalyst 3750E-48 TD switches at the top of each rack,
aggregating risers to 2 x Cisco Catalyst 6509 switches with 7 x 16-port 10 GbE modules using 2 x 6000 WAC
power supply units. The 18 racks also housed 700 physical servers, an average of 39 servers per physical rack.
The resulting IT consumption load of the original servers was 192 kWh. The resulting load of the top of rack,
aggregation, and core switches was 36 kWh. The infrastructure load of the facility (76 kWh), including HVAC
(245 kWh) was 421 kWh with an annualized cost of $539,044 per year. A PUE of 2.84 was used to reflect the
typical PUE achieved in the 2006 time period. In 2006, the national average cost per kWh was 9.46 cents,
which is reflected in this calculation.
The migration platform used for this comparison was a rack with four typical blade center chassis with up to
14 blades per system, each equipped with 2 x 3.2 GHz quad-core CPUs. The compute platform consumption
in the circa 2006 solution drew 192 kWh, versus new solution consumption of 10.24 kWh. The ToR switches
of the original data center are replaced with a pair of Brocade VDX® 6720-60 Switches (which are ToR fabric
switches) that are connected to a pair of Brocade VDX 8770 Switches (which are fabric chassis-based switches)
that are connected to a Brocade MLXe Core Router. This upgrade reduced network consumption from 36.54
kWh to 4.826 kWh, which reduced annual energy costs from $30,280 in 2006 to $4,362 at 2012 rates.
By migrating the physical servers and legacy network products to a currently available virtualized environment,
the annual costs are reduced to between $22,882 (HAC model) to $26,968 (CAC model) when calculated at the
EIA 2012 commercial cost of 10.32 cents per kWh. When use of the Brocade Data Center Fabric is compared to
the circa 2006 solution, the savings are approximately 32 kWh, or ~$26,000 per year (USD). The total savings in energy
alone for the virtual network and virtual computing platform is greater than $500,000 per year. When comparing
the Brocade fabric solution to currently available competing products, the savings in the network costs is 27%.
What is also significant is that the 18 full racks would be reduced to a single-rack footprint. This scenario would
collapse an entire small data center into a single rack row in a new location. In order to gain the economy of the
HAC or CAC model, this IT computing and network rack would benefit from being migrated into a larger data
center setting where several other racks of gear are already in place and operating in a CAC/HAC environment.
Figure 12. Migration of a medium-large data center with 88 racks of IT into 5 x 42 U racks. (Using the same
methodology as in Figures 10 and 11, the consumption of the 88 racks of servers in the circa 2006
solution is estimated to be approximately 2.9M kWh, while its replacement architecture is expected to consume
between 65 and 112 kWh while providing increases in performance from the access to the core of the network.)
Migrating 88 full racks to 5 racks
A migration from a medium-sized data center solution of 11 rack rows with 8 racks per row would yield similar
results, but on a greater scale. To be effective in a HAC or CAC environment, two data centers of this size would
need to be migrated into a single site just to create the two rack rows required for a CAC/HAC environment.
The resulting 5-rack density is better suited to a migration to a containerized solution, where PUEs have been as
low as 1.11 to 1.2. In this scenario, the original 88 racks of IT would have 1 MWh in direct consumption,
including network and computing. At a PUE of 2.94, the facility would consume 2.9 MWh in total, which results
in an annual cost of $2.4M per year using the EIA 2006 rate of 9.46 cents per kWh.
After the 88 racks are migrated to a properly engineered HAC, CAC, or containerized facility, the energy cost
of the solution would be between $66,275 USD to $112,386 USD per year, at 2012 commercial rates. This
migration scenario would result in nearly $2.3M per year in cost savings due to energy reduction.
Note: Using the methodology shown in Figures 10-12, data centers with 350 and 875 racks are consolidated into 20- and 50-rack
footprints respectively. The results are included as part of the 20-site consolidation and migration, which is discussed later in
this document.
Modeling the data center consolidation example
To model an agency- or department-level data center migration, Brocade estimated the size of the data center
in terms of space, and the size of the customer base. Brocade also estimated the IT equipment that was
needed to service the disparately located applications across the agency. To do this, Brocade reviewed studies
that offered a reasonable estimate of servers in use per desktop, minimum room size estimates for various
sized data centers, as well as population densities of the agency being modeled.
Number and minimum size per site: Brocade has estimated that a very large enterprise would have up to
20 sites with varying levels and density of application computing deployed. In the migration model, Brocade
used a sample of 12 sites with 5 full racks of data center application IT equipment (60 racks total), 4 sites
with 18 full racks (72 racks total), 2 sites with 88 racks (176 racks total), a single large site with up to 350
racks, and a single site with 875 racks. These sites contain a total of 1,533 racks that occupy up to
18,000 square feet of space.
Number of desktops: Brocade used the example of a military department that consisted of 335,000 active
duty personnel, 185,000 civilian personnel, 72,000 reserve personnel, 230,000 contractor personnel, which
totaled approximately 822,000 end users. Brocade used this population total and a 1:1 desktop-to-user ratio to
derive a raw estimate for server counts. A study performed by the Census Bureau estimates that there are
variances by verticals, such as education, finance, healthcare, utilities, transportation, as well as services. With
approximately 822,000 end users, approximately 41,100 servers would support primary data center operations.
However, purpose-built military applications and programs may push this figure even higher. The migration
example used accounts for 67% mission servers (41,100), 17% growth servers (10,275), 8% mirrored critical
applications servers (4,889 secondary servers), and 8% disaster recovery servers (4,888).
Server to desktop ratio: The ratio to determine how many servers exist per desktop computer depends upon
the vertical being studied. The U.S. Census Bureau estimates that the government desktop-to-employee ratio is
1:1.48 (one desktop per 1.48 employees). The net of the statistics in the study is that there is approximately
1 server for every 20 desktops. Among the 4.7 million non-retail firms responding to the study (out of 6 million
total), there were 43 million desktops and 2.1 million servers to support operations. This count results in a
20:1 desktop-to-server ratio.
Note: The Census Bureau determined that the average ratio of PCs per server was approximately 20:1.
With approximately 822,000 end users, and factoring in primary applications (67%), growth (17%), mirrored
backup (8%), and Disaster Recovery and Continuity of Operations (DR/CooP) (8%), the virtualized server
environment was estimated at 61,000 servers.
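The sizing above can be reproduced from the stated inputs: 822,000 end users at one desktop each, a 20:1 desktop-to-server ratio, and mission servers taken as 67% of the total estate. The computed total differs slightly from the text's ~61,000 because of rounding in the source.

```python
# Back-of-envelope server sizing for the modeled agency, from the stated inputs.

ACTIVE_DUTY, CIVILIAN, RESERVE, CONTRACTOR = 335_000, 185_000, 72_000, 230_000
end_users = ACTIVE_DUTY + CIVILIAN + RESERVE + CONTRACTOR  # 822,000 desktops at 1:1

DESKTOPS_PER_SERVER = 20
mission_servers = end_users // DESKTOPS_PER_SERVER  # 41,100 primary servers

MISSION_SHARE = 0.67  # growth 17%, mirrored 8%, DR/CooP 8% make up the rest
total_estate = round(mission_servers / MISSION_SHARE)

print(end_users, mission_servers, total_estate)  # 822000 41100 61343
```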
Consolidation and migration of 20 sites to a primary site with backup
Figure 13 depicts the 20 sites that would be migrated to a single primary data center. The originating sites
would account for approximately 60,000 square feet of space. The Post Migration Footprint (PMF) of 87 racks
would occupy 2 condensed spaces at up to 45’ by 22’ each, or approximately 8000 square feet per primary and
backup center. These numbers represent a 15:1 reduction.
To reduce risk, Brocade recommends that you model the migration process at a test site, using a low-risk
fielding approach. Brocade recommends creating a consolidated test bed that couples a virtual server
environment with the virtual network fabric architecture. The test bed would enable an
organization to begin modeling the application migration of selected non-real-time applications from their current
physical servers to the virtual server environment. This test bed could function as the primary gating process for
applications, with respect to fit, cost, performance, and overall feasibility.
The goal of the test bed would be to organize a methodology for deploying, upgrading, and patching specific
application types to work out any issues prior to their implementation in the target data center environment. As
applications are tested and approved for deployment, the condensed data center could be constructed, or the
target site could be retrofitted with the applicable technology to support a lower energy consumption posture.
The first step to migration would be to test the applications that reside at the 12 small sites (#1 in the Data
Center Consolidation Diagram), which average 5 racks of servers per site. Once a repeatable migration
procedure is established, continue migrating the applications at the remaining sites to the target
data center.
Figure 13. Migration of 20 sites to one modern data center configuration. (This example shows the difference in
the pre and post migration footprints. The 2006 solution uses 1,533 racks, which are transferred to a footprint
of 87 racks. A secondary data center that mirrors the functions of the first is also shown.)
VIRTUALIZATION SAVINGS
The energy reduction and the resulting consumption savings are enormous, even when valued at 2006 costs
in USD. The combined energy use of the 12 small sites was reduced from 206 kWh to less than 9.9 kWh. The
88 full-rack sites consumed 2.4 MWh, which was reduced to less than 120 kWh. The 4 sites with 18 full racks per
site were reduced from 650 kWh each to less than 22 kWh per site, for a total of 2.6 MWh that dropped to 88 kWh.
The two 88-rack sites, which consumed 2.9 MWh per site, were reduced to a little over 95 kWh each, for
a change of ~5.8 MWh to <200 kWh. Energy consumption for the site with 350 racks was reduced from
11.3 MWh to about 370 kWh, and energy consumption for the site with 875 racks was reduced from 28 MWh
to approximately 863 kWh.
Table 2. The data center modernization from the circa 2006 solution to the new model results in the
following reductions:

Category             Previous             New                       Reduction Ratio
Total Consumption    50 MWh               1.63 MWh (HAC)            30:1
Total Rackspace      1,533 ToR switches   87 ToR fabric switches    17:1
Total Footprint      60,000 sq. ft.       4,000 sq. ft.             15:1
Server Footprint     60,000 Physical      4,350 Physical Servers    14:1
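The reduction ratios in Table 2 can be checked directly from the Previous and New columns; note that the table rounds to whole ratios (for instance, the server-footprint row computes to roughly 13.8:1 and is quoted as 14:1):

```python
# Verify the Table 2 reduction ratios from the raw Previous/New values.
def reduction_ratio(previous, new):
    """Express a before/after pair as an approximate N:1 reduction."""
    return previous / new

rows = [
    ("Total Consumption (MWh)", 50, 1.63),        # ~30.7:1, quoted 30:1
    ("Total Rackspace (ToR units)", 1_533, 87),   # ~17.6:1, quoted 17:1
    ("Total Footprint (sq. ft.)", 60_000, 4_000), # 15.0:1
    ("Server Footprint (servers)", 60_000, 4_350),# ~13.8:1, quoted 14:1
]
for name, prev, new in rows:
    print(f"{name}: {reduction_ratio(prev, new):.1f}:1")
```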
Virtualization achieved a 14:1 virtual-server-to-physical-server reduction. This blended ratio combines
aggressive virtualization of the non-real-time applications, at ratios as high as 29:1, with mission
applications that remain at a 1:1 ratio.
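A blended consolidation ratio like the 14:1 above emerges from mixing workload classes. The sketch below uses hypothetical server counts (not figures from this document) with the two extreme ratios named here, 1:1 for mission applications and 29:1 for services:

```python
import math

def hosts_needed(workloads):
    """workloads: (server_count, consolidation_ratio) pairs.
    Each class needs ceil(count / ratio) physical hosts."""
    return sum(math.ceil(count / ratio) for count, ratio in workloads)

# Hypothetical mix: 100 mission servers at 1:1, 2,900 service servers at 29:1.
mission, services = 100, 2_900
hosts = hosts_needed([(mission, 1), (services, 29)])  # 100 + 100 = 200 hosts
blended = (mission + services) / hosts                # 15:1 blended ratio
print(f"{blended:.0f}:1 blended consolidation")
```

The blended ratio depends heavily on the mission-to-services mix, which is why the document adopts a single averaged figure rather than a per-site calculation.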
Virtualizing the servers and their physical connectivity requirements, eliminating unnecessary protocols,
and deploying higher-density hardware at lower consumption rates allowed the network to gain a 10:1 increase
in access speed, with the available bandwidth shared by all applications resident on the blade.
Advancement in network technology for ToR and end-of-row solutions enabled the fabric to provide for an
upgrade to the core, from 10 GbE to 40 GbE and eventually 100 GbE. The circa 2006 aggregation layer was
eliminated in its entirety and replaced by a data center fabric from the access to the core. The circa 2006 core
routing and switching platform was upgraded from n x 10 GbE to n x 100 GbE.
Consolidation reduced a once-acceptable 8:1 network access oversubscription rate to an industry-leading
1:1 ratio. Virtual services necessitate that the bottleneck in the data center be eliminated. The virtual server
and network fabric consolidation with 10 GbE at the edge reduced consumption from an average of 147W per
10 G interface to ~5.5W in this migration model.
Figure 14. Migration of the circa 2006 data center network architecture elements. (The 20 sites that have been
migrated to the Brocade fabric solution include 1,533 ToR switches, 104 aggregation layer switches, and 20
core switches. The total consumption of these network switches was based upon the actual draw under load, not
the regulatory-rated power of the unit. The result is that this architecture consumes 1.317 MWh in direct draw.
Considering the typical PUE of 2.84 for this period of deployment, the resulting total consumption is 3.7 MWh.)
Figure 15. Architecture elements of the resulting Brocade Data Center Fabric. (The resulting migration uses one
fabric with two Brocade VDX 8770 Switches, 87 Brocade VDX ToR fabric switches, and two Brocade MLXe-16
Routers at the core. This network architecture consumes 90 kWh of electricity, which factors in an achievable
PUE of 1.68. This example assumes the use of an HAC model.)
NETWORK SAVINGS
In terms of scale, the reduction in the network footprint is just as significant. The 1,533 ToR switches
were reduced to 87 ToR units (17:1), and the bandwidth available to each application increased. The 104
aggregation switches of the circa 2006 network were converted to end-of-row Brocade VDX 8770 fabric switches.
In addition, the elimination of the 20 sites allowed 20 underperforming core units to be consolidated into
4 easily scalable and expandable core switches (Brocade MLX®).
Network energy consumption was reduced from 3.7 MWh (including the circa 2006 PUE of 2.84) to 90 kWh at a PUE
of 1.68. This reduction of roughly 40:1 coincides with an order-of-magnitude increase in network capacity.
The network layer of the new virtualized data center effectively experiences a higher level of traffic density and link
utilization. Depending upon the initial application physical connectivity, the new fabric allows for application access
from the server to the user to be increased from 1, 2, or 4 active interface connections to 2, 4, or 8 connections,
all active.
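The network-energy figures above follow the standard PUE relation, total facility energy = IT-equipment energy x PUE. A quick check against the numbers quoted in this section and in the figure captions:

```python
# PUE check: total facility draw = direct IT draw x PUE.
def facility_kwh(it_kwh, pue):
    return it_kwh * pue

# Circa 2006 network: 1,317 kWh direct draw at a PUE of 2.84.
legacy_total = facility_kwh(1_317, 2.84)   # ~3,740 kWh, quoted as 3.7 MWh

# New fabric: the quoted 90 kWh already includes the 1.68 PUE, so the
# implied direct IT draw is 90 / 1.68, roughly 54 kWh.
new_it = 90 / 1.68
print(f"legacy total: {legacy_total:,.0f} kWh, new IT draw: {new_it:.1f} kWh")
```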
ASSUMPTIONS ASSOCIATED WITH REDUCED COSTS IN ELECTRICITY
Several reasonable assumptions were made in determining the size and scope of the migration, as well as in
calculating consumption on a per-system basis. Brocade also assumed that the typical servers, network
switches, protocols, and computing density assigned to the legacy (circa 2006) data center were the systems
generally available from 2004-2006, during the design and implementation phases of the migration
candidate sites.
Other assumptions include:
• HAC implementations typically impose a cooling load factor of 39% of the IT load, versus the 2006 ratio of
1.07 watts of cooling per 1 W of IT equipment. (1)
• The new data center facility consumption estimates were based upon a 29% load factor of the IT load vs.
a 77% load factor, which was typical of the circa 2006 legacy data center. (2)
• The UPS models and their typical consumption were assumed to remain constant.
• The compute platforms migrated to virtual servers used a 14:1 ratio. Mission-oriented solutions typically
required a 1:1 application-to-core ratio, while a 29:1 ratio was used for services compute platforms, in
line with U.S. Department of Energy (3) methodologies for one of its labs with high-density virtual
servers. By taking an average of the extremes, 1:1 and 29:1, the 14:1 ratio was adopted.
• The Brocade migration model assumed the use of 8 cores per blade (2 x quad-core CPU at 3.2 GHz), as
well as the limit of 14 virtual MACs per blade, which results in an average of 1 MAC per virtual interface
to the fabric.
• Mission-critical compute applications remain on existing hardware until a CPU core mapping per application
determination is made, or were migrated to a 1:1 core per application ratio.
• Brocade adopted the 2006 PUE of 2.84 (1) versus an achievable 2013 PUE of 1.68 (2). In several studies,
Brocade found that industry generally agreed on a standard PUE of about 2.0 in 2013 and about 3.0 in the
2006 timeframe. Brocade therefore utilized the available breakdown as a viable model for the legacy
data center. (1)
• Brocade determined that not all enterprises would opt for the cost associated with state-of-the-art solutions
that provide for PUEs as low as 1.15. Even though some companies have since experienced even lower PUEs,
for example Microsoft and Google (4), Brocade took a conservative approach to the final data center model.
Notes: Information in this list came from these documents:
1. Emerson Electric: Energy Logic: “Reducing Data Center Energy Consumption by Creating Savings that Cascade Across Systems”.
2. Schneider Electric: “Impact of Hot and Cold Aisle Containment on Data Center Temperature and Efficiency R2”, J.Niemann, K. Brown,
V. Avelar
3. Department of Energy: Leadership in Green IT (Brochure), Department of Energy Laboratories (DOE), S. Grant NREL
4. Google PUE Q2 2013 performance measurement: http://www.google.com/about/datacenters/efficiency/internal/index.html
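The cooling-overhead assumption in the first bullet can be illustrated by comparing total facility load under each cooling factor. The 100 kW IT load below is illustrative only, not a figure from this document:

```python
# Facility load = IT load + cooling load, where the cooling load is a
# fraction (or multiple) of the IT load.
def facility_load_kw(it_kw, cooling_factor):
    return it_kw * (1 + cooling_factor)

IT_KW = 100  # illustrative IT load

legacy = facility_load_kw(IT_KW, 1.07)  # 2006: 1.07 W cooling per 1 W IT
hac = facility_load_kw(IT_KW, 0.39)     # HAC: cooling at 39% of IT load
print(f"legacy: {legacy:.0f} kW, HAC: {hac:.0f} kW")
print(f"cooling-driven reduction: {1 - hac / legacy:.0%}")
```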
SUMMARY
For an enterprise of 822,000 users, the virtualized architecture with the Brocade Data Center Fabric can save
$36 million per year in energy consumption. This huge savings occurs when legacy data centers are migrated to
the modern data center architecture and its backup data center. (Without a backup center, the savings would be
$38 million per year.) Brocade delivers a world-class solution that integrates networking and storage into a
fabric providing access to the virtual servers: all links to the fabric are operational, access links achieve
higher density and utilization, and fabric links and core switching are capable of switching or routing at
wire speed.
Table 3. Costs for individual sites pre- and post-migration.* (The table demonstrates that the circa 2006 data
center model would cost $40M per year using EIA national average kWh costs published in 2006. In the post-
virtualization deployment, coupled with a Brocade fabric network architecture upgrade, the consumption costs
vary between $1.8M (CAC model), $1.5M (HAC model), and $1M (state-of-the-art model), calculated at the 2013
national average costs per EIA.)
                                    Circa 2006 Costs   2013 Costs
Migration Examples                  Original Costs     Hot Aisle Containment   Cold Aisle Containment   State of Art
                                    (PUE 2.84)         (PUE 1.68)              (PUE 1.98)               (PUE 1.15)
12 Sites at 5 Racks to 1/5th        171,295            9,951                   11,726                   6,815
4 Sites at 18 Racks to 1 Rack       2,156,179          91,527                  107,872                  62,653
4 Sites at 88 Racks to 1 Rack       4,840,286          190,715                 224,772                  130,549
1 Site at 350 Racks to 20 Racks     9,418,877          370,423                 436,570                  253,563
4 Sites at 875 Racks to 20 Racks    23,509,779         863,553                 1,017,759                591,123
Totals, Each Option                 40,096,417*        1,526,170               1,798,698                1,044,704
Notes:
The EIA advised that the national average cost per kWh in 2006 was 9.46 cents. This figure would be more than $45M at
2012 rates.
EIA 2010 Energy Use All Sources. http://www.eia.gov/state/seds/seds-data-complete.cfm.
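The cost figures in Table 3 are consistent with treating each consumption value as a continuous average draw billed around the clock for a year at the EIA average rate. This is our reconstruction of the arithmetic, not a formula stated in the document; note the document labels the draws in kWh, but they behave like sustained kW values in this calculation:

```python
# Annual energy cost = average draw (kW) x hours per year x $/kWh.
def annual_energy_cost(avg_draw_kw, dollars_per_kwh, hours_per_year=8760):
    return avg_draw_kw * hours_per_year * dollars_per_kwh

# 12 small sites, circa 2006: ~206 kW combined draw at the 2006
# EIA national average of 9.46 cents per kWh.
cost = annual_energy_cost(206, 0.0946)
print(f"${cost:,.0f}")  # ~$170,700, close to the 171,295 table entry
```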
Compared with competing solutions available today, Brocade solutions can lower network consumption by up to 29%.
Moreover, with industry-leading virtualized solutions, Brocade can reduce legacy network consumption by a factor of
19 to 1. Using this model, a large enterprise could consolidate its legacy IT and legacy network solutions into a far
smaller site and save over $38 million USD in energy consumption each year.