This document discusses challenges for achieving energy efficiency in local and regional data centers. It reviews common metrics used to measure energy efficiency and examines sources of energy loss in data centers. Some key points:
- Standard metrics and guidelines are needed to properly measure and reduce carbon emissions from data centers. Common metrics examine the ratio of data processed to energy consumed.
- Data centers consume large amounts of electricity, around 40 million kWh annually worldwide. Non-critical infrastructure like cooling accounts for around 70% of energy use, while only 30% powers IT equipment.
- Sources of energy loss include inefficient UPS systems, oversized and underutilized equipment, lack of virtualization, and cooling air traveling long distances. Both operational
AN INVESTIGATION OF THE ENERGY CONSUMPTION BY INFORMATION TECHNOLOGY EQUIPMENTS (ijcsit)
With the World Wide Web and the proliferation of servers and PCs, data centers have come to occupy a major position in the overall power consumption of the world. To help prevent global warming and its ensuing disasters, some Internet service providers and hosting providers have already switched to green power. Even household energy suppliers offer green electricity from renewable sources such as wind, solar, biomass, and hydro, which emit no carbon dioxide, as a stand against global warming. Only a global change in information technology can help prevent global warming. The switch to renewable energy is the beginning of our future and must be pursued alongside continued research and development in information and communication technology.
Commercial Overview DC Session 4 Introduction To Energy In The Data Centre (paul_mathews)
The document discusses energy usage in data centres, noting that IT equipment accounts for 40% of energy consumption while cooling and ventilation make up 35%. It also outlines metrics for measuring data centre efficiency, such as PUE and DCIE, and discusses factors that influence energy consumption, including cooling systems, UPS systems, and the external environment. Standards and legislation from bodies such as the EU and US aim to improve data centre energy efficiency and reduce costs and environmental impact.
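Both metrics named above have simple definitions: PUE is total facility power divided by the power reaching IT equipment, and DCIE is the reciprocal of PUE expressed as a percentage. A minimal sketch (the 1000 kW and 400 kW figures are illustrative, not from the document):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power (ideal = 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

def dcie(total_facility_kw: float, it_load_kw: float) -> float:
    """Data Centre Infrastructure Efficiency: the reciprocal of PUE, as a percentage."""
    return 100.0 * it_load_kw / total_facility_kw

# Illustrative: 1000 kW total facility draw, of which 400 kW reaches IT equipment.
print(pue(1000, 400))   # 2.5
print(dcie(1000, 400))  # 40.0
```

A PUE of 2.5 means that for every watt of useful IT work, another 1.5 W goes to cooling, power conversion, and other overhead.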
Building Blocks for Eco-efficient Cloud Computing for Higher Learning Institu... (Editor IJCATR)
Owning and managing a cloud-computing infrastructure, i.e. a private data center (DC), is a feasible way forward for an organization to ensure the security of its data when opting for cloud computing services. However, the cost associated with operating and managing a DC is a challenge because of the huge amount of power consumed and the carbon dioxide added to the environment. In particular, Higher Learning Institutions in Tanzania (HLIT) are among the institutions that need efficient computing infrastructure. This paper proposes eco-efficient cloud computing building blocks that ensure environmental protection and optimal operational costs for a cloud computing framework that satisfies HLIT computing needs. The proposed building blocks take the form of power usage (renewable and nonrenewable); cloud deployment model and data center location; ambient climatic conditions and data center cooling; network coverage; quality of service; and HLIT cloud software. The blocks are identified by considering HLIT computing requirements and the challenges that exist in managing and operating cloud data centers. Moreover, this work identifies the challenges associated with optimizing resource usage in the proposed approach and suggests related solutions as future work.
Organizations are increasingly concerned about the energy consumption of their data centers, which account for a large portion of business energy usage. The document outlines several approaches for making data centers more energy efficient, including retiring legacy systems, enhancing power management on existing systems, migrating to more efficient platforms like blade servers, implementing virtualization to consolidate servers, standardizing on servers whose performance matches application needs, and right-sizing power and cooling infrastructure to avoid overprovisioning. Taken together, these strategies can significantly reduce a data center's energy consumption and associated costs.
This document discusses how the IBM XIV Storage System is designed to significantly reduce power consumption compared to other storage systems. It achieves over 65% lower power usage through an architecture that optimizes capacity utilization, eliminating unused "orphaned" storage space and using thin provisioning to allocate more virtual storage capacity than actual physical capacity installed. This allows customers to purchase only the storage needed currently while still having room for future growth. The efficient architecture also reduces the amount of hardware required, further cutting power and cooling costs while still providing high-performance storage.
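As a rough illustration of thin provisioning as described above (promising more virtual capacity than is physically installed, with physical space consumed only by actual writes), here is a toy model; it is not the XIV implementation, and all names and numbers are hypothetical:

```python
class ThinPool:
    """Toy thin-provisioning pool: virtual allocations may exceed physical
    capacity; physical space is consumed only as data is actually written."""

    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0   # virtual capacity promised to volumes
        self.written_gb = 0     # physical space actually consumed

    def allocate(self, gb: int) -> None:
        self.allocated_gb += gb  # over-commitment is deliberately allowed

    def write(self, gb: int) -> None:
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("physical pool exhausted; add capacity")
        self.written_gb += gb

pool = ThinPool(physical_gb=100)
pool.allocate(300)   # promise three times the installed capacity
pool.write(60)       # only real writes consume physical space
```

The power saving follows from the same logic: hardware is bought for data actually written, not for capacity merely promised.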
BNY Mellon managed the construction of a new 165,000 square foot data center using poured concrete instead of steel due to high steel prices. The data center uses redundant electrical and cooling systems along with 8 generators for standby power. BNY Mellon is prioritizing object storage, software-defined data centers, and stateless computing while reducing its environmental footprint through initiatives like going tapeless and reducing paper usage. One of BNY Mellon's data centers earned an Energy Star designation for its efficient energy usage.
IRJET- Reducing electricity usage in Internet using transactional data (IRJET Journal)
This document summarizes a research paper that proposes a method to reduce electricity usage and costs for internet services by optimizing how transactional data is mapped across geographically distributed data centers. It formulates the problem as a stochastic programming problem to maximize energy utilization within a cost budget. An efficient online algorithm is developed using Lyapunov optimization to map user requests to data centers based on changing factors like electricity prices and workload, with the goal of significantly reducing costs compared to baseline strategies. The system architecture involves front-end servers collecting user requests and dispatching them to appropriate back-end data centers for processing.
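The paper's algorithm uses Lyapunov optimization; as a much simpler illustration of the underlying idea of steering requests toward data centers with cheaper electricity, subject to capacity, here is a toy greedy dispatcher (all names, prices, and capacities are invented):

```python
def dispatch(requests: int, centers: dict) -> dict:
    """Greedily send requests to the cheapest data centers that have spare capacity."""
    plan = {}
    for name in sorted(centers, key=lambda n: centers[n]["price"]):
        if requests == 0:
            break
        served = min(requests, centers[name]["capacity"])
        plan[name] = served
        requests -= served
    if requests:
        raise RuntimeError("insufficient total capacity")
    return plan

# Hypothetical prices ($/kWh) and capacities (requests per interval):
centers = {
    "dc_east": {"price": 0.12, "capacity": 500},
    "dc_west": {"price": 0.08, "capacity": 300},
}
print(dispatch(600, centers))  # {'dc_west': 300, 'dc_east': 300}
```

The real algorithm additionally adapts online as prices and workload change, and balances cost against a budget constraint, which a one-shot greedy pass does not capture.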
The building of Internet data centers (IDCs) is a growing industry that is pushing the limits of electric power and reliability requirements. As utilities must decide whether it is worth the cost to build new infrastructure to keep up with the present demand, facility operators are looking at power distribution designs that will improve efficiency and allow them to continue to expand their operations.
Greening the Data Center: The IT Industry's Energy Efficiency Imperative (digitallibrary)
The document discusses the growing energy consumption of data centers and makes the case for government incentives and regulation to promote more energy efficient servers and data centers. It notes that data centers are critical infrastructure experiencing surging demand that is leading to higher costs due to rising energy intensity. The document also points out that the cost of electricity and supporting infrastructure for data centers now surpasses the capital cost of the IT equipment itself, and that data center energy use affects regional power grids.
[Oil & Gas White Paper] Optimizing Pipeline Energy Consumption (Schneider Electric)
Effective energy management can benefit the hydrocarbon pipeline operator. Energy consumption costs – the leading expense for most operators – are rising continuously; however, the commitments needed on the part of the operator often impede implementation of energy-saving practices.
Key to effective energy management is the ability to accurately quantify energy consumption at any specific time, along with its cost and that of drag reducing agent (DRA) use. Operators committed to energy management can implement a real-time, system-wide power optimization solution that evaluates the resource efficiency of a steady-state model of the pipeline. This solution can also perform costing runs of alternative configurations, formulated for the next energy cost rate or other ‘what if’ scenarios, in order to find the most energy-efficient alternative that maintains operational safety and integrity. By implementing these alternatives, the operator can save one to five percent of energy costs and reduce carbon emissions.
This advanced information management technology makes these costing considerations so practical they can become a routine, real-time operations process. Putting available information to work with this solution can make power optimization extremely realistic and highly rewarding for the company while supporting overall operational security, safety and environmental stewardship.
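A ‘what if’ costing run of the kind described can be sketched as comparing the total cost of candidate steady-state configurations, counting pumping energy plus drag-reducing-agent spend (the configurations, tariff, and figures below are invented for illustration):

```python
def config_cost(power_kw: float, hours: float, tariff_per_kwh: float,
                dra_cost: float = 0.0) -> float:
    """Total cost of one steady-state pipeline configuration:
    pumping energy plus drag-reducing-agent (DRA) spend."""
    return power_kw * hours * tariff_per_kwh + dra_cost

# Compare two hypothetical configurations over one day at a $0.10/kWh tariff.
candidates = {
    "all_pumps":       config_cost(power_kw=900, hours=24, tariff_per_kwh=0.10),
    "fewer_pumps_dra": config_cost(power_kw=780, hours=24, tariff_per_kwh=0.10,
                                   dra_cost=150.0),
}
best = min(candidates, key=candidates.get)
```

With these numbers the DRA-assisted configuration is cheaper overall; a real solution would repeat such runs against the next tariff period and verify each candidate against safety and integrity constraints.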
Green computing refers to environmentally sustainable and efficient computing practices throughout a product's lifecycle. This includes green use through energy efficient computing, green disposal like recycling, green design of efficient components, and green manufacturing with low environmental impact. Approaches to green computing involve optimizing software and deployment, like virtualization and power management, as well as recycling materials to reduce waste. The goals are to minimize environmental impact and costs while maximizing performance and sustainability.
This document discusses a proposed framework for green computing networks. It begins with an introduction that outlines the energy crisis and impact of increased connectivity on the environment. It then reviews existing solutions for wireless networks, including caching, virtualization, network services, energy awareness, and cloud computing. The document proposes an architecture for green computing networks that utilizes software-defined networking and information-centric networking principles. It leverages concepts like caching, virtualization and energy-aware algorithms to more efficiently schedule tasks based on available energy. The goal is to minimize the environmental impact of rapidly growing wireless networks through this software-based approach.
The document discusses green IT and how companies can become greener. It notes that while IT contributes to environmental issues through its growth, IT can also reduce emissions in other sectors. It discusses stakeholders in green IT, such as IT users and vendors. It highlights that energy is a major cost for servers and storage and presents calculations showing the significant cost of power over time. It argues that improving energy efficiency through new technologies and virtualization presents a strong business case for cost savings. The document concludes that more metrics are needed, but energy efficiency offers immediate monetary rewards, and adopting dynamic infrastructure concepts can have a leading environmental impact.
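The kind of calculation the document refers to, the cost of power over equipment lifetime, is easy to sketch if we assume a continuous draw and fold facility overhead in via PUE (all numbers are illustrative):

```python
def lifetime_power_cost(it_draw_kw: float, price_per_kwh: float,
                        years: float, pue: float = 2.0) -> float:
    """Electricity cost of a continuously running IT load,
    with facility overhead (cooling, UPS losses) folded in via PUE."""
    hours = years * 365 * 24
    return it_draw_kw * pue * hours * price_per_kwh

# Illustrative: a 10 kW rack at $0.10/kWh with PUE 2.0, over 4 years.
cost = lifetime_power_cost(10, 0.10, 4)
print(round(cost))  # 70080
```

At roughly $70,000 over four years, the electricity bill for a single rack can rival the purchase price of the hardware in it, which is the business case the document is making.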
[Infographic] 2013 U.S. Utility Grid Automation Survey (Part 2) (Schneider Electric)
The document reports the results of a 2013 utility grid automation survey conducted by Schneider Electric. It shows that 22% of respondents expected high renewable energy penetration to cause significant problems for their distribution systems, while 18% expected high electric vehicle charging to cause significant problems. It also found that 31% said their utility would need an automated software system to track energy efficiency savings and sustainability projects to meet regulatory requirements.
The document discusses the environmental costs of cloud computing in terms of power usage and impacts. It notes that while data center electricity usage is growing rapidly as cloud services increase, the efficiency of information technology is also improving quickly. The cloud offers advantages over in-house IT in load diversity and economies of scale that help reduce power usage and costs. Overall, the indirect environmental and productivity benefits of IT may be more significant than direct electricity consumption.
ENERGY MANAGEMENT ALGORITHMS IN SMART GRIDS: STATE OF THE ART AND EMERGING TR... (ijaia)
The electric grid is radically evolving into the smart grid, which is characterized by improved energy
efficiency of available resources. The smart grid permits interactions among its computational and physical
elements thanks to the integration of Information and Communication Technologies (ICTs). ICTs provide
energy management algorithms and allow renewable energy integration and energy price minimization.
Given the importance of renewable energy, many researchers developed energy management (EM)
algorithms to minimize renewable energy intermittency. EM plays an important role in the control of users'
energy consumption and enables increased consumer participation in the market. These algorithms provide
consumers with information about their energy consumption patterns and help them adopt energy-efficient
behaviour. In this paper, we present a review of the state of the art in energy management algorithms. We define
a set of requirements for EM algorithms and evaluate them qualitatively. We also discuss emerging tools
and trends in this area.
Importance of Data Driven Decision Making in Enterprise Energy Management | D... (Cairn India Limited)
This document summarizes a presentation on the importance of data-driven decision making in enterprise energy management. It provides context on India's growing energy needs and challenges with access and reliability. It highlights the significant growth expected in India's building sector and commercial electricity use. The presentation outlines approaches to benchmarking building energy use and performance indicators. It provides benchmarking data for common building types in India such as offices, hospitals, hotels and shopping malls. The importance of data collection and benchmarking for evaluating energy efficiency opportunities and tracking performance over time is emphasized.
Informi GIS hand-out used in connection with presentation at Esri UC 2014 (Jens Dalsgaard)
The document discusses how integrating GIS with other systems using a common data model, such as the PowerGrid model, can optimize electric utility grid operations by providing a single source of consolidated network data. This allows data to be shared across applications such as DMS, asset management, and planning systems. The example of the Finnish utility Fingrid's project demonstrates integrating GIS with IBM Maximo for asset management and extracting network data in CIM format for transmission planning software.
This document summarizes a study on green cloud computing. It defines green computing and cloud computing, noting that green cloud computing aims to minimize energy consumption through cloud infrastructure. It outlines different cloud service models and analyzes their energy usage. The document also summarizes a Microsoft study finding cloud can reduce energy usage by 30-60% compared to on-premise systems, but a Greenpeace study argues cloud could increase energy demands significantly if usage grows rapidly. In conclusion, cloud services can be more efficient than local systems depending on usage levels and transport energy costs.
The document examines how green a data center is from several perspectives, such as being environmentally conscious, reducing costs through efficiency, using renewable energy sources, and lowering the carbon footprint. It provides example data on power consumption, cooling waste, and challenges faced by data centers, and includes charts showing common power, heat, and space problems as well as an inventory of the typical IT equipment in a data center rack.
CIGRE WG “Network of the Future” Electricity Supply Systems of the future (Power System Operation)
The document discusses the key technical issues that will shape future electric power systems, as identified by a CIGRE working group. The 10 issues are: 1) active distribution networks with bidirectional power flows; 2) increased information exchange needs from advanced metering; 3) growth of HVDC and power electronics; 4) development and use of energy storage; 5) new concepts for system operation and control; 6) new protection concepts; 7) planning with environmental and technology changes; 8) tools for assessing technical performance; 9) increasing transmission infrastructure capacity; and 10) stakeholder engagement. The working group assesses which CIGRE study committees would be involved in addressing each issue.
Retrofit, build, or go cloud/colo? Choosing your best direction (Schneider Electric)
When faced with the decision of upgrading an existing data center, building a new data center or leasing space in a third party colocation data center, there are both quantitative and qualitative differences to consider. This session reviews several key factors to help make a sound decision including a business’ sensitivity to cash flow, deployment timeframe, data center life expectancy, regulatory requirements, and other strategic factors.
The document summarizes Cisco EnergyWise, a new approach from Cisco Systems to managing corporate energy consumption through the enterprise network. Cisco EnergyWise allows organizations to measure, manage, and control the power usage of all devices connected to the corporate network, including both IT and non-IT systems. It provides a way to centrally monitor and optimize energy usage across the entire organization. The architecture is built on Cisco switches and uses the network to distribute commands and aggregate power data from all connected devices. This allows organizations to gain visibility and control over their total energy footprint and costs.
This document provides an overview of a study on designing an energy microgrid for Ithaca, NY using a systems architecture approach. It begins with an introduction to microgrids and their benefits, as well as background on the New York Prize competition which aims to develop independent energy systems. The goal is then defined as designing a high-level system to provide reliable power from renewable sources to critical services in Ithaca during grid failures, at lowest cost. A literature review shows most studies optimize single metrics like cost or reliability, while the proposed approach uses multi-objective optimization and considers stakeholder needs. The document outlines analyzing stakeholder values, defining system goals, selecting concepts, and developing architectural models to generate and evaluate alternative designs.
Mission critical facilities like data centers have three key characteristics: 1) they must operate continuously without shutdowns, 2) they require redundant power and cooling systems, and 3) they have technical equipment with high power demands. The document discusses how standards like ASHRAE 90.1 have evolved over time to account for changes in data center design and energy efficiency, starting from initially excluding computer equipment, to later adding specific requirements for data centers. It also provides examples of how innovations in equipment design have allowed facilities to use higher temperature cooling to reduce energy use.
Case Studies in Highly-Energy Efficient Datacenters (Michael Searles)
New tools, designs and services have emerged to help datacenter operators improve the energy efficiency of IT and facilities. This report examines the use of these technologies and techniques in real deployments.
Review: Potential Ecodesign regulation for economic cable conductor sizing in... (Leonardo ENERGY)
Increasing the conductor cross sectional area (CSA) of a cable reduces its energy losses. The most economic CSA is that for which the cable investment cost is equal to the total lifetime cost of energy losses.
Cable sizing is subject to regulation through national building codes, but these only take safety and aspects of functionality into account, not energy efficiency. These mandatory cable sizing prescriptions have given rise to the general misconception that following them precisely is best practice. The notion that the regulations are only the bare minimum requirement is often disregarded. As a result, economic cable sizing is not usually even taken into consideration during installation design or energy management initiatives.
Economic cable sizing cannot be derived just from the physical design parameters, but depends on the load profile of the electrical circuit in which the cable is used. Consequently, it is not the cable and its current-carrying capacity that should be regulated, but the choice of the cable cross section in the context of the electrical circuit and its load profile – in other words the installed cable system.
Approximately 8% of the electrical energy generated in the EU gets lost in the network between generation and end-use. Of this 8%, around 6% represents losses in the transmission and distribution network and 2% is behind-the-meter. Of the latter, 1.5% can be attributed to non-residential buildings – around 50 TWh per year – and the remaining 0.5% to residential buildings.
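The trade-off described above can be illustrated numerically: investment cost rises linearly with cross-sectional area while I²R loss cost falls, so total lifetime cost has a minimum at the economic CSA. A toy calculation with invented prices and load profile (not taken from the review):

```python
RHO_CU = 0.0175  # copper resistivity, ohm * mm^2 / m

def lifetime_cost(csa_mm2, current_a, length_m, hours,
                  price_per_kwh, cable_cost_per_mm2_m):
    """Cable investment cost plus the lifetime cost of its I^2*R losses."""
    resistance = RHO_CU * length_m / csa_mm2              # ohms
    loss_kwh = current_a ** 2 * resistance * hours / 1000
    investment = csa_mm2 * length_m * cable_cost_per_mm2_m
    return investment + loss_kwh * price_per_kwh

# Scan standard sizes; the economic CSA minimizes total lifetime cost.
sizes_mm2 = [16, 25, 35, 50, 70]
best = min(sizes_mm2, key=lambda a: lifetime_cost(
    a, current_a=80, length_m=100, hours=25_000,  # ~10 years at 2500 full-load h/yr
    price_per_kwh=0.15, cable_cost_per_mm2_m=0.50))
print(best)  # 25 (larger than safety codes alone would require)
```

Note that the result depends on the load profile (full-load hours) and energy price, which is exactly the review's point: the economic size is a property of the installed cable system, not of the cable alone.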
Green networking aims to reduce the carbon footprint of information and communication technology (ICT) networks by improving energy efficiency. Key strategies include optimizing network infrastructure utilization through technologies like virtualization, improving equipment energy efficiency, and locating network resources closer to renewable energy sources. Measurement of energy savings is important to track progress towards a lower carbon "Green Network".
ScottMadden has developed an approach for analyzing data center requirements and driving improvements in existing data center retrofits. Our approach takes into account the technological requirements, the physical attributes of a data center, and the requirements for a rigorous measurement and verification program needed to ensure improvements actually capture the energy efficiency gains and the resultant greenhouse gas reductions.
Our approach addresses the latest trends in data center management, such as virtualization and cloud computing, and provides a framework for developing the metrics needed to drive changes in data center performance.
The building of Internet data centers (IDCs) is a growing industry that is pushing the limits of electric power and reliability requirements. As utilities must decide whether it is worth the cost to build new infrastructure to keep up with the present demand, facility operators are looking at power distribution designs that will improve efficiency and allow them to continue to expand their operations.
Greening the Data Center: The IT Industry's Energy Efficiency Imperativedigitallibrary
The document discusses the growing energy consumption of data centers and makes the case for government incentives and regulation to promote more energy efficient servers and data centers. It notes that data centers are critical infrastructure experiencing surging demand that is leading to higher costs due to rising energy intensity. The document also points out that the cost of electricity and supporting infrastructure for data centers now surpasses the capital cost of IT equipment itself and there is impact on regional power grids from data center energy use.
[Oil & Gas White Paper] Optimizing Pipeline Energy ConsumptionSchneider Electric
Effective energy management can benefit the hydrocarbon pipeline operator. Energy consumption costs – the leading expense for most operators – are rising continuously; however, the commitments needed on the part of the operator often impede implementation of energy-saving practices.
Key to effective energy management is the ability to quantify energy consumption accurately at any specific time and its cost and that of drag reducing agent (DR) use. Operators committed to energy management can implement a real-time, system-wide power optimization solution that evaluates the resource efficiency of a steady-state model of the pipeline. This solution will also perform costing runs of alternative configurations, formulated for the next energy cost rate or other ‘what if’ scenarios, in order to find the most energy-efficient alternative that maintains operational safety and integrity. Implementing these alternatives, the operator can save one percent to five percent of energy costs and reduce carbon emissions.
This advanced information management technology makes these costing considerations so practical they can become a routine, real-time operations process. Putting available information to work with this solution can make power optimization extremely realistic and highly rewarding for the company while supporting overall operational security, safety and environmental stewardship.
Green computing refers to environmentally sustainable and efficient computing practices throughout a product's lifecycle. This includes green use through energy efficient computing, green disposal like recycling, green design of efficient components, and green manufacturing with low environmental impact. Approaches to green computing involve optimizing software and deployment, like virtualization and power management, as well as recycling materials to reduce waste. The goals are to minimize environmental impact and costs while maximizing performance and sustainability.
This document discusses a proposed framework for green computing networks. It begins with an introduction that outlines the energy crisis and impact of increased connectivity on the environment. It then reviews existing solutions for wireless networks, including caching, virtualization, network services, energy awareness, and cloud computing. The document proposes an architecture for green computing networks that utilizes software-defined networking and information-centric networking principles. It leverages concepts like caching, virtualization and energy-aware algorithms to more efficiently schedule tasks based on available energy. The goal is to minimize the environmental impact of rapidly growing wireless networks through this software-based approach.
The document discusses green IT and how companies can become greener. It notes that while IT contributes to environmental issues due to growth, IT can also reduce emissions in other sectors. It discusses stakeholders in green IT like IT users and vendors. It highlights that energy costs are a major cost for servers and storage and presents calculations showing the significant cost of power over time. It argues that improving energy efficiency through new technologies and virtualization presents a big business case for cost savings. The document concludes more metrics are needed but energy efficiency offers immediate monetary rewards and adopting dynamic infrastructure concepts can have a leading environmental impact.
[Infographic] 2013 U.S. Utility Grid Automation Survey (Part 2) - Schneider Electric
The document reports the results of a 2013 utility grid automation survey conducted by Schneider Electric. It shows that 22% of respondents expected high renewable energy penetration to cause significant problems for their distribution systems, while 18% expected high electric vehicle charging to cause significant problems. It also found that 31% said their utility would need an automated software system to track energy efficiency savings and sustainability projects to meet regulatory requirements.
The document discusses the environmental costs of cloud computing in terms of power usage and impacts. It notes that while data center electricity usage is growing rapidly as cloud services increase, the efficiency of information technology is also improving quickly. The cloud offers advantages over in-house IT in load diversity and economies of scale that help reduce power usage and costs. Overall, the indirect environmental and productivity benefits of IT may be more significant than direct electricity consumption.
ENERGY MANAGEMENT ALGORITHMS IN SMART GRIDS: STATE OF THE ART AND EMERGING TR... - ijaia
The electric grid is radically evolving into the smart grid, which is characterized by improved energy
efficiency of available resources. The smart grid permits interactions among its computational and physical
elements thanks to the integration of Information and Communication Technologies (ICTs). ICTs provide
energy management algorithms and allow renewable energy integration and energy price minimization.
Given the importance of renewable energy, many researchers developed energy management (EM)
algorithms to minimize renewable energy intermittency. EM plays an important role in the control of users'
energy consumption and enables increased consumer participation in the market. These algorithms provide
consumers with information about their energy consumption patterns and help them adopt energy-efficient
behaviour. In this paper, we present a review of the state of the art of energy management algorithms. We define
a set of requirements for EM algorithms and evaluate them qualitatively. We also discuss emerging tools
and trends in this area.
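One class of EM algorithm the review covers, price-driven load shifting, can be sketched as follows. The greedy strategy, the price signal, and the load profile are all invented for illustration.

```python
# Minimal sketch of price-based demand shifting: move a deferrable load
# into the cheapest hours of a day-ahead price signal. The greedy policy
# and all numbers are illustrative assumptions, not from the paper.
def schedule_deferrable(prices, kwh_needed, max_kw):
    """Greedily fill the cheapest hours first; returns per-hour allocation."""
    plan = [0.0] * len(prices)
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        if kwh_needed <= 0:
            break
        take = min(max_kw, kwh_needed)  # respect the per-hour power limit
        plan[hour] = take
        kwh_needed -= take
    return plan

prices = [0.30, 0.10, 0.12, 0.25]   # $/kWh for four hours
plan = schedule_deferrable(prices, kwh_needed=3.0, max_kw=2.0)
print(plan)  # [0.0, 2.0, 1.0, 0.0] -> load lands in the two cheapest hours
```

This is the simplest form of the consumer-participation idea in the abstract: the algorithm turns a price signal into an energy-efficient consumption pattern.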
Importance of Data Driven Decision Making in Enterprise Energy Management | D... - Cairn India Limited
This document summarizes a presentation on the importance of data-driven decision making in enterprise energy management. It provides context on India's growing energy needs and challenges with access and reliability. It highlights the significant growth expected in India's building sector and commercial electricity use. The presentation outlines approaches to benchmarking building energy use and performance indicators. It provides benchmarking data for common building types in India such as offices, hospitals, hotels and shopping malls. The importance of data collection and benchmarking for evaluating energy efficiency opportunities and tracking performance over time is emphasized.
Informi GIS hand-out used in connection with presentation at Esri UC 2014 - Jens Dalsgaard
The document discusses how integrating GIS with other systems using a common data model, such as the PowerGrid model, can optimize electric utility grid operations by providing a single source of consolidated network data. This allows data to be shared across applications such as DMS, asset management, and planning systems. The example of the Finnish utility Fingrid's project demonstrates integrating GIS with IBM Maximo for asset management and extracting network data in CIM format for transmission planning software.
This document summarizes a study on green cloud computing. It defines green computing and cloud computing, noting that green cloud computing aims to minimize energy consumption through cloud infrastructure. It outlines different cloud service models and analyzes their energy usage. The document also summarizes a Microsoft study finding cloud can reduce energy usage by 30-60% compared to on-premise systems, but a Greenpeace study argues cloud could increase energy demands significantly if usage grows rapidly. In conclusion, cloud services can be more efficient than local systems depending on usage levels and transport energy costs.
The document discusses how green a data center is from different perspectives: being environmentally conscious, reducing costs through efficiency, using renewable energy sources, and lowering the carbon footprint. It provides data on power consumption, cooling waste, and challenges faced by data centers, along with charts showing common problems related to power, heat, and space and an inventory of typical IT equipment in a data center rack.
CIGRE WG “Network of the Future” Electricity Supply Systems of the future - Power System Operation
The document discusses the key technical issues that will shape future electric power systems, as identified by a CIGRE working group. The 10 issues are: 1) active distribution networks with bidirectional power flows; 2) increased information exchange needs from advanced metering; 3) growth of HVDC and power electronics; 4) development and use of energy storage; 5) new concepts for system operation and control; 6) new protection concepts; 7) planning with environmental and technology changes; 8) tools for assessing technical performance; 9) increasing transmission infrastructure capacity; and 10) stakeholder engagement. The working group assesses which CIGRE study committees would be involved in addressing each issue.
Retrofit, build, or go cloud/colo? Choosing your best direction - Schneider Electric
When faced with the decision of upgrading an existing data center, building a new data center or leasing space in a third party colocation data center, there are both quantitative and qualitative differences to consider. This session reviews several key factors to help make a sound decision including a business’ sensitivity to cash flow, deployment timeframe, data center life expectancy, regulatory requirements, and other strategic factors.
The document summarizes Cisco EnergyWise, a new approach from Cisco Systems to managing corporate energy consumption through the enterprise network. Cisco EnergyWise allows organizations to measure, manage, and control the power usage of all devices connected to the corporate network, including both IT and non-IT systems. It provides a way to centrally monitor and optimize energy usage across the entire organization. The architecture is built on Cisco switches and uses the network to distribute commands and aggregate power data from all connected devices. This allows organizations to gain visibility and control over their total energy footprint and costs.
This document provides an overview of a study on designing an energy microgrid for Ithaca, NY using a systems architecture approach. It begins with an introduction to microgrids and their benefits, as well as background on the New York Prize competition which aims to develop independent energy systems. The goal is then defined as designing a high-level system to provide reliable power from renewable sources to critical services in Ithaca during grid failures, at lowest cost. A literature review shows most studies optimize single metrics like cost or reliability, while the proposed approach uses multi-objective optimization and considers stakeholder needs. The document outlines analyzing stakeholder values, defining system goals, selecting concepts, and developing architectural models to generate and evaluate alternative designs.
Mission critical facilities like data centers have three key characteristics: 1) they must operate continuously without shutdowns, 2) they require redundant power and cooling systems, and 3) they have technical equipment with high power demands. The document discusses how standards like ASHRAE 90.1 have evolved over time to account for changes in data center design and energy efficiency, starting from initially excluding computer equipment, to later adding specific requirements for data centers. It also provides examples of how innovations in equipment design have allowed facilities to use higher temperature cooling to reduce energy use.
Case Studies in Highly-Energy Efficient Datacenters - Michael Searles
New tools, designs and services have emerged to help datacenter operators improve the energy efficiency of IT and facilties. This report examines the use of these technologies and techniques in real deployments.
Review: Potential Ecodesign regulation for economic cable conductor sizing in... - Leonardo ENERGY
Increasing the conductor cross sectional area (CSA) of a cable reduces its energy losses. The most economic CSA is that for which the cable investment cost is equal to the total lifetime cost of energy losses.
Cable sizing is subject to regulation through national building codes, but these only take safety and aspects of functionality into account, not energy efficiency. These mandatory cable sizing prescriptions have given rise to the general misconception that following them precisely is best practice. The notion that the regulations are only the bare minimum requirement is often disregarded. As a result, economic cable sizing is not usually even taken into consideration during installation design or energy management initiatives.
Economic cable sizing cannot be derived just from the physical design parameters, but depends on the load profile of the electrical circuit in which the cable is used. Consequently, it is not the cable and its current-carrying capacity that should be regulated, but the choice of the cable cross section in the context of the electrical circuit and its load profile – in other words the installed cable system.
Approximately 8% of the electrical energy generated in the EU gets lost in the network between generation and end-use. Of this 8%, around 6% represents losses in the transmission and distribution network and 2% is behind-the-meter. Of the latter, 1.5% can be attributed to non-residential buildings – around 50 TWh per year – and the remaining 0.5% to residential buildings.
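The economic-sizing trade-off described above can be made concrete: loss cost falls with cross-sectional area (R = ρL/A) while cable cost rises with copper content, and the economic CSA minimizes their sum. The load profile, prices, and candidate sizes below are illustrative assumptions only.

```python
# Hedged sketch of economic cable sizing: pick the cross-sectional area (CSA)
# minimizing investment cost plus lifetime cost of I^2*R losses. All prices
# and the load figures are invented for illustration.
RHO_CU = 1.72e-8  # ohm*m, copper resistivity

def lifetime_cost(area_mm2, current_a, length_m, hours, price_kwh, cable_cost_per_mm2_m):
    r = RHO_CU * length_m / (area_mm2 * 1e-6)            # conductor resistance
    loss_kwh = current_a**2 * r * hours / 1000           # I^2*R losses over life
    invest = cable_cost_per_mm2_m * area_mm2 * length_m  # cost grows with CSA
    return invest + loss_kwh * price_kwh

sizes = [10, 16, 25, 35, 50, 70]  # standard CSAs in mm^2
# 30 A continuous load, 50 m run, 10-year life, $0.15/kWh, $0.10 per mm^2*m:
best = min(sizes, key=lambda a: lifetime_cost(a, 30, 50, 8760 * 10, 0.15, 0.10))
print(best)  # 50 -> the economic CSA, larger than the safety minimum
```

This is exactly the misconception the text describes: the safety code would permit a far smaller conductor, but the lifetime-cost minimum sits several sizes higher.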
Green networking aims to reduce the carbon footprint of information and communication technology (ICT) networks by improving energy efficiency. Key strategies include optimizing network infrastructure utilization through technologies like virtualization, improving equipment energy efficiency, and locating network resources closer to renewable energy sources. Measurement of energy savings is important to track progress towards a lower carbon "Green Network".
ScottMadden has developed an approach for analyzing data center requirements and driving improvements in existing data center retrofits. Our approach takes into account the technological requirements, the physical attributes of a data center, and the requirements for a rigorous measurement and verification program needed to ensure improvements actually capture the energy efficiency gains and the resultant greenhouse gas reductions.
Our approach addresses the latest trends in data center management, such as virtualization and cloud computing, and provides a framework for developing the metrics needed to drive changes in data center performance.
The document discusses green IT and reducing the environmental impact of information technology. It provides an overview of the U.S. Department of Commerce's Green IT Initiative, which aims to help companies reduce energy consumption and costs associated with IT infrastructure. The initiative focuses on increasing energy efficiency in areas like data center management, server virtualization, and power management of desktop computers. Adopting green IT best practices can significantly cut electricity usage and costs, with payback periods often under a year.
Bringing Enterprise IT into the 21st Century: A Management and Sustainabilit... - Jonathan Koomey
I gave this talk as a webinar on March 19th, 2014 for the Corporate Eco Forum. It discusses ways to improve the efficiency of enterprise IT, mainly focusing on institutional changes that are necessary to make modern IT organizations perform effectively. It draws upon our case study of eBay as well as my other work on data centers over the years.
Optimization of power consumption in data centers using machine learning bas... - IJECEIAES
Data center hosting is in higher demand to fulfill the computing and storage requirements of information technology (IT) and cloud services platforms which need more electricity to power on the IT devices and for data center cooling requirements. Because of the increased demand for data center facilities, optimizing power usage and ensuring that data center energy quality is not compromised has become a difficult task. As a result, various machine learning-based optimization approaches for enhancing overall power effectiveness have been outlined. This paper aims to identify and analyze the key ongoing research made between 2015 and 2021 to evaluate the types of approaches being used by researchers in data center energy consumption optimization using Machine Learning algorithms. It is discussed how machine learning can be used to optimize data center power. A potential future scope is proposed based on the findings of this review by combining a mixture of bioinspired optimization and neural network.
This document discusses how information and communication technology (ICT) can help conserve energy. ICT traditionally optimized energy-using systems and processes, but will now play a critical role in supporting more sustainable electricity generation and reducing domestic energy consumption. Smart technology allows automating energy savings, but also engaging consumers to change behaviors. The document describes a prototype that provides direct feedback on household electricity use to induce conservation.
Green IT at University of Bahrain aims to reduce energy consumption and carbon dioxide emissions from information and communication technology (ICT) usage. It identifies several green IT initiatives including equipment recycling, server consolidation and virtualization, print optimization, rightsizing IT equipment, and green considerations in procurement. Going green in the data center involves reducing overall power consumption, maximizing power utilization, reducing hardware needs through consolidation, and decreasing storage requirements. The top drivers for adopting green technology are reducing power consumption and costs. Strategies like energy efficiency technologies, power/cooling solutions, systems virtualization, and data center consolidation can help green the IT department.
This document discusses the need for green data centers and provides strategies for making data centers more energy efficient. It notes that while many organizations say they are green, few have specific targets or programs to reduce their carbon footprint. As data center electricity consumption and costs rise, running out of power capacity, cooling capacity, and physical space are major concerns. The document then provides questions to assess a data center's energy efficiency in terms of facilities, IT equipment, and utilization rates. It recommends strategies like optimizing infrastructure utilization and choosing more efficient hardware and cooling options. The goal is to improve the data center infrastructure efficiency metric and lower costs by reducing redundant, underutilized resources.
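The "data center infrastructure efficiency metric" referenced above is conventionally tracked as PUE (total facility power over IT power) and its inverse, DCiE. The sample readings below are invented for illustration.

```python
# PUE and DCiE as commonly defined: the sample 1000 kW / 400 kW readings
# are illustrative, not from the document.
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def dcie(total_facility_kw, it_kw):
    """Data Center infrastructure Efficiency, as a percentage."""
    return it_kw * 100 / total_facility_kw  # share of power reaching IT gear

# A facility drawing 1000 kW to deliver 400 kW to IT equipment:
print(pue(1000, 400))   # 2.5  -> 1.5 W of overhead per watt of IT load
print(dcie(1000, 400))  # 40.0 -> only 40% of the power does computing work
```

Cutting redundant, underutilized resources as the document recommends shows up directly as a lower PUE and a higher DCiE.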
COMMON PROBLEMS AND CHALLENGES IN DATA CENTRES - Kamran Hassan
In this paper, common problems and challenges of data centers are identified, and methods are explained to improve the efficiency and reliability of the data center.
This document discusses the environmental impacts of datacenters and the need for more sustainable practices. It notes that datacenter energy usage and associated costs are rising rapidly as more equipment is needed to support modern technologies and applications. Two major issues are energy inefficiency and toxic e-waste from outdated equipment. The document examines steps some companies like Symantec are taking to reduce their carbon footprint through consolidation, efficiency programs, and LEED certification. However, it also discusses barriers like a lack of accountability for energy costs and a focus on redundancy over sustainability. Overall, it argues for improved monitoring, equipment management, and use of renewable energy to help datacenters transition to more environmentally friendly operations.
This chapter discusses approaches to green computing, including virtualization, server virtualization and consolidation, storage consolidation, and desktop virtualization. These approaches improve cost and energy efficiency through optimized use of computing and storage capacity, electricity, cooling, and real estate. Moving to thin clients and virtual desktops reduces energy consumption compared to traditional desktop computers. Server room upgrades are also discussed to improve cooling/ventilation systems and increase capacity for virtualized servers.
Green computing is the study of reusing, recycling, and rebuilding computers and electronic devices. Its goal is to reduce the use of hazardous materials while improving the efficiency of energy utilization. Green computing refers to practices and ways of using computing resources in an eco-friendly manner while maintaining overall computing performance; green IT applies the same strategy to computers, information systems, and IT applications to help protect and enrich the environment and increase ecological sustainability. Green computing is now under consideration by business organizations and leading companies as new technologies and their applications advance. In recent years, especially during the last decade, the computer and IT industries have realized the importance of going green, addressing major environmental concerns while minimizing costs, which has led to a sharp shift in IT strategy and policy. The drivers behind this change are growing computing demand, rising energy costs, and global warming. This paper presents eco-friendly initiatives under way in the IT industry and briefly covers the main research challenges that must still be met to satisfy green computing requirements. Ms. Amritpal Kaur | Ms. Saravjit Kaur "Green Computing: Emerging Issues in IT" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25311.pdf | Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/25311/green-computing-emerging-issues-in-it/ms-amritpal-kaur
Green Computing for Internet of Things: Energy Efficient and Delay-Guaranteed... - IRJET Journal
This document discusses green computing approaches for internet of things (IoT) systems that aim to minimize energy consumption and guarantee processing delays. It proposes a framework for allocating workloads from IoT devices to edge servers and cloud computing resources in an energy-efficient manner while meeting delay requirements. The framework models the IoT, edge and cloud system, including traffic generation from IoT devices, queuing delays at edge servers, and transmission delays between network components. It analyzes properties of edge server queueing systems and proposes a delay-based workload allocation scheme to minimize energy consumption while controlling processing delays.
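The delay-based allocation idea can be sketched with a standard M/M/1 queueing bound: send work to the low-energy edge server only while its queueing delay stays within the requirement, and offload the remainder to the cloud. The rates and the delay bound below are assumptions for illustration, not the paper's model.

```python
# Hedged sketch of delay-aware edge/cloud workload allocation. Using the
# M/M/1 mean sojourn time 1/(mu - lambda) <= D gives lambda <= mu - 1/D
# as the edge's delay-feasible capacity. All numbers are illustrative.
def allocate(arrival_rate, edge_service_rate, max_delay_s):
    """Return (edge_share, cloud_share) of the arrival rate in jobs/s."""
    edge_cap = max(0.0, edge_service_rate - 1.0 / max_delay_s)
    edge = min(arrival_rate, edge_cap)   # keep edge under its delay bound
    return edge, arrival_rate - edge     # remainder is offloaded to cloud

edge, cloud = allocate(arrival_rate=120.0, edge_service_rate=100.0, max_delay_s=0.1)
print(edge, cloud)  # 90.0 30.0 -> edge held under the 100 ms bound, rest offloaded
```

Tightening the delay bound shrinks the edge's usable capacity, which is the core tension between energy savings and delay guarantees that the framework manages.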
The document discusses the next wave of green IT and making data centers more energy efficient. It notes that data center energy costs are significant and that McKinsey predicts data centers will produce more greenhouse gases than airlines by 2020. It provides best practices for building sustainable green data centers, including exploiting virtualization, improving server utilization rates, and designing efficient cooling systems.
Sklubi AlumniWeekend 23.10.2010:
Reijo Maihaniemi
Electricity Consumption: General
Electricity Savings Through DC Power Feed
DC Data Center Projects in the World
ICT Energy saving actions
This document discusses green cloud computing. It begins by defining cloud computing and green computing, noting that cloud computing requires large data centers that consume significant energy. It then discusses how green cloud computing aims to reduce this energy usage through techniques like server virtualization and energy-aware resource allocation. Specific strategies that cloud providers and data centers are taking to improve energy efficiency are also summarized, such as geographic placement of data centers and measures to optimize cooling.
This document discusses green computing, which aims to reduce the environmental impact of computing through more efficient and sustainable practices across hardware design, manufacturing, use, and disposal. It outlines the goals of green computing to minimize hazardous materials and maximize energy efficiency and recyclability. The document then describes several industry initiatives and standards that have been developed to promote green computing, including Energy Star, the EPEAT rating system, and benchmarks for measuring energy efficiency in data centers, servers, and other IT equipment. It also discusses approaches like extending product lifetimes and optimizing data center design, software deployment, and algorithms to reduce computing's environmental footprint.
Green Computing, eco trends, climate change, e-waste and eco-friendly - Editor IJCATR
This document discusses green computing practices and sustainable IT services. It provides an overview of factors driving adoption of green computing to reduce costs and environmental impact of data centers, such as rising energy costs and density. Green strategies discussed include improving infrastructure efficiency, power management, thermal management, efficient product design, and virtualization to optimize resource utilization. The document examines how green computing aims to lower costs and environmental footprint, and how sustainable IT services take a broader approach considering economic, environmental and social impacts.
The document discusses how information and communication technologies (ICT) can help address climate change through various strategies such as relocating data centers to remote renewable energy sites connected by optical networks, virtualizing services to reduce physical infrastructure needs, and incentivizing reductions in carbon emissions through "carbon rewards" rather than penalties. It outlines pilots and examples of implementing these strategies and argues that a zero carbon approach is essential for sustainable growth of ICT.
The 2025 Huawei trend forecast gives you the lowdown on data centre facilitie... - Heiko Joerg Schick
The document summarizes 10 trends predicted to shape data center facilities by 2025:
1. Power density of 15 to 20 kW/rack will be predominant as CPU and server capacity increases.
2. Data centers will require scalable architectures to support evolving IT over 10-15 year lifecycles.
3. The average PUE of new Chinese data centers will drop to 1.1 as energy efficiency, emissions reduction, and sustainability become greater challenges.
A review on techniques and modelling methodologies used for checking electrom... - nooriasukmaningtyas
The proper function of the integrated circuit (IC) in a hostile electromagnetic environment has been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in the case of automobiles. In this paper, the authors review, non-exhaustively, research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... - IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... - IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
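The "short return on investment" such a cost analysis rests on is usually expressed as a simple payback period. The investment and savings figures below are invented for illustration; they are not the paper's case-study numbers.

```python
# Illustrative-only sketch of the simple payback calculation behind a
# PV + EV cost analysis. All dollar figures are assumptions.
def simple_payback_years(investment, annual_energy_savings, annual_outage_savings=0.0):
    """Years until cumulative savings cover the PV + EV investment."""
    return investment / (annual_energy_savings + annual_outage_savings)

# A $60,000 PV + EV setup saving $18,000/yr on energy purchases plus
# $6,000/yr in avoided outage losses (the power-stability benefit):
print(simple_payback_years(60_000, 18_000, 6_000))  # 2.5 years
```

Note how the outage-avoidance term shortens payback: counting only energy savings, the same setup would take 60,000 / 18,000 ≈ 3.3 years.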
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL - gerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach combines a Convolutional Neural Network
(CNN) with Long Short-Term Memory (LSTM) algorithms. We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECT - jpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman's Rimland, and Hegemonic Stability theories, it examines China's role
in Central Asia. This study adheres to the empirical epistemological method and takes care to remain
objective, critically analyzing primary and secondary research documents to elaborate the role of
China's geoeconomic outreach in Central Asian countries and its future prospects. According to this study,
China is seeing significant success in trade, pipeline politics, and gaining influence over other
governments, a success that may be attributed to the effective utilisation of key tools such as the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
Comparative analysis between traditional aquaponics and reconstructed aquapon... - bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Batteries: Introduction – types of batteries – discharging and charging of a battery – characteristics of a battery – battery rating – various tests on a battery – primary battery: silver button cell – secondary battery: Ni-Cd battery – modern battery: lithium-ion battery – maintenance of batteries – choice of batteries for electric vehicle applications.
Fuel Cells: Introduction – importance and classification of fuel cells – description, principle, components, and applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell, and direct methanol fuel cells.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEMHODECEDSIET
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM
009
Challenges for energy efficiency in local and regional data centers
Article in Journal of Green Engineering · October 2010
Challenges for Energy Efficiency in Local and Regional Data Centers
G. Koutitas¹ and P. Demestichas²
¹School of Science and Technology, International Hellenic University, Thessaloniki, Greece; e-mail: g.koutitas@ihu.edu.gr
²Department of Digital Systems, University of Piraeus, Piraeus, Greece; e-mail: pdemest@unipi.gr
Abstract
This paper investigates challenges for achieving energy efficiency in local and regional data centers. The challenges are divided into operational and planning categories that must be considered for the green transformation of a data center. The study shows that standardization of the metrics and guidelines used is necessary for reducing the carbon emissions related to data centers. The paper presents a review of the available metrics and the most modern techniques for energy efficiency. Numerous examples and reviews are discussed that introduce the reader to the most modern green technologies. Finally, the correlation of energy-efficient techniques to overall carbon emissions is highlighted. It is shown that a green data center not only presents minimum operational expenditure but also produces low carbon emissions, which is important for achieving sustainability in modern societies.
Keywords: data center design, energy efficiency of data centers, energy efficiency metrics, data center carbon footprint computation.
Journal of Green Engineering, 1–32. © 2010 River Publishers. All rights reserved.
1 Introduction
Energy efficiency and low carbon strategies have attracted a lot of attention. The goal of 20% energy efficiency and carbon reduction by 2020 drove the Information and Communication Technologies (ICT) sector to strategies that incorporate modern designs for low carbon and sustainable growth [1, 2]. The ICT sector is part of the 2020 goal and participates in three different ways: in the direct way, ICT is called to reduce its own energy demands (green networks, green IT); in the indirect way, ICT is used for carbon displacement; and in the systematic way, ICT collaborates with other sectors of the economy to provide energy efficiency (smart grids, smart buildings, intelligent transportation systems, etc.). ICT, and in particular data centers, have a strong impact on global CO2 emissions. Moreover, an important part of the OPEX is due to electricity demands. This paper presents the sources and challenges that have to be addressed to reduce the carbon emissions and electricity expenses of the sector.
The data center is the most active element of an ICT infrastructure: it provides computation and storage resources and supports the respective applications. The data center infrastructure is central to the ICT architecture, from which all content is sourced or through which it passes. Worldwide, data centers consume around 40 billion kWh of electricity per year, and a big portion of this consumption is wasted due to inefficiencies and non-optimized designs. According to the Gartner Report [3], a typical data center consumes the same amount of energy as 25,000 households per year, and the electricity consumption of data centers is about 0.5% of world production. In terms of carbon emissions, this power consumption pattern is comparable to that of the airline industry and to the emissions generated by Argentina, Malaysia or the Netherlands.
Energy efficiency in ICT is defined as the ratio of data processed to the required energy (Gbps/Watt) and is different from power conservation, where the target is to reduce energy demands without considering the data volume. Taking this ratio into consideration, green IT technologies have important benefits in that they:
• reduce electricity costs and OPEX;
• improve corporate image;
• provide sustainability;
• extend the useful life of hardware;
• reduce IT maintenance activities;
• reduce carbon emissions and prevent climate change;
• provide foundations for the penetration of renewable energy sources in IT systems.
The demand for high-speed data transfer and storage capacity, together with the increasing growth of broadband subscribers and services, will make green technologies of vital importance for the telecommunication industry in the near future. Already, recent research and technological papers show that energy efficiency is an important issue for future networks. In [2] a review of energy-efficient technologies for wireless and wired networks is presented. In [4] the design of energy-efficient WDM ring networks is highlighted. It is shown that energy efficiency can be achieved by increasing the CAPEX of the network, by reducing complexity and by utilizing management schemes. The case of thin-client solutions is investigated in [5], where it is shown that employing power states in the operation of a data center can yield energy efficiency. Efforts have been reported on agreeing and enabling a standard efficiency metric, real-time measurement systems, modelling energy efficiency, suggesting optimal designs, incorporating renewable energy sources in the data center and developing sophisticated algorithms for designing and managing data centers. These approaches have been published by various companies, experts in the field and organizations [6–17].
Although the IT industry has begun “greening” major corporate data centers, most of the cyber infrastructure on a university campus or in SMEs involves a complex network of ad hoc and suboptimal energy environments, with clusters placed in small departmental facilities. This paper investigates challenges for achieving energy efficiency in local and regional data centers and reviews the most recent achievements in this direction. The paper is organized as follows. In Section 2 the data center infrastructure and the power consumption associated with each part are examined. Section 3 presents a review of the energy efficiency metrics found in the literature, and the effect of energy efficiency on carbon emissions is examined. In Section 4 we investigate energy-efficient techniques.
2 Data Center Infrastructure and Power Consumption
2.1 Data Center Infrastructure
Data centers incorporate critical and non-critical equipment. Critical equipment comprises the devices responsible for data delivery, usually named the IT equipment. Non-critical equipment comprises the devices responsible for cooling and power delivery, named the Non-Critical Physical Infrastructure (NCPI). Figure 1 presents a typical data center block diagram [18, 19].
Figure 1 Typical data center infrastructure [18, 19].
The overall design of a data center can be classified into four categories, Tier I–IV, each presenting advantages and disadvantages related to power consumption and availability [18, 19]. In most cases, availability and safety issues lead to redundant N + 1, N + 2 or 2N data center designs, and this has a serious effect on power consumption. According to Figure 1, a data center has the following main units:
• Heat Rejection – usually placed outside the main infrastructure; incorporates chillers and drycoolers and presents an N + 1 design.
• Pump Room – pumps chilled water between the drycoolers and the CRACs; presents an N + 1 design (one pump in standby).
• Switchgear – provides direct distribution to mechanical equipment and, via the UPS, to electrical equipment.
• UPS – Uninterruptible Power Supply modules provide power and are usually designed in multiple redundant configurations for safety. Usually 1000 kVA to 800 kW per module.
• EG – Emergency Generators supply the data center with the necessary power in case of a breakdown. Usually diesel generators.
• PDU – Power Distribution Units deliver power to the IT equipment. Usually 200 kW per unit, with dual PDUs (2N) for redundancy and safety.
Figure 2 Power delivery in a typical data center [21].
• CRAC – Computer Room Air Conditioners provide cooling and air flow to the IT equipment. Air discharge is usually in an upflow or downflow configuration.
• IT Room – incorporates computers and servers placed in blades, cabinets or suites in a grid formation; provides data manipulation and transfer.
2.2 Power Consumption in Data Centers
The overall power consumption of a data center is the sum of the power consumed by each unit. Efficiency of individual parts is an important step for “greening” the data center, but optimization is achieved when efficiency targets the overall data center design [20]. The power delivery in a typical data center is presented in Figure 2 [21]. The power is divided into an in-series path and an in-parallel path. Power enters the data center from the main utility (electric grid, generator), PM, or the Renewable Energy Supply (RES) utility, PG, and feeds the switchgear in series. Within the switchgear, transformers scale down the voltage to 400–600 V [12]. This voltage flows into the UPS, which is also fed by the EG in case of a utility failure. The UPS incorporates batteries for emergency power supply and processes the voltage with a double AC-DC-AC conversion, to protect against utility failures and to smooth the transition to the EG system. Of course, the AC-DC-AC conversion is a process that wastes power and reduces the overall efficiency.
Figure 3 Power consumption by various parts of a data center (system refers to motherboards, fans, etc.) [19].
The output of the UPS feeds the PDUs that are placed within the main data center room. The PDUs break the high voltage from the UPS into many 110–220 V circuits to supply the electronic equipment. Finally, power is consumed by the IT processes, namely storage, networking, CPU and, in general, data manipulation.
The parallel path feeds the cooling system, which is important for the heat protection of a data center. The cooling system is also connected to the EG, since without cooling a typical data center can operate for only a few minutes before overheating. The cooling system incorporates fans and liquid chillers. The power distribution of these processes in an inefficient data center is presented in Figure 3 [19, 27]. It can be observed that almost 70% of the power is consumed by non-critical operations like cooling and power delivery and conversion, and 30% is used by the IT equipment. Of course, a portion of this percentage is also spent on networking, CPU, fans, storage and memory processing [19]. In other words, the useful work of the data center corresponds to a percentage of power smaller than the 30% delivered to the IT equipment.
The power consumption pattern presented in Figure 3 is not constant with time but varies according to different parameters, the main ones being the workload of the data center and the outside environment. Modelling the energy efficiency and the losses of the data center's equipment is a complex task, and crude simplifying assumptions have yielded great errors. First of all, the assumption that the losses associated with the power and cooling equipment are constant with time is wrong. It has been observed that the energy efficiency of this equipment is a function of the IT load and presents a nonlinear behaviour. In addition, this equipment usually operates at loads lower than its maximum capacity, and this increases the losses of the system. Finally, the heat generated by the NCPI equipment is not insignificant. In general, the losses of NCPI equipment are highly correlated with the workload of the data center in a complex nonlinear relationship [21].
According to the input workload, the losses of NCPI equipment can be categorized as follows:
• No-load losses – Losses that are fixed even if the data center has no workload. The loss percentage increases as the load decreases.
• Proportional losses – Losses that depend linearly on the workload. The loss percentage is constant with load.
• Square-law losses – Losses that depend on the square of the workload. These losses appear at high workloads (over 90%). The loss percentage decreases as the load decreases.
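The three categories above can be combined into a simple loss model. The sketch below is our own illustration with hypothetical coefficients; it shows why the loss percentage grows sharply at low utilization, as stated for the no-load term.

```python
def ncpi_losses(load: float, no_load: float = 0.05,
                proportional: float = 0.10, square: float = 0.08) -> float:
    """Total NCPI losses as a fraction of rated power, for a normalized
    IT workload `load` in [0, 1]. All three coefficients are hypothetical."""
    return no_load + proportional * load + square * load ** 2

def loss_percentage_of_load(load: float) -> float:
    """Losses expressed relative to the useful IT load being served."""
    return ncpi_losses(load) / load

# The fixed no-load term dominates at low utilization, so the loss
# percentage rises as the data center empties out.
print(round(loss_percentage_of_load(0.1), 3))  # ~0.608
print(round(loss_percentage_of_load(0.9), 3))  # ~0.228
```

This is the mechanism behind the paper's later remark that oversized, underutilized data centers are disproportionately inefficient.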
The IT equipment also presents non-constant losses and a variable energy efficiency that depends on the input workload. Based on these observations, it is concluded that the energy efficiency of a data center is a complicated parameter, non-constant with time. For this reason, techniques for measuring and predicting the data center's energy efficiency are of great importance.
2.3 Sources of Losses at Data Centers
The operation of data centers suffers great inefficiencies: a great amount of power is wasted on the operation of non-critical equipment and on the heat produced by the electronic equipment. The main disadvantage of real data centers is that a great amount of energy is wasted for cooling or is transformed into heat because of the inefficient operation of electronic equipment, whether NCPI or IT. The main causes of power waste are summarized as:
• Power units (UPS, transformers, etc.) operate below their full load capacities.
• UPS are oversized relative to the actual load requirements in order to avoid operating near their capacity limit.
• Air conditioning equipment consumes extra power in order to deliver cool air flow over long distances.
• Inefficient UPS equipment.
• Blockages between air conditioners and equipment that lead to inefficient operation.
• No virtualization and consolidation.
• Inefficient servers.
• No close-coupled cooling.
• No efficient lighting.
• No energy management and monitoring.
• Underutilization due to N + 1 or 2N redundant designs.
• Oversizing of the data center.
• Under-floor blockages that contribute to inefficiency by forcing cooling devices to work harder to accommodate the existing load's heat removal requirements.
The procedure to transform a data center into an energy efficient one (a green data center) is complex, and it can only be achieved by targeting both individual part optimization, which can be considered as operational costs, and overall system performance, which can be considered as planning actions. According to Figures 2 and 3, the optimized operation of the data center requires the input power to be minimized without affecting the operation of the IT equipment.
3 Energy Efficiency Metrics
3.1 Review of Metrics
The energy efficiency of a data center is a complicated, non-constant parameter that depends on the input workload and environmental conditions, and its estimation has attracted a lot of research. In order to investigate and propose directions to optimize energy consumption in a data center, it is important to quantify its performance. This can be achieved by using a standard metric to measure the inefficiencies. In past years, efficiency was incorrectly calculated by just adding the efficiencies of the individual parts as published by the manufacturers. This yielded great inaccuracies and overestimations, and the need for a standard metric and accurate model became obvious. As an example, the efficiency of a UPS system is measured as kWout over kWin at full load. Depending on the workload that enters the UPS, the efficiency can vary from 0% at no load to 95% at full load, in a nonlinear way [21]. Taking into consideration that common data centers operate at 30–40% of their maximum capacity workloads, the efficiency of the UPS cannot be considered constant and equal to the value specified by the manufacturer.
In general, energy efficiency in the telecommunication industry is related to

Energy Efficiency ∼ Joule/bit ∼ Watt/Gbps ∼ Watt/(bitrate/Hz) (spectral efficiency)   (1)
The optimal description of this value depends on the system's characteristics and the type of equipment. As an example, for modulation and coding techniques in wireless communications, spectral efficiency is a common measure. For electronic components, the ratio of joules per bit best describes performance. In telecommunication networks and data centers, the ratio of Watts consumed over the Gbps of data processed is preferred. In [22] an absolute energy efficiency metric, called dBε, is introduced. The metric is computed according to the following equation:
computed according to the following equation:
dBε = 10 log10
Power/bitrate
kT ln 2
(2)
where k is Boltzmann's constant and T is the absolute temperature (300 K). The value kT ln 2 represents the minimum energy dissipated per bit of information. Characteristic values for common systems are presented in Table 1 [22]. The smaller the dBε value, the greater the achieved efficiency.
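Equation (2) is easy to verify numerically. The sketch below reproduces the dBε values of Table 1, for example for the Tb/s router (10 kW at 1 Tb/s, i.e. 10 nJ/bit).

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann's constant k, in J/K
T = 300.0                 # absolute temperature used in the paper, in K

def db_epsilon(power_watts: float, bitrate_bps: float) -> float:
    """Absolute energy-efficiency metric of equation (2)."""
    energy_per_bit = power_watts / bitrate_bps  # J/bit
    floor = BOLTZMANN * T * math.log(2)         # kT ln 2: minimum energy per bit
    return 10.0 * math.log10(energy_per_bit / floor)

# Tb/s router from Table 1: 10 kW at 1 Tb/s -> 10 nJ/bit
print(round(db_epsilon(10e3, 1e12), 1))  # 125.4
```

The same function recovers the other rows of Table 1, e.g. about 162 dBε for the BT network (1 GW at 22 Tb/s).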
According to (1), data centers' energy efficiency can be broadly defined as the amount of useful computation divided by the total energy used during the process. The development of a standard metric has attracted a lot of research, and initiatives have commenced by the Green Grid association [23]. The Green Grid has established metrics according to the infrastructure efficiency and the data center performance efficiency.
Data centers encounter power waste in both the non-critical and the critical equipment. The metrics that best define the non-critical equipment's efficiency are the Power Usage Effectiveness (PUE) and the Data Center Infrastructure Efficiency (DCiE).
Table 1 Energy efficiency measured for typical systems.

System                      Power       Effective bitrate   Energy/bit [J/b]   dBε
BT network                  1 GWatt     22 Tb/s             45 × 10⁻⁶          162
Dell laptop                 80 Watt     1.87 GHz (clock)    42.8 × 10⁻⁹        131.7
Ultra-low power DSL/fiber   165 mWatt   10 Mb/s             16.5 × 10⁻⁹        127.6
Tb/s router                 10 kWatt    1 Tb/s              10 × 10⁻⁹          125.4
Efficient CPU               2.8 Watt    1 Gflops            2.8 × 10⁻⁹         119.9
Efficient 10 Gb/s system    10 Watt     10 Gb/s             1 × 10⁻⁹           115.4
Human brain                 20 Watt     40 Tb/s             0.5 × 10⁻¹²        82.3
1 photon/bit                1.28 nWatt  10 Gb/s             0.128 × 10⁻¹⁸      16.5
PUE is defined as the ratio of the total facility input power to the power delivered to the IT equipment, and DCiE is the inverse of PUE. With the notation of Figure 2, this is written in mathematical form as

PUE = PIN / PIT = CLF + PLF + 1,   1 < PUE < ∞
DCiE = 1 / PUE = PIT / PIN,   0 < DCiE < 1   (3)
where CLF is the cooling load factor normalized to the IT load (losses associated with chillers, pumps and air conditioners) and PLF is the power load factor normalized to the IT load (losses associated with the switchgear, UPS and PDUs). These metrics characterize the performance, or the power wasted, in the non-critical components of the data center: the cooling infrastructure and the power infrastructure. Figure 4 presents measured values of PUE over 24 different data centers.
It can be observed that the mean measured PUE is 1.83, corresponding to a DCiE of about 0.55 (55%). This means that almost 45% of the power that enters the data center is spent on cooling and power delivery in the non-critical components, while the remaining 55% is used for data processing. The infrastructure efficiency metrics vary in time and mainly depend on outdoor environmental conditions and traffic demands. For example, in low-temperature periods, losses due to cooling are minimized. In addition, in low-traffic periods losses increase, since the data center is oversized.
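The relation in (3) can be sketched directly. The CLF/PLF split below is a hypothetical decomposition chosen only to match the mean measured PUE of 1.83.

```python
def pue_from_loads(clf: float, plf: float) -> float:
    """PUE from the cooling and power load factors, as in equation (3)."""
    return 1.0 + clf + plf

def dcie(pue: float) -> float:
    """DCiE is simply the inverse of PUE."""
    return 1.0 / pue

# Hypothetical split reproducing the mean measured PUE of 1.83:
p = pue_from_loads(clf=0.55, plf=0.28)
print(round(p, 2))        # 1.83
print(round(dcie(p), 2))  # 0.55 -> about 55% of input power reaches the IT load
```

Any (CLF, PLF) pair summing to 0.83 yields the same PUE; separating the two terms is what lets an operator see whether cooling or power conversion dominates the waste.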
In Figure 5 the NCPI efficiency metrics for two different data centers are presented. It can be observed that data center A is more energy efficient than data center B, but at low input IT workload it is underutilized, resulting in lower energy efficiency.
Figure 4 PUE measurements over 24 data centers [19].
The energy efficiency of the overall data center's performance is computed according to the DCeP (Data Center energy Productivity) metric presented by the Green Grid [24]. This metric is preferred for long-term measurements of the performance of the data center, and in mathematical form it is computed according to (4):

DCeP = Useful Work / PIN = [ Σ from i=1 to m of Vi · Ui(t, T) · Ti ] / EDC   (4)
The term “useful work” describes the number of tasks executed by the data center, and PIN or EDC represents the power or energy, respectively, consumed for the completion of the tasks. In the above formulation, m is the number of tasks initiated during the assessment window, Vi is a normalization factor that allows the tasks to be summed, Ui is a time-based utility function for each task, t is the elapsed time from initiation to completion of the task, T is the absolute time of completion of the task, and Ti = 1 when the task is completed during the assessment window, or 0 otherwise.
The assessment window must be defined in such a way as to capture the data center's variation over time. The DCeP factor gives an estimate of the performance of the data center and is not as accurate as DCiE or PUE, because it is a relative measure. Proxies for computing the useful work according to the scenario of interest are presented in [24]. These proxies incorporate computations regarding bits per kilowatt-hour, weighted CPU utilization, useful-work self-assessment and other cases.
Figure 5 DCiE efficiency metrics for two data centers as a function of IT load.
In [9] a Power to Performance Effectiveness (PPE) metric is introduced to help identify, at the device level, where efficiencies could be gained. It gives IT managers a view of performance levels within the data center. It is computed according to the following equation:

PPE = Actual Power Performance / Optimal Power Performance   (5)
where the optimal power performance is computed as the product of the optimal number of servers, the optimal server performance utilization and the average Watts per server, divided by 1000. The factor “optimal server” is equal to the rack density multiplied by the optimal percentage. The PPE metric is used at the device level and compares actual performance with the theoretical efficiency indicated by the manufacturers.
A more generic approach to defining the efficiency metric of a data center is presented in [19]. The proposed efficiency metric combines the PUE (or DCiE) and DCeP. The formulation is

Efficiency = Computation / Total Energy (PIN) = (1/PUE) × (1/SPUE) × (Computation / PIT)   (6)
where the factor SPUE represents the server energy conversion and captures inefficiencies caused by the non-critical parts of the IT equipment, such as the server's power supply, voltage regulator modules and cooling fans. SPUE is defined as the ratio of the total server input power to the useful server power, i.e. the power consumed by motherboards, CPU, DRAM, I/O cards, etc. The combination of PUE and SPUE measures the total losses associated with the non-critical components that exist in the data center's NCPI and IT equipment.
In [25] the metric DPPE (Data Center Performance per Energy) is presented, which correlates the performance of the data center with carbon emissions. The metric follows the general rules presented in (4) and (6) and introduces one more factor for the green energy supply. In mathematical form it is

DPPE = Data Center Work / Carbon Energy = ITEU × ITEE × (1/PUE) × 1/(1 − GEC)   (7)

where

ITEU = Total Measured Energy of IT [kWh] / Total Specification Energy of IT (by manufacturer) [kWh]
ITEE = (a · server capacity + b · storage capacity + c · NW capacity) / Total Specification Energy of IT (by manufacturer) [kWh]
GEC = Green Energy / DC Total Power Consumption
In the above formulation, ITEU represents the IT equipment utilization, ITEE represents the IT equipment energy efficiency, PUE represents the efficiency of the physical infrastructure and GEC represents the penetration of renewable (green) energy into the system. ITEU is the average utilization factor of all IT equipment included in the data center and can be considered as the degree of energy saving achieved by virtualization and operational techniques that utilize the available IT equipment capacity without waste. ITEE is based on the DCeP presented in (4), and it aims to promote energy saving by encouraging the installation of equipment with high processing capacity per unit of electric power. Parameters a, b and c are weighting coefficients. PUE is defined in (3) and is the efficiency of the physical infrastructure of the data center. Finally, GEC is the “green” energy supplied to the data center in addition to the grid electricity; it is also shown in Figure 2 as PG. The higher the value of DPPE, the lower the carbon emissions produced and the more energy efficient the data center is.
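Equation (7) can be sketched as follows. The input values are hypothetical; they only show how a larger green-energy fraction GEC raises the DPPE score.

```python
def dppe(iteu: float, itee: float, pue: float, gec: float) -> float:
    """Equation (7): DPPE = ITEU x ITEE x (1/PUE) x 1/(1 - GEC)."""
    return iteu * itee * (1.0 / pue) * (1.0 / (1.0 - gec))

# Hypothetical data center: 50% IT utilization, ITEE of 2.0 work-units
# per specification-kWh, PUE of 1.8, and two green-energy scenarios.
print(dppe(0.5, 2.0, 1.8, gec=0.0))  # no renewables
print(dppe(0.5, 2.0, 1.8, gec=0.5))  # half the supply is green -> score doubles
```

The 1/(1 − GEC) factor is what distinguishes DPPE from (6): two data centers with identical infrastructure efficiency score differently if one draws part of its power from carbon-free sources.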
3.2 Carbon Emission and Cost Effects
The carbon emissions caused by the operation of data centers are related to the consumed power. Grid electricity is responsible for CO2 emissions depending on the material used for energy conversion. In order to provide a measure of carbon emissions, the energy is converted to grams of CO2; the conversion depends on each country's energy sources. The relationship is Q = 1 kWh ∼ X grCO2. For anthracite electricity production X = 870, for gas electricity production X = 370 and for petroleum X = 950 [26]. The metric used is TonsCO2/year. Therefore, the carbon footprint of a data center is computed according to [27]

KCO2 = 8.74 · 10⁻⁶ · P · X   [TonsCO2/year]   (8)
where P is shown in Figure 2 and represents the power in Watts consumed by the data center from the grid electricity. In case the data center is also supplied with green energy, the corresponding power in equation (8) would be P = PM = PIN − PG (according to Figure 2). The effect of the data center efficiency DCiE and of the type of power supply on the produced carbon emissions is made more obvious with the following example. Let us consider a data center that requires PIT = 300 kW for the operation of the IT equipment. The data center is supplied with grid electricity produced from anthracite (X = 870 grCO2/kWh) and also has a G% renewable energy supply (PIN = G · PG + (1 − G) · PM). The additional carbon emissions produced by a data center with a non-perfect infrastructure efficiency (0 < DCiE < 1), relative to a 100% efficient data center (DCiE = 1) and as a function of the green-energy share G, are computed according to

CCO2 = 8.74 · 10⁻⁶ · PIT · X · (1/DCiE) · (1 − G)   [TonsCO2/year]   (9)

where PIT is expressed in Watts. The effect of DCiE and green energy on the annual carbon emissions, compared to a 100% efficient data center, is shown in Figure 6(a).
Figure 6 (a) Carbon emissions in TonsCO2/year (assuming anthracite electricity production, ∼870 grCO2/kWh) as a function of DCiE and “green” energy, relative to a 100% efficient data center; (b) operational costs due to electricity (assuming 0.1 €/kWh) as a function of DCiE and “green” energy, relative to a 100% efficient data center.
The annual electricity cost (OPEX) of a data center is computed according to

MEuros = 8.74 · P · Y   [Euros/year]   (10)

where Y represents the cost of energy; for the purpose of our investigation it was assumed that 1 kWh ∼ 0.1 €. Similarly to (9), the comparison for a data center with non-perfect infrastructure efficiency that is supplied with G% renewable energy (no-cost energy) is given by

EEuros = 8.74 · PIT · Y · (1/DCiE) · (1 − G)   [Euros/year]   (11)

where PIT is expressed in Watts. The effect of DCiE and green energy on the annual electricity expenses, compared to a 100% efficient data center, is shown in Figure 6(b). It can be observed that the DCiE is a very important factor that must be considered first for the efficient operation of data centers. Assuming a 0% supply of renewable energy, a data center with DCiE = 0.3 produces 7600 TonsCO2/year and requires 874,000 €/year in electricity expenses, whereas a data center with DCiE = 0.7 produces 3260 TonsCO2/year and requires 370,000 €/year.
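The figures quoted above can be reproduced directly from (9) and (11). The sketch below assumes anthracite electricity (X = 870 grCO2/kWh) and a price of 0.1 €/kWh, as in the example.

```python
def carbon_tons_per_year(p_it_watts: float, x_gr_per_kwh: float,
                         dcie: float, g: float) -> float:
    """Equation (9): annual CO2 emissions in tons, for IT power in Watts."""
    return 8.74e-6 * p_it_watts * x_gr_per_kwh * (1.0 / dcie) * (1.0 - g)

def electricity_cost_per_year(p_it_watts: float, eur_per_kwh: float,
                              dcie: float, g: float) -> float:
    """Equation (11): annual electricity cost in euros, for IT power in Watts."""
    return 8.74 * p_it_watts * eur_per_kwh * (1.0 / dcie) * (1.0 - g)

# DCiE = 0.3, no renewables: ~7600 TonsCO2/year and ~874,000 Euros/year.
print(round(carbon_tons_per_year(300e3, 870, 0.3, 0.0)))       # 7604
print(round(electricity_cost_per_year(300e3, 0.1, 0.3, 0.0)))  # 874000
```

The constant 8.74 folds in the roughly 8,740 hours of a year and the Watt-to-kW conversion, which is why P is entered in Watts and the energy price in €/kWh.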
Figure 7 Directions for green data center transformation.
4 Challenges for Energy Efficiency
In general, efficiency can be achieved through optimization of the operation and optimal planning. Moreover, standards can be engineered to drive energy efficiency. This process incorporates the domains presented
in Figure 7.
4.1 Optimization of Operational Costs
The operational costs are associated with the optimization of individual equipment such as the IT equipment and NCPI [14, 20].
4.1.1 IT Equipment
Efficiency of IT equipment is an important step toward the green operation of a data center. The DCeP metric of (4) and the factors ITEE and ITEU of equation (7) show that energy efficiency is correlated with the efficient use of the
IT equipment. Achieving efficiency at the IT level can be considered as the
most important strategy for a green data center since for every Watt saved in
computation, two additional Watts are saved – one Watt in power conversion
and one Watt for cooling [7]. In general the following actions are necessary
to achieve this goal.
Retiring – some data centers have application servers that are operating but have no users. These servers add no-load losses to the system and need to be removed. The usable lifetime of servers within the data center varies
greatly, ranging from as little as two years for some x86 servers to seven years
or more for large, scalable SMP server systems. IDC surveys [28] indicate
that almost 40% of deployed servers have been operating in place for four
years or longer. That represents over 12 million single core-based servers still
in use. The servers that exist in most data centers today have been designed
for performance and cost optimization and not for energy efficiency. Many
servers in data centers have power supplies that are only 70% efficient.
This means that 30% of the power going to the server is simply lost as
heat. Having inefficient power supplies means that excess money is being
spent on power with additional cooling needed as well. Another problem with
current servers is that they are used at only 15–25% of their capacity. A big
problem with this, from a power and cooling perspective, is that the amount
of power required to run traditional servers does not vary linearly with the
utilization of the server. That is, ten servers each running at 10% utilization will consume much more power than one or two servers each running at 80–90% utilization [29].
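The non-linearity described above can be illustrated with a simple affine server power model. The idle and peak figures below are illustrative assumptions for a traditional server, not measurements from the cited surveys.

```python
def server_power(utilization, p_idle=120.0, p_peak=250.0):
    """Affine power model in Watts: a fixed idle draw plus a component
    proportional to load. The idle/peak values are illustrative assumptions."""
    return p_idle + (p_peak - p_idle) * utilization

# Ten servers each at 10% utilization...
spread_out = 10 * server_power(0.10)
# ...versus the same aggregate work (10 x 0.10 = 1.0 server-equivalents)
# packed onto two servers at 50% utilization each.
consolidated = 2 * server_power(0.50)
print(spread_out, consolidated)  # -> 1330.0 370.0
```

Because the idle draw is paid once per running machine, spreading a small workload across many servers multiplies the fixed cost; consolidation pays it only a few times.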
Migrating to more energy efficient platforms – use of blade servers that produce less heat in a smaller area. Non-blade systems require bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization becomes more efficient. The efficiency of blade centers is obvious if one considers power, cooling, networking and storage capabilities [7, 29, 30]. An example is presented in [7], where 53 blade servers consuming 21 kW provided 3.6 Tflops of computation, whereas in 2002 the same performance required an Intel Architecture-based HPC cluster of 512 servers arranged in 25 racks consuming 128 kW. This means that, compared to 2002 technology, today only 17% of the energy is necessary for the same performance.
• Power of Blade Servers – Computers operate over a range of DC
voltages, but utilities deliver power as AC, and at higher voltages than
required within computers. Converting this current requires one or more
power supply units (or PSUs). To ensure that the failure of one power
source does not affect the operation of the computer, even entry-level
servers have redundant power supplies, again adding to the bulk and
heat output of the design. The blade enclosure’s power supply provides
a single power source for all blades within the enclosure. This single
power source may come as a power supply in the enclosure or as a
dedicated separate PSU supplying DC to multiple enclosures.
• Cooling of Blade Servers – During operation, electrical and mechan-
ical components produce heat, which a system must displace to ensure
the proper functionality of its components. Most blade enclosures, like
most computing systems, remove heat by using fans. The blade’s shared
power and cooling means that it does not generate as much heat as
traditional servers. Newer blade-enclosure designs feature high-speed,
adjustable fans and control logic that tune the cooling to the system’s
requirements, or even liquid cooling systems.
• Networking of Blade Servers – The blade enclosure provides one or
more network buses to which the blade will connect, and either presents
these ports individually in a single location (versus one in each com-
puter chassis), or aggregates them into fewer ports, reducing the cost of
connecting the individual devices. This also means that the probability
of wiring blockage to air flow of cooling systems in the data center is
minimized.
More Efficient Server – For energy efficiency, vendors should focus on
making the processors consume less energy and on using the energy as ef-
ficiently as possible. This is best done when there is close coordination in
system design between the processor and server manufacturers [29]. In [31]
practical strategies of power efficient computing technologies are presented
based on microelectronic, underlying logic device, associated cache memory,
off-chip interconnect, and power delivery system. In addition, replacement of
old single core processors with dual or quad core machines is important. This
combination can improve performance per Watt and efficiency per square
meter.
Energy Proportional Computing – Many servers operate at a fraction of
their maximum processing capacity [32]. Efficiency can be achieved when the
server scales down its power use when the workload is below its maximum
capacity. When search traffic is high, all servers are being heavily used, but
during periods of low traffic, a server might still see hundreds of queries
Figure 8 Comparison of power usage and energy efficiency for Energy Proportional Comput-
ing (EPC) and common server (NO EPC) [32].
per second, meaning that idle periods are likely to be no longer than a few milliseconds. In addition, it is well known that rapid transitions between idle and full modes consume significant energy. In [33] a study over 5000 Google servers shows that, for most of the time, activity is limited to 20–30% of the servers' maximum capacity. A server's power consumption responds differently to varying input workloads. In Figure 8
the normalized power usage and energy efficiency of a common server is
presented as a function of the utilization (% of maximum capacity). It can be
observed that for typical server operation (10–50%) the energy efficiency is
very low meaning that the server is oversized for the given input workload.
This means that even in a no-workload scenario, the power consumption of the server is high. In the case of energy proportional computing (EPC), the energy varies proportionally to the input work. Energy-proportional machines must exhibit a wide dynamic power range – a property that might be rare today in computing equipment but is not unprecedented in other domains. To achieve energy proportional computing, two key features are necessary: a wide dynamic power range and active low-power modes.
Current processors have a wide dynamic power range, even more than 70% of peak power. On the other hand, the dynamic power range of other equipment is much narrower: around 50% for DRAM, 25% for disk drives and 15% for network switches. A processor running at a lower voltage-frequency
mode can still execute instructions without requiring a performance impact-
ing mode transition. There are no other components in the system with
active low-power modes. Networking equipment rarely offers any low-power
modes, and the only low-power modes currently available in mainstream
DRAM and disks are fully inactive [33]. The same observations are presented
in [34]. A technique to improve power efficiency is with dynamic Clock
Frequency and Voltage Scaling (CFVS). CFVS provides performance-on-
demand by dynamically adjusting CPU performance (via clock rate and
voltage) to match the workload. Another advantage of energy proportional machines is the ability to develop power management software that can identify underutilized equipment and set it to idle or sleep modes. Finally, benchmarks such as SPECpower_ssj2008 provide a standard application base that is representative of a broad class of server workloads and can help isolate efficiency differences in the hardware platform.
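The shape of the efficiency curve described above (low efficiency in the typical 10–50% operating region, as in Figure 8) can be sketched with the same affine power model; the 50% idle fraction below is an illustrative assumption for a non-EPC server, not a figure from [32, 33].

```python
def normalized_power(u, idle_fraction=0.5):
    """Power drawn at utilization u, as a fraction of peak power.
    idle_fraction is an assumed value for a non-EPC server."""
    return idle_fraction + (1.0 - idle_fraction) * u

def efficiency(u, idle_fraction=0.5):
    """Work delivered per unit power, normalized to 1.0 at full load."""
    return u / normalized_power(u, idle_fraction)

for u in (0.1, 0.3, 0.5, 1.0):
    print(f"utilization {u:.0%}: efficiency {efficiency(u):.2f}")
# A non-EPC server in the 10-50% region runs far below its full-load
# efficiency; an ideal EPC server (idle_fraction=0) stays at 1.0 everywhere.
```

Widening the dynamic power range (shrinking `idle_fraction`) flattens the curve, which is exactly the goal of energy proportional computing.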
4.1.2 NCPI Equipment
Efficiency of NCPI equipment is another step toward the green operation of a data center. This is highlighted by the DCiE or PUE metric of (3), and the effect of an efficient NCPI on a data center is presented in Figure 6. In general, the following actions are necessary to achieve this goal.
Replacing chiller or UPS systems that have been in service for 15 years
or more can result in substantial savings. New chiller systems can improve
efficiency by up to 50% [20].
Free cooling is a complex technique and requires workload management according to the environmental conditions. Recently, the Green Grid association presented a European air-side free cooling map [35].
Air conditioners – use air conditioners that can operate in economizer mode. This can have a great effect, especially at low outdoor temperatures. In addition, if air conditioners work with low output temperatures, there is a further increase in humidifier operation, resulting in increased power demands.
Power delivery – energy efficient power delivery can be achieved using
efficient voltage regulators and power supply units. In [7] a study of three different power delivery architectures is presented, namely conventional alternating current (AC), rack-level direct current (DC), and facility-level DC distribution. Results showed that the greatest efficiency is achieved through
facility-level 380V DC distribution. Intel calculated that an efficiency of ap-
proximately 75% may be achieved with facility-level 380V DC distribution
using best-in-class components. Finally, in [10] a comparison of the energy
efficiency, capital expense and operating expense of power distribution at
400 and 600 V as alternatives to traditional 480 V is presented. The study confirmed that by modifying the voltage at which power is distributed in the data center, data center managers can dramatically reduce energy consumption and the cost of power equipment. The study recommended 400 V power distribution, stepped down to 230 V to support IT systems, in order to maximize end-to-end power delivery efficiency. Switch consolidation is also an effective strategy [34].
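End-to-end delivery efficiency is simply the product of the efficiencies of every conversion stage between the utility and the IT load, which is why per-stage losses compound quickly. The stage values below are illustrative assumptions, not the measured figures of [7] or [10].

```python
from math import prod

def end_to_end_efficiency(stage_efficiencies):
    """Fraction of utility power that reaches the IT load: the product of
    the efficiencies of every conversion stage in the delivery chain."""
    return prod(stage_efficiencies)

# Illustrative stage efficiencies (assumed, not measured):
conventional_ac = [0.92, 0.98, 0.96, 0.75]  # UPS, PDU, wiring, server PSU
facility_dc = [0.96, 0.99, 0.92]            # rectifier, wiring, DC-DC at server

print(end_to_end_efficiency(conventional_ac))  # ~0.65
print(end_to_end_efficiency(facility_dc))      # ~0.87
```

Removing a conversion stage, or improving the weakest one (here the assumed 75% server PSU), has an outsized effect on the overall chain.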
4.2 Planning Actions
Optimizing individual equipment is a crucial step for a data center to operate in a green manner, but it is inadequate on its own to transform the overall system. Planning actions for the efficiency of the overall system are required; these can be achieved by introducing new technologies and management techniques.
4.2.1 Reducing Cooling Needs
A data center usually occupies a large space, and optimal equipment installation can yield great savings. The following steps are considered important for energy efficiency:
• Organize IT equipment into a hot-aisle/cold-aisle configuration [19, 20].
• Minimize blockage by wiring and secondary equipment that impedes air flow, cooling and heat removal.
• Use raised-floor environments.
• Position the equipment so that the airflow between the hot and cold aisles can be controlled, preventing hot air from recirculating back to the IT equipment cooling intakes.
• Leverage low-cost supplemental cooling options.
• Use equipment with higher thermal tolerance to reduce the need for cooling.
• Investigate heat and cooling transfer within the data center space using computational fluid dynamics software, and perform air flow management [36].
4.2.2 Exploitation of Virtualization
Virtualization and consolidation are necessary steps to overcome underutilization of a data center's IT equipment. In [7] a study over 1000 servers showed that the servers operate at 10–25% of their maximum capacity, so the need for consolidation is obvious. Consolidation is used to centralize IT systems: at the software level it means centralization of the solution, integration of data and redesign of business processes, while at the hardware level it means centralization of multiple servers onto one more powerful and more energy efficient machine.
Virtualization can also be defined at the software level, the hardware level, or a combination of the two. Virtualization can be oriented toward servers, and can also be very effective for networking and storage. It enables multiple low-utilization OS images to occupy a single physical server, allowing applications to be consolidated onto a smaller number of servers through the elimination of many low-utilization servers dedicated to single applications or operating system versions [37]. The main advantages of server virtualization are that the total number of servers is reduced, less infrastructure for cooling and power delivery is required, energy costs are reduced, the virtual machines operate near their maximum capacity, where energy efficiency is met, and it can provide business continuity and disaster recovery. A study presented in [38] showed that server virtualization over 500 servers in 60 data centers in Bosnia-Herzegovina can provide electricity cost savings of $500,000 over a three-year period.
4.2.3 Remote Monitoring for Optimal Planning
The aim of remote monitoring is to provide the industry with a set of design
guides to be used by operators and designers to plan and operate energy
efficient data centers. Remote monitoring is an enabler for optimal planning
since it provides the necessary information for planning actions. The design
includes all IT and NCPI equipment. The outcome is an intelligent monitoring system that can provide real-time information about the power condition of the equipment in a data center, by means of sensor network implementations or SCADA systems.
The energy efficiency design incorporates the following directions [39]:
• Fully Scalable. All systems/subsystems scale energy consumption and
performance to use the minimal energy required to accomplish work-
load.
• Fully Instrumented. All systems/subsystems within the data center are
instrumented and provide real time operating power and performance
data through standardized management interfaces.
• Fully Announced. All systems/subsystems are discoverable and report
minimum and maximum energy used, performance level capabilities,
and location.
• Enhanced Management Infrastructure. Compute, network, stor-
age, power, cooling, and facilities utilize standardized manage-
ment/interoperability interfaces and language.
• Policy Driven. Operations are automated at all levels via policies set
through management infrastructure.
• Standardized Metrics/Measurements. Energy efficiency is monitored at
all levels within the data center from individual subsystems to complete
data center and is reported using standardized metrics during operation.
In order to accomplish this target, a set of logical divisions can be extracted and tasks assigned to them, as shown in Figure 9. An important goal of remote monitoring is that crucial information about the real data center is gathered, improving the development of efficiency prediction models and guiding optimal planning actions. Furthermore, better management of the system becomes possible. As far as power management is concerned, with adequate remote monitoring technology the workloads and efficiencies of different systems can be measured and, ideally, power usage in a data center can be balanced to the workload. This feature is important for workload management actions, which are further investigated below. One way to achieve this balance is to idle unneeded equipment or transfer workloads so as to obtain high usage capacity of data centers.
A strategy toward real-time remote monitoring and management of a data center is also supported in [40], where data center management software is presented that supports energy efficiency, standardization and benchmarking at different layers of the system.
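As a sketch of how such a monitoring system might roll per-subsystem power readings up into the DCiE metric of (3): the subsystem names and readings below are illustrative, not drawn from [39] or [40].

```python
def dcie_from_readings(readings):
    """Compute DCiE = P_IT / P_total from per-subsystem power readings
    (Watts), as a remote monitoring system might aggregate them.
    Keys prefixed "it/" are counted as IT load; everything else is NCPI."""
    p_it = sum(p for name, p in readings.items() if name.startswith("it/"))
    p_total = sum(readings.values())
    return p_it / p_total

# Illustrative snapshot from a sensor network / SCADA feed:
readings = {
    "it/servers": 30_000, "it/storage": 5_000, "it/network": 3_000,
    "ncpi/cooling": 28_000, "ncpi/ups_losses": 6_000, "ncpi/lighting": 1_000,
}
print(round(dcie_from_readings(readings), 3))  # -> 0.521
```

Computed continuously from live readings, such a figure lets operators track the facility against the benchmark tiers discussed later and detect efficiency regressions as they happen.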
4.2.4 Rightsizing
Data centers suffer from low utilization relative to their maximum capacity. In [41] a study shows that 70% of today's data centers operate at less than 45% of their maximum capacity limits. This has a serious effect on energy efficiency, as presented in Figure 10.
There are numerous reasons why oversizing occurs, for example:
• The cost of failing to provide sufficient space for a data center is enormous, so operators are strongly motivated to avoid any risk of undersizing.
Figure 9 Logical division and tasks for green data centers and remote monitoring [39].
Figure 10 Effect of individual equipment efficiency and rightsizing [41].
• It is tremendously costly to increase capacity during the data center lifecycle.
• There are numerous risks associated with increasing capacity during the lifecycle.
• It is difficult to predict the final room size, so designers always want to stay above the threshold for safety reasons.
The result of oversizing is that the data center operates well below its maximum capacity (usually at 30–50%). At this input workload, none of the equipment operates efficiently. Rightsizing mainly affects NCPI power consumption. In this approach the power and cooling equipment should be balanced to the IT load of the data center.
There are fixed losses imposed by the NCPI independently of the IT load. These losses can exceed the IT load in lightly loaded systems and represent a large percentage of total consumption when the IT load is low. Typical data centers draw approximately two to three times the amount of power required for the IT equipment, because conventional data center designs are oversized for maximum capacity and older infrastructure components can be very inefficient.
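The interaction between fixed NCPI losses and IT load can be sketched numerically. The loss figures below are illustrative assumptions chosen to show the trend, not data from [41].

```python
def facility_power(it_load_watts, fixed_loss_watts=40_000,
                   proportional_loss=0.4):
    """Total facility draw: the IT load, plus fixed NCPI losses that are
    paid regardless of load, plus losses that scale with the IT load.
    All loss figures are illustrative assumptions."""
    return it_load_watts + fixed_loss_watts + proportional_loss * it_load_watts

# PUE (total power / IT power) worsens sharply as the facility is
# operated further below the capacity it was sized for:
for load in (10_000, 50_000, 100_000):
    pue = facility_power(load) / load
    print(f"IT load {load / 1000:.0f} kW -> PUE {pue:.2f}")
```

With these assumed figures, PUE falls from 5.4 at 10 kW of IT load to 1.8 at 100 kW: the fixed losses dominate at light load, which is why rightsizing the NCPI to the actual IT load matters so much.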
27. 26 G. Koutitas and P. Demestichas
Rightsizing a data center is a complex task, since great losses and costs are associated with improper sizing of the equipment. For that reason, accurate models and strong management skills are needed to achieve an optimal design. The steps that need to be followed are [41]:
• Investigate the workload and sizing of existing data centers.
• Estimate the expected workload and future applications of the data center.
• Avoid underutilizing the data center on the assumption that this will increase reliability.
• Develop sophisticated workload prediction models.
An approach based on adaptable infrastructure and dimensioning can provide rightsizing of the system; this is presented in [41]. The effect of rightsizing on the data center's energy efficiency can be observed in Figure 10.
4.2.5 Network Load Management/Network Topology
Various data center topologies and routing algorithms different from the existing ones can increase the system's performance. A fat-tree data center architecture is presented in [42]. Instead of expensive hierarchical enterprise switches (10 GigE), commodity switches (GigE) are utilized in a fat-tree configuration. Despite the fact that more intelligent and complex routing is required (a two-level routing algorithm), the cost of the new deployment and the overall power consumption are reduced. The physical meaning of this observation is that for typical data centers, where the workload is small and oversizing occurs, incorporating small commodity equipment instead of powerful enterprise equipment distributes the workload in such a way that most of the used elements operate near their maximum capacity. This has a positive effect on the efficiency of the system. The study showed more than a 50% reduction in power consumption and heat generation.
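The scaling properties of the fat-tree architecture of [42] follow directly from its structure: built entirely from identical k-port commodity switches, a k-ary fat tree supports k³/4 hosts using 5k²/4 switches.

```python
def fat_tree_counts(k):
    """Element counts for a k-ary fat tree built from identical k-port
    commodity switches, per the architecture of [42]: k pods, each with
    k/2 edge and k/2 aggregation switches, plus (k/2)^2 core switches."""
    hosts = k ** 3 // 4
    edge = aggregation = k * (k // 2)
    core = (k // 2) ** 2
    return {"hosts": hosts, "switches": edge + aggregation + core}

print(fat_tree_counts(48))  # 48-port switches -> 27,648 hosts, 2,880 switches
```

This is what makes the commodity approach attractive: full bisection bandwidth for tens of thousands of hosts with no expensive high-radix enterprise switches anywhere in the topology.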
Another approach to electricity cost savings is the routing algorithm presented in [43]. The authors suggested an algorithm that moves workload to data centers located in areas where the electricity cost is low, using a real-time electricity price information system coupled to the routing algorithm. The concept is to move workload to data centers in low-cost areas and reduce the usage of the others, so that the increased power consumption of the fully loaded data centers results in low expenses.
Figure 11 Rightsizing routing algorithm for energy efficiency.
Despite the fact that the presented technique is associated with cost savings, it can also be used in future systems for workload management according to renewable energy availability, or according to environmental conditions to provide free cooling.
Network load management techniques are implemented to balance workload across the data centers in a network. The main aim is to reduce the workload of underutilized data centers and deliver it to other data centers so that they operate near their maximum capacity. The concept is presented in Figure 11. The model utilizes the data derived from the remote monitoring system, together with the data required for energy efficiency metric computation, and performs workload management on the network devices to provide maximum energy efficiency. In addition, a real-time feedback system that reports environmental conditions and the availability/production of renewable energy is coupled to the algorithm.
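The placement logic described above can be sketched as a greedy assignment that fills the cheapest site first, subject to capacity. This is a sketch of the idea behind [43] and Figure 11, not the authors' actual algorithm; the site names, prices, and capacities are illustrative.

```python
def place_workload(demand, sites):
    """Greedy workload placement: fill the cheapest site first, subject to
    its capacity. sites is a list of (name, price_per_kwh, capacity) tuples;
    returns a dict of site name -> assigned workload units."""
    plan = {}
    for name, price, capacity in sorted(sites, key=lambda s: s[1]):
        assigned = min(demand, capacity)
        if assigned > 0:
            plan[name] = assigned
            demand -= assigned
    return plan

# Illustrative sites: (name, electricity price, capacity in workload units)
sites = [("dc-east", 0.12, 400), ("dc-west", 0.08, 300), ("dc-north", 0.10, 500)]
print(place_workload(600, sites))  # -> {'dc-west': 300, 'dc-north': 300}
```

A real system would re-run this continuously as prices, renewable availability, and monitored capacity change, and would also account for network latency and migration cost, which this sketch omits.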
4.2.6 Avoid Data Duplication
Data duplication increases power consumption in storage devices. The fact that most data is duplicated for safety reasons reduces the energy efficiency of the system. Storage virtualization is one approach to overcoming this phenomenon [37].
Table 2 Energy efficiency ranking for 25 typical data centers.
The Green Grid Benchmark PUE Number of Data Centers
Platinum <1.25 0%
Gold 1.25–1.43 0%
Silver 1.43–1.67 0%
Bronze 1.67–2.0 27%
Recognized 2.0–2.5 40%
Non-Recognized >2.5 33%
4.2.7 Alternative Energy Supply
The DCiE investigation in Section 3.1 showed that one approach to increasing the efficiency of a data center is to reduce the input power needed from the utility. This can be achieved by applying alternative energy supplies in the data center. Of course, the power produced by current renewable energy technologies is only a small fraction of the power actually required to operate a data center, but it can be profitable for small traffic demands where the requirements are reduced. The effect of the penetration of alternative energy sources is also more obvious in the DPPE metric shown in equation (7) and in Figure 6.
4.3 Standardization
Standardization of energy efficiency procedures and techniques is considered a necessary step for achieving the green operation of data centers [16]. Developing and designing data centers on a common standard platform could set ambitious targets for energy efficient data centers. A benchmark presented by the Green Grid association categorizes data centers according to the measured DCiE factor. This can be considered the first approach toward this target, and benchmarks that incorporate the DCeP or DPPE metrics are expected in the near future. Data center benchmarks are presented in Table 2 [16]. It can be observed that the Platinum, Gold and Silver benchmarks have not yet been achieved.
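The tiers of Table 2 can be expressed as a small classification routine, mapping a measured PUE to its benchmark tier; this is a sketch of the table, not an official Green Grid tool.

```python
def green_grid_rank(pue):
    """Map a measured PUE to the Green Grid benchmark tiers of Table 2."""
    if pue < 1.25:
        return "Platinum"
    if pue < 1.43:
        return "Gold"
    if pue < 1.67:
        return "Silver"
    if pue <= 2.0:
        return "Bronze"
    if pue <= 2.5:
        return "Recognized"
    return "Non-Recognized"

print(green_grid_rank(1.9), green_grid_rank(2.7))  # -> Bronze Non-Recognized
```

Fed by the continuously monitored PUE of Section 4.2.3, such a routine lets an operator track where the facility sits in the ranking over time.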
5 Conclusions
This paper presented today's challenges for achieving energy efficiency in local and regional data center systems. The power consumption of the different layers of the data center was investigated, and it was shown that great portions of power are wasted both in non-critical infrastructure and in IT equipment. The available metrics for energy efficiency were reviewed, and the effect of energy efficiency on carbon emissions and operational costs was computed. It was shown that great cost and carbon savings can be expected when a data center operates in a green manner. Strategies for developing and transforming a green data center were also presented, based on operational and planning actions. It was shown that energy efficiency should target overall system optimization of both non-critical and IT equipment, with the main focus placed on cooling, power delivery systems, virtualization and workload management.
References
[1] International Telecommunication Union (ITU), Report on Climate Change, October
2008.
[2] G. Koutitas and P. Demestichas, A review of energy efficiency in telecommunication
networks, in Proc. in Telecomm. Forum (TELFOR), Serbia, November, pp. 1–4, 2009.
[3] Gartner Report, Financial Times, 2007.
[4] I. Cerutti, L. Valcarenghi, and P. Castoldi, Designing power-efficient WDM ring
networks, in Proc. ICST Int. Conf. on Networks for Grid Applic., Athens, 2009.
[5] W. Vereecken, et al., Energy efficiency in thin client solutions, in ICST Int. Conf. on
Networks for Grid Applic., Athens, 2009.
[6] J. Haas, T. Pierce, and E. Schutter, Data center design guide, White Paper, The Greengrid,
2009.
[7] Intel, Turning challenges into opportunities in the data center, White Paper, Energy
Efficiency in the Data Center, 2007.
[8] P. Scheihing, DOE data center energy efficiency program, U.S. Department of Energy,
April 2009.
[9] C. Francalanci, P. Gremonesi, N. Parolini, D. Bianchi, and M. Gunieti, Energ-IT: Models
and actions for reducing IT energy consumption, Focus Group on Green IT and Green
e-Competences, Preliminary Project Results, Milan, 2009.
[10] EATON, Is an energy wasting data center draining your bottom line? New technology
options and power distribution strategies can automatically reduce the cost and carbon
footprint of a data center, White Paper, 2009.
[11] R. Bolla, R. Bruschi, and A. Ranieri, Energy aware equipment for next-generation
networks, in Proc. Int. Conf. on Future Internet, Korea, November, pp. 8–11, 2009.
[12] Report to Congress, Server and data center energy efficiency, U.S. Environmental Protection Agency, Energy Star Program, August 2007.
[13] L. MacVittie, A green architectural strategy that puts IT in the black, F5 White Paper,
2010.
[14] IBM, The green data center, White Paper, May 2007.
[15] TheGreenGrid, Guidelines for energy-efficient data centers, White Paper, February 2007.
[16] Hewlett Packard, A blueprint for reducing energy costs in your data center, White Paper,
June 2009.
[17] www.cost804.org
[18] Cisco, Cisco data center infrastructure 2.5 design guide, Technical Report, December
2006.
[19] L.A. Barroso and U. Holzle, The Data Center as a Computer: An Introduction to the
Design of Warehouse-Scale Machines, Morgan and Claypool, 2009.
[20] N. Rasmussen, Implementing energy efficient data centers, White Paper, APC, 2006.
[21] N. Rasmussen, Electrical efficiency modelling for data centers, White Paper, APC, 2007.
[22] M. Parker and S. Walker, An absolute network energy efficiency metric, in Proc. ICST
Int. Conf. on Networks for Grid Applic., Athens, 2009.
[23] TheGreenGrid, Green grid metrics: Describing data center power efficiency, White
Paper, February 2007.
[24] TheGreenGrid, Proxy proposals for measuring data center productivity, White Paper,
2008.
[25] GIPC, Concept of new metric for data center energy efficiency: Introduction to data
center performance per energy DPPE, Green IT Promotion Council, February 2010.
[26] International Energy Agency (IEA), CO2 emissions from fuel combustion: Highlights,
Report, 2009.
[27] N. Rasmussen, Allocating data center energy costs and carbon to IT users, APC White
Paper, 2009.
[28] M. Eastwood, J. Pucciarelli, J. Bozman, and R. Perry, The business value of con-
solidating on energy-efficient servers: Customer findings, IDC White Paper, May
2009.
[29] J. Murphy, Increasing Energy Efficiency with x86 Servers, Robert Francis Group, 2009.
[30] N. Rasmussen, Strategies for deploying blade servers in existing data centers, APC White
Paper, 2005.
[31] L. Chang et al., Practical strategies for power-efficient computing technologies, Proceedings of the IEEE, vol. 98, no. 2, February 2010.
[32] J. Koomey et al., Server energy measurement protocol, Energy Star, Report, 2006.
[33] L.A. Barroso and U. Hölzle, The case for energy-proportional computing, IEEE Computer, vol. 40, pp. 33–37, 2007.
[34] Force 10, Managing data center power and cooling, White Paper, 2007.
[35] www.thegreengrid.org.
[36] http://www.futurefacilities.com/software/6SigmaRoom.htm.
[37] S. Bigelow, D. Davis, and R. Vanover, Choosing to wait on virtualization, how can the
open virtualization format help you, critical decisions for virtualization storage, Virtual
Datacenter, vol. 20, March 2010.
[38] E. Cigic and N. Nosovic, Server virtualization for energy efficient data centers in Bosnia
and Herzegovina, Int. Telecommunication Forum TELFOR, Serbia, November 2009.
[39] J. Haas, et al., Data center design guide. Program overview, Report by The Green Grid,
2009.
[40] IBM, Taking an integrated approach to energy efficiency with IBM Tivoli software,
White Paper, August 2008.
[41] APC, Avoiding costs from oversizing data center and network room infrastructure, White
Paper, 2003.
[42] M. Al-Fares, A. Loukissas, and A. Vahdat, A scalable, commodity data center network
architecture, in ACM SIGCOMM, pp. 63–74, August 2008.
[43] A. Qureshi et al., Cutting the electric bill for internet-scale systems, in ACM SIGCOMM, 2009.
Biographies
G. Koutitas was born in Thessaloniki, Greece. He received his B.Sc. degree
in Physics from the Aristotle University of Thessaloniki, Greece, in 2002 and
his M.Sc. degree, with distinction, in Mobile and Satellite Communications
from the University of Surrey, UK, in 2003. He received his Ph.D. in radio channel modeling from the Centre for Communications Systems Research (CCSR) of the University of Surrey in 2007. His main research interests are
in the area of radio wave propagation modeling, wireless communications
(modeling and optimization) and in the area of ICT for sustainable growth
and energy efficiency. He is involved in research activities concerning energy
efficient network deployments and design, Green communications and
sensor networks.
P. Demestichas was born in Athens, Greece, in 1967. Since December 2007 he has been Associate Professor at the University of Piraeus, in the Department of Digital Systems, which he joined in September 2002 as Assistant Professor.
From January 2001 until August 2002 he was adjunct lecturer at NTUA, in
the Department of Applied Mathematics and Physics, of the National Tech-
nical University of Athens (NTUA). From January 1997 until August 2002
he was senior research engineer in the Telecommunications Laboratory of
NTUA. By December 1996 he had acquired a Diploma and Ph.D. degree in Electrical and Computer Engineering from NTUA, and had completed his military service in the Greek Navy. He has been actively involved in a
number of national and international research and development programs.
His research interests include the design and performance evaluation of
high-speed, wireless and wired, broadband networks, software engineering,
network management, algorithms and complexity theory, and queuing theory.
Most of his current activities focus on the Information Communications Tech-
nologies (ICT), 7th Framework Programme (FP7), “OneFIT” (Opportunistic
Networks and Cognitive Management Systems for Efficient Application Pro-
vision in the Future Internet), in which he is the project manager. Moreover,
he will be involved in the ICT/FP7 projects “UniverSelf” (Autonomics for
the Future Internet), “Acropolis” (Advanced coexistence technologies for ra-
33. 32 G. Koutitas and P. Demestichas
dio optimization in licenced and unlicensed spectrum), and the COST action
IC0902. He has been the technical manager of the “End-to-End Efficiency”
(E3) project, which focused on the introduction of cognitive systems in the
wireless world. He is the chairman of Working Group 6 (WG6), titled “Cog-
nitive Networks and Systems for a Wireless Future Internet”, of the Wireless
World Research Forum (WWRF). He has further experience in project man-
agement from the FP5/IST MONASIDRE project, which worked on the
development of composite wireless networks. He has actively participated
in FP6/IST (E2R I/II, ACE), Eureka/Celtic, as well as FP5, ACTS, RACE
I and II, EURET and BRITE/EURAM. He has around 200 publications in
international journals and refereed conferences. He is a member of the IEEE,
ACM and of the Technical Chamber of Greece.