This document discusses a proposed framework for green computing networks. It begins with an introduction that outlines the energy crisis and impact of increased connectivity on the environment. It then reviews existing solutions for wireless networks, including caching, virtualization, network services, energy awareness, and cloud computing. The document proposes an architecture for green computing networks that utilizes software-defined networking and information-centric networking principles. It leverages concepts like caching, virtualization and energy-aware algorithms to more efficiently schedule tasks based on available energy. The goal is to minimize the environmental impact of rapidly growing wireless networks through this software-based approach.
This document discusses energy efficiency in cloud computing. It notes that cloud computing has led to large data centers with significant energy usage and carbon footprints. The resource allocation problem in cloud computing is treated as a linear programming problem aimed at minimizing energy consumption. Several heuristic algorithms are adopted and analyzed for resource allocation using an expected time to compute task model to develop green cloud computing solutions that reduce costs and environmental impacts.
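The heuristic approach described above can be sketched concretely. Below is a minimal greedy allocator that assigns each task to the machine where its energy cost is lowest; the ETC (expected time to compute) matrix and per-machine power ratings are illustrative assumptions, not the paper's actual model parameters.

```python
# Hedged sketch of a greedy heuristic for energy-minimizing resource
# allocation under an ETC model. All figures are illustrative.

def greedy_min_energy(etc, power):
    """Assign each task to the machine where energy = power * ETC is lowest.

    etc[t][m]  -- expected time to compute task t on machine m (seconds)
    power[m]   -- active power draw of machine m (watts)
    Returns (assignment list, total energy in joules).
    """
    assignment, total = [], 0.0
    for task_times in etc:
        # energy of running this task on each machine
        energies = [p * t for p, t in zip(power, task_times)]
        best = min(range(len(power)), key=energies.__getitem__)
        assignment.append(best)
        total += energies[best]
    return assignment, total

# Example: 3 tasks, 2 machines.
etc = [[10.0, 6.0],   # task 0 runs faster on machine 1
       [4.0, 9.0],    # task 1 runs faster on machine 0
       [5.0, 5.0]]
power = [200.0, 120.0]  # watts; machine 1 is the low-power node
assignment, energy = greedy_min_energy(etc, power)
print(assignment, energy)  # [1, 0, 1] 2120.0
```

Note that task 2 goes to the low-power machine even though its ETC is identical on both, because the heuristic minimizes energy rather than time.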
Energy efficient resource allocation in cloud computing (Divaynshu Totla)
This document discusses energy efficiency in cloud computing. It first provides background on the rising energy consumption of data centers due to increased cloud usage. It then discusses various approaches for improving energy efficiency in clouds, including virtualization and energy-aware scheduling algorithms like round-robin and first-come first-serve. The document proposes an energy-aware VM scheduler that uses these algorithms to minimize server usage and reduce energy consumption while meeting performance requirements. Overall the document analyzes the problem of high cloud energy usage and proposes a scheduler to improve efficiency through virtualization and algorithmic approaches.
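As a rough illustration of how such a scheduler can minimize server usage, here is a minimal first-come-first-serve placement sketch that powers on a new server only when no active one has room; the server capacity and VM sizes are illustrative assumptions, not values from the document.

```python
# Hedged sketch of an energy-aware, first-come-first-serve VM placement:
# VMs are handled in strict arrival order and packed onto already-active
# servers first, so new servers (with their idle power cost) are switched
# on only when needed. Capacities and demands are illustrative.

def fcfs_energy_aware(vm_demands, server_capacity):
    """Place VMs in arrival order; return (placement, servers powered on)."""
    servers = []    # remaining capacity of each active server
    placement = []  # index of the server chosen for each VM
    for demand in vm_demands:
        for i, free in enumerate(servers):
            if demand <= free:          # first active server that fits
                servers[i] -= demand
                placement.append(i)
                break
        else:                           # nothing fits: power on a new server
            servers.append(server_capacity - demand)
            placement.append(len(servers) - 1)
    return placement, len(servers)

placement, active = fcfs_energy_aware([4, 3, 2, 5, 1], server_capacity=8)
print(placement, active)  # [0, 0, 1, 1, 0] 2
```

Five VMs end up on two active servers; a naive round-robin over all available servers would have kept more machines powered on for the same load.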
Energy Saving by Virtual Machine Migration in Green Cloud Computing (ijtsrd)
Innovation has advanced so quickly that most large enterprises now have to move to the cloud. The cloud provides a wide range of services, from high-performance computing to storage. The data center, consisting of servers, networking, cabling, cooling systems and so on, is a critical part of the cloud, as it carries an organization's business information on its servers. Cloud computing relies on large data centers, but these raise serious environmental issues such as heat emission, heavy energy consumption, and the release of greenhouse gases like methane, nitrous oxide, and carbon dioxide. High energy consumption also means high operational cost and lower profit. Hence the need for green cloud computing, an environmentally friendly and energy-efficient version of cloud computing. This paper discusses the major issues related to cloud computing and the various techniques used to minimize power consumption. Ruhi D. Viroja | Dharmendra H. Viroja, "Energy Saving by Virtual Machine Migration in Green Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1, Issue-1, December 2016. URL: http://www.ijtsrd.com/papers/ijtsrd104.pdf http://www.ijtsrd.com/engineering/computer-engineering/104/energy-saving-by-virtual-machine-migration-in-green-cloud-computing/ruhi-d-viroja
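The migration idea behind the paper can be sketched as a simple consolidation pass: move VMs off under-utilized hosts so the emptied machines can be powered down. The capacity, threshold, and host loads below are illustrative assumptions, not the paper's data.

```python
# Hedged sketch of energy saving by VM migration: try to empty each host
# whose total load is below a threshold, then count how many hosts can be
# powered off. All figures are illustrative.

def consolidate(hosts, capacity, low_threshold):
    """hosts: list of per-host VM load lists (mutated in place).

    Returns (hosts, hosts powered off, number of migrations performed)."""
    migrations = 0
    for src in range(len(hosts)):
        if 0 < sum(hosts[src]) <= low_threshold:      # under-utilized host
            for vm in list(hosts[src]):               # copy: we mutate below
                for dst in range(len(hosts)):
                    # migrate onto a non-empty host that still has room
                    if dst != src and hosts[dst] and sum(hosts[dst]) + vm <= capacity:
                        hosts[dst].append(vm)
                        hosts[src].remove(vm)
                        migrations += 1
                        break
    powered_off = sum(1 for h in hosts if not h)
    return hosts, powered_off, migrations

hosts, off, migrations = consolidate([[5, 2], [1], [3]], capacity=8, low_threshold=2)
print(hosts, off, migrations)  # [[5, 2, 1], [], [3]] 1 1
```

One migration empties the middle host, which can then be shut down, removing its idle power draw entirely.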
Green cloud computing aims to minimize environmental impact by optimizing computing resource usage. It focuses on reducing materials, energy, water and e-waste through techniques like virtualization, consolidation, automation and multitenancy. These improvements lead to greater efficiency and resource utilization in cloud data centers and networks. Metrics like PUE, CUE and DCP are used to measure a cloud's environmental footprint and productivity.
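The metrics named above are simple ratios. A minimal sketch using the standard Green Grid definitions of PUE and CUE follows; the sample figures are illustrative, not from the document.

```python
# Hedged sketch of two data center efficiency metrics (Green Grid
# definitions). Sample energy and emissions figures are illustrative.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: 1.0 is the theoretical ideal
    (every kWh entering the facility reaches IT equipment)."""
    return total_facility_kwh / it_equipment_kwh

def cue(total_co2_kg, it_equipment_kwh):
    """Carbon Usage Effectiveness, in kg of CO2 per kWh of IT energy."""
    return total_co2_kg / it_equipment_kwh

print(pue(1500.0, 1000.0))  # 1.5: 0.5 kWh of overhead per kWh of IT load
print(cue(700.0, 1000.0))   # 0.7 kg CO2 per IT kWh
```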
The document discusses green cloud computing and describes a technical seminar presented by S. Sai Madhuri. It defines cloud computing and discusses service types including SaaS, PaaS, and IaaS. It then explains green computing and green cloud computing, describing the core components and architecture of data centers. The document outlines the objective of calculating energy consumption using a green cloud simulator in VMware Player to analyze existing systems and develop more efficient solutions.
Green cloud computing aims to make cloud infrastructure more energy efficient and environmentally friendly. Adopting measures like using more renewable energy sources, virtualizing servers, and improving data center cooling can help reduce carbon emissions and operational costs. Virtualizing servers allows multiple virtual machines to run on a single physical server, increasing efficiency and hardware utilization. Data centers also aim to lower their power usage effectiveness rating by implementing designs with hot-aisle/cold-aisle configurations and adopting newer technologies. Transitioning to renewable energy sources for power can further reduce the carbon footprint of cloud infrastructure and lead to more stable energy prices over time.
Green cloud computing aims to make cloud computing more environmentally sustainable by reducing energy consumption and carbon emissions. The document discusses how cloud data centers use significant amounts of energy. It then introduces green cloud computing and the Green Cloud Simulator tool, which can model a data center's energy usage. The document provides steps to build a new virtual data center in the simulator and view statistics on device energy consumption and graphs of the results. The summary highlights the goal of reducing cloud computing's environmental impact.
This document discusses green cloud computing. It begins by defining cloud computing and green computing, noting that cloud computing requires large data centers that consume significant energy. It then discusses how green cloud computing aims to reduce this energy usage through techniques like server virtualization and energy-aware resource allocation. Specific strategies that cloud providers and data centers are taking to improve energy efficiency are also summarized, such as geographic placement of data centers and measures to optimize cooling.
In today's world the growing demand for knowledge has made cloud computing a center of attraction. Cloud computing provides utility-based services to users worldwide, enabling the hosting of applications from consumer, scientific, and business domains. However, the data centers built for cloud computing applications consume huge amounts of energy, contributing to high operational costs and large carbon dioxide emissions. As data centers grow, power consumption is rising at such a rate that it has become a key concern, ultimately contributing to energy shortages and global climate change. Therefore, we need green cloud computing solutions that not only save energy but also reduce operational costs.
This document discusses green cloud computing and data centers. It provides an overview of green computing principles like efficiency and virtualization. Cloud computing is described as a virtualized and scalable computing platform. Green cloud computing from a data center perspective involves diagnosing issues, measuring energy usage, server virtualization, and building efficiently. Case studies from Senegal, South Africa, and India show how green data center approaches and private clouds can reduce energy costs and increase efficiency. The document advocates for more research on maximizing green data center efficiency to benefit developing regions.
On June 24th I presented to the Dependable Systems Engineering group here in the School of Computer Science, St Andrews. The group meets once a month for a presentation from one of its members over lunch. The presenter talks about their current research, providing a good opportunity to keep up to date with other work within the group.
Green cloud computing using heuristic algorithms (Iliad Mnd)
Green computing is defined as the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems such as monitors, printers, storage devices, and networking and communications systems efficiently and effectively with minimal or no impact on the environment. Research continues into key areas such as making the use of computers as energy efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.
A Survey on Virtualization Data Centers For Green Cloud Computing (IJTET Journal)
Abstract: Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is a model that moves computing resources onto the network in order to cut the cost of software and hardware. Power consumption is now one of the big issues for Internet Data Centers (IDCs) and has a large impact on society: IDCs consume large amounts of energy to deliver cloud services, incur high operational costs, and shorten the lifespan of hardware equipment, so researchers are seeking solutions that reduce their power consumption. The field of green computing is also becoming more and more important in a world with finite energy resources and rising demand. The virtual machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This survey covers virtualized IDCs in the green cloud, including key features of the green cloud, cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware and migration techniques. The paper discusses the several methods used to achieve IDC virtualization in green cloud computing.
Welcome to International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document summarizes research on techniques for virtual machine (VM) scheduling and management to improve energy efficiency in cloud computing. It discusses how VM scheduling algorithms aim to optimally map VMs to physical servers while minimizing costs and power consumption. Precopy and postcopy live migration techniques are described for managing VMs. The document surveys various algorithms for VM scheduling, including ones based on data transfer time, linear programming, and combinatorial optimization. It also discusses factors that affect VM migration efficiency such as hypervisor options and network configuration. Overall, the document provides an overview of energy-efficient approaches for VM scheduling and management in cloud computing.
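The precopy technique mentioned in the survey can be sketched as an iterative copy loop: memory is resent in rounds while the running VM keeps dirtying pages, until the remaining dirty set is small enough for a brief stop-and-copy pause. The page counts and dirtying rate below are illustrative assumptions, not figures from the survey.

```python
# Hedged sketch of the precopy live-migration loop: each round resends the
# pages dirtied during the previous round; migration switches to a short
# stop-and-copy once the dirty set falls below a threshold. Illustrative
# parameters only.

def precopy_rounds(total_pages, dirty_fraction, stop_threshold, max_rounds=30):
    """Return (precopy rounds run, pages left for the stop-and-copy phase)."""
    dirty = total_pages  # round 1 copies all of memory
    rounds = 0
    while dirty > stop_threshold and rounds < max_rounds:
        rounds += 1
        # while this round's pages were in flight, a fraction got re-dirtied
        dirty = int(dirty * dirty_fraction)
    return rounds, dirty

rounds, final_copy = precopy_rounds(100_000, dirty_fraction=0.1, stop_threshold=200)
print(rounds, final_copy)  # 3 100
```

The geometric shrinkage shows why precopy keeps downtime short when the dirtying rate is low, and why a `max_rounds` cap is needed when it is not.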
Energy Saving by Migrating Virtual Machine to Green Cloud Computing (ijtsrd)
Green computing is characterized as the study and practice of designing, manufacturing, using, and disposing of computers, servers, and related subsystems (for example monitors, printers, storage devices, and networking and communications systems) efficiently and effectively with minimal or no impact on the environment. The objective of green computing is to reduce the use of hazardous materials, maximize energy efficiency over a product's lifetime, and promote the recyclability of obsolete products and factory waste. Green computing can be achieved through product longevity, resource allocation, virtualization, or power management. Power is the bottleneck to improving system performance, and among all industries, the information and communication technology (ICT) industry is arguably responsible for the largest share of the overall growth in energy consumption. The objective of green cloud computing is to promote the recyclability or biodegradability of outdated products and factory waste by reducing the use of hazardous materials and maximizing energy efficiency over a product's lifetime. Stephen Fernandes, "Energy Saving by Migrating Virtual Machine to Green Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30422.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/30422/energy-saving-by-migrating-virtual-machine-to-green-cloud-computing/stephen-fernandes
This document discusses green cloud computing from the perspective of data centers. It begins with background on green computing and cloud computing. It then discusses how green cloud computing can help balance energy usage in data centers through server virtualization, energy-aware consolidation, and locating data centers in developing regions. The document presents two case studies, one on a green data center in Senegal and another on benefits realized by a cell phone company in South Africa from implementing a private cloud. It concludes with sections on the Indian scenario for green IT standardization and a call to continue research efforts to maximize efficiency of green data centers.
RESOURCE ALLOCATION AND STORAGE IN MOBILE USING CLOUD COMPUTING (Sathmica K)
This document discusses resource allocation and storage in mobile computing using cloud computing. It proposes using a reservation plan and the Hungarian method for virtual machine deployment to efficiently allocate resources. An optimal cloud resource provisioning approach is also implemented using stochastic integer programming to optimize resource scheduling in the cloud. The goal is to reduce reservation and expenditure costs while improving resource utilization for mobile cloud applications.
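The Hungarian method mentioned above solves a minimum-cost assignment of VMs to hosts. As a hedged sketch, a brute-force search finds the same optimum on a small illustrative cost matrix; a real deployment would use an O(n^3) Hungarian implementation rather than this factorial-time search. The cost matrix is an assumption for illustration, not the document's data.

```python
# Hedged sketch of the assignment problem the Hungarian method solves:
# map each VM to a distinct host at minimum total cost. Brute force is
# used here only because the instance is tiny.
from itertools import permutations

def min_cost_assignment(cost):
    """cost[i][j]: cost of placing VM i on host j (square matrix).

    Returns (host chosen for each VM, minimum total cost)."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    return list(best), best_cost

cost = [[4, 2, 8],   # hypothetical deployment costs
        [4, 3, 7],
        [3, 1, 6]]
assignment, total = min_cost_assignment(cost)
print(assignment, total)  # [0, 2, 1] 12
```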
This presentation brings insights on cloud and green cloud computing and briefs readers on its potential in India and how it can be achieved. Numerous insights have been collectively put into this presentation.
Cloud computing has the potential to improve energy efficiency through server consolidation and switching off unused servers; however, increasing internet traffic and data storage demands driven by cloud services could negate these savings. While Microsoft claims its cloud solutions reduce energy use by 30-90% compared to on-premise installations, Greenpeace argues that collective cloud demand will increase CO2 emissions even with efficient data centers. The presentation analyzes the environmental sustainability of cloud computing by exploring technologies and mechanisms that support this goal, as well as studies with differing views on cloud computing's impact.
A Survey on Resource Allocation & Monitoring in Cloud Computing (Mohd Hairey)
This document provides an overview of a survey on resource allocation and monitoring in cloud computing. It discusses (1) cloud computing and its key characteristics, (2) elements of resource management including allocation, monitoring, discovery and provisioning, (3) existing mechanisms for resource allocation and monitoring, and (4) gaps in current approaches. The survey aims to study resource allocation and monitoring in cloud computing and describe issues and current solutions to help develop a better resource management framework.
An energy optimization with improved QOS approach for adaptive cloud resources (IJECEIAES)
In recent times the use of cloud computing VMs has greatly increased in day-to-day life, owing to the wide use of digital applications, network appliances, portable gadgets, information devices, and so on. Numerous schemes, such as multimedia signal processing methods, can be implemented on these cloud computing VMs, so efficient performance of the VMs becomes an obligatory constraint, especially for such methods. However, high energy consumption and reduced efficiency of cloud computing VMs are key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption while completing operations in less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, and efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption, and average power required compared with state-of-the-art techniques.
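The DVFS idea behind such a technique can be sketched as follows: since dynamic power grows roughly with the cube of frequency, a scheduler can pick the lowest frequency level that still meets a task's deadline, trading a longer run time for lower energy. The frequency levels, workload, and power constant below are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch of DVFS frequency selection. With dynamic power P ~ k*f^3,
# energy for a task of `cycles` cycles at frequency f is
#   E = P * t = k*f^3 * (cycles/f) = k * f^2 * cycles,
# so the slowest deadline-meeting frequency minimizes energy. Illustrative
# parameters only.

def pick_frequency(cycles, deadline_s, freq_levels_hz, k=1e-27):
    """Return (chosen frequency in Hz, energy in joules), or (None, None)."""
    feasible = [f for f in freq_levels_hz if cycles / f <= deadline_s]
    if not feasible:
        return None, None
    f = min(feasible)                  # slowest frequency meeting the deadline
    energy = k * f**3 * (cycles / f)   # power * time
    return f, energy

levels = [1.0e9, 1.5e9, 2.0e9]         # 1.0, 1.5, 2.0 GHz
f, e = pick_frequency(cycles=3e9, deadline_s=2.5, freq_levels_hz=levels)
print(f, e)  # 1.5 GHz is chosen; 1.0 GHz would miss the 2.5 s deadline
```

Running at 2.0 GHz would also meet the deadline but at (2.0/1.5)^2, roughly 1.8 times, the energy.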
This document discusses green cloud computing. It begins by defining green computing and cloud computing individually. Green computing aims to reduce power consumption and environmental impact of IT, while cloud computing involves virtualized and interconnected computers. Green cloud computing combines these concepts by making cloud infrastructure and operations more energy efficient. The document then covers benefits like reduced energy use, the role of dynamic provisioning and multi-tenancy in cloud enabling green computing, and a case study on a green cloud architecture and scheduling policies that can reduce carbon emissions by 20%.
Green Cloud Computing: Emerging Technology (IRJET Journal)
This document discusses green cloud computing and how cloud infrastructure contributes to high energy consumption. It summarizes that while cloud computing provides cost and scalability benefits, the growing demand on data centers has increased energy usage and carbon emissions. However, the document also explains that cloud computing technologies like dynamic provisioning, multi-tenancy, high server utilization, and efficient data center design can help reduce the environmental impact and enable more sustainable "green" cloud computing through higher efficiency. Future research directions are needed to further optimize cloud resource usage and energy efficiency from a holistic perspective.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Service oriented cloud architecture for improved performance of smart grid ap... (eSAT Journals)
Abstract: An effective and flexible computational platform is needed for the data coordination and processing associated with real-time operational and application services in a smart grid. A server environment where multiple applications are hosted by a common pool of virtualized server resources demands an open source structure to ensure operational flexibility. In this paper, an open source architecture is proposed for real-time services that involve data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing applications based on complexity enhances the efficiency of resource allocation in such situations. A priority-based scheduling algorithm is proposed for application-level performance management in the structure, and an analytical model based on queuing theory is developed to evaluate the performance of the test bed. The implementation is done using an OpenStack cloud, and the test results show a significant gain of 8% with the algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, OpenStack, Queuing model
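The queuing-theory evaluation mentioned above can be illustrated with the classic M/M/1 result, mean response time W = 1/(mu - lam) for Poisson arrivals at rate lam and exponential service at rate mu. The rates below are illustrative; the paper's actual analytical model may be more elaborate.

```python
# Hedged sketch of an M/M/1 mean-response-time calculation, the simplest
# queuing model of the kind used to evaluate scheduling policies.
# Illustrative rates only.

def mm1_mean_response(lam, mu):
    """Mean response time W = 1 / (mu - lam); requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (mu - lam)

# Routing a priority class to effectively faster service cuts its mean
# response time:
print(mm1_mean_response(lam=8.0, mu=10.0))  # 0.5 s
print(mm1_mean_response(lam=8.0, mu=12.0))  # 0.25 s
```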
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits the system has to meet. This research paper focuses on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and consequently consume more and more energy, and this power consumption is one of the main factors determining the cost of computing resources. The paper therefore proposes assigning data center resources dynamically, depending on application demands, and supporting cloud computing by optimizing the number of servers in use.
Cloud computing offers utility-oriented IT services worldwide, enabling the hosting of applications from various domains. However, data centers consume huge amounts of energy, contributing to high costs and carbon footprints. Green cloud computing solutions are needed to save energy and reduce costs, for example by powering down servers when they are not in use.
Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing (HarshitParkar6677)
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
DESIGN OF INTELLIGENT DEVICE TO SAVE STANDBY POWER IN NETWORK ENABLED DEVICES (IAEME Publication)
This document proposes an Automatic Power Cut-Off and Reset Device (APCRD) to reduce standby power consumption in network-enabled devices. The APCRD would automatically cut power to appliances when they enter standby mode, completely eliminating standby power use. The document analyzes current and projected network device electricity consumption and savings potential from various efficiency approaches. It finds that the APCRD could achieve the highest energy, economic, and emissions savings compared to the Eco Design Directive or best available technologies by fully eliminating standby power in eligible devices.
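The savings potential analyzed above comes down to simple arithmetic: eliminating standby draw saves the standby wattage times the hours spent in standby, summed over the device fleet. The figures below are illustrative assumptions, not the document's data.

```python
# Hedged sketch of the standby-power savings arithmetic behind a device
# like the proposed APCRD, which cuts power entirely in standby mode.
# All figures are illustrative.

def annual_standby_savings_kwh(standby_watts, standby_hours_per_day, devices):
    """Energy saved per year (kWh) if standby draw is fully eliminated."""
    return standby_watts * standby_hours_per_day * 365 * devices / 1000.0

# e.g. 5 W standby draw, 16 h/day in standby, a fleet of 1 million devices:
kwh = annual_standby_savings_kwh(5.0, 16.0, 1_000_000)
print(kwh)  # 29200000.0 kWh per year
```

Even a few watts of standby draw, multiplied across millions of always-connected devices, adds up to tens of GWh per year, which is the scale of saving the APCRD targets.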
In today’s world the growing demand for knowledge has made cloud computing a center of attraction. Cloud computing is providing utility based services to all the users worldwide. It enables presentation of applications from consumers, scientific and business domains. However, data centers created for cloud computing applications consume huge amounts of energy, contributing to high operational costs and a large amount of carbon dioxide emission to the environment. With enhancement of data center, the power consumption is increasing at such a rate that it has become a key concern these days because it is ultimately leading to energy shortcomings and global climatic change. Therefore, we need green cloud computing solutions that can not only save energy, but also reduce operational costs.
This document discusses green cloud computing and data centers. It provides an overview of green computing principles like efficiency and virtualization. Cloud computing is described as a virtualized and scalable computing platform. Green cloud computing from a data center perspective involves diagnosing issues, measuring energy usage, server virtualization, and building efficiently. Case studies from Senegal, South Africa, and India show how green data center approaches and private clouds can reduce energy costs and increase efficiency. The document advocates for more research on maximizing green data center efficiency to benefit developing regions.
On June 24th I presented to the Dependable Systems Engineering group here in the School of Computer Science, St Andrews. The group meets once a month for a presentation from one of its members over lunch. The presenter talks about their current research, providing a good opportunity to keep up to date with other work within the group.On June 24th I presented to the Dependable Systems Engineering group here in the School of Computer Science, St Andrews. The group meets once a month for a presentation from one of its members over lunch. The presenter talks about their current research, providing a good opportunity to keep up to date with other work within the group.
Green cloud computing using heuristic algorithmsIliad Mnd
Green computing is defined as the study and practice of designing , manufacturing, using, and disposing of computers, servers, and associated sub systems such as
monitors, printers, storage devices, and networking and
communications systems efficiently and effectively with
minimal or no impact on the environment. Research continues into key areas such as making the use of computers as energy efficient as possible, and designing algorithms and systems for efficiency related computer technologies.
A Survey on Virtualization Data Centers For Green Cloud ComputingIJTET Journal
Abstract —Due to trends like Cloud Computing and Green cloud Computing, virtualization technologies are gaining increasing importance. Cloud is a atypical model for computing resources, which intent to computing framework to the network in order to cut down costs of software and hardware resources. Nowadays, power is one of big issue of IDC has huge impacts on society. Researchers are seeking to find solutions to make IDC reduce power consumption. These IDC (Internet Data Center) consume large amounts of energy to process the cloud services, high operational cost, and affecting the lifespan of hardware equipments. The field of Green computing is also becoming more and more important in a world with finite number of energy resources and rising demand. Virtual Machine (VM) mechanism has been broadly applied in data center, including flexibility, reliability, and manageability. The research survey presents about the virtualization IDC in green cloud it contains various key features of the Green cloud, cloud computing, data centers, virtualization, data center with virtualization, power – aware, thermal – aware, network-aware, resource-aware and migration techniques. In this paper the several methods that are utilze to achieve the virtualization in IDC in green cloud computing are discussed.
Welcome to International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This document summarizes research on techniques for virtual machine (VM) scheduling and management to improve energy efficiency in cloud computing. It discusses how VM scheduling algorithms aim to optimally map VMs to physical servers while minimizing costs and power consumption. Precopy and postcopy live migration techniques are described for managing VMs. The document surveys various algorithms for VM scheduling, including ones based on data transfer time, linear programming, and combinatorial optimization. It also discusses factors that affect VM migration efficiency such as hypervisor options and network configuration. Overall, the document provides an overview of energy-efficient approaches for VM scheduling and management in cloud computing.
Energy Saving by Migrating Virtual Machine to Green Cloud Computing (ijtsrd)
Green computing is characterized as the study and practice of designing, manufacturing, using, and disposing of PCs, servers, and related subsystems, such as monitors, printers, storage devices, and networking and communications systems, efficiently and effectively with negligible or no effect on the environment. The objective of green computing is to reduce the use of hazardous materials, maximize energy efficiency over a product's lifetime, and promote the recyclability of obsolete products and factory waste. Green computing can be achieved through product longevity, resource allocation, virtualization, or power management. Power is the bottleneck in improving system performance, and among all industries the information and communication technology (ICT) industry is arguably responsible for a larger share of the overall growth in energy use. The objective of green cloud computing is to promote the recyclability or biodegradability of outdated products and factory waste by reducing the use of hazardous materials and maximizing energy efficiency over a product's lifetime. Stephen Fernandes, "Energy Saving by Migrating Virtual Machine to Green Cloud Computing", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30422.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/30422/energy-saving-by-migrating-virtual-machine-to-green-cloud-computing/stephen-fernandes
This document discusses green cloud computing from the perspective of data centers. It begins with background on green computing and cloud computing. It then discusses how green cloud computing can help balance energy usage in data centers through server virtualization, energy-aware consolidation, and locating data centers in developing regions. The document presents two case studies, one on a green data center in Senegal and another on benefits realized by a cell phone company in South Africa from implementing a private cloud. It concludes with sections on the Indian scenario for green IT standardization and a call to continue research efforts to maximize efficiency of green data centers.
RESOURCE ALLOCATION AND STORAGE IN MOBILE USING CLOUD COMPUTING (Sathmica K)
This document discusses resource allocation and storage in mobile computing using cloud computing. It proposes using a reservation plan and the Hungarian method for virtual machine deployment to efficiently allocate resources. An optimal cloud resource provisioning approach is also implemented, using stochastic integer programming to optimize resource scheduling in the cloud. The goal is to reduce reservation and expenditure costs while improving resource utilization for mobile cloud applications.
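The Hungarian method mentioned above can be sketched with SciPy's assignment solver, which solves the same minimum-cost one-to-one assignment problem; this is an illustrative sketch, not the paper's implementation, and the VM-to-host cost matrix below is entirely hypothetical.

```python
# Sketch: assigning VMs to hosts at minimum total cost with the
# Hungarian-style solver from SciPy. The cost matrix is a made-up example.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: hypothetical cost of deploying VM i on host j
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

vm_idx, host_idx = linear_sum_assignment(cost)  # optimal one-to-one mapping
total_cost = cost[vm_idx, host_idx].sum()       # minimum achievable total
```

With this matrix, VM 0 lands on host 1, VM 1 on host 0 and VM 2 on host 2, for a total cost of 5.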
This presentation brings insights on cloud and green cloud computing, briefs readers on its potential in India, and explains how it can be achieved. Numerous insights have been collectively put into this presentation.
Cloud computing has the potential to improve energy efficiency through server consolidation and switching off unused servers; however, increasing internet traffic and data storage demands driven by cloud services could negate these savings. While Microsoft claims its cloud solutions reduce energy use by 30-90% compared to on-premise installations, Greenpeace argues that collective cloud demand will increase CO2 emissions even with efficient data centers. The presentation analyzes the environmental sustainability of cloud computing by exploring technologies and mechanisms that support this goal, as well as studies with differing views on cloud computing's impact.
A Survey on Resource Allocation & Monitoring in Cloud Computing (Mohd Hairey)
This document provides an overview of a survey on resource allocation and monitoring in cloud computing. It discusses (1) cloud computing and its key characteristics, (2) elements of resource management including allocation, monitoring, discovery and provisioning, (3) existing mechanisms for resource allocation and monitoring, and (4) gaps in current approaches. The survey aims to study resource allocation and monitoring in cloud computing and describe issues and current solutions to help develop a better resource management framework.
An energy optimization with improved QoS approach for adaptive cloud resources (IJECEIAES)
In recent times, the utilization of cloud computing VMs has greatly increased in our day-to-day life due to the ample use of digital applications, network appliances, portable gadgets, and information devices. On these cloud computing VMs, numerous different schemes can be implemented, such as multimedia signal processing methods. Thus, efficient performance of these cloud computing VMs becomes an obligatory constraint, precisely for these multimedia signal processing methods. However, large energy consumption and reduced efficiency of these cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in much less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, along with energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption and average power required over other state-of-the-art techniques.
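The intuition behind DVFS-based techniques like the one described is the standard dynamic-power relation P ≈ C·V²·f: lowering voltage and frequency together cuts power superlinearly. The effective capacitance and the two operating points below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope sketch of why DVFS saves energy: dynamic CPU power
# scales roughly as P = C_eff * V^2 * f. All values here are assumed.

def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    """Approximate dynamic power in watts: P = C_eff * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

C_EFF = 1e-9  # effective switched capacitance in farads (assumed)

p_high = dynamic_power(C_EFF, 1.2, 2.0e9)  # full-speed operating point
p_low = dynamic_power(C_EFF, 0.9, 1.0e9)   # scaled-down operating point
saving = 1 - p_low / p_high                # fraction of power saved
```

At these (hypothetical) operating points, halving the frequency while dropping the voltage from 1.2 V to 0.9 V cuts dynamic power by about 72%, though the task also takes longer to finish.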
This document discusses green cloud computing. It begins by defining green computing and cloud computing individually. Green computing aims to reduce power consumption and environmental impact of IT, while cloud computing involves virtualized and interconnected computers. Green cloud computing combines these concepts by making cloud infrastructure and operations more energy efficient. The document then covers benefits like reduced energy use, the role of dynamic provisioning and multi-tenancy in cloud enabling green computing, and a case study on a green cloud architecture and scheduling policies that can reduce carbon emissions by 20%.
Green Cloud Computing: Emerging Technology (IRJET Journal)
This document discusses green cloud computing and how cloud infrastructure contributes to high energy consumption. It summarizes that while cloud computing provides cost and scalability benefits, the growing demand on data centers has increased energy usage and carbon emissions. However, the document also explains that cloud computing technologies like dynamic provisioning, multi-tenancy, high server utilization, and efficient data center design can help reduce the environmental impact and enable more sustainable "green" cloud computing through higher efficiency. Future research directions are needed to further optimize cloud resource usage and energy efficiency from a holistic perspective.
Service oriented cloud architecture for improved performance of smart grid ap... (eSAT Journals)
Abstract: An effective and flexible computational platform is needed for the data coordination and processing associated with real-time operational and application services in smart grid. A server environment where multiple applications are hosted by a common pool of virtualized server resources demands an open source structure for ensuring operational flexibility. In this paper, an open source architecture is proposed for real-time services which involve data coordination and processing. The architecture enables secure and reliable exchange of information and transactions with users over the internet to support various services. Prioritizing the applications based on complexity enhances the efficiency of resource allocation in such situations. A priority-based scheduling algorithm is proposed in the work for application-level performance management in the structure. An analytical model based on queuing theory is developed for evaluating the performance of the test bed. The implementation is done using an OpenStack cloud, and the test results show a significant gain of 8% with the algorithm. Index Terms: Service Oriented Architecture, Smart grid, Mean response time, OpenStack, Queuing model
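To give a flavor of the queuing-theory evaluation described, here is a minimal M/M/1 mean-response-time calculation; the arrival and service rates are invented for illustration, and the paper's actual priority-queue model is more involved than this.

```python
# Minimal sketch of an M/M/1 mean-response-time estimate, the kind of
# queueing-theory calculation used to evaluate such a test bed
# (not the paper's actual model; the rates below are made up).

def mm1_mean_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean time a request spends in an M/M/1 queue (waiting + service)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    return 1.0 / (service_rate - arrival_rate)

# e.g. 8 requests/s arriving at a server that handles 10 requests/s
t = mm1_mean_response_time(8.0, 10.0)  # mean response time in seconds
```

Here the mean response time is 0.5 s; a priority-based scheduler reshapes how that delay is split between high- and low-priority application classes.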
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits the system must meet. In this research paper we focus on cloud server maintenance and the scheduling process, using an interactive broadcasting energy-efficient computing technique along with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and thereby consume more and more energy; this power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically depending on application demands, supporting cloud computing through the optimization of the servers in use.
Cloud computing offers utility-oriented IT services worldwide, enabling hosting of applications from various domains. However, data centers consume huge amounts of energy, contributing to high costs and carbon footprints. Green cloud computing solutions are needed to save energy and reduce costs by powering down servers when not in use.
Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing (HarshitParkar6677)
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as "the fog". However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
DESIGN OF INTELLIGENT DEVICE TO SAVE STANDBY POWER IN NETWORK ENABLED DEVICES (IAEME Publication)
This document proposes an Automatic Power Cut-Off and Reset Device (APCRD) to reduce standby power consumption in network-enabled devices. The APCRD would automatically cut power to appliances when they enter standby mode, completely eliminating standby power use. The document analyzes current and projected network device electricity consumption and savings potential from various efficiency approaches. It finds that the APCRD could achieve the highest energy, economic, and emissions savings compared to the Eco Design Directive or best available technologies by fully eliminating standby power in eligible devices.
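The scale of the standby savings the APCRD targets can be gauged with a back-of-the-envelope estimate; every figure below (standby draw, hours in standby, tariff) is an assumption for illustration, not a number from the document.

```python
# Rough per-device annual-savings estimate for eliminating standby power
# entirely, as the APCRD proposes. All figures are illustrative assumptions.

STANDBY_WATTS = 5.0          # assumed standby draw per device
STANDBY_HOURS_PER_DAY = 16   # assumed hours/day the device sits in standby
TARIFF_PER_KWH = 0.20        # assumed electricity price

kwh_per_year = STANDBY_WATTS * STANDBY_HOURS_PER_DAY * 365 / 1000
cost_per_year = kwh_per_year * TARIFF_PER_KWH
```

Under these assumptions a single device wastes about 29 kWh a year in standby; multiplied across the fleet of network-enabled devices, that is where the projected savings come from.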
AN INVESTIGATION OF THE ENERGY CONSUMPTION BY INFORMATION TECHNOLOGY EQUIPMENTS (ijcsit)
With the rise of the World Wide Web, servers and PC data centers have come to account for a major share of the world's overall power consumption. In order to prevent global warming and the ensuing disasters, internet service providers and hosting providers have already switched to green power. Even household energy suppliers offer green electricity from renewable sources such as wind, solar, biomass and hydro, which emit no carbon dioxide, to stand against global warming. Only a global change in information technology can prevent global warming. The switch to renewable energy is the beginning of our future and must be pursued, as must research and development in information and communication technology.
Module 10 - Section 2: ICTs, the environment and climate change & Section 3: ... (Richard Labelle)
Innovation in ICTs can have a significant effect in mitigating the impact of climate change and has an important role to play in facilitating and managing adaptation to climate change.
Slide presentations developed to demonstrate how Information and Communication Technologies (ICTs) can be used to address climate change, and why ICTs are a crucial part of the solution, i.e. in promoting efficiency, Green Growth & sustainable development, in dealing with climate change, and for climate and environmental action. These slide presentations were delivered in February 2011 in Seongnam, near Seoul in Korea.
These presentations were developed and delivered over 2.5 days on the occasion of a Regional Training of Trainers Workshop for upcoming Academy modules on ICT for Disaster Risk Management and Climate Change Abatement. These modules were developed as part of the Academy of ICT Essentials for Government Leaders developed by the United Nations (UN) Asia Pacific Centre for ICT Training (APCICT), based in Songdo City, in the Republic of Korea.
These presentations were developed in 2011, and are somewhat out of date, but most of the principles still apply. Module 10, which has been published, does not include much of the information outlined in these presentations, which are fairly technical. They were developed to address a significant gap in understanding of the technical basis of using ICTs for climate action and because there is a clear bias in development circles against the importance of dealing with climate change mitigation in developing countries. These presentations are an attempt to redress this lack and are published here with this purpose in mind.
The author, Richard Labelle, is presently working on updating these presentations to further highlight the importance of addressing climate change and the important role that technology including ICTs, play in this effort.
Green networking aims to reduce the carbon footprint of information and communication technology (ICT) networks by improving energy efficiency. Key strategies include optimizing network infrastructure utilization through technologies like virtualization, improving equipment energy efficiency, and locating network resources closer to renewable energy sources. Measurement of energy savings is important to track progress towards a lower carbon "Green Network".
Green Computing - Paradigm Shift in Computing Technology, ICT & its Applicat... (Dr. Sunil Kr. Pandey)
I was invited as keynote speaker in a national event organized at Gajadhar Bhagat College, Naugachia (TM Bhagalpur University). I took a session on "Paradigm Shift in Computing Technology, ICT & its Applications - Socioeconomic and Environmental Perspective". It was a wonderful learning experience to meet, interact, and share experiences with the delegates, faculty and students there.
The document discusses computational grids and their potential impact. Computational grids aim to provide users with dramatically more computing power by pooling unused resources and enabling transparent access to high-performance systems. This would allow for widespread use of computation in new applications, similar to how the electric power grid enabled universal access to electricity. Realizing this vision will require overcoming challenges to build an infrastructure that provides dependable, consistent, and inexpensive access to computational capabilities on a large scale.
Abstract: Energy efficiency in all aspects of human life has become a major concern, due to its significant environmental impact as well as its economic importance. Information and Communication Technology (ICT) is estimated to account for 2-10% of global consumption, but it is also expected to enable global energy efficiency through new technologies tightly dependent on networks. Specifically, a network model based on G-network queueing theory is built, which can incorporate all the important parameters of power consumption together with traditional performance metrics and routing control capability. Our goal is to control both the power configuration of the pipelines and the way traffic flow is distributed among them, finding the optimization policy with the best trade-off between power consumption and packet latency. The achieved results demonstrate how the proposed model can effectively represent energy- and network-aware performance indexes.
In tech: Demand management and wireless sensor networks in the smart grid (AWe Shinkansen)
The document discusses demand management and wireless sensor networks in the smart grid. It begins with an introduction that outlines the factors driving the renovation of electrical power grids, including resilience problems, growing demand, inefficiency, and environmental issues. It then discusses how smart grid aims to address these issues by integrating information and communication technologies. Demand management will play a key role in increasing grid efficiency. Wireless sensor networks provide opportunities for demand management applications due to their low cost and pervasive communication capabilities. Challenges for smart grid implementation include standardization, security issues, successful adoption of demand management systems, and coordinating increased loads from electric vehicles.
IRJET - Energy Efficient Approach for Data Aggregation in IoT (IRJET Journal)
This document summarizes a research paper on developing an energy efficient approach for data aggregation in IoT networks. The paper proposes using cache nodes between cluster heads and the base station to reduce energy consumption. It analyzes that the proposed technique of deploying cache nodes performs better than the existing LEACH protocol in terms of packet loss, throughput and energy consumption. The simulation results show that the proposed approach lowers packet loss and energy usage compared to directly transmitting data from cluster heads to the base station.
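The energy argument for cache nodes can be sketched with the first-order radio model commonly used in LEACH-style analyses: transmission energy grows with the square of distance, so relaying through an intermediate node halfway along the path can cost less than one long direct hop. The parameter values below are typical textbook assumptions, not the paper's.

```python
# First-order radio energy model (as commonly assumed in LEACH-style work):
# comparing direct cluster-head-to-base-station transmission with relaying
# through a cache node at the midpoint. Parameter values are illustrative.

E_ELEC = 50e-9     # J/bit spent in transmit/receive electronics (assumed)
EPS_AMP = 100e-12  # J/bit/m^2 spent in the transmit amplifier (assumed)

def tx_energy(bits: int, dist_m: float) -> float:
    """Energy to transmit `bits` over distance `dist_m`."""
    return bits * E_ELEC + bits * EPS_AMP * dist_m ** 2

def rx_energy(bits: int) -> float:
    """Energy to receive `bits`."""
    return bits * E_ELEC

bits, d = 4000, 200.0  # one 4000-bit packet over 200 m (assumed)
direct = tx_energy(bits, d)
via_cache = tx_energy(bits, d / 2) + rx_energy(bits) + tx_energy(bits, d / 2)
```

With these numbers the relayed path costs roughly half the direct one, which is the effect the proposed cache-node deployment exploits; for short distances the extra receive cost at the relay can make direct transmission cheaper instead.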
IRJET - A Review: IoT and Cloud Computing for Future Internet (IRJET Journal)
This document reviews the integration of Internet of Things (IoT) and cloud computing for future internet applications. It discusses how IoT allows billions of devices to connect and communicate over networks, while cloud computing provides scalable backend processing and storage. However, there is currently no common framework integrating the two. The document argues that IP Multimedia Subsystem (IMS) communication platform provides the most suitable framework. It then reviews several related works discussing challenges and solutions in integrating IoT and cloud computing. Areas like healthcare, transportation, and environmental monitoring are discussed as domains that could benefit from an IoT and cloud computing integration.
This document proposes a network-based solution to integrate building energy management systems. Buildings currently have disconnected energy monitoring systems that result in inefficient energy use. The proposed solution involves creating a network infrastructure to connect these systems, a mediator to translate their different protocols, and user software to monitor energy use. This would help optimize building energy efficiency and reduce greenhouse gas emissions by providing integrated energy consumption data. However, challenges include proprietary protocols, internet vulnerabilities, collaboration between companies, and ensuring qualified personnel can support the system.
About SIGFOX
SIGFOX is the first and only operator of a cellular network fully dedicated to low-throughput communication for connected objects. Leveraging its patented UNB technology, SIGFOX brings a revolution to the M2M and Internet of Things world by enabling large-scale connection of objects. The network already connects tens of thousands of objects in France and international cities.
SIGFOX provides an end-to-end solution for your communication chain, from your objects through to your information system, with unprecedented pricing models and low energy consumption.
As a network operator, SIGFOX operates fixed-location transceivers enabling your objects to be connected "out of the box". However, contrary to telecommunication networks, the SIGFOX transceivers and the entire SIGFOX connectivity solution have been developed, built and deployed to serve only low-throughput M2M and IoT applications. As an operated long-range network, SIGFOX provides connectivity without the need to deploy specific network infrastructure for each application.
Unlike other narrow band or white space solution providers, we do not require our customers to invest in network equipment; the SIGFOX network is simply available to any object equipped with our certified connectivity solutions.
From an application point of view, the SIGFOX connectivity solution functions as follows:
• SIGFOX compatible modems are integrated within the physical objects by our certified partner network
• The objects instruct the modems to send messages whenever and wherever needed
• The transmitted data is picked up by the SIGFOX transceivers, and routed to our managed service
• The SIGFOX servers verify the data integrity and route the messages to the application’s IT system.
(...)
Sample use cases.
MAAF Assurances, one of the leading French insurance companies, anticipates the upcoming regulation that will require, by 2015, that each household be equipped with a smoke detector. The fire and/or intrusion alert service using the SIGFOX network will enable MAAF's insured customers to be warned directly by SMS when the intrusion or smoke detectors send alarms, and will allow MAAF and their customers to be alerted if there is an anomaly with the smoke detector, such as a low battery.
Clear Channel Outdoor operates ad stations throughout France. In order to avoid constant manual inspection of the ad stations, a remote monitoring application has been deployed, and the SIGFOX network is used to communicate status information from each ad station to the IT system.
For further info:
• contact@sigfox.com
• www.sigfox.com
The document summarizes Cisco EnergyWise, a new approach from Cisco Systems to managing corporate energy consumption through the enterprise network. Cisco EnergyWise allows organizations to measure, manage, and control the power usage of all devices connected to the corporate network, including both IT and non-IT systems. It provides a way to centrally monitor and optimize energy usage across the entire organization. The architecture is built on Cisco switches and uses the network to distribute commands and aggregate power data from all connected devices. This allows organizations to gain visibility and control over their total energy footprint and costs.
This document discusses issues related to green computing and power consumption as it relates to personal computer monitors. It analyzes the power consumed by three different types of monitors: CRT, LCD, and LED. The key findings are:
1) CRT monitors consume the most power at 150W on average, while LCD monitors consume around 30W and LED monitors consume around 20W.
2) When accounting for the entire personal computer system, a system using a CRT monitor consumes around 420W while an LCD system consumes 300W and an LED system consumes 290W.
3) Switching to LCD or LED monitors can reduce total computer power consumption by roughly 29-31% compared to using a CRT monitor.
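Working the system-level figures above through directly:

```python
# Quick check of the savings implied by the total system power figures above.
crt_system, lcd_system, led_system = 420, 300, 290  # watts, from the text

lcd_saving = (crt_system - lcd_system) / crt_system  # fraction saved vs CRT
led_saving = (crt_system - led_system) / crt_system
```

That puts the reduction at roughly 29% for an LCD-based system and 31% for an LED-based one.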
The document reviews various techniques for achieving green networking and energy efficiency in computer networks. It discusses four main techniques: 1) Adaptive link rate, which allows network links to operate at lower speeds during periods of low utilization; 2) Interface proxying, which uses proxies to process network traffic and allow end devices to enter low-power modes; 3) Dynamic voltage and frequency scaling, which reduces processor voltage and frequency to decrease energy usage when processors are underutilized; and 4) Energy-aware applications and software, which incorporate energy-efficient techniques without changing existing network architectures. The document analyzes the advantages and disadvantages of each technique and concludes that while each has its own benefits, combining multiple techniques can maximize energy savings for computer networks.
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers and various other network devices. Thus there is a need for techniques to achieve green networking in computer architecture, which can lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits. Saving energy leads to cost cutting and lower emissions of greenhouse gases, which are apparently one of the major threats to the environment. 'Greening', as the name suggests, is the process of constructing network architecture in such a way as to avoid unnecessary loss of power and energy in its various components; it can be implemented using various techniques, four of which are covered in this review paper: adaptive link rate (ALR), dynamic voltage and frequency scaling (DVFS), interface proxying, and energy-aware applications and software.
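Adaptive link rate, one of the techniques reviewed above, can be illustrated with a toy duty-cycle model: a link draws less power at a lower rate, so it downshifts whenever the offered load fits within the lower rate. The power draws and the traffic profile below are assumptions for illustration only.

```python
# Toy adaptive link rate (ALR) model. Power figures are illustrative,
# roughly in line with published 1 Gb/s vs 100 Mb/s Ethernet NIC draws.

P_HIGH = 4.0  # watts when the link runs at 1 Gb/s (assumed)
P_LOW = 1.0   # watts when the link runs at 100 Mb/s (assumed)

def link_power(offered_load_mbps: float) -> float:
    """Downshift to the low rate whenever the load fits within it."""
    return P_LOW if offered_load_mbps <= 100 else P_HIGH

# A day where the link is busy for 2 hours and nearly idle for 22 hours:
energy_alr = 2 * link_power(800) + 22 * link_power(10)  # watt-hours
energy_fixed = 24 * P_HIGH                              # always full rate
saving = 1 - energy_alr / energy_fixed
```

Under this assumed traffic profile, ALR cuts the link's daily energy by close to 70%, which is why it pairs well with the other techniques for lightly loaded links.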
The document discusses the use of IoT in the energy industry. It describes how IoT can be applied to remotely monitor energy assets, automate processes, integrate renewable energy sources with the grid, and help consumers reduce energy consumption through smart meters and informed decision making. Benefits of IoT include improved reliability, reduced costs and labor, and more eco-friendly operation. Potential challenges involve security, connectivity, and integration complexity. Edge computing and infrastructure modernization are proposed as solutions.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil fuel alternatives, advantages that go beyond sustainability to financial support and stability. The work in this paper introduces a hybrid PV and EV system to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram that sets the priorities and requirements of the system. The proposed approach allows plants to improve their power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. The results of a case study representing a dairy farm support the theoretical work and highlight the benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
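A simple-payback calculation of the kind underlying such a return-on-investment analysis might look as follows; all figures are hypothetical and not taken from the paper.

```python
# Simple-payback sketch for a PV+EV installation. Every figure is an
# assumption for illustration, not data from the paper's case study.

capex = 25000.0                 # installed cost of the PV + EV system
annual_energy_savings = 4500.0  # avoided grid purchases per year
annual_om_cost = 500.0          # operation & maintenance per year

simple_payback_years = capex / (annual_energy_savings - annual_om_cost)
```

Under these assumptions the system pays for itself in 6.25 years; a fuller analysis would also discount future cash flows and account for panel degradation and battery wear.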
Use PyCharm for remote debugging of WSL on a Windo... (shadow0702a)
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Introduction - e-waste - definition - sources of e-waste - hazardous substances in e-waste - effects of e-waste on environment and human health - need for e-waste management - e-waste handling rules - waste minimization techniques for managing e-waste - recycling of e-waste - disposal treatment methods of e-waste - mechanism of extraction of precious metals from leaching solution - global scenario of e-waste - e-waste in India - case studies.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
TABLE OF CONTENTS
1. INTRODUCTION
1.1 Energy Crisis
1.2 Current Status of Devices
1.3 Industrial View
1.4 Impact of Change
2. LITERATURE REVIEW
2.1 Wireless Computing Networks
2.2 Solutions Developed Which Are Also Used in the Proposed Framework Architecture
2.2.1 Caching
2.2.2 Virtualization
2.2.3 Network & Computing Services (NCS)
2.2.4 Energy Awareness
2.2.5 Cloud Services
3. GREEN COMPUTING NETWORKS
3.1 Basic Terminologies
3.1.1 DENS
3.1.2 SDN
3.1.3 ICN
3.2 Evolution of Green Computing Networks
3.3 Need for Green Computing Networks
4. ARCHITECTURE
5. ALGORITHM
6. APPLICATIONS
7. ADVANTAGES
8. DISADVANTAGES
9. SCOPES FOR FUTURE DEVELOPMENT
CONCLUSION
REFERENCES
CHAPTER 1
1. INTRODUCTION
The energy crisis has been widely discussed, and various measures have been adopted worldwide to ensure that we do not run out of power, yet most areas of the world still do not have an uninterrupted supply today. The future scenario is even worse. It is the need of the hour to encourage people to use and develop systems which do not have an impact on the globe, directly or indirectly.
The most significant part of the evolution of modern computers from the 20th to the 21st century is marked by developments in networking. The rise of the Internet through the World Wide Web (WWW) caused a boom in the computer networks sector. Today the focus has shifted towards wireless networking, with technologies like Wi-Fi, Bluetooth, and ZigBee on the rise due to their numerous advantages. During all these developments, various applications based on computer systems were designed, and most of them have become so integrated into the modern lifestyle that life would be tough without them, social media and digital banking being the best examples.
But one area which was left rather uncovered is the impact of this rapid development on our environment. During projects, costs and risks are estimated and the project is optimized accordingly, but seldom do we consider the effects of our systems on nature. If we break a computer system into modules like display, power supply, networking, and input/output peripherals, only the networking module needs constant interaction with the outside world, which is possible only through networking devices. Also, in mobiles and embedded systems the power supply is extremely limited, and these devices are constantly connected to other devices through the Internet. These areas of concern are discussed in this seminar, and the best possible approaches to address them are highlighted.
If the networking world is further classified broadly into hardware components and software components, we observe that hardware consumption is easy to track and necessary actions can be taken by manufacturers; indeed, digital waste is now a priority issue being addressed in most developed countries, and research in this field is ongoing. This narrows the problem down to the software and algorithms used in networking applications. Here we discuss solutions to the above problem, that is, minimizing the effects of networking on the environment through software-based networking.
The concept of green computing evolved as a solution to this problem and is still under development. Various aspects of green computing are discussed in the following content, but before that, the need for developing such systems is discussed.
1.1 Energy Crisis
The future global economy is likely to consume ever more energy, especially with the
rising energy demand in developing countries such as India and China. At the same time,
the tremendous risk of climate change associated with the use of fossil fuels makes
supplying this energy increasingly difficult.
The potential for crisis if we run out of energy is very real, but there is still time before that occurs. At expected rates of demand growth we have enough supply for thirty years [1], but the limited potential of non-renewable energy sources cannot ensure that the world does not fall short of its energy needs. Global warming has been on the rise, and the countless servers of various organizations contribute to it too. Data replication and the need for a continuous power supply for these servers also do not have a positive impact on the environment. [2]
1.2 Current status of devices
It is estimated that the world population will be around 8 billion in 2017, while the number of devices connected to the Internet will be up to 24 billion. Internet usage means use of networking services. [3]
Because mobile devices are dependent on battery power, it is important to minimize
their energy consumption. The energy consumption of the network interface can be
significant, especially for smaller devices. Most research in energy conservation
strategies has targeted wireless networks that are structured around base stations and
centralized servers, which do not have the limitations associated with small, portable
devices.
1.3 Industrial View
The major issue being focused on in almost all industries is energy consumption. The increase in energy consumption results in many problems related to the environment; one of these is the emission of greenhouse gases (GHG). [4] During the past few years, the emission of greenhouse gases has increased exponentially and has had a destructive effect on the atmosphere. Even computer systems have a carbon footprint, and heat emissions have been attributed to them. The proposed framework aims for faster information retrieval, thereby needing less computational power and avoiding the heating effects.
1.4 Impact of Change
As stated above, about 24 billion devices will be connected to the Internet by the end of 2017, so any positive change brought about in networking will have a huge impact overall. The solutions discussed are software related, so there will be no need to change any hardware, and they will not be restricted to specific hardware configurations, as software can be remodeled for different systems.
CHAPTER 2
2. LITERATURE REVIEW
Various terms involved in existing solutions to green computing networks are discussed below.
2.1 Wireless computing networks
Wireless networks are computer networks that are not connected by cables of any
kind. The use of a wireless network enables enterprises to avoid the costly process of
introducing cables into buildings or as a connection between different equipment
locations. The basis of wireless systems is radio waves, an implementation that takes place at the physical level of the network structure.
Wireless networks use radio waves to connect devices such as laptops to the Internet, the business network, and applications. [2] When a laptop is connected to a Wi-Fi hot spot in a public place, the connection is established through that business's wireless network. The smartphone boom has been a major contributing factor to the need for wireless computing networks.
There are four main types of wireless networks:
Wireless Local Area Network (WLAN): Links two or more devices using a wireless distribution method, providing a connection through access points to the wider Internet.
Wireless Metropolitan Area Network (WMAN): Connects several wireless LANs.
Wireless Wide Area Network (WWAN): Covers large areas such as neighboring towns and cities.
Wireless Personal Area Network (WPAN): Interconnects devices in a short span, generally within a person's reach.
2.2 Solutions developed which are also used in the proposed framework architecture
2.2.1 Caching
Network caching is the technique of keeping frequently accessed information in a location close to the requester. [4] A Web cache stores Web pages and content on a storage device that is physically or logically closer to the user, making lookups closer and faster than a remote Web fetch. Similar data caches also exist.
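As an illustration, a minimal cache along these lines can be sketched in Python. The `WebCache` class and the LRU (least-recently-used) eviction policy are choices made here for illustration; the framework does not prescribe a particular policy.

```python
from collections import OrderedDict

class WebCache:
    """Minimal LRU web-cache sketch: keep frequently accessed content
    close to the requester, evicting the least recently used entry
    when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> cached content

    def get(self, url):
        if url not in self.store:
            return None  # cache miss: caller fetches from the origin server
        self.store.move_to_end(url)  # mark as most recently used
        return self.store[url]

    def put(self, url, content):
        self.store[url] = content
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = WebCache(capacity=2)
cache.put("/index.html", "<html>home</html>")
cache.put("/news.html", "<html>news</html>")
cache.get("/index.html")                        # hit: /index.html becomes most recent
cache.put("/video.html", "<html>video</html>")  # capacity exceeded: /news.html evicted
```

Because the hit is served locally, no repeated transfer over the network is needed, which is exactly the energy saving the framework relies on.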
2.2.2 Virtualization
Network virtualization refers to the management and monitoring of an entire computer
network as a single administrative entity from a single software-based administrator’s
console.[7] Network virtualization also may include storage virtualization, which
involves managing all storage as a single resource. Network virtualization is designed to
allow network optimization of data transfer rates, flexibility, scalability, reliability and
security. It automates many network administrative tasks, which actually disguise a
network's true complexity. All network servers and services are considered one pool of
resources, which may be used without regard to the physical components.
Network virtualization is especially useful for networks experiencing a rapid, large
and unpredictable increase in usage. The intended result of network virtualization is
improved network productivity and efficiency, as well as simplifying work for the
network administrator.
2.2.3 Network & Computing Services (NCS)
NCS provides computer/network technical support and is committed to delivering
secure, responsive, high-quality, customer-oriented services and support that foster a
productive system.[11]
NCS achieves this mission by incorporating innovative technology products from the
private sector with the highest-quality products and services developed internally. This
cost-effective and balanced technology helps to ensure that the users enjoy a solid
technological infrastructure, reliable critical services and customer-focused support
systems to meet needs of today and tomorrow.
2.2.4 Energy Awareness
An energy-aware system, as the name suggests, is always aware of the amount of energy available to it. [9] Such systems are scheduled on the basis of the amount of power supply left; hence we use energy-aware algorithms in such systems, as standard algorithms are inefficient here. High-importance applications are always based on this standard. DENS is one of the most popular algorithms available.
2.2.5 Cloud Services
Cloud computing is a type of Internet-based computing that provides shared computer
processing resources and data to computers and other devices on demand.[12] It is a
model for enabling ubiquitous, on-demand access to a shared pool of configurable
computing resources (e.g., computer networks, servers, storage, applications and
services), which can be rapidly provisioned and released with minimal management
effort. Cloud computing and storage solutions provide users and enterprises with various
capabilities to store and process their data in either privately owned, or third-party data
centers that may be located far from the user–ranging in distance from across a city to
across the world. Cloud computing relies on sharing of resources to achieve coherence
and economy of scale, similar to a utility (e.g., like the electricity grid over an electricity
network).
CHAPTER 3
3. GREEN COMPUTING NETWORKS
Green wireless computing requires an in-depth study of network caching and computing. It is basically aimed at reducing the energy consumption of the system. With developments in technology, all of these have been studied individually to a very large extent, and the new concept of SDN came into being with them. [5]
Green computing also uses cloud computing, but cloud computing is not fully developed yet; it is not cost effective and environment friendly when examined closely.
3.1 Basic terminologies
Some basic terms used in green computing are discussed below.
3.1.1 DENS
Each datacenter comprises thousands of physical machines running millions of virtual machines arranged in massive racks, which naturally consumes huge amounts of energy. To address this, the Datacenter Energy-efficient Network-aware Scheduling (DENS) algorithm was proposed.
The DENS methodology minimizes the total energy consumption of a data center
by selecting the best-fit computing resources for job execution based on the load level
and communication potential of data center components [10]. The communicational
potential is defined as the amount of end-to-end bandwidth provided to individual servers
or group of servers by the data center architecture. Contrary to traditional scheduling
solutions that model data centers as a homogeneous pool of computing servers, the
DENS methodology develops a hierarchical model consistent with the state of the art data
center topologies.
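The best-fit idea can be illustrated with a short sketch. The scoring rule below is a simplification, not the published DENS metric: it merely favors servers that are already loaded (so idle ones can sleep) but not congested, weighted by the end-to-end bandwidth their rack uplink provides. The server records and threshold are hypothetical.

```python
def dens_select(servers, load_threshold=0.9):
    """Illustrative best-fit server selection in the spirit of DENS.
    'load' is utilization in [0, 1]; 'bandwidth' is the communicational
    potential (available end-to-end bandwidth) of the server."""
    # Exclude congested servers: placing a job there hurts performance.
    candidates = [s for s in servers if s["load"] < load_threshold]
    if not candidates:
        return None  # every server is congested: defer the job
    # Prefer high load (consolidation lets idle racks power down)
    # combined with high available bandwidth.
    return max(candidates, key=lambda s: s["load"] * s["bandwidth"])

servers = [
    {"name": "s1", "load": 0.20, "bandwidth": 1000},
    {"name": "s2", "load": 0.70, "bandwidth": 1000},
    {"name": "s3", "load": 0.95, "bandwidth": 1000},  # over threshold
]
best = dens_select(servers)  # s2: loaded but not congested
```

The real DENS metric additionally models the hierarchical topology (rack, aggregation, and core layers); this sketch only conveys the load/bandwidth tradeoff.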
3.1.2 SDN
Software-defined networking (SDN) is an umbrella term encompassing several kinds
of network technology aimed at making the network as agile and flexible as the
virtualized server and storage infrastructure of the modern data center. The goal of SDN
is to allow network engineers and administrators to respond quickly to changing business
requirements. [1]
In a software-defined network, a network administrator can shape traffic from a
centralized control console without having to touch individual switches, and can deliver
services to wherever they are needed in the network, without regard to what specific
devices a server or other hardware components are connected to. [8] The key
technologies for SDN implementation are functional separation, network virtualization
and automation through programmability.
Software Defined Networking describes how the network can be programmed via a
logically software defined controller and separate the control from the data. [6] The
framework of SDN will be elaborated further. If the wireless networks are software
defined then it means that the wireless network connections are directly enabled and hide
the underlying infrastructure for applications in green wireless network management.
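The control/data separation can be made concrete with a toy example. The class names and the match-to-port rule format below are invented for illustration; they are not OpenFlow or any real SDN API, but they show the essential point: policy lives in one controller, and switches only forward.

```python
class SdnController:
    """Toy SDN controller: holds the network policy centrally and
    pushes match -> action rules into each switch's flow table."""

    def __init__(self):
        self.switches = {}  # switch id -> flow table (destination -> out port)

    def register(self, switch_id):
        self.switches[switch_id] = {}

    def install_route(self, path, dst):
        """Push a forwarding rule for 'dst' along a path of
        (switch_id, out_port) hops, without touching any switch
        individually, as the text describes."""
        for switch_id, out_port in path:
            self.switches[switch_id][dst] = out_port

def forward(flow_table, dst):
    """Data plane: a switch only looks up the installed rule."""
    return flow_table.get(dst)  # None -> no rule: ask the controller

ctrl = SdnController()
for sw in ("s1", "s2"):
    ctrl.register(sw)
# Route traffic for 10.0.0.7 out port 2 of s1, then port 1 of s2.
ctrl.install_route([("s1", 2), ("s2", 1)], dst="10.0.0.7")
```

Changing a route means one call on the controller rather than reconfiguring each device, which is the agility SDN is credited with above.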
3.1.3 ICN
Information-centric networking (ICN) is an approach to evolve the Internet infrastructure
away from a host-centric paradigm based on perpetual connectivity and the end-to-end
principle, to a network architecture in which the focal point is “named information” (or
content or data). In this paradigm, connectivity may well be intermittent, end-host and in-
network storage can be capitalized upon transparently, as bits in the network and on
storage devices have exactly the same value, mobility and multi access are the norm and
anycast, multicast, and broadcast are natively supported. Data becomes independent from
location, application, storage, and means of transportation, enabling in-network caching
and replication. The expected benefits are improved efficiency, better scalability with
respect to information/bandwidth demand and better robustness in challenging
communication scenarios.
3.2 Evolution of Green Computing Networks
The term green computing is not yet very well defined technically, so any technology which is more energy efficient can be placed in this category; it is very difficult to classify it otherwise.
Green computing networks started with the introduction of caching in networks; later, new algorithms began to be written, which led to software-defined networking, and more recently energy-aware systems are being employed. Artificial intelligence can further change the networking system by using data mining to achieve better caching, less congestion, and more efficient scheduling.
3.3 Need for Green Computing Networks
The impact of green networks is enormous, as huge datacenters consume almost as much electricity as normal public usage; while people still do not have power supply in all parts of the world, these datacenters eat up a massive amount of energy. [8]
The following graph shows the impact of green computing in networks.
Figure 3.1: Impact of Green Computing Networks [1]
CHAPTER 4
4. ARCHITECTURE
At the top of the architecture there are network operating systems, which consist of the actual data and in which we implement various routing and scheduling algorithms; the SDN approach defined above is used here. The approach is completely software centric, which makes the system more flexible; a generic approach is not followed.
The switch hypervisor mainly implements and administrates the communication
between switches and controller. The network hypervisor is used to monitor the
networking status, such as congestion. The topology hypervisor masters all the physical
nodes, links, and ports through regular monitoring. These hypervisors will map the
abstracted resource slices to the physical infrastructure. Based on the information
mastered by these hypervisors, the controller could implement some operations or
strategies from the network applications layer, and ensure the isolation. Furthermore, the
controller could guide packet forwarding of the devices in data plane, as well as perform
the commands of communicating, computing, and accessing according to these
information lists.
The switch hypervisor is connected to the heterogeneous wireless network, where different wireless devices such as Wi-Fi access points and routers intercommunicate through various gateways. At every unit there is a cache memory, and virtualization is allowed at every stage to make sure no system is overloaded.
CHAPTER 5
5. ALGORITHM
The standard algorithms for scheduling are well known and have been used in numerous applications for a long time. These algorithms are to be modified so as to be more energy efficient while not compromising the throughput of the system.
First Come First Serve
First come, first served (FCFS) is an operating system process scheduling algorithm
and a network routing management mechanism that automatically executes queued
requests and processes by the order of their arrival. With first come, first served, what
comes first is handled first; the next request in line will be executed once the one before it
is complete.
The proposed modification is that, in preemptive scheduling, FCFS also keeps track of tasks: if a task needs 10 units of energy while the system has less than 10 units remaining, there is no point in scheduling that task, as it will eventually lead to failure.
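This modification can be sketched as follows. The task tuples, energy units, and the decision to reject (rather than defer) an infeasible task are illustrative assumptions, not part of the original FCFS definition.

```python
from collections import deque

def energy_aware_fcfs(tasks, available_energy):
    """Energy-aware FCFS sketch: serve tasks in arrival order, but
    reject any task whose energy demand exceeds what the system has
    left, since scheduling it would only lead to failure.
    Each task is a (name, energy_needed) tuple."""
    queue = deque(tasks)
    executed, rejected = [], []
    while queue:
        name, need = queue.popleft()
        if need <= available_energy:
            executed.append(name)      # enough energy: run to completion
            available_energy -= need
        else:
            rejected.append(name)      # would fail mid-task: do not schedule
    return executed, rejected, available_energy

# T2 needs 10 units but only 4 remain after T1, so it is rejected
# while the cheaper T3 still runs.
executed, rejected, left = energy_aware_fcfs(
    [("T1", 4), ("T2", 10), ("T3", 3)], available_energy=8)
```

A production scheduler might instead defer the rejected task until the battery recharges; rejection keeps the sketch minimal.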
Round-Robin
Round robin scheduling (RRS) is a job-scheduling algorithm that is considered to be
very fair, as it uses time slices that are assigned to each process in the queue or line. Each
process is then allowed to use the CPU for a given amount of time, and if it does not
finish within the allotted time, it is preempted and then moved to the back of the line so
that the next process in line is able to use the CPU for the same amount of time.
As this scheduling has free states, it is not considered efficient. The proposed solution is to assign the processor only to tasks whose energy consumption the system can actually supply. Example: the system is charging at 5 units per minute and has 10 units of charge when a task T1 enters the ready queue. T1 needs 10 units of power per minute for 3 minutes. In this case, if the scheduler directly assigns the processor without checking the power needs, the system will fail. Meanwhile, when the processor is free it must go into power-saving modes.
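The feasibility check in this example can be written out directly. The helper below is hypothetical, not part of any standard round-robin scheduler; it simply simulates the net energy balance minute by minute.

```python
def can_run(stored, charge_rate, demand_rate, duration):
    """Feasibility check from the round-robin example: a task drawing
    'demand_rate' units/minute for 'duration' minutes is feasible only
    if stored energy plus charging covers the deficit at every minute."""
    for _ in range(duration):
        stored += charge_rate - demand_rate  # net energy change this minute
        if stored < 0:
            return False  # battery exhausted mid-task: the system would fail
    return True

# The worked example: 10 units stored, charging at 5/min, and T1
# drawing 10 units/min for 3 minutes (net -5/min) runs dry in minute 3.
feasible = can_run(stored=10, charge_rate=5, demand_rate=10, duration=3)
```

So the scheduler should refuse T1 (or shorten its slice) and, as the text notes, drop the idle processor into a power-saving mode instead.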
Min-Min Algorithm
The Min-Min algorithm first schedules the task which has the minimum value of the parameter under consideration, typically the expected completion time. The Min-Min algorithm computes the solution with limited resources and at minimal cost.
Max-Min Algorithm
The Max-Min algorithm is quite similar to the Min-Min algorithm, except that in this case there is one attribute which does not impact efficiency and which has a higher respective value. For example, a system may have a very powerful processor while being developed for basic operations, so the algorithm used in this case does not need to worry about the processing time needed. The algorithm must also not context switch much, as that would hardly make a difference to the system efficiency.
The Min-Min and Max-Min algorithms are system oriented and hence cannot be implemented directly without first analyzing the system.
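Both heuristics are commonly described over an expected-time-to-compute (ETC) matrix, and a compact sketch covers both. The matrix values below are made up for illustration; `pick_max=True` switches from Min-Min to Max-Min.

```python
def min_min(etc, ready, pick_max=False):
    """Min-Min / Max-Min scheduling sketch. etc[t][m] is the expected
    run time of task t on machine m; 'ready' holds each machine's
    current ready time. Min-Min repeatedly schedules the task with the
    smallest best-case completion time; Max-Min (pick_max=True) picks
    the task with the largest best-case completion time."""
    unscheduled = set(range(len(etc)))
    schedule = []
    while unscheduled:
        best = {}
        for t in unscheduled:
            # Best machine for this task by earliest completion time.
            m = min(range(len(ready)), key=lambda m: ready[m] + etc[t][m])
            best[t] = (m, ready[m] + etc[t][m])
        chooser = max if pick_max else min
        t = chooser(best, key=lambda t: best[t][1])
        m, finish = best[t]
        schedule.append((t, m))
        ready[m] = finish           # machine busy until this task finishes
        unscheduled.remove(t)
    return schedule, max(ready)     # makespan = latest machine ready time

# Two machines, three tasks.
etc = [[3, 5], [1, 2], [4, 1]]
plan, makespan = min_min(etc, ready=[0, 0])
```

On this particular matrix Max-Min actually yields a shorter makespan than Min-Min, which illustrates the text's point that the right heuristic depends on the system being analyzed.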
Swarm Optimization
In networking, particle swarm optimization (PSO) is a computational method that
optimizes a problem by iteratively trying to improve a candidate solution with regard to a
given measure of quality. It solves a problem by having a population of candidate
solutions, here dubbed particles, and moving these particles around in the search-space
according to simple mathematical formulae over the particle's position and velocity. Each
particle's movement is influenced by its local best known position, but is also guided
toward the best known positions in the search-space, which are updated as better
positions are found by other particles. This is expected to move the swarm toward the
best solutions.
In the case of networking, instead of distance we evaluate energy consumption, to make the system energy efficient.
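A textbook PSO loop can make this concrete. The quadratic energy model, the bounds, and the coefficient values below are illustrative assumptions; in the framework the objective would be a real energy-consumption measurement rather than this toy function.

```python
import random

def pso_minimize(energy, dim, bounds, particles=20, iters=100, seed=1):
    """Standard particle swarm optimization sketch, here minimizing an
    energy-consumption function. Each particle tracks its personal best
    position; the swarm shares a global best that guides movement."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [energy(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Keep particles inside the search bounds.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = energy(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical energy model: quadratic cost minimized at x = (2, 2).
best, cost = pso_minimize(lambda x: sum((xi - 2) ** 2 for xi in x),
                          dim=2, bounds=(0.0, 5.0))
```

The swarm converges near the minimum-energy configuration; in a scheduler, each position component would encode a placement or rate decision whose energy cost the objective evaluates.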
CHAPTER 6
6. APPLICATIONS
1. In software industries.
Each software organization has its own database stored on either the Cloud or
some private datacenter. These systems could be made better by applying this
framework.
2. In making public systems more energy efficient
As discussed, networking is a basic computing element, and people would volunteer to upgrade their systems at a very low cost. It will also help save costs in the long run.
3. In embedded systems
Embedded systems have the biggest limitation in power supply, and all networking here is wireless. Power consumption matters a lot in such appliances and would be reduced.
4. IoT-based applications
IoT is defined as a network of devices. Network connectivity of all nodes is required throughout the application; also, most IoT components are battery based and need to be charged when power is drained.
5. Datacenters where servers are powered on all the time
The biggest impact of this framework has to be on datacenters, where a huge amount of power is continuously consumed. Even small savings of energy here would have a huge impact overall.
CHAPTER 7
7. ADVANTAGES
Software-defined networking
The control function is no longer confined to routers, or programmed and defined only by
the manufacturers of equipment. Therefore, SDN achieves better flexibility and
controllability.
Information-centric networking
Popular content is transmitted repeatedly on the Internet, wasting resources and reducing quality of service (QoS); ICN's in-network caching and replication avoid this redundant transmission.
Energy efficient coding
The principle behind energy efficient coding is to save power by getting software to make
less use of the hardware, rather than continuing to run the same code on hardware that
uses less power.
Improved repair, re-use, recycling and disposal
Green computing encourages designing systems whose components can be repaired, re-used, recycled, and safely disposed of, reducing electronic waste.
CHAPTER 8
8. DISADVANTAGES
Achieving scalability
The framework discussed uses a software-defined networking approach to centrally manage and control networking, caching, and computing resources. Since there are various access devices, gateway devices, and network nodes in heterogeneous wireless networks, the controller has to maintain a large central database.
Developing new resource allocation strategies
Resources are the most important aspect of SDN; they include networking, caching, and computing resources. Therefore, it is important to design resource allocation strategies that make a tradeoff between deployment and operation costs (e.g., energy consumption) and performance benefits (e.g., decreased latency).
Security
If attacked, the software could be a single point of failure, resulting in the attacker getting all permissions to modify systems. DoS attacks could be carried out using dummy nodes disguised as routers, hubs, etc. So the system must be designed in a way that is attack tolerant; it is recommended that the system use some kind of encryption.
Cooperation incentives among stakeholders
As we jointly consider networking, caching, and computing techniques in our proposed framework, it is nontrivial to develop this framework in practice. It is possible that Internet service providers (ISPs) will take the responsibility to develop this framework due to the improved user experience and energy efficiency. Nevertheless, it remains a significant challenge for ISPs.
CHAPTER 9
9. SCOPES FOR FUTURE DEVELOPMENT
To develop the proposed framework in an optimal way
The framework is yet to be developed, which makes it vulnerable to design issues faced at the time of development. Also, only simulation results are available now, and these are not always accurate.
To develop algorithms which can perform scheduling in a better way
Here we discuss only the basic algorithms, but better algorithms for more powerful systems need to be developed accordingly.
Expanding the framework from networking to other parts of the system
The software-centric approach can be used in basic OS operations as well, reducing the system's energy needs. But this must not affect the computational power of the system or introduce delays.
Replace existing systems
Current systems must be upgraded with this framework. This should not be a major issue at the client level, but vast changes need to be addressed at the server level.
Power off replicated servers alternately
A task in addition to the current framework will be developing algorithms which can power off replicated servers: when the main system is working well, the replicated server otherwise stays on without purpose.
CONCLUSION
In this seminar, recent advances in networking, caching, and computing have been reviewed. I propose to integrate networking, caching, and computing in a systematic framework for next-generation green wireless networks. The architecture of the proposed framework is built on software-defined networking, caching, and computing, and the details of its key components in the data, control, and management planes are specified. Some expected results have been shown to indicate that this proposed framework can improve users' experience and energy efficiency. In addition, some open research challenges, including scalable controller design, networking/caching/computing resource allocation strategies, and security issues, are also mentioned. Future work is expected to address these research challenges. Finally, a revolutionary change in the way computing networks operate is desired.
REFERENCES
[1] Ru Huo, Fei Richard Yu, Tao Huang, Renchao Xie, Jiang Liu, Victor C. M. Leung, and Yunjie Liu, "Software Defined Networking, Caching, and Computing for Green Wireless Networks", IEEE Communications Magazine, November 2016.
[2] Shivam Singh, "Green Computing Strategies & Challenges", 2015 International Conference on Green Computing and Internet of Things (ICGCIoT), pp. 758-760.
[3] Shaden M. AlIsmail, Heba A. Kurdi, "Green Algorithm to Reduce the Energy Consumption in Cloud Computing Data Centres", SAI Computing Conference 2016, July 13-15, 2016, London, UK.
[4] Rubyga G., Dr. Ponsy R. K. Sathia Bhama, "A Survey of Computing Strategies for Green Cloud", 2016 Second International Conference on Science Technology Engineering and Management (ICONSTEM).
[5] Yahav Biran, "Coordinating Green Clouds as Data-Intensive Computing", 2016 IEEE Green Technologies Conference.
[6] Bruno Astuto A. Nunes, Marc Mendonca, Xuan-Nam Nguyen, Katia Obraczka, and Thierry Turletti, "A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks", IEEE Communications Surveys & Tutorials, 2014.
[7] Durga Chowdary E, Neelam R Vaishnav, G. Apoorva, "Green Networking Using a Combination of Network Virtualization and Adaptive Link Rate", IEEE International Conference on Recent Trends in Electronics Information Communication Technology, 2015.
[8] Chao Qiu, Tiehong Tian, "Multiple Controllers Sleeping Management in Green Software Defined Wireless Networking", IEEE ICT Conference, 2016.
[9] Jayant Adhikari, Prof. Sulabha Patil, "Comparison of Energy Aware Load Balancing Algorithms in Cloud Computing", International Journal of Scientific & Engineering Research, Volume 4, Issue 12, December 2013.
[10] Dzmitry Kliazovich, Pascal Bouvry, Samee Ullah Khan, "DENS: Data Center Energy-Efficient Network-Aware Scheduling", Green Computing and Communications (GreenCom), IEEE, 2010.
[11] Muhammad Ismail, Muhammad Zeeshan Shakir, Khalid A. Qaraqe, Erchin Serpedin, "Green Network Solutions", IEEE Green Heterogeneous Wireless Networks, 2016.
[12] Guangjie Han, Jinfang Jiang, Mohsen Guizani, Joel J. P. C. Rodrigues, "Green Routing Protocols for Wireless Multimedia Sensor Networks", IEEE Wireless Communications, 2016.