Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied development models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource running at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task-allocation strategy can minimize overall energy consumption while meeting the required QoS. However, such strategies typically do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real Time Adaptive Energy Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our evaluation confirms the effectiveness of the proposed model in terms of implementation, scalability, power consumption and execution time with respect to other existing approaches.
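The core DVFS trade-off this abstract relies on can be sketched in a few lines. This is a generic illustration, not the RTAES algorithm itself: it assumes the common cubic power model (dynamic power roughly proportional to f^3, so energy per task roughly proportional to cycles * f^2), and the function and variable names are invented for the example.

```python
def pick_frequency(cycles, deadline, frequencies):
    """Return the lowest frequency (Hz) that finishes `cycles` within
    `deadline` seconds. Under the f^3 power model, energy per task
    ~ cycles * f^2, so the lowest feasible frequency uses least energy."""
    feasible = [f for f in frequencies if cycles / f <= deadline]
    if not feasible:
        raise ValueError("no available frequency meets the deadline")
    return min(feasible)

# Illustrative P-states and task: 3e9 cycles, 2-second deadline.
freqs = [1.0e9, 1.5e9, 2.0e9, 2.5e9]
print(pick_frequency(3.0e9, 2.0, freqs))  # 1.5e9: 3e9/1.5e9 = 2.0 s, just in time
```

Running at 2.5 GHz would also meet the deadline but waste roughly (2.5/1.5)^2 ≈ 2.8x the energy under this model, which is the intuition behind DVFS-aware allocation.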
Cloud computing offers users worldwide low-cost, on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for many technology companies. However, with the growing number of data-center resources, high levels of energy are being consumed, along with increased carbon emissions. For instance, the estimated electric power consumption of Google's data centers is equivalent to the energy requirement of a small city. Moreover, even though virtualization of resources in cloud data centers may reduce the number of physical machines and the cost of hardware equipment, it is still constrained by the energy consumption issue. Energy efficiency has become a major concern for today's cloud data-center researchers, who seek to simultaneously improve cloud service quality and reduce operating cost. This paper analyses and discusses the literature on works related to energy-efficiency enhancement in cloud computing data centers. The main objective is the best possible management of the physical machines that host the virtual ones in cloud data centers.
An Energy Aware Resource Utilization Framework to Control Traffic in Cloud Ne... (IJECEIAES)
Energy consumption in cloud computing arises partly from the unreasonable way in which tasks are scheduled, so energy-aware task scheduling is a major concern: wasteful energy consumption reduces profit margins and causes high carbon emissions, which is not environmentally sustainable. Hence, energy-efficient task-scheduling solutions are required to attain adaptable resource management, live migration, minimal virtual machine design, overall system efficiency, reduced operating costs, increased system reliability, and environmental protection, all with minimal performance overhead. This paper provides a comprehensive overview of energy-efficient techniques and approaches and proposes an energy-aware resource utilization framework to control traffic and overloads in cloud networks.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ... (ijccsa)
The fast development of knowledge and communication technology has established a new computational style known as cloud computing. One of the main issues for cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important to achieving this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one of the challenges is selecting the placement of the migrated virtual machines on appropriate nodes. In this paper, an approach to reduce energy consumption in cloud data centers is proposed. The approach adapts the harmony search algorithm to migrate virtual machines, performing placement by sorting the nodes and virtual machines in descending order of priority, where priority is calculated from the workload. The proposed approach is simulated, and the evaluation results show a reduction in virtual machine migrations, an increase in efficiency, and a reduction in energy consumption.
KEYWORDS
Energy Consumption, Virtual Machine Placement, Harmony Search Algorithm, Server Consolidation
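The priority-driven placement step this abstract describes (sort nodes and VMs by workload, highest first, then place each VM) can be sketched as a greedy loop. This is an illustrative simplification, not the paper's harmony search itself; the dictionary keys (`name`, `load`, `capacity`) are invented for the example.

```python
def place_by_priority(vms, hosts):
    """Greedy sketch: sort hosts and VMs by workload (priority), highest
    first, then put each VM on the first host with room left."""
    plan = {}
    hosts = sorted(hosts, key=lambda h: h["load"], reverse=True)
    for vm in sorted(vms, key=lambda v: v["load"], reverse=True):
        for h in hosts:
            if h["load"] + vm["load"] <= h["capacity"]:
                h["load"] += vm["load"]          # commit the placement
                plan[vm["name"]] = h["name"]
                break
    return plan

vms = [{"name": "x", "load": 3}, {"name": "y", "load": 5}]
hosts = [{"name": "A", "capacity": 10, "load": 6},
         {"name": "B", "capacity": 10, "load": 2}]
print(place_by_priority(vms, hosts))  # {'y': 'B', 'x': 'A'}
```

Harmony search would explore many candidate placements and keep the best-scoring "harmonies"; the greedy pass above only shows the priority ordering the abstract mentions.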
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure and platform as a service to clients on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion. Scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey takes a far-reaching look at the different types of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud and workflow scheduling aimed at minimizing energy cost while improving the efficiency and throughput of the system.
Optimization of energy consumption in cloud computing datacenters (IJECEIAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters with substantial energy demands for their operation. This work investigates the optimization of energy consumption in a cloud datacenter through energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem; the objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. These models are then used for a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability (SAT) based solvers on the ILP formulations. Simulation results indicate that in some cases the developed models saved up to 38% in energy consumption compared to common techniques such as round robin. Furthermore, the results show that generic ILP solvers outperformed SAT-based ILP solvers, especially as the number of tasks and resources grows.
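For very small instances, the objective described here (minimize the total power drawn by active and idle cores) can be checked by brute force rather than an ILP solver, which makes the model easy to sanity-check. This is a hedged sketch with invented core parameters, not the paper's formulation.

```python
from itertools import product

def min_energy_assignment(tasks, cores):
    """Try every task->core assignment; a core draws active_w while busy
    and idle_w for the remainder of the makespan. Returns (energy, assign)."""
    best_energy, best_assign = float("inf"), None
    for assign in product(range(len(cores)), repeat=len(tasks)):
        busy = [0.0] * len(cores)
        for work, c in zip(tasks, assign):
            busy[c] += work / cores[c]["speed"]   # seconds of busy time
        makespan = max(busy)
        energy = sum(c["active_w"] * b + c["idle_w"] * (makespan - b)
                     for c, b in zip(cores, busy))
        if energy < best_energy:
            best_energy, best_assign = energy, assign
    return best_energy, best_assign

cores = [{"speed": 1.0, "active_w": 10.0, "idle_w": 1.0}] * 2
print(min_energy_assignment([2.0, 2.0], cores))  # splitting the tasks wins
```

An ILP or SAT solver handles the same objective at realistic scale; exhaustive search is only tractable here because the instance is tiny.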
An energy optimization with improved QoS approach for adaptive cloud resources (IJECEIAES)
In recent times, the utilization of cloud computing VMs has greatly increased in day-to-day life due to the wide use of digital applications, network appliances, portable gadgets, and information devices. On these cloud computing VMs, numerous schemes such as multimedia signal-processing methods can be implemented, so efficient performance of these VMs becomes an obligatory constraint, precisely for such methods. However, high energy consumption and reduced efficiency of these VMs are key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption while performing operations in very little time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, as well as efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption and average power required compared to other state-of-the-art techniques.
Achieving Energy Proportionality In Server Clusters (CSCJournals)
Energy efficiency in server clusters has attracted a great amount of interest in the past few years. Energy proportionality is a principle ensuring that energy consumption is proportional to the system workload, and energy-proportional design can effectively improve the energy efficiency of computing systems. In this paper, an energy-proportional model is proposed based on queuing theory and service differentiation in server clusters, which can provide controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. A further study of the transition overhead is carried out, and a corresponding strategy is proposed to compensate for the performance degradation it causes. The model is evaluated via extensive simulations and justified against a real workload data trace. The results show that the model can achieve satisfactory service performance while preserving energy efficiency in the system.
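The proportionality idea can be illustrated with a minimal sizing rule: keep only as many servers active as the arrival rate requires at a chosen per-server utilization, and put the rest in a low-power state. This is a generic sketch, not the paper's queuing model; the function name and default utilization are assumptions.

```python
import math

def servers_needed(arrival_rate, service_rate, target_util=0.7):
    """Active servers required so each runs at about target_util;
    arrival_rate and service_rate are requests per second."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

print(servers_needed(50.0, 10.0))  # 8 active servers at 50 req/s
print(servers_needed(25.0, 10.0))  # 4 when the load halves
```

Because the active-server count tracks the workload, cluster power consumption scales roughly linearly with load; the transition overhead the abstract mentions is what this naive rule ignores.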
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs (inventionjournals)
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives closed-form solutions for the load fractions to be assigned to each machine, considering both computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit for the service while meeting the Quality of Service (QoS) requirements of users' jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
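In the simplest DLT setting, with communication and energy costs ignored, the closed form for the load fractions is speed-proportional splitting, so every machine finishes at the same instant. A minimal sketch of that baseline (the paper's actual closed form also accounts for computation and energy cost):

```python
def load_fractions(speeds):
    """Split a divisible job across machines with the given speeds so all
    finish simultaneously: alpha_i = s_i / sum(s)."""
    total = float(sum(speeds))
    return [s / total for s in speeds]

# One machine twice as fast as the other two gets half the load.
print(load_fractions([4.0, 2.0, 2.0]))  # [0.5, 0.25, 0.25]
```

Equal finish times are the DLT optimality condition: if any machine finished earlier than the rest, shifting load onto it would shorten the overall makespan.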
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A... (VLSICS Design)
The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, the traditional cost criteria for circuit optimization. This scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade off cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm that explores the state space and reports the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains substantial area and power gains for complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm, operating without any user constraint (area/power), solves the problem for most available benchmarks, and that the use of power-budget or area-budget constraints leads to significant performance gains.
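The Pareto-optimal front over (area, power) that the branch-and-bound reports can be illustrated with a simple dominance filter; the search itself is omitted here, and the sample points are invented.

```python
def pareto_front(points):
    """Keep only the (area, power) points not dominated by another point,
    i.e. one no worse in both objectives and different in at least one."""
    def dominated(p):
        return any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points)
    return [p for p in points if not dominated(p)]

designs = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
print(pareto_front(designs))  # [(1, 5), (2, 4), (3, 3)]
```

A B/B search prunes whole subtrees whose bound is already dominated by a known front point, which is what makes exploring the area/power design space tractable.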
Intelligent Workload Management in Virtualized Cloud Environment (IJTET Journal)
Abstract— Cloud computing is a rising high-performance computing environment with a huge-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of the cloud environment under deadline constraints, a task-scheduling model is established to reduce the system power consumption of cloud computing and improve the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, selection operators, and the scheme for sorting Pareto solutions. The model is built on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm can obtain an enhanced solution, balancing the load across multiple objectives.
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R... (IAEME Publication)
Cloud Computing is internet-based computing that makes different types of services available to users: virtualized resources are provided to customers over the internet according to the services they require. The fast growth of cloud resources with customer demand increases energy consumption and the resulting carbon dioxide emissions. Energy consumption and carbon dioxide emission in cloud data centres have a massive impact on the global environment, triggering intense research in this area. To minimize energy consumption, in this paper we propose a VM assignment scheduling algorithm based on energy consumption and balanced resource utilization. We consider both VM and host energy consumption, classify the VMs based on their resource usage, and schedule them to balance resource utilization among the hosts in the cloud data centre, which leads to better energy efficiency and reduced heat generation. The effectiveness of the proposed technique has been verified by simulation on CloudSim. Experimental results confirm that the proposed technique can significantly reduce energy consumption in the cloud.
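The balance-oriented placement idea (schedule VMs so that host resources stay evenly used) can be sketched as a scoring rule over candidate hosts. This is an illustration only, not the paper's algorithm; the field names and the |cpu - mem| balance metric are assumptions.

```python
def pick_host(vm, hosts):
    """Choose the host whose CPU and memory utilization are most balanced
    after adding the VM; hosts that cannot fit the VM are ruled out."""
    def imbalance(h):
        cpu = (h["cpu_used"] + vm["cpu"]) / h["cpu_cap"]
        mem = (h["mem_used"] + vm["mem"]) / h["mem_cap"]
        if cpu > 1.0 or mem > 1.0:
            return float("inf")          # VM does not fit on this host
        return abs(cpu - mem)            # smaller gap = better balance
    return min(hosts, key=imbalance)

hosts = [{"name": "A", "cpu_used": 4, "cpu_cap": 8, "mem_used": 2, "mem_cap": 8},
         {"name": "B", "cpu_used": 2, "cpu_cap": 8, "mem_used": 4, "mem_cap": 8}]
print(pick_host({"cpu": 1, "mem": 3}, hosts)["name"])  # A: 5/8 CPU vs 5/8 mem
```

Keeping CPU and memory utilization aligned avoids hosts that exhaust one resource while the other sits idle, which is what drives up per-host energy waste.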
REGION BASED DATA CENTRE RESOURCE ANALYSIS FOR BUSINESSES (ijsrd.com)
To meet the needs of large-scale multi-tenant data centres and clouds, data-centre and cloud architectures are continuously being modified. These needs primarily concern seven dimensions: scalability in computing, storage, and bandwidth; scalability in network services; efficiency in resource utilization; agility in service creation; cost efficiency; service reliability; and security. This article focuses on the first five dimensions as they relate to networking. Large data centres handle thousands of servers, exabytes of storage, terabits per second of traffic, and tens of thousands of tenants. Data centres are interconnected across the wide-area network via routing and transport technologies to provide a pool of resources known as the cloud. High-capacity intra- and inter-datacentre transport is achieved by high-speed optical interfaces and dense wavelength-division multiplexing optical transport. This paper presents a region-based analysis of data-centre resources.
Quality of Service based Task Scheduling Algorithms in Cloud Computing (IJECEIAES)
In cloud computing, resources are treated as services, so efficient resource utilization is achieved through task scheduling and load balancing. Quality of service is an important factor in measuring the trustworthiness of the cloud, and using quality of service in task scheduling helps address security problems in cloud computing. This paper studies quality-of-service based task-scheduling algorithms and the parameters used for scheduling. By comparing the results, the efficiency of each algorithm is measured and its limitations are given. The efficiency of quality-of-service based task-scheduling algorithms can be improved by considering the arrival time of the task, the time the task takes to execute on the resource, and the cost of communication.
Max Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing (IJORCS)
This paper shows the importance of fair scheduling in a grid environment: all tasks get an equal amount of time for their execution so that none is starved. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load to be given to the resources, and to achieve this, load balancing is applied after scheduling the jobs. It also considers the execution cost and bandwidth cost of the algorithms used, because in a grid environment the resources are geographically distributed. In implementation, the proposed approach reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
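Max-min fairness itself is usually computed by progressive filling: satisfy the smallest demands first, and share whatever remains equally among the still-unsatisfied users. A minimal sketch independent of the grid setting; function and variable names are illustrative.

```python
def max_min_fair(capacity, demands):
    """Progressive filling: serve demands in ascending order; each user
    receives min(its demand, an equal share of the remaining capacity)."""
    alloc = [0.0] * len(demands)
    remaining, left = float(capacity), len(demands)
    for i in sorted(range(len(demands)), key=lambda i: demands[i]):
        alloc[i] = min(demands[i], remaining / left)
        remaining -= alloc[i]
        left -= 1
    return alloc

# Small demands are fully met; large ones split the leftover equally.
print(max_min_fair(10.0, [2.0, 8.0, 4.0]))  # [2.0, 4.0, 4.0]
```

No user can gain more without taking capacity from a user who already has less, which is the defining property of a max-min fair allocation.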
Reliable and efficient webserver management for task scheduling in edge-cloud... (IJECEIAES)
Managing cloud webservers to execute workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task. A body of work exists on scheduling workflows in heterogeneous cloud environments, and rapid developments such as edge-cloud computing create new ways to schedule workflows across heterogeneous clouds to process tasks from IoT, event-driven applications, and other network applications. Current workflow-scheduling methods have failed to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel webserver resource-management framework, the reliable and efficient webserver management (REWM) framework, is presented for the edge-cloud environment. Experiments on complex bioinformatics workflows show a significant reduction in cost and energy by the proposed REWM compared with standard webserver-management methodology.
A Review on Scheduling in Cloud Computingijujournal
Cloud computing is the requirement based on clients that this computing which provides software,
infrastructure and platform as a service as per pay for use norm. The scheduling main goal is to achieve
the accuracy and correctness on task completion. The scheduling in cloud environment which enables the
various cloud services to help framework implementation. Thus the far reaching way of different type of
scheduling algorithms in cloud computing environment surveyed which includes the workflow scheduling
and grid scheduling. The survey gives an elaborate idea about grid, cloud, workflow scheduling to
minimize the energy cost, efficiency and throughput of the system.
A Review on Scheduling in Cloud Computingijujournal
Cloud computing is the requirement based on clients that this computing which provides software,
infrastructure and platform as a service as per pay for use norm. The scheduling main goal is to achieve
the accuracy and correctness on task completion. The scheduling in cloud environment which enables the
various cloud services to help framework implementation. Thus the far reaching way of different type of
scheduling algorithms in cloud computing environment surveyed which includes the workflow scheduling
and grid scheduling. The survey gives an elaborate idea about grid, cloud, workflow scheduling to
minimize the energy cost, efficiency and throughput of the system.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...neirew J
Fast development of knowledge and communication has established a new computational style which is
known as cloud computing. One of the main issues considered by the cloud infrastructure providers, is to
minimize the costs and maximize the profitability. Energy management in the cloud data centers is very
important to achieve such goal. Energy consumption can be reduced either by releasing idle nodes or by
reducing the virtual machines migrations. To do the latter, one of the challenges is to select the placement
approach of the migrated virtual machines on the appropriate node. In this paper, an approach to reduce
the energy consumption in cloud data centers is proposed. This approach adapts harmony search
algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual
machines based on their priority in descending order. The priority is calculated based on the workload.
The proposed approach is simulated. The evaluation results show the reduction in the virtual machine
migrations, the increase of efficiency and the reduction of energy consumption.
A Review on Scheduling in Cloud Computingijujournal
Cloud computing is the requirement based on clients that this computing which provides software,
infrastructure and platform as a service as per pay for use norm. The scheduling main goal is to achieve
the accuracy and correctness on task completion. The scheduling in cloud environment which enables the
various cloud services to help framework implementation. Thus the far reaching way of different type of
scheduling algorithms in cloud computing environment surveyed which includes the workflow scheduling
and grid scheduling. The survey gives an elaborate idea about grid, cloud, workflow scheduling to
minimize the energy cost, efficiency and throughput of the system
Optimization of energy consumption in cloud computing datacenters IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenter using energy efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of Generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers especially as the number of tasks and resources grow in size.
An energy optimization with improved QOS approach for adaptive cloud resources IJECEIAES
In recent times, the utilization of cloud computing VMs is extremely enhanced in our day-to-day life due to the ample utilization of digital applications, network appliances, portable gadgets, and information devices etc. In this cloud computing VMs numerous different schemes can be implemented like multimedia-signal-processing-methods. Thus, efficient performance of these cloud-computing VMs becomes an obligatory constraint, precisely for these multimedia-signal-processing-methods. However, large amount of energy consumption and reduction in efficiency of these cloud-computing VMs are the key issues faced by different cloud computing organizations. Therefore, here, we have introduced a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (퐴퐶푅푅) technique for cloud computing devices, which efficiently reduces energy consumption, as well as perform operations in very less time. We have demonstrated an efficient resource allocation and utilization technique to optimize by reducing different costs of the model. We have also demonstrated efficient energy optimization techniques by reducing task loads. Our experimental outcomes shows the superiority of our proposed model 퐴퐶푅푅 in terms of average run time, power consumption and average power required than any other state-of-art techniques.
Achieving Energy Proportionality In Server ClustersCSCJournals
a great amount of interests in the past few years. Energy proportionality is a principal to ensure that energy consumption is proportional to the system workload. Energy proportional design can effectively improve energy efficiency of computing systems. In this paper, an energy proportional model is proposed based on queuing theory and service differentiation in server clusters, which can provide controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. Futher study for the transition overhead is carried out corresponding strategy is proposed to compensate the performance degradation caused by transition overhead. The model is evaluated via extensive simulations and is justified by the real workload data trace. The results show that our model can achieve satisfied service performance while still preserving energy efficiency in the system.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT : The important challenge in cloud computing environment is to design a scheduling strategy to handle jobs, and to process them in a heterogeneous environment with shared data centers. In this paper, we attempt to investigate a new analytical framework model that enables an existing private cloud data-center for scheduling jobs and minimizing the overall computation and energy cost together. Our model is based on Divisible Load Theory (DLT) model to derive closed-form solution for the load fractions to be assigned to each machines considering computation and energy cost. Our analysis also attempts to schedule the jobs such a way that cloud provider can gain maximum benefit for his service and Quality of Service (QoS) requirement user’s job. Finally, we quantify the performance of the strategies via rigorous simulation studies.
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budget of fine grained power managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS enabled functional units that can trade-off cycles with power. We study the design space defined due to this trade-off and present a branch-and-bound(B/B) algorithm to explore this state space and report the pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by sufficient amount. Experimental results show that the algorithm that operates without any user constraint(area/power) is able to solve the problem for mostavailable benchmarks, and the use of power budget or area budget constraints leads to significant performance gain.
Intelligent Workload Management in Virtualized Cloud Environment
Abstract— Cloud computing is an emerging high-performance computing environment with a large-scale, heterogeneous collection of autonomous systems and an elastic computational design. To improve the overall performance of cloud computing under a deadline constraint, a task scheduling model is established for reducing the system's power consumption and execution time while improving the profit of service providers. For this scheduling model, a solving technique based on a multi-objective genetic algorithm (MO-GA) is designed, and the study focuses on encoding rules, crossover operators, mutation operators, and the scheme for sorting Pareto solutions. The model is implemented on the open-source cloud computing simulation platform CloudSim; compared with existing scheduling algorithms, the results show that the proposed algorithm can obtain an improved solution, thus balancing the load across multiple objectives.
ENERGY EFFICIENT VIRTUAL MACHINE ASSIGNMENT BASED ON ENERGY CONSUMPTION AND R...
Cloud Computing is an internet-based computing model which makes different types of services available to users. Virtualized resources are provided to customers over the internet based on their required services. The fast growth of cloud resources along with customer demand increases energy consumption, resulting in carbon dioxide emission. Energy consumption and carbon dioxide emission in cloud data centres have a massive impact on the global environment, triggering intense research in this area. To minimize energy consumption, in this paper we propose a VM Assignment scheduling algorithm based on energy consumption and balanced resource utilization. We consider both VM and host energy consumption, classify the VMs based on their resource usage, and schedule them to balance resource utilization among the hosts in the cloud data centre, which leads to better energy efficiency and reduced heat generation. The effectiveness of the proposed technique has been verified by simulation on CloudSim. Experimental results confirm that the proposed technique can significantly reduce energy consumption in the cloud.
REGION BASED DATA CENTRE RESOURCE ANALYSIS FOR BUSINESSES
To meet the needs of large-scale multi-tenant data centres and clouds, data centre and cloud architectures are continuously undergoing modification. These needs are primarily focused around seven dimensions: scalability in computing, storage, and bandwidth; scalability in network services; efficiency in resource utilization; agility in service creation; cost efficiency; service reliability; and security. This article focuses on the first five dimensions as they relate to networking. Large data centers handle thousands of servers, exabytes of storage, terabits per second of traffic, and tens of thousands of tenants. Data centres are interconnected across the wide area network via routing and transport technologies to provide a pool of resources, known as the cloud. High-capacity intra- and inter-datacentre transport is being achieved by high-speed optical interfaces and dense wavelength-division multiplexing optical transport. This paper presents a data centre resource analysis based on region.
Quality of Service based Task Scheduling Algorithms in Cloud Computing
In cloud computing, resources are considered as services; hence, efficient utilization of the resources is achieved using task scheduling and load balancing. Quality of service is an important factor in measuring the trustworthiness of the cloud, and using quality of service in task scheduling addresses the problems of security in cloud computing. This paper studies quality-of-service based task scheduling algorithms and the parameters used for scheduling. By comparing the results, the efficiency of each algorithm is measured and its limitations are given. The efficiency of quality-of-service based task scheduling algorithms can be improved by considering these factors: the arrival time of the task, the time taken by the task to execute on the resource, and the cost of communication.
Max-Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing
This paper shows the importance of fair scheduling in a grid environment, such that all tasks get an equal amount of time for their execution and starvation is avoided. Load balancing of the available resources in the computational grid is another important factor. This paper considers a uniform load to be given to the resources; to achieve this, load balancing is applied after scheduling the jobs. It also considers the execution cost and bandwidth cost of the algorithms used here, because in a grid environment the resources are geographically distributed. With this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution and bandwidth costs.
Reliable and efficient webserver management for task scheduling in edge-cloud...
The development of cloud webserver management for executing workflows while meeting quality-of-service (QoS) prerequisites in a distributed cloud environment has been a challenging task, although a body of work has been presented for scheduling workflows in heterogeneous cloud environments. Moreover, rapid developments in cloud computing, such as edge-cloud computing, create new methods to schedule workflows in a heterogeneous cloud environment to process different tasks such as IoT, event-driven applications, and other network applications. Current methods for workflow scheduling have failed to provide good trade-offs between reliable performance and minimal delay. In this paper, a novel web server resource management framework is presented, namely the reliable and efficient webserver management (REWM) framework for the edge-cloud environment. The experiment is conducted on complex bioinformatics workflows; the results show a significant reduction of cost and energy by the proposed REWM in comparison with standard webserver management methodology.
A Review on Scheduling in Cloud Computing
Cloud computing provides software, infrastructure and platform as a service to clients on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion. Scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey covers a wide range of scheduling algorithms in the cloud computing environment, including workflow scheduling and grid scheduling, and gives an elaborate idea of grid, cloud and workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of the system.
An Approach to Reduce Energy Consumption in Cloud data centers using Harmony ...
The fast development of knowledge and communication technologies has established a new computational style known as cloud computing. One of the main issues considered by cloud infrastructure providers is to minimize costs and maximize profitability, and energy management in cloud data centers is very important to achieve this goal. Energy consumption can be reduced either by releasing idle nodes or by reducing virtual machine migrations. For the latter, one of the challenges is to select the placement approach of the migrated virtual machines onto the appropriate node. In this paper, an approach to reduce the energy consumption in cloud data centers is proposed. This approach adapts the harmony search algorithm to migrate the virtual machines. It performs the placement by sorting the nodes and virtual machines based on their priority in descending order, where the priority is calculated based on the workload. The proposed approach is simulated; the evaluation results show a reduction in virtual machine migrations, an increase in efficiency and a reduction in energy consumption.
Optical Switching and Networking 23 (2017) 225–240
Energy and QoS aware resource allocation for heterogeneous
sustainable cloud datacenters
Yuyang Peng, Dong-Ki Kang, Fawaz Al-Hazemi, Chan-Hyun Youn
Department of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Article history: Received 31 August 2015; Received in revised form 1 February 2016; Accepted 19 February 2016; Available online 27 February 2016.
Keywords: Sustainable cloud datacenters; Renewable energy; Virtual machine allocation; Heterogeneity.
http://dx.doi.org/10.1016/j.osn.2016.02.001
Abstract
As the demand for Internet services such as cloud and mobile cloud services has drastically increased recently, the energy consumed by cloud datacenters has become a burning topic. The deployment of renewable energy generators such as PhotoVoltaic (PV) and wind farms is an attractive candidate to reduce the carbon footprint and achieve sustainable cloud datacenters. However, current studies have focused on geographical load balancing of Virtual Machine (VM) requests to reduce the cost of brown energy usage, while most of them have ignored the heterogeneity of power consumption of each cloud datacenter and the performance degradation incurred by VM co-location. In this paper, we propose Evolutionary Energy Efficient Virtual Machine Allocation (EEE-VMA), a Genetic Algorithm (GA) based metaheuristic which supports power-heterogeneity-aware VM request allocation across multiple sustainable cloud datacenters. This approach provides a novel metric called powerMark which diagnoses the power efficiency of each cloud datacenter in order to reduce the energy consumption of cloud datacenters more efficiently. Furthermore, performance degradation caused by VM co-location and the bandwidth cost between cloud service users and cloud datacenters are considered, using our proposed cost model, to avoid deterioration of the Quality-of-Service (QoS) required by cloud service users. Extensive experiments, including real-world-trace-based simulation and the implementation of a cloud testbed with a power measuring device, are conducted to demonstrate the energy efficiency and performance assurance of the proposed EEE-VMA approach compared to existing VM request allocation strategies.
1. Introduction
The electric energy consumption of datacenters accounted for 1.5% of the worldwide electricity usage in 2010, and the energy cost is a primary fraction of a data-c.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDMOD...
The ever-increasing popularity of the cloud computing paradigm and the budding concept of federated cloud computing have stimulated research efforts towards intelligent cloud service selection, aimed at developing techniques that enable cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources, and the resources they provide hide a great deal of the services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
Energy-Aware Adaptive Four Thresholds Technique for Optimal Virtual Machine P...
With the increasing expansion of cloud data centers and the demand for cloud services, one of the major problems facing these data centers is the increasing growth in energy consumption. In this paper, we propose a method to balance the load of virtual machine resources in order to reduce energy consumption. The proposed technique is based on a four-adaptive-threshold model to reduce energy consumption in physical servers and minimize SLA violations in cloud data centers. Based on the proposed technique, hosts are grouped into five clusters: hosts with low load, light load, middle load, high load, and heavy load. Virtual machines are transferred from hosts with high and heavy loads to hosts with a light load. Also, the VMs on hosts with low load are migrated to hosts with middle load, while hosts with light and middle loads remain unchanged. The threshold values are obtained through a mathematical modeling approach, and the K-Means clustering algorithm is used for clustering the hosts. Experimental results show that applying the proposed technique improves load balancing and reduces both the number of VM migrations and the energy consumption.
Recently, a lot of interest has been put forth by researchers to improve workload scheduling on cloud platforms. However, execution of scientific workflows on a cloud platform is time-consuming and expensive. As users are charged based on hours of usage, much research work has emphasized minimizing processing time to reduce cost. However, the processing cost can also be reduced by minimizing energy consumption, especially when resources are heterogeneous in nature; very limited work has considered optimizing cost together with energy and processing time while meeting task quality-of-service (QoS) requirements. This paper presents a cost and performance aware workload scheduling (CPA-WS) technique for a heterogeneous cloud platform, along with a cost optimization model that minimizes processing time and energy dissipation for task execution. Experiments are conducted using two widely used workflows, Inspiral and CyberShake. The outcome shows that CPA-WS significantly reduces energy, time, and cost in comparison with the standard workload scheduling model.
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA CENTERS
Data centers have become indispensable infrastructure for data storage and for facilitating the development of the diversified network services and applications offered by the cloud. The rapid development of these applications and services imposes various resource demands that result in increased energy consumption. This necessitates the development of efficient energy management techniques in the data center, not only for operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Although energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more to energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING
International Journal of Distributed and Parallel Systems (IJDPS) Vol.10, No.1, January 2019
DOI: 10.5121/ijdps.2019.10101
REAL-TIME ADAPTIVE ENERGY-SCHEDULING
ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING
D B Srinivas1, H K Krishnappa2, Rajan M A3, Sujay N. Hegde4
1,4 Nitte Meenakshi Institute of Technology, Bangalore, India
2 R V College of Engineering, Bangalore, India
3 TCS Research and Innovation, Bangalore, India
ABSTRACT
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied development models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task-allocation strategy can minimize the overall energy consumption and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our proposed model confirms the effectiveness of its implementation, scalability, power consumption and execution time with respect to other existing approaches.
KEYWORDS
Virtual Machine (VM); Virtualized Data Centers (VDCs); Quality of Service (QoS); Dynamic Voltage and
Frequency Scaling (DVFS); Real Time Adaptive Energy-Scheduling (RTAES).
1. INTRODUCTION
The increased availability of cloud models and allied development models allows cloud service providers to offer easier and more cost-effective services to cloud users. The cloud computing model generally depends upon virtualization technology, which allows multiple operating systems (OSs) to share a hardware system in an easy, secure and manageable way.
As more and more services are digitized and added to the cloud, cloud users have increasingly adopted cloud services. Thus, the computing resources in the cloud consume a significant amount of power to serve user requests. This is evident from the following facts: a mid-sized data center consumes 80,000 kW [1], and computing resources are predicted to consume nearly 0.5% of the world's total power usage [2]; if current demand continues, this is projected to quadruple by 2020. From these observations, it is evident that optimizing energy consumption is an important issue for cloud services in a data center. Energy minimization can be done in two ways: (i) designing energy-efficient processors, and (ii) minimizing energy consumption through task scheduling.
To reduce energy consumption in the cloud environment, a Dynamic Voltage and Frequency Scaling (DVFS) approach is used. DVFS is a hardware mechanism that increases and decreases the frequency of CPUs depending upon the user's requirements [3], and it is widely used in modern processors. Because energy consumption is positively correlated with CPU frequency, frequency scaling can be used to control energy consumption: a processor with DVFS can dynamically change its operating frequency, leading to different levels of energy consumption. However, the lowest frequency does not always consume the least energy, because a low frequency increases the execution time of a task, and energy depends on both execution time and power. In [4, 5], the authors show that there are optimum frequencies for a given processor at which the energy consumed during the execution of a task is minimized.
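This trade-off admits a simple numerical illustration. The sketch below uses a hypothetical power model P(f) = P_static + c·f³ with made-up constants (our illustrative assumptions, not values from the paper): since a task of W cycles at frequency f takes t = W/f seconds, its energy is E(f) = P(f)·W/f, which is minimized at an interior frequency rather than at the lowest or highest setting.

```python
# Illustrative DVFS energy model (assumed constants, not from the paper):
#   P(f) = p_static + c * f**3   (static plus cubic dynamic power, watts)
#   t(f) = W / f                 (execution time for W cycles, seconds)
#   E(f) = P(f) * t(f) = p_static * W / f + c * W * f**2   (joules)

def task_energy(freq_ghz, work_gcycles=10.0, p_static=2.0, c=1.5):
    """Energy (joules) to run work_gcycles * 1e9 cycles at freq_ghz GHz."""
    t = work_gcycles / freq_ghz              # execution time in seconds
    power = p_static + c * freq_ghz ** 3     # static + dynamic power in watts
    return power * t

freqs = [0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]   # assumed DVFS levels (GHz)
energies = {f: task_energy(f) for f in freqs}
f_best = min(energies, key=energies.get)

# Neither the slowest nor the fastest setting minimizes energy: running too
# slowly lets static power dominate, running too fast inflates dynamic power.
assert freqs[0] < f_best < freqs[-1]
```

Under this model the analytic optimum is f* = (p_static / 2c)^(1/3) ≈ 0.87 GHz for the assumed constants, so the best discrete choice above lands between the extremes.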
In a data center, operating a processor at an optimal frequency results in minimum energy consumption. Generally, a user application contains a set of tasks, and if all tasks are executed at the optimum frequency, some tasks may not meet their deadlines. Therefore, efficient task allocation at a suitable frequency is a major challenge, since choosing the optimum frequency for each processor can optimize the total energy consumption. An optimal DVFS-based strategy for task allocation should minimize the overall energy consumption while meeting the required QoS (i.e., completing each task on time). A cost-constrained Heterogeneous Earliest Finish-Time (HEFT) model was studied in [6, 7], and these algorithms provide the optimum budget in different scenarios. The minimization of execution cost for scientific workflows under a deadline constraint is considered in [8], where a meta-heuristic algorithm is proposed. An investigation of servers' power consumption with different numbers of VMs and various utilization levels is discussed in [9]. Earlier studies of the 'worst-case execution cycles/time' (WCEC/WCET) were presented in [10-12], where the upper bound on execution time at the maximal frequency acts as the task workload. However, the WCET generally does not match the actual workload of a real task, which may result in a difference between the real energy savings and the theoretical results. Similarly, the authors of [13, 14] used a probability function based on the WCET of a workload to enhance the effectiveness of task scheduling.
Assessing the energy used by Virtualized Data Centers (VDCs) is an emerging technique that aims
to implement effective energy management for a virtualized computing platform [15, 16]. The main
aim is to deliver a high-quality service to a large number of clients while minimizing the
overall networking and computing energy consumption. The authors of [17] considered power
management and optimal resource allocation in a VDC switch with heterogeneous applications, but
they did not control the internal and external switching of server frequencies, which degrades
performance. In this paper, we propose the RTAES algorithm, which exploits the reconfiguration
capability of Cloud Computing-Virtualized Data Centers (CCVDCs) that process massive amounts of
data in parallel. The RTAES algorithm minimizes the computation, reconfiguration and
communication energy costs in a parallel data-processing cloud scenario while satisfying the
service level, which is expressed as the maximum allowed time to process a job (including
communication and computation times). The main contribution of this research is that we consider
the energy objective, which is a non-convex function, and propose a mathematical transformation
that converts it into a convex function. Moreover, our proposed model is easy to implement,
scalable and independent of workload scheduling.
The organization of the paper is as follows. Section 2 presents the existing state-of-the-art
methods related to our research in the cloud environment. Section 3 describes the proposed RTAES
methodology. In Section 4, we discuss the experimental results and the performance analysis of
the proposed methodology. Finally, the paper is concluded in Section 5.
2. RELATED WORK
The present cloud computing environment provides the necessary QoS to attract users from
different segments. In cloud computing, key resources such as storage, memory and CPUs are
provisioned and then leased to users/clients with a guaranteed QoS. In [18], the authors
presented a novel SLA (Service Level Agreement) framework for cloud computing. The framework
uses reinforcement learning (RL) to develop the VM hiring policy, which can be used to assure the
QoS. In [19], the authors proposed a DVFS approach that addresses the trade-off between energy
consumption levels and performance degradation. A CPU re-allocation approach for effective
energy management in real-time services is presented in [20]. Energy-aware resource provisioning
was considered in [21]; the proposed framework accounts for both server and communication-network
energy consumption in order to provide a complete solution. An energy-efficient resource
allocation model for real-time services in the cloud environment is presented in [22]. Several
VMs and hosts have been employed in order to minimize energy consumption. The scheduling of VMs
on servers to meet an energy-saving target in the cloud computing environment was discussed in
[23, 30], where a VM-scheduling approach was proposed to schedule the VMs in a cloud cluster
environment; because of its high time complexity, this model is less effective. In our previous
work [27], we measured the energy consumption of a parallel application on computational grids
based on processor clock cycles to understand energy consumption issues.
A novel scheduling approach called EVMP (Energy-aware VM-Placement) was also considered to
schedule the VMs, and it was more effective at reducing power consumption. In [8, 24], the
authors proposed meta-heuristic algorithms to minimize cost under deadline constraints. The
investigation of server power consumption with different numbers of VMs and various utilization
levels was discussed in [9]. A VM-scheduling algorithm that ensures an efficient energy budget
for each individual VM was proposed in [25]. A cost-constrained model based on Heterogeneous
Earliest Finish-Time (HEFT) is presented in [23], in which the algorithm provides the optimum
budget in various scenarios. A similar approach is proposed in [6], where the authors presented
the heterogeneous Budget Constrained Scheduling (BCS) approach, which provides an efficient cost
model by adjusting the cost-to-budget ratio within the assigned task workflow. One
multi-objective HEFT approach is the Pareto heuristic-based algorithm [7], an extension of HEFT
that optimizes cost and time simultaneously in order to provide a scheduling trade-off between
the objectives. The real-time scheduling of VMs in the cloud to reduce energy consumption was
addressed in [24]. Moreover, reducing energy consumption can also reduce the carbon dioxide
emissions (from the data center cooling system) [25].
Therefore, this study mainly focuses on VM scheduling in order to minimize the energy
consumption in a cloud environment.
3. PROPOSED RTAES ARCHITECTURE MODELING
The CCVDCs model considers several virtualized processing units that are interconnected through
a single virtual-hop network. A central controller is used to manage all the devices.
An individual processing component executes its assigned task using its computing resources and
local virtual storage. Whenever a new task request is submitted to the CCVDC, resource-availability
checks and admission control are performed by the virtual central controller. Figure 1 shows the
considered RTAES architecture, including the computing and communication system model. Multiple
VMs are configured in the CCVDC, interconnected by a limited-switched virtual local area network
(V-LAN). Here, a star-shaped V-LAN is considered, which guarantees inter-VM communication, and
the switching network in Fig. 1 acts as the central node. The VM manager generally operates the
V-LAN and VMs jointly and performs the task-scheduling activity by dynamically allocating the
available virtual computing and communication resources to VMs and virtual links.
We consider that the VMs deployed on a server can alter their service resources according to the
prototype presented in [31-33]; this prototype has been widely adopted in private-cloud
environments. The high computation demands of the model favor a few large VMs over several small
VMs. In this paper, to simplify the model, we consider a single VM placed on an individual
server, so that the VM can use all of the server's resources. On the physical server, the CPU
functions at a frequency a(i) chosen at any given time from the predefined set of frequencies
offered by the DVFS approach.
The computed energy depends upon the VM states and the CPU energy curve. Here, DVFS is applied
to the hosting physical servers in order to stretch task processing times and reduce energy
consumption by lowering the CPU frequencies of the working VMs. Each individual server can work
at various voltage levels and, accordingly, at various CPU frequencies. The set of permitted
frequencies is
a(i) ∈ {a_j(i)},  with i ∈ {1, …, Z}, j ∈ {0, 1, …, B}   (1)

where B is the number of CPU frequencies allowed at each VM, a_j(i) is the j-th discrete
frequency of VM i, and j = 0 represents its idle state.
Figure 1. Proposed RTAES architecture.
The dynamic power consumption C of the hosting CPU increases with the third power of the CPU
frequency. Therefore, the energy consumption of a generic VM (i) is

∂_FC(i) ≜ Σ_{j=0}^{B} DE_ce a_j(i)³ p_j(i)  [J],  ∀i ∈ {1, …, Z}   (2)

where p_j(i) is the time for which VM i operates at frequency a_j(i).
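Eq. (2) can be sketched directly; the constant c below stands in for DE_ce and its value is an arbitrary placeholder:

```python
# Computation energy of one VM per eq. (2): sum over frequency levels of
# c * a_j^3 * p_j, where c lumps the constant DE_ce (placeholder value).
def computation_energy(freqs_hz, times_s, c=1e-27):
    # j = 0 is the idle state: frequency 0 contributes nothing to the sum
    return sum(c * a ** 3 * p for a, p in zip(freqs_hz, times_s))
```

For example, 2 s at 1 GHz plus 3 s at 2 GHz gives c·(1e27·2 + 8e27·3) = 26 J with c = 1e-27.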
Two switching-frequency costs are considered. The first is the internal switching cost, incurred
when VM (i) changes between its discrete frequencies, from a_j(i) to a_{j+f}(i). The second is
the external switching cost, incurred when moving from the last activated discrete frequency to
the one required by the upcoming job of size G_sz. The list of discrete active frequencies of
each VM for each upcoming task is an input to the system, and each switch from one discrete
active frequency to another contributes to the switching costs. Denoting consecutive discrete
active frequencies by a_f(i) and a_{f+1}(i), the difference between them is

Δa_f(i) ≜ a_{f+1}(i) − a_f(i)   (3)
The cost of a single switch is H_c Δa_f(i)², where H_c (J/Hz²) is the cost of switching by one
unit of frequency. For homogeneous VMs, the internal switching cost over all VMs is
H_c Σ_{i=1}^{Z} Σ_{f=0}^{F} [Δa_f(i)]², where f ∈ {0, 1, …, F} and F ≤ B is the number of
discrete active frequencies at VM (i). Therefore, the total switching energy is

Σ_{i=1}^{Z} ∂_Sw(i) ≜ H_c Σ_{i=1}^{Z} Σ_{f=0}^{F} [Δa_f(i)]² + H_c Σ_{i=1}^{Z} E_c   (4)
The external switching cost E_c is obtained by multiplying H_c by the quadratic difference
between the last and the first discrete frequencies.
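A sketch of the per-VM switching cost of eqs. (3)-(4), assuming (as the equations above suggest) that H_c scales both the internal and the external terms; the H_c value in the test is arbitrary:

```python
# Switching energy per eqs. (3)-(4): internal cost H_c * sum of squared steps
# between consecutive active frequencies, plus external cost H_c * squared
# difference between the last active frequency and the first one of the next job.
def switching_energy(active_freqs, next_first_freq, h_c=1e-19):
    internal = h_c * sum((b - a) ** 2
                         for a, b in zip(active_freqs, active_freqs[1:]))
    external = h_c * (next_first_freq - active_freqs[-1]) ** 2
    return internal + external
```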
The virtual link is responsible for the communication between VM (i) (i = 1, …, Z) and the
scheduler; it operates at a transmission rate K(i) [bits/s] and is equipped with a virtual
network interface (VIF), as shown in Fig. 1. The transmission power is consumed one way, and the
power of the switching function at the i-th virtual link is

L_con(i) = L_con^T(i) + L_con^R(i)  [W]   (5)

where L_con^T(i) is the power consumed by the transmitting VIF and L_con^R(i) is the receiving
power of the switching function related to the VIF. Here, we consider that reception and
transmission consume the same power, which can be calculated as

L_con(i) = M_i (2^{K(i)/N_i} − 1) + L_idle(i)  [W]   (6)

where M_i ≜ Q_0(i) N_i / r_i [W/Hz], N_i [Hz] is the bandwidth, r_i is the positive gain and
Q_0(i) is the noise spectral power density of the i-th link. Therefore, the corresponding
one-way transmission delay is
S(i) = Σ_{j=1}^{B} A_j(i) p_j(i) / K(i)   (7)
In addition, the corresponding one-way communication energy is

∂_com(i) ≜ S(i) L_con(i)  [J]   (8)
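The link model of eqs. (5)-(8) can be sketched as follows; all numeric constants are illustrative assumptions, and A_j(i)·p_j(i) is taken here as the data volume produced at frequency level j:

```python
# One-way link power per eqs. (5)-(6): P(K) = M*(2**(K/N) - 1) + P_idle,
# with M = Q0*N/r (Q0: noise spectral power density, N: bandwidth, r: gain).
def link_power(rate_bps, bw_hz, q0=1e-9, gain=1e-3, p_idle=0.5):
    m = q0 * bw_hz / gain
    return m * (2 ** (rate_bps / bw_hz) - 1) + p_idle

# Delay and energy per eqs. (7)-(8): the bits produced at each frequency
# level (A_j * p_j) cross the link at rate K.
def comm_delay_energy(a_levels, p_times, rate_bps, power_w):
    bits = sum(a * p for a, p in zip(a_levels, p_times))
    delay = bits / rate_bps            # S(i), eq. (7)
    return delay, delay * power_w      # (S(i), E_com(i)), eq. (8)
```

Note that the exponential term makes the power, and hence the communication energy, grow sharply once the rate approaches the link bandwidth.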
Our main aim is to minimize the overall resulting computation and communication energy. This can
be written as

∂_tot ≜ Σ_{i=1}^{Z} ∂_FC(i) + Σ_{i=1}^{Z} ∂_Sw(i) + Σ_{i=1}^{Z} ∂_com(i)  [J]   (9)

where ∂_tot is the overall energy formed by the different cost functions of VM (i). Moreover,
the problem is subject to a hard constraint Ē_t on the allowed per-job execution time.
3.1. Proposed RTAES Technique for the Optimization Problem
The goal of our proposed work is to minimize the total energy consumption of the incoming
workload by choosing the optimal computational resources to complete the tasks' execution, which
depends on the optimal bandwidth and the current load level, while adopting content-dependent
switching frequencies for the individual VMs. The computational resources consist of several
physical machines (PMs), each of which comprises one or more cores, a local I/O network
interface and memory. The resulting overall communication and computation energy ∂_tot includes
∂_FC(i), ∂_Sw(i) and ∂_com(i), which are estimated for each of the Z VMs.
To formulate the optimization problem, an additional parameter E_t is introduced as a threshold
on the task's processing time. The service-level requirement is thus split into two constraints:
the network-dependent time must be less than or equal to Ē_t − E_t, and the computation time
must be less than or equal to E_t. The resulting optimization problem can be expressed as

minimize over p_j(i), K(i):  Σ_{i=1}^{Z} ∂_FC(i) + Σ_{i=1}^{Z} ∂_Sw(i) + Σ_{i=1}^{Z} ∂_com(i)   (10)
Equation (10) consists of the three energy terms. The decision variables are p_j(i), the time
for which VM (i) operates at frequency a_j(i), and K(i), the transmission rate of VM (i). The
first constraint is

G_sz = Σ_{i=1}^{Z} Σ_{j=0}^{B} A_j(i) p_j(i)   (11)
Equation (11) is a global constraint ensuring that the overall job is split into Z parallel
tasks; G_sz is the incoming job size, which must be spread over the Z VMs during computation.

Σ_{i=1}^{Z} K(i) ≤ K_t   (12)

The bandwidth inequality (12) ensures that the sum of the per-VM bandwidths does not exceed the
maximum global network bandwidth.
Σ_{j=0}^{B} p_j(i) ≤ E_t,  i = 1, …, Z   (13)

Equation (13) is the per-VM computation time constraint.
Σ_{j=0}^{B} 2 A_j(i) p_j(i) / K(i) ≤ Ē_t − E_t,  i = 1, …, Z   (14)

Equation (14) is the constraint on the data-exchange time.
0 ≤ p_j(i) ≤ E_t,  i = 1, …, Z,  j ∈ {0, …, B}   (15)

0 ≤ K(i) ≤ K_t,  ∀i ∈ {1, …, Z}   (16)
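For a single VM, constraints (13)-(16) can be checked with a short sketch (symbol names follow the paper; the numeric values used in the test are arbitrary):

```python
# Feasibility of one VM's schedule under constraints (13)-(16).
def feasible(times_s, data_levels, rate_bps, e_t, e_t_bar, k_t):
    compute_ok = sum(times_s) <= e_t                                # eq. (13)
    bits = sum(a * p for a, p in zip(data_levels, times_s))
    exchange_ok = 2 * bits / rate_bps <= e_t_bar - e_t              # eq. (14)
    bounds_ok = (all(0 <= p <= e_t for p in times_s)                # eq. (15)
                 and 0 <= rate_bps <= k_t)                          # eq. (16)
    return compute_ok and exchange_ok and bounds_ok
```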
Equation (15) guarantees that each individual computing interval is positive and less than E_t.
The final constraint (16) ensures that the control parameter K(i) (the channel communication
rate) is positive and less than the maximum network capacity. The power cost of the end-to-end
link follows from equation (8), and its contribution can be written as
Σ_{j=0}^{B} 2 L_con(i) A_j(i) p_j(i) / K(i)   (17)

where K(i) > 0, ∀i ∈ {1, …, Z}. Dividing the data-exchange constraint by the coefficient 2 gives
Σ_{j=0}^{B} 2 A_j(i) p_j(i) / K(i) ≤ Ē_t − E_t  →  Σ_{j=0}^{B} A_j(i) p_j(i) / K(i) ≤ (Ē_t − E_t) / 2   (18)
Equation (18) is obtained from equation (14). Furthermore, to simplify the optimization, we
express the second control variable K(i) in terms of the other control variable p_j(i) as
Σ_{j=0}^{B} 2 A_j(i) p_j(i) / K(i) ≤ Ē_t − E_t  →  K(i) ≥ Σ_{j=0}^{B} 2 A_j(i) p_j(i) / (Ē_t − E_t)   (19)
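Eq. (19) directly yields the minimum feasible rate for a VM; a minimal sketch:

```python
# Minimum rate per eq. (19): K(i) >= 2 * sum_j A_j(i)*p_j(i) / (E_t_bar - E_t).
def min_rate(data_levels, times_s, e_t, e_t_bar):
    bits = sum(a * p for a, p in zip(data_levels, times_s))
    return 2 * bits / (e_t_bar - e_t)
```

At exactly this rate the data-exchange constraint (14) is tight, so any smaller K(i) is infeasible.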
The outcome of equations (18) and (19) is applied to obtain the end-to-end function ∂_com(i) as
a function R(U(i); p_j(i)) of the control variables, which can be rewritten by replacing the
control variable U(i) with the other control variable p_j(i), as in eq. (20):

∂_com(i) = R(U(i); p_j(i)) ≜ V(a_j(i))   (20)

Eq. (20) expresses the adaptive communication energy of the end-to-end link, which depends on
the per-VM time variables; the resulting function V(·) makes the third term of the objective in
eq. (10) convex.
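As an illustrative numerical check (our own, with arbitrary constants, not from the paper), the one-way communication energy implied by eqs. (6)-(8) is convex in the rate K for K > 0, which is the property the convexification above relies on:

```python
# E(K) = [M*(2**(K/N) - 1) + P_idle] * bits/K; sample it on a grid and verify
# the discrete second differences are non-negative (convexity on the grid).
def comm_energy(rate, bits=1e6, bw=1e6, m=1.0, p_idle=0.1):
    return (m * (2 ** (rate / bw) - 1) + p_idle) * bits / rate

rates = [0.2e6 + 1e4 * k for k in range(300)]
vals = [comm_energy(r) for r in rates]
second_diffs = [vals[k - 1] - 2 * vals[k] + vals[k + 1]
                for k in range(1, len(vals) - 1)]
is_convex = all(d >= -1e-9 for d in second_diffs)
```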
4. RESULTS AND ANALYSIS
Over the past several years, the demand for embedded processors has grown dramatically worldwide
due to the vast usage of network equipment, information devices, portable gadgets and digital
instruments. High performance is always required from embedded equipment because it is used in
daily life, for example in multimedia digital signal processing. However, devices equipped with
embedded processors suffer from reduced efficiency due to high power consumption, and there are
performance imbalances between devices. Therefore, here we evaluate the performance of our
proposed model and existing models with respect to power consumption. Several results are
presented using our proposed RTAES algorithm, which is based on the DVFS approach. In this
section, we consider different job counts of 30, 50, 100 and 1000 to evaluate the execution
time, and the Montage scientific dataset (MSD) is adopted to validate our model. The power
consumption and execution time are evaluated using the parameters shown in Table 1 below. The
model is simulated using CloudSim on a Windows 10 64-bit machine with an Intel i5 processor and
8 GB of RAM; the code is written in Java.
4.1. Comparative Analysis
In the present technical era, cloud computing has conquered the world market in several areas,
such as trading, healthcare, the software sector and medical. Thus, the upcoming technologies are
largely based on cloud computing processor operations because of its extensive demand. The lack
of accurate resource utilization can affect their device efficiency due to their high power
consumption; therefore, to address these issues, here we introduce the RTAES algorithm that is
based upon the DVFS approach. The effective scheduling of tasks can improve users’
interactions, avoid task overloads, and enhance resource utilization and throughput. Therefore,
here, we presented the comparison of results with existing techniques with respect to the
consumption of power and operational execution time.
Table 1: Various parameters comparison for proposed technique vs DVFS using MSDX
Parameters
Total
execution
time (sec)
Energy
consumption
( 𝑊ℎ)
Power
Sum
(W)
Power
Average
(W)
DVFS
MSD-25
VM-
100
174.67 482.29 1663184 95.22
MSD-50 VM-80 387.53 944.14 2953095 76.2
MSD-100 VM-60 817.35 1839.12 4673928 57.18
MSD-
1000
VM-40 8591.39 70377.1 3.3E+07 38.16
RTAES
MSD-25
VM-
100
57.72 120 417907 72.4
MSD-50 VM-80 71.77 124.88 418104 58.25
MSD-100 VM-60 101.91 143.79 449547 44.11
MSD-
1000
VM-40 777.04 1046.14 2328420 29.96
The parameters reported in Table 1 are the total execution time, the energy consumption, the
power sum and the power average. The Montage scientific model was considered with different job
sizes: MSD-25, MSD-50, MSD-100 and MSD-1000. Moreover, each MSD was tested with a different
number of virtual machines (VMs), as shown in Table 1 above. The energy consumption of the DVFS
approach is 482.29 Wh for MSD-25, 944.14 Wh for MSD-50, 1839.12 Wh for MSD-100 and 70377.1 Wh
for MSD-1000. As shown in Table 1, the corresponding RTAES values are substantially lower: we
obtained 75%, 86%, 92% and 98% less energy consumption with respect to the DVFS approach.
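The quoted savings follow directly from the Table 1 energy columns (percentages truncated to whole numbers):

```python
# Relative energy savings of RTAES over DVFS, per Table 1:
# saving = 1 - E_RTAES / E_DVFS, truncated to a whole percent.
dvfs_wh = [482.29, 944.14, 1839.12, 70377.1]
rtaes_wh = [120.0, 124.88, 143.79, 1046.14]
savings_pct = [int(100 * (1 - r / d)) for d, r in zip(dvfs_wh, rtaes_wh)]
```

This reproduces the 75%, 86%, 92% and 98% figures quoted above.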
The total simulation times of the existing approach for the different MSDs are 174.67, 387.53,
817.35 and 8591.39 sec; our proposed model reduces these times by 66%, 81%, 87% and 90%,
respectively. Table 2 presents the average execution time (sec) of the proposed technique
compared with other existing techniques using the MSD. The proposed RTAES approach achieves
average execution times of 2.3 sec for MSD-25, 1.4 sec for MSD-50, 1.02 sec for MSD-100 and
0.77 sec for MSD-1000, which are substantially lower than those of existing techniques such as
EMOA [26] and DVFS.
Table 2: Average execution time (s) comparison of the proposed technique with other existing
techniques using MSD

  Datasets    Number of nodes   EMOA [26]   DVFS      RTAES
  MSD-25      30                8.44        6.9868    2.3088
  MSD-50      50                9.78        7.7506    1.4354
  MSD-100     100               10.58       8.1735    1.0191
  MSD-1000    1000              11.36       8.59139   0.77704
4.2. Graphical Representation
Here, we present our results in graphical form. Figures 2-6 compare the existing DVFS model and
the proposed RTAES approach using the Montage scientific model datasets for different numbers of
nodes and VMs: Figure 2 shows the total simulation time, Figure 3 the energy consumption,
Figure 4 the power sum, Figure 5 the power average, and Figure 6 the average execution time (s).
Figure 2: Total simulation time comparison of the existing and the proposed model using MSD
Figure 3: Energy consumption comparison of the existing and the proposed model using MSD
Figure 4: Power sum comparison of the existing and the proposed model using MSD
Figure 5: Power average comparison of the existing and the proposed model using MSD
Figure 6: Average execution time (sec) comparison of the existing techniques and the proposed model
using MSD
5. CONCLUSION
Task scheduling that minimizes the energy consumption of individual VMs is of essential
significance. In this paper, we proposed the real-time adaptive energy-scheduling (RTAES)
algorithm, which exploits the reconfiguration capability of cloud computing-virtualized data
centers (CCVDCs) that process massive amounts of application data. The RTAES algorithm, which is
based on DVFS, minimizes the energy consumed by parallel data processing in a cloud computing
environment. We minimized three cost functions (communication, computational and switching
costs) to improve the system's performance. The modeling details were presented above. The
Montage scientific dataset (MSD) was used to validate our model against other existing
techniques, and the results were reported in terms of the total execution time, the decrease in
power consumption and the average power required for processor operation. Using our proposed
RTAES model, the total simulation times are 57.72 sec for MSD-25, 71.77 sec for MSD-50, 101.91
sec for MSD-100 and 777.04 sec for MSD-1000, values that are substantially lower than those of
the existing DVFS technique. Similarly, the energy consumption values of the RTAES model are
120 Wh for MSD-25, 124.88 Wh for MSD-50, 143.79 Wh for MSD-100 and 1046.14 Wh for MSD-1000,
again substantially lower than those of the existing DVFS technique. These results demonstrate
the effectiveness of the proposed RTAES model in terms of power consumption, performance and
execution time with respect to other existing approaches.
REFERENCES
[1] Lizhe Wang, Gregor von Laszewski, Jai Dayal, Thomas R. Furlani, Thermal aware workload
scheduling with backfilling for green data centers, in: IPCCC, 2009, pp. 289–296.
[2] William Forrest, How to cut data centre carbon emissions? Website, December 2008.
[3] Yongpan Liu et al. “Thermal vs energy optimization for dvfs-enabled processors in embedded
systems”. In: Quality Electronic Design, 2007. ISQED’07. 8th International Symposium on. IEEE.
2007, pp. 204–209.
[4] J.-J. Chen and C.-F. Kuo, ``Energy-efficient scheduling for real-time systems on dynamic voltage
scaling (DVS) platforms,'' in Proc. IEEE RTCSA, Aug. 2007, pp. 28-38.
[5] V. Devadas and H. Aydin, ``Coordinated power management of periodic real-time tasks on chip
multiprocessors,'' in Proc. Green Comput. Conf., 2010, pp. 61-72.
[6] Arabnejad, H., Barbosa, J.G.: A budget constrained scheduling algorithm for workflow applications.
J. Grid Comput. 12(4), 15 (2014)
[7] Durillo, J.J., Prodan, R.: Multi-objective workflow scheduling in amazon ec2. Cluster Comput. 17(2),
169–189 (2014)
[8] Q. Wu, F. Ishikawa, Q. Zhu, Y. Xia and J. Wen, "Deadline-Constrained Cost Optimization
Approaches for Workflow Scheduling in Clouds," in IEEE Transactions on Parallel and Distributed
Systems, vol. 28, no. 12, pp. 3401-3412, Dec. 2017.
[9] S. Chinprasertsuk and S. Gertphol, "Power model for virtual machine in cloud computing," 2014 11th
International Joint Conference on Computer Science and Software Engineering (JCSSE), Chon Buri,
2014, pp. 140-145.
[10] T. AlEnawy et al., ``Energy-aware task allocation for rate monotonic scheduling,'' in Proc. IEEE
RTAS, Apr. 2005, pp. 213-223.
[11] C.-Y. Yang, J.-J. Chen, T.-W. Kuo, and L. Thiele, ``An approximation scheme for energy- efficient
scheduling of real-time tasks in heterogeneous multiprocessor systems,'' in Proc. DATE, 2009, pp.
694-699.
[12] E. Seo, J. Jeong, S. Park, and J. Lee, ``Energy efficient scheduling of real-time tasks on
multicore processors,'' IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 11, pp. 1540-1552, Nov. 2008.
[13] C. Xian and Y.-H. Lu, ``Dynamic voltage scaling for multitasking real-time systems with uncertain
execution time,'' in Proc. ACM GLSVLSI, 2006, pp. 392-397.
[14] C. Xian, Y.-H. Lu, and Z. Li, ``Energy-aware scheduling for real-time multiprocessor systems with
uncertain task execution time,'' in Proc. ACM DAC, 2007, pp. 664-669.
[15] Baliga, J, Ayre, R. W., Hinton, K., and Tucker, R. 2011. Green cloud computing: Balancing energy in
processing, storage, and transport. Proceedings of the IEEE, 2011. 99(1):149–167.
[16] Canali, C. and Lancellotti, R. Parameter Tuning for Scalable Multi-Resource Server Consolidation in
Cloud Systems. Communications Software and Systems, 2016.
[17] Urgaonkar, R., Kozat, U. C., Igarashi, K., and Neely, M. J. Dynamic resource allocation and power
management in virtualized data centers. In NOMS, 2010, pp 479–486. IEEE.
[18] A. Alsarhan, A. Itradat, A. Y. Al-Dubai, A. Y. Zomaya and G. Min, "Adaptive Resource Allocation
and Provisioning in Multi-Service Cloud Environments," in IEEE Transactions on Parallel and
Distributed Systems, vol. 29, no. 1, pp. 31-42, Jan. 1 2018.
[19] P. Arroba, J. M. Moya, J. L. Ayala and R. Buyya, "DVFS-Aware Consolidation for Energy-Efficient
Clouds," 2015 International Conference on Parallel Architecture and Compilation (PACT), San
Francisco, CA, 2015, pp. 494-495.
[20] W. Chawarut and L. Woraphon, "Energy-aware and real-time service management in cloud
computing," 2013 10th International Conference on Electrical Engineering/Electronics, Computer,
Telecommunications and Information Technology, Krabi, 2013, pp. 1-5.
[21] H. Faragardi, A. Rajabi, K. Sandström and T. Nolte, "EAICA: An energy-aware resource
provisioning algorithm for Real-Time Cloud services," 2016 IEEE 21st International Conference on
Emerging Technologies and Factory Automation (ETFA), Berlin, 2016, pp.1-10.
[22] Z. Bagheri and K. Zamanifar, "Enhancing energy efficiency in resource allocation for real-time cloud
services," Telecommunications (IST), 2014 7th International Symposium on, Tehran, 2014, pp. 701-
706.
[23] Zheng, W., Sakellariou, R.: Budget-deadline constrained workflow planning for admission control. J.
Grid Comput. 11(4), 105–119, 2013.
[24] T. Knauth and C. Fetzer, “Energy-aware scheduling for infrastructure clouds,” in Cloud Computing
Technology and Science (CloudCom), 2012,IEEE 4th International Conference on. IEEE, 2012, pp.
58–65.
[25] S. K. Garg, C. S. Yeo, and R. Buyya, “Green cloud framework for improving carbon efficiency of
clouds,” in Euro-Par 2011 Parallel Processing. Springer, 2011, pp. 491–502.
[26] Z. Zhu, G. Zhang, M. Li and X. Liu, "Evolutionary Multi-Objective Workflow Scheduling in Cloud,"
in IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 5, pp. 1344-1357, May 1 2016.
[27] D.B. Srinivas, R.P. Puneeth, M.A. Rajan, H.A. Sanjay, An integrated framework to measure the
energy consumption of a parallel application on a computational grid, International Journal of
Computer Science and Network Security, vol. 16, no. 7, pp. 94-98, 2016.