Energy consumption is a critical design issue in real-time systems, especially battery-operated ones. Maintaining high performance while extending battery life between charges is a challenging problem for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow the supply voltage and processor frequency to be adjusted to the workload demand for better energy management. Higher processor voltage and frequency usually lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have recently been developed to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy-efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme. It also considers the run-time behaviour of tasks, to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) Slack Distribution Technique. The proposed algorithm achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
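As a rough illustration of why lowering voltage and frequency saves energy (a sketch, not taken from the paper; the capacitance value and operating points are invented), dynamic CMOS power scales approximately as P = C * V^2 * f, so the energy for a fixed amount of work scales with V^2:

```python
# Illustrative sketch: dynamic power of a CMOS processor scales roughly
# as P = C * V^2 * f, so running a task slower at a lower voltage can
# cut energy even though it takes longer. All numbers are hypothetical.

def task_energy(cycles, voltage, freq_hz, capacitance=1e-9):
    """Energy in joules to execute `cycles` at a given operating point."""
    power = capacitance * voltage**2 * freq_hz   # dynamic power, watts
    runtime = cycles / freq_hz                   # seconds
    return power * runtime                       # = C * V^2 * cycles

cycles = 2_000_000_000                           # 2e9 cycles of work
e_fast = task_energy(cycles, voltage=1.2, freq_hz=2.0e9)  # high point
e_slow = task_energy(cycles, voltage=0.9, freq_hz=1.0e9)  # low point

print(f"fast: {e_fast:.2f} J, slow: {e_slow:.2f} J")
# The slow point uses less energy but doubles the runtime, which is why
# a scheduler must check deadlines before scaling down.
```

The runtime term cancels the frequency in this simple model, leaving energy proportional to V^2 per cycle; the scheduling problem is deciding when the longer execution time still fits within deadlines.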
Energy efficient task scheduling algorithms for cloud data centers (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Abstract
The lifetime of Wireless Sensor Networks (WSNs) has always been an important issue and has received increased attention in recent years. Typically, wireless sensor nodes are equipped with low-power batteries that are impractical to recharge. A wireless sensor network must have enough energy to satisfy the requirements of its applications. In this paper, we propose an Energy Efficient Routing and Fault Node Replacement (EERFNR) algorithm to extend the lifetime of a wireless sensor network, reduce data loss, and reduce the sensor node replacement cost. The transmission and node-loading problems are solved by adding relay nodes and organizing the sensor nodes' routing using hierarchical Gradient Diffusion. A sensor node can save backup nodes to reduce the energy spent re-searching for a route once its routing is broken. A genetic algorithm calculates which sensor nodes to replace, applying the most available routing paths while replacing the fewest sensor nodes.
Keywords: Genetic algorithm, hierarchical gradient diffusion, grade diffusion, wireless sensor networks
A survey on dynamic energy management at virtualization level in cloud data centers (csandit)
Data centers have become indispensable infrastructure for data storage and for supporting the diverse network services and applications offered by the cloud. The rapid growth of these applications and services imposes resource demands that result in increased energy consumption. This necessitates efficient energy management techniques in data centers, not only to lower operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Although energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
DYNAMIC VOLTAGE SCALING FOR POWER CONSUMPTION REDUCTION IN REAL-TIME MIXED TA... (cscpconf)
Reducing energy consumption without any deadline miss is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) is a technique that reduces processor power consumption by exploiting the various operating points provided by a DVS-capable processor. These operating points are pairs of voltage and frequency, and an operating point can be selected based on the system load at a particular point in time. In this work DVS is applied to both periodic and sporadic tasks, and an average of 40% of energy is saved. Processor energy consumption is further reduced by 2-10% by reducing the number of pre-emptions and frequency switches.
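Load-based operating-point selection as described above can be sketched as picking the slowest point at which the current workload still meets its deadline. The operating points and workload figures below are hypothetical:

```python
# Sketch: choose the lowest (voltage, frequency) pair that still
# finishes the pending work before its deadline. Points are sorted
# slow -> fast; all values are invented for illustration.

OPERATING_POINTS = [  # (voltage_V, freq_MHz)
    (0.8, 500), (1.0, 1000), (1.1, 1500), (1.2, 2000),
]

def select_point(work_mcycles, deadline_ms):
    """Return the lowest-power point that finishes `work_mcycles` in time."""
    for volt, mhz in OPERATING_POINTS:
        exec_ms = work_mcycles / mhz * 1000.0  # Mcycles / MHz -> seconds
        if exec_ms <= deadline_ms:
            return volt, mhz
    return OPERATING_POINTS[-1]  # overloaded: run flat out

print(select_point(800, 1000))   # light load picks a low point
print(select_point(1900, 1000))  # heavy load needs the top point
```

A real DVS scheduler would also account for the switching overhead noted in the abstract, since each frequency change costs time and energy.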
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING (ijdpsjournal)
Cloud computing has become an ideal computing paradigm for scientific and commercial applications, and the increased availability of cloud models and allied developing models makes the cloud computing environment easier to use. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based task-allocation strategy can minimize overall energy consumption and meet the required QoS. However, existing strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the energy and time consumed during computation, reconfiguration and communication. Our proposed model confirms the effectiveness of its implementation, scalability, power consumption and execution time with respect to other existing approaches.
Optimization of energy consumption in cloud computing datacenters (IJECEIAES)
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters with substantial energy demands for their operation. This work investigates the optimization of energy consumption in a cloud datacenter using energy-efficient allocation of tasks to resources. It seeks to develop formal optimization models that minimize the energy consumption of computational resources, and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models save up to 38% in energy consumption compared to common techniques such as round robin. Furthermore, results also show that generic ILP solvers outperform SAT-based ILP solvers, especially as the number of tasks and resources grows.
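The 0-1 assignment model the abstract describes can be sketched in miniature. This brute-force enumeration stands in for a real ILP solver, and the power figures, task loads and core count are invented for illustration:

```python
# Toy version of a 0-1 task-to-core assignment: binary choices place
# each task on a core; the objective charges P_ACTIVE per unit of load
# plus P_IDLE for every powered-on core, subject to a per-core capacity
# constraint. Real instances would go to an ILP or SAT solver.
from itertools import product

TASK_LOAD = [0.5, 0.4, 0.3]      # CPU fraction each task needs (invented)
N_CORES = 2
P_ACTIVE, P_IDLE = 10.0, 4.0     # watts per unit load / per on core

def best_assignment():
    best = None
    # Enumerate every mapping of tasks to cores (the 0-1 variables).
    for assign in product(range(N_CORES), repeat=len(TASK_LOAD)):
        load = [0.0] * N_CORES
        for task, core in enumerate(assign):
            load[core] += TASK_LOAD[task]
        if any(l > 1.0 for l in load):   # capacity constraint
            continue
        power = sum(P_ACTIVE * l + (P_IDLE if l > 0 else 0) for l in load)
        if best is None or power < best[0]:
            best = (power, assign)
    return best

power, assign = best_assignment()
print(power, assign)
```

Consolidating load onto fewer cores avoids paying the idle-power term for extra cores, which is the intuition behind the solver-based savings the paper reports.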
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUD (ijcax)
Cloud computing gives business customers the facility to scale their resource usage up and down based on need, thanks to virtualization technology. The scheduling objectives are to improve the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we employ virtualization and rolling-horizon optimization with a vertical scheduling operation. The project considers the Cluster Scoring Based Task Scheduling (CSBTS) algorithm, which aims to decrease task completion time; the policies for VM creation, migration and cancellation dynamically adjust the scale of the cloud while meeting the real-time requirements and saving energy.
A tricky task scheduling technique to optimize time cost and reliability in m... (eSAT Publishing House)
A hybrid approach for scheduling applications in cloud computing environment (IJECEIAES)
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage and scientific resources across regions, and researchers consider it a popular platform for new applications. Cloud computing performance is heavily based on the job scheduling algorithms used to manage queue waiting in modern scientific applications; these algorithms help design efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance cloud computing performance and reduce the delay time of jobs waiting in the queue. The proposed algorithm tries to avoid some significant challenges that throttle the development of cloud computing applications. Our experimental results show that the proposed scheme delivers outstanding improvement rates, with a reduction in waiting time for jobs in the queue list.
A SURVEY: TO HARNESS AN EFFICIENT ENERGY IN CLOUD COMPUTING (ijujournal)
Cloud computing affords huge potential for dynamism, flexibility and cost-effective IT operations. It requires many tasks to be executed by the provided resources to achieve good performance, the shortest response time and high resource utilization. Meeting these challenges requires a new energy-aware scheduling algorithm that produces an appropriate task-allocation map to optimize energy consumption. This study surveys the existing techniques, which mainly focus on reducing energy consumption.
An energy optimization with improved QOS approach for adaptive cloud resources (IJECEIAES)
In recent times, the utilization of cloud computing VMs in our day-to-day life has been greatly enhanced by the ample use of digital applications, network appliances, portable gadgets, information devices, etc. Numerous schemes, such as multimedia signal processing methods, can be implemented on these VMs, so efficient performance of the VMs becomes an obligatory constraint, precisely for these multimedia signal processing methods. However, high energy consumption and reduced efficiency of cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in much less time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its various costs, as well as efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of the proposed ACRR model in terms of average run time, power consumption and average power required, compared with other state-of-the-art techniques.
Energy harvesting earliest deadline first scheduling algorithm for increasing... (IJECEIAES)
In this paper, a new approach to energy minimization in energy harvesting real-time systems is investigated. The lifetime of a real-time system depends upon its battery life, and energy is a parameter through which that lifetime can be extended. To work continuously, energy harvesting is used as a regular source of energy. EDF (Earliest Deadline First) is a traditional real-time task scheduling algorithm, and DVS (Dynamic Voltage Scaling) is used to reduce energy consumption. We propose an Energy Harvesting Earliest Deadline First (EH-EDF) scheduling algorithm for increasing the lifetime of real-time systems, using DVS to reduce energy consumption, EDF for task scheduling, and energy harvesting as the regular energy supply. Our experimental results show that the proposed approach performs better at reducing energy consumption and increases system lifetime compared with existing approaches.
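The EDF policy that EH-EDF builds on always runs the ready task whose absolute deadline is nearest. A minimal non-preemptive sketch (the task names and timing values are hypothetical, and the paper's harvesting and DVS logic is not reproduced):

```python
# Minimal non-preemptive EDF dispatcher: a min-heap keyed on absolute
# deadline always yields the most urgent ready job next.
import heapq

def edf_schedule(tasks):
    """tasks: list of (name, exec_ms, deadline_ms). Returns run order."""
    heap = [(deadline, name, exec_ms) for name, exec_ms, deadline in tasks]
    heapq.heapify(heap)                 # orders by deadline first
    order = []
    while heap:
        deadline, name, exec_ms = heapq.heappop(heap)
        order.append(name)              # run this job to completion
    return order

print(edf_schedule([("sense", 5, 30), ("log", 10, 100), ("tx", 8, 20)]))
# -> ['tx', 'sense', 'log']: tx has the earliest deadline (20 ms)
```

EH-EDF layers two decisions on top of this ordering: how slowly each job may run under DVS, and whether the harvested energy budget allows it.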
DYNAMIC TASK SCHEDULING ON MULTICORE AUTOMOTIVE ECUS (VLSICS Design)
Automobile manufacturers are bound by stringent government regulations on safety and fuel emissions, and are motivated to add more advanced features and sophisticated applications to the existing electronic system. Ever-increasing customer demand for a high level of comfort also necessitates even more sophistication in the vehicle electronics system. All of this directly makes the vehicle software system more complex and computationally intensive, which in turn demands very high computational capability from the microprocessor used in the electronic control unit (ECU). Multicore processors have therefore already been implemented in some of the more task-intensive ECUs, such as power train, image processing and infotainment. To obtain the greatest performance from these multicore processors, parallelized ECU software must be efficiently scheduled by the underlying operating system, so that all computational cores are utilized to the maximum extent possible while the real-time constraints are met. In this paper, we propose a dynamic task scheduler for a multicore engine control ECU that provides maximum CPU utilization, minimized preemption overhead and minimum average waiting time, with all tasks meeting their real-time deadlines, when compared to the static priority scheduling suggested by the Automotive Open Systems Architecture (AUTOSAR).
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... (IJECEIAES)
Embedded systems with multicore processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multicore systems with effective CPU usage and lower response time. Task priorities are varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core Multi-Level Feedback Queue (MCMLFQ) to manage task execution in the multicore system.
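The multilevel feedback queue idea underlying MCMLFQ can be sketched as follows (the paper's RL-based priority assignment is not reproduced; the queue count, quanta and task bursts below are invented): a task that consumes its whole time slice is demoted to a lower-priority queue, so short interactive tasks finish quickly.

```python
# Single-core multilevel feedback queue sketch: tasks start at the top
# priority level and are demoted each time they exhaust their quantum.
from collections import deque

QUANTUM_MS = [10, 20, 40]                 # time slice per priority level

def run_mlfq(tasks):
    """tasks: {name: burst_ms}. Returns names in completion order."""
    queues = [deque(), deque(), deque()]
    for name, burst in tasks.items():
        queues[0].append((name, burst))   # everyone starts at top priority
    done = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        remaining -= QUANTUM_MS[level]
        if remaining <= 0:
            done.append(name)
        else:                             # used its full slice: demote
            queues[min(level + 1, 2)].append((name, remaining))
    return done

print(run_mlfq({"ui": 8, "batch": 60, "io": 15}))
# -> ['ui', 'io', 'batch']: the long batch job sinks to lower levels
```

In the paper's setting, a learned policy would adjust these priorities across cores rather than relying on fixed demotion alone.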
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing becomes an ideal computing paradigm for scientific and commercial applications. The
increased availability of the cloud models and allied developing models creates easier computing cloud
environment. Energy consumption and effective energy management are the two important challenges in
virtualized computing platforms. Energy consumption can be minimized by allocating computationally
intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling
(DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the
required QoS. However, they do not control the internal and external switching to server frequencies,
which causes the degradation of performance. In this paper, we propose the Real Time Adaptive EnergyScheduling (RTAES) algorithm by manipulating the reconfiguring proficiency of Cloud ComputingVirtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm
minimizes consumption of energy and time during computation, reconfiguration and communication. Our
proposed model confirms the effectiveness of its implementation, scalability, power consumption and
execution time with respect to other existing approaches.
Optimization of energy consumption in cloud computing datacenters IJECEIAES
Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure and services. This has led to the establishment of datacenters that have substantial energy demands for their operation. This work investigates the optimization of energy consumption in cloud datacenter using energy efficient allocation of tasks to resources. The work seeks to develop formal optimization models that minimize the energy consumption of computational resources and evaluates the use of existing optimization solvers in testing these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers’ CPUs while meeting a set of constraints. Next, we use these models to carry out a detailed performance comparison between a selected set of Generic ILP and 0-1 Boolean satisfiability based solvers in solving the ILP formulations. Simulation results indicate that in some cases the developed models have saved up to 38% in energy consumption when compared to common techniques such as round robin. Furthermore, results also showed that generic ILP solvers had superior performance when compared to SAT-based ILP solvers especially as the number of tasks and resources grow in size.
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUDijcax
Cloud computing provides the facility for the business customers to scale up and down their resource usage
based on needs. This is because of the virtualization technology. The scheduling objectives are to improve
the system’s schedule ability for the real-time tasks and to save energy. To achieve the objectives, we
employed the virtualization technique and rolling-horizon optimization with vertical scheduling operation.
The project considers Cluster Scoring Based Task Scheduling (CSBTS) algorithm which aims to decrease
task’s completion time and the policies for VM’s creation, migration and cancellation are to dynamically
adjust the scale of cloud in a while meets the real-time requirements and to save energy.
A tricky task scheduling technique to optimize time cost and reliability in m... (eSAT Publishing House)
A hybrid approach for scheduling applications in cloud computing environment (IJECEIAES)
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage and scientific resources between regions. Cloud computing performance is heavily based on the job scheduling algorithms used to manage queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new applications. These scheduling algorithms help design efficient queues in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce the delay jobs spend waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hold back the development of cloud computing applications. Our experimental results show that the proposed job scheduling algorithm achieves outstanding enhancement rates, with a reduction in queue waiting time for jobs.
A SURVEY: TO HARNESS AN EFFICIENT ENERGY IN CLOUD COMPUTING (ijujournal)
Cloud computing affords huge potential for dynamism, flexibility and cost-effective IT operations. Cloud computing requires many tasks to be executed by the provided resources to achieve good performance, the shortest response time and high utilization of resources. To meet these challenges, there is a need to develop a new energy-aware scheduling algorithm that produces an appropriate task-allocation map to optimize energy consumption. This survey covers the existing techniques, which mainly focus on reducing energy consumption.
An energy optimization with improved QOS approach for adaptive cloud resources (IJECEIAES)
In recent times, the utilization of cloud computing VMs has been greatly enhanced in our day-to-day life due to the ample use of digital applications, network appliances, portable gadgets, information devices, etc. On these cloud computing VMs, numerous different schemes can be implemented, such as multimedia signal processing methods. Thus, efficient performance of these cloud computing VMs becomes an obligatory constraint, precisely for these multimedia signal processing methods. However, large energy consumption and reduced efficiency of these cloud computing VMs are the key issues faced by cloud computing organizations. Therefore, we have introduced a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption and performs operations in very little time. We demonstrate an efficient resource allocation and utilization technique that optimizes the model by reducing its different costs, as well as efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of our proposed ACRR model in terms of average run time, power consumption and average power required over other state-of-the-art techniques.
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTING (ijdpsjournal)
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied developing models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which causes performance degradation. In this paper, we propose the Real Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration proficiency of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the consumption of energy and time during computation, reconfiguration and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption and execution time with respect to other existing approaches.
Energy harvesting earliest deadline first scheduling algorithm for increasing... (IJECEIAES)
In this paper, a new approach to energy minimization in energy-harvesting real-time systems is investigated. The lifetime of a real-time system depends on its battery life, so energy is a parameter through which the lifetime of the system can be extended. To work continuously, energy harvesting is used as a regular source of energy. EDF (Earliest Deadline First) is a traditional real-time task scheduling algorithm, and DVS (Dynamic Voltage Scaling) is used to reduce energy consumption. We propose an Energy Harvesting Earliest Deadline First (EH-EDF) scheduling algorithm for increasing the lifetime of real-time systems, using DVS to reduce energy consumption and EDF for task scheduling, with energy harvesting as the regular energy supply. Our experimental results show that the proposed approach performs better at reducing energy consumption and increases system lifetime compared with existing approaches.
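EDF, referenced in the abstract above, always dispatches the ready task whose absolute deadline is nearest. A minimal single-CPU, non-preemptive sketch (the task set is invented for the example; real EDF implementations are preemptive):

```python
import heapq

def edf_schedule(tasks):
    """Simulate non-preemptive EDF on one CPU.
    tasks: list of (release, exec_time, deadline) tuples.
    Returns (completion order, whether every deadline was met)."""
    tasks = sorted(tasks)                      # order by release time
    ready, order, t, i, ok = [], [], 0, 0, True
    while i < len(tasks) or ready:
        # Admit every task released by the current time t.
        while i < len(tasks) and tasks[i][0] <= t:
            r, c, d = tasks[i]
            heapq.heappush(ready, (d, r, c))   # heap keyed on deadline
            i += 1
        if not ready:                          # CPU idle until next release
            t = tasks[i][0]
            continue
        d, r, c = heapq.heappop(ready)         # earliest deadline first
        t += c                                 # run the task to completion
        ok &= t <= d                           # record any deadline miss
        order.append((r, c, d))
    return order, ok

print(edf_schedule([(0, 2, 7), (1, 1, 3), (2, 2, 10)]))
```

In an EH-EDF-style scheme, the DVS knob would additionally stretch each `exec_time` to match the energy currently harvestable, as long as the deadline check above still passes.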
DYNAMIC TASK SCHEDULING ON MULTICORE AUTOMOTIVE ECUS (VLSICS Design)
Automobile manufacturers are bound by stringent government regulations for safety and fuel emissions, and are motivated to add more advanced features and sophisticated applications to the existing electronic system. Ever-increasing customer demand for a high level of comfort also necessitates even more sophistication in the vehicle electronics system. All of this directly makes the vehicle software system more complex and computationally more intensive, which in turn demands very high computational capability from the microprocessor used in the electronic control unit (ECU). In this regard, multicore processors have already been adopted in some of the most task-intensive ECUs, such as powertrain, image processing and infotainment. To achieve greater performance from these multicore processors, parallelized ECU software needs to be efficiently scheduled by the underlying operating system so that all computational cores are utilized to the maximum extent possible and the real-time constraints are met. In this paper, we propose a dynamic task scheduler for a multicore engine control ECU that provides maximum CPU utilization, minimized preemption overhead and minimum average waiting time, with all tasks meeting their real-time deadlines, compared to the static priority scheduling suggested by the Automotive Open Systems Architecture (AUTOSAR).
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... (IJECEIAES)
Embedded systems with multicore processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multicore systems with effective CPU usage and lower response time. The priority of the tasks is varied dynamically to ensure fairness, with reinforcement learning based priority assignment and a Multi Core Multi-Level Feedback Queue (MCMLFQ) to manage task execution in the multicore system.
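The multilevel feedback queue underlying MCMLFQ demotes a task to a lower-priority level whenever it exhausts its time quantum, so short tasks finish early while long tasks sink. A single-core toy sketch (the quanta and task set are invented, and the RL priority assignment is omitted):

```python
from collections import deque

def mlfq(tasks, quanta=(2, 4, 8)):
    """Toy multilevel feedback queue on one core.
    tasks: {name: cpu_burst}; quanta: time slice per priority level.
    A task that exhausts its quantum is demoted one level; returns the
    order in which tasks complete."""
    levels = [deque(tasks.items())] + [deque() for _ in quanta[1:]]
    done = []
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)   # highest non-empty
        name, remaining = levels[lvl].popleft()
        remaining -= quanta[lvl]                           # run one quantum
        if remaining <= 0:
            done.append(name)                              # task finished
        else:                                              # demote (or stay at bottom)
            levels[min(lvl + 1, len(levels) - 1)].append((name, remaining))
    return done

print(mlfq({"A": 3, "B": 2, "C": 9}))
```

The short task B completes within one top-level quantum, while the long task C is demoted twice before finishing, which is the fairness/response-time trade-off the abstract's RL component then tunes.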
A SURVEY ON DYNAMIC ENERGY MANAGEMENT AT VIRTUALIZATION LEVEL IN CLOUD DATA C... (cscpconf)
Data centers have become indispensable infrastructure for data storage and for facilitating the development of the diversified network services and applications offered by the cloud. Rapid development of these applications and services imposes various resource demands that result in increased energy consumption. This necessitates the development of efficient energy management techniques in the data center, not only to reduce operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Although energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more to energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
AN EFFICIENT HYBRID SCHEDULER USING DYNAMIC SLACK FOR REAL-TIME CRITICAL TASK... (ijesajournal)
Task-intensive electronic control units (ECUs) in the automotive domain, equipped with multicore processors, real-time operating systems (RTOSs) and various application software, should perform efficiently and time-deterministically. The parallel computational capability offered by this multicore hardware can only be exploited and utilized if the ECU application software is parallelized. Given such parallelized software, the real-time operating system's scheduler component should schedule the time-critical tasks so that all computational cores are utilized to a great extent and the safety-critical deadlines are met. As original equipment manufacturers (OEMs) are always motivated to add more sophisticated features to existing ECUs, a large number of task sets must be effectively scheduled for execution within bounded time limits. In this paper, a hybrid scheduling algorithm is proposed that meticulously calculates the running slack of every task and estimates the probability of meeting its deadline, either in the same partitioned queue or by migrating to another. The algorithm was run and tested using a scheduling simulator with different real-time task models of periodic tasks, and compared with the existing static priority scheduler suggested by the Automotive Open Systems Architecture (AUTOSAR). The performance parameters considered are the percentage of core utilization, average response time and task deadline miss rate. It has been verified that the proposed algorithm shows considerable improvement over the existing partitioned static priority scheduler on each of these performance parameters.
Energy efficient task scheduling algorithms for cloud data centers (eSAT Journals)
Abstract: Cloud computing is a modern technology comprising a network of systems that form a cloud. Energy conservation is one of the major concerns in cloud computing. A large amount of energy is wasted by computers and other devices, and carbon dioxide is released into the atmosphere, polluting the environment. Green computing is an emerging technology that focuses on preserving the environment by reducing various kinds of pollution, including excessive greenhouse-gas emissions and the disposal of e-waste, which lead to the greenhouse effect. Pollution therefore needs to be reduced by lowering energy usage, but without reducing resource utilization: with less energy, maximum resource utilization should still be possible. For this purpose, many green task scheduling algorithms are used so that the energy consumption in servers of cloud data centers can be minimized. In this paper, the ESF-ES algorithm is developed, which focuses on minimizing energy consumption by minimizing the number of servers used. A comparison is made with hybrid algorithms and the most-efficient-server-first scheme.
Keywords: Cloud computing, Green computing, Energy-efficiency, Green data centers, Task scheduling.
An optimized cost-based data allocation model for heterogeneous distributed ... (IJECEIAES)
Continuous attempts have been made to improve the flexibility and effectiveness of distributed computing systems. Extensive effort in the fields of connectivity technologies, network programs, high-performance processing components and storage helps to improve results. However, concerns such as slow response, long execution time and long completion time have been identified as stumbling blocks that hinder performance and require additional attention. These defects increase the total system cost and make the data allocation procedure difficult for a geographically dispersed setup. The load-based architectural model has been strengthened to improve data allocation performance. To do this, an abstract job model is employed, and a data query file containing input data is processed on a directed acyclic graph. Jobs are executed on the processing engine with the lowest execution cost, and the system's total cost is computed by summing the costs of communication, computation and network. The total cost of the system is then reduced using a swarm intelligence algorithm. In heterogeneous distributed computing systems, the suggested approach attempts to reduce the system's total cost and improve data distribution. According to simulation results, the technique efficiently lowers total system cost and optimizes partitioned data allocation.
Achieving Energy Proportionality In Server Clusters (CSCJournals)
Energy proportionality in server clusters has attracted a great amount of interest in the past few years. Energy proportionality is the principle that energy consumption should be proportional to the system workload; energy-proportional design can effectively improve the energy efficiency of computing systems. In this paper, an energy-proportional model is proposed based on queuing theory and service differentiation in server clusters, which can provide controllable and predictable quantitative control over power consumption with theoretically guaranteed service performance. A further study of the transition overhead is carried out, and a corresponding strategy is proposed to compensate for the performance degradation it causes. The model is evaluated via extensive simulations and is validated against a real workload data trace. The results show that our model can achieve satisfactory service performance while still preserving energy efficiency in the system.
A Prolific Scheme for Load Balancing Relying on Task Completion Time (IJECEIAES)
In networks with a lot of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in networks with a large amount of computation is load balancing. Load is the number of tasks 't' performed by a computation system, and can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning load between nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of load to all the nodes. A load balancing method should guarantee that each node in a network performs an almost equal amount of work, pertinent to its capacity and availability of resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient Task Subtraction). The algorithm selects appropriate nodes for each task, improves the utilization of computing resources, and preserves neutrality in assigning load to the nodes in the network.
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AND RESOURCE CONSTRAINTS
International Journal of Computer Science, Engineering and Applications (IJCSEA) Vol.2, No.2, April 2012
DOI : 10.5121/ijcsea.2012.2216
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AND RESOURCE CONSTRAINTS

Santhi Baskaran¹ and P. Thambidurai²

¹Department of Information Technology, Pondicherry Engineering College, Puducherry, India
santhibaskaran@pec.edu

²Department of Computer Science and Engineering, Pondicherry Engineering College, Puducherry, India
ptdurai@pec.edu
ABSTRACT
Energy consumption is a critical design issue in real-time systems, especially in battery-operated systems. Maintaining high performance while extending battery life between charges is an interesting challenge for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow us to adjust supply voltage and processor frequency to adapt to the workload demand for better energy management. Usually, higher processor voltage and frequency lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have been developed recently to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application, comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme. This also considers the run-time behaviour of tasks, to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) Slack Distribution Technique. Our proposed algorithm achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
KEYWORDS
Distributed Real-time System, DVS, Dynamic Slack, Energy efficient Scheduling
1. INTRODUCTION
Many embedded command and control systems used in manufacturing, chemical processing,
avionics, telemedicine, and sensor networks are mission-critical. These systems usually comprise applications that must accomplish certain functionalities in real time [1]. Dynamic voltage scaling (DVS) is an effective technique to reduce CPU energy. DVS takes advantage of the quadratic relationship between supply voltage and energy consumption, which can result in significant energy savings. By reducing the processor clock frequency and supply voltage, it is
possible to reduce the energy consumption at the cost of performance of processors [2]. Battery
powered portable systems have been widely used in many applications. As the quantity and the
functional complexity of battery powered portable devices continue to rise, energy-efficient
design of such devices has become increasingly important. These systems also have to concurrently perform a multitude of complex tasks under stringent time constraints. Thus, minimizing power consumption and extending battery lifespan while guaranteeing the timing constraints has become a critical aspect of designing such systems. The focus here is on task scheduling algorithms that meet timing constraints while minimizing system energy consumption. To make the schedule energy efficient, the execution time of each task can be extended up to the worst-case delay for its task set. In real-time system designs, slack management is increasingly applied to reduce energy consumption and optimize the system with respect to its performance and time overheads. In energy-efficient scheduling, each task has a deadline before which it should finish its execution, so there is always a time gap between the actual execution time and the deadline; this gap is called slack time [3]. Therefore, to minimize the energy consumed while still satisfying task deadlines, the processors run at variable speeds, thereby reducing the energy they consume. This is simulated with various task sets on different sets of processors using the proposed and existing task scheduling algorithms.
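The quadratic voltage dependence mentioned above can be sketched with the standard CMOS dynamic power model, P ≈ C·V²·f; the capacitance, voltage, and cycle-count values below are illustrative assumptions, not figures from the paper.

```python
def dynamic_power(c_eff, v_dd, freq):
    """Dynamic power of a CMOS processor: P = C_eff * Vdd^2 * f."""
    return c_eff * v_dd ** 2 * freq

def energy_for_task(cycles, c_eff, v_dd, freq):
    """Energy = power * execution time, where time = cycles / f."""
    return dynamic_power(c_eff, v_dd, freq) * (cycles / freq)

# Illustrative values: halving voltage and frequency quarters the energy,
# at the cost of doubling the execution time.
e_full = energy_for_task(cycles=1e9, c_eff=1e-9, v_dd=1.2, freq=1e9)
e_half = energy_for_task(cycles=1e9, c_eff=1e-9, v_dd=0.6, freq=0.5e9)
assert abs(e_half / e_full - 0.25) < 1e-9
```

This is why running a task just fast enough to meet its deadline, rather than at full speed followed by idling, saves energy.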
In this paper, a homogeneous distributed embedded system executing a set of dependent tasks of a real-time application, normally represented by a task graph, is considered. The algorithm aims to reduce energy consumption without missing any deadlines for hard real-time tasks and with minimum deadline misses for soft real-time tasks [4]. Resources should be allocated efficiently among tasks, and care should be taken that no deadlock occurs [5]. Therefore it is necessary to introduce resource management mechanisms that can adapt to dynamic changes in resource availability and requirements.
This paper is organized as follows. Related work is addressed in Section II. The various models used in this work are described in Section III. The proposed algorithm is described in Section IV. Section V discusses the simulation and analysis of results. Finally, Section VI concludes the paper with future work.
2. RELATED WORK
The two techniques most commonly used for energy minimization in distributed embedded systems are Dynamic Voltage Scaling (DVS) [6] and Dynamic Power Management (DPM) [7]. The application of these system-level energy management techniques can be
exploited to the maximum if we can take advantage of almost all of the idle time and slack time in
between processor busy times. Hence, the major challenge is to design an efficient scheduling
algorithm which can exploit the slack time and idle time of processors in the distributed
embedded systems to the maximum. Various energy-efficient slack management schemes have
been proposed for these real-time distributed systems. The static scheduling algorithm uses
critical path analysis and distributes the slack during the initial schedule [8]. The dynamic
scheduling algorithm [9] provides best effort service to soft aperiodic tasks and reduces power
consumption by varying voltage and frequencies. Resource adaptation techniques for energy
management in distributed real-time systems need to be coordinated to meet global energy and
real-time requirements. This issue is addressed based on feedback-based techniques [10], [11] to
allocate the overall slack in the entire system.
One of the existing works for distributed real-time system is based on feedback control
scheduling. This MFCS algorithm [11] can provide real-time performance guarantees efficiently,
even in open environments. The algorithm framework has three components: Monitors that track
the CPU utilization of each processor; Resource reclaimer that computes difference between
task’s actual execution time and worst case execution time for local and global slack adjustment;
and a Feedback scheduler that dynamically performs the schedule adaptations recommended by the resource reclaimer. Each processor in the distributed system receives subtasks of the different tasks that arrive at the global Feedback scheduler. Within each processor the tasks are scheduled by the Basic
scheduler using Earliest Deadline First algorithm (EDF), since EDF has been proven to be an
optimal uniprocessor scheduling algorithm [12]. The EDF algorithm allows dynamic rescheduling and pre-emption in the queues and processing nodes. The Monitor module periodically checks the processor utilization and sends a message to the global Feedback scheduler, which, on arrival of a new task, assigns the task to a processor where the utilization criterion for the local EDF scheduler is still satisfied.
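As a rough sketch of the per-processor Basic scheduler, EDF simply dispatches the ready task with the nearest absolute deadline; the task tuples below are invented for illustration.

```python
import heapq

def edf_order(ready_tasks):
    """Return task names in EDF dispatch order (earliest absolute deadline
    first). Each task is a (deadline, name) pair; ties break on name."""
    heap = list(ready_tasks)
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Illustrative ready queue: T2 has the earliest deadline, so it runs first.
order = edf_order([(30, "T1"), (10, "T2"), (20, "T3")])
assert order == ["T2", "T3", "T1"]
```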
Another existing work for distributed embedded real-time systems uses the Service Rate Proportionate (SRP) slack distribution technique [13] for energy efficiency. Both the Dynamic and the Rate-Based scheduling schemes have been examined with this technique. It introduces SRP, a dynamic slack management technique to reduce power consumption. The SRP technique improves performance/overhead by 29 percent compared to contemporary techniques. However, this system does not consider resource constraints among the dependent tasks. In this work, a scheduling algorithm known as Weighted First Come First Served (WFCFS) with an efficient dynamic slack management technique is proposed, such that the energy efficiency of the processors increases while the precedence, resource, and timing constraints are still maintained.
3. MODELS
In this section, we briefly discuss the system, application, precedence, resource and slack
management models that we have used in our work.
3.1. System and Application Model
A distributed embedded system with p homogeneous processors, each with its private memory, is considered for scheduling the given real-time application. The system requires the complete details of the task processing times (i.e. the execution time and deadline) before program execution. Each processing element (PE) in the system is voltage scalable and can support continuous voltage and speed changes. We assume that the energy consumption when the processor is idle is negligible.
The real-time applications can be modelled by a task graph G = (V,E), where V is the set of
vertices each of which represents one computation (task), and E is the set of directed edges that
represent the data dependencies between vertices. For each directed edge (vi, vj), there is a
significant inter-processor communication (IPC) cost when the data from vertex vi in one PE is
transmitted to vertex vj in another PE. The data communication cost in the same processor can be
ignored. Each real-time application has an end-to-end deadline D, by which it has to complete its
execution and produce the result.
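The application model above might be represented as follows; the task names, wcet values, and IPC costs are assumptions chosen for illustration.

```python
# A minimal sketch of the task-graph model G = (V, E): vertices carry a
# worst-case execution time, edges carry an inter-processor communication
# (IPC) cost that applies only when the endpoints map to different PEs.
wcet = {"T1": 4, "T2": 3, "T3": 5}           # per-task wcet (time units)
edges = {("T1", "T2"): 2, ("T1", "T3"): 1}   # IPC cost per directed edge
deadline = 20                                 # end-to-end deadline D

def comm_cost(edge, placement):
    """IPC cost is ignored when both endpoints run on the same processor."""
    src, dst = edge
    return 0 if placement[src] == placement[dst] else edges[edge]

placement = {"T1": 0, "T2": 0, "T3": 1}
assert comm_cost(("T1", "T2"), placement) == 0   # same PE
assert comm_cost(("T1", "T3"), placement) == 1   # different PEs
```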
The frequency selection is influenced by making a task more or less urgent by shifting its deadline back and forth. The range within which the local deadline at node i can be varied is bounded by [S_i^-, S_i^+]. The values for S can be derived from the local task parameters. If wcet_i represents the worst-case execution time of the local task at node i, then
3.2. Precedence Model
Tasks of a real-time application considered in this paper have precedence constraints. For example, a task Ti can become eligible for execution only after a task Tj has completed, because Ti may require Tj's results. A DAG is used to implement precedence constraints. Precedence constraints between tasks can also be modelled as resource dependencies. The precedence constraint that Tj precedes Ti is equivalent to the situation where Ti requires a logical resource (before it can start its execution) that is available only after Tj has completed its execution.
Figure 1. Precedence Constraints of a Task Set
Thus, if Tj has completed its execution before Ti arrives, then this logical resource is immediately
available for Ti and Ti becomes eligible to execute upon arrival. Furthermore, if Tj has not
completed its execution when Ti arrives, then the logical resource is not available and, therefore,
Ti is conceptually blocked upon arrival. Later, when Tj completes its execution, the logical
resource becomes available and Ti is unblocked.
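The blocking behaviour just described reduces to an eligibility test over completed predecessors; the helper and task names below are assumptions, not the paper's notation.

```python
def eligible(task, predecessors, completed):
    """A task becomes eligible only when all of its predecessors have
    completed, i.e. every logical 'precedence resource' is available."""
    return all(p in completed for p in predecessors.get(task, []))

preds = {"Ti": ["Tj"]}                              # Tj must precede Ti
assert not eligible("Ti", preds, completed=set())   # Ti blocked on arrival
assert eligible("Ti", preds, completed={"Tj"})      # unblocked once Tj done
```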
3.3. Resource Model
To model non-CPU resources and resource requests, we make the following assumptions:
1) Resources are reusable and can be shared, but have mutual exclusion constraints. Thus,
only one task can be using a resource at any given time. This applies to physical
resources, such as disks and network segments, as well as logical resources, such as
critical code sections that are guarded by semaphores.
2) Only a single instance of a resource is present in the system. This requires that a task
explicitly specify which resource it wants to access. This is exactly the same resource
model as assumed in protocols such as the Priority Inheritance Protocol and Priority
Ceiling Protocol [14].
3) A task can only request a single instance of a resource. If multiple resources are needed for a task to make progress, it must acquire them all through a set of consecutive resource requests. In general, the time intervals during which resources are held may overlap.
We assume that a task can explicitly release resources before the end of its execution. Thus, it is
necessary for a task that is requesting a resource to specify the time to hold the requested
resource. We refer to this time as HoldTime [15]. The scheduler uses the HoldTime information
at run time to make scheduling decisions.
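A resource request under this model would carry the resource identifier and its HoldTime, which the scheduler can inspect; the data layout below is a sketch under that assumption, not the paper's actual structure.

```python
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    """Single-instance, mutually exclusive resource request: the task
    declares how long it will hold the resource (HoldTime, section 3.3)."""
    task: str
    resource: str
    hold_time: int  # time units the resource will be held

busy_until = {}  # resource -> time at which it becomes free

def try_acquire(req, now):
    """Grant the resource only if no other task currently holds it."""
    if busy_until.get(req.resource, 0) <= now:
        busy_until[req.resource] = now + req.hold_time
        return True
    return False

assert try_acquire(ResourceRequest("T1", "disk", 5), now=0)      # granted
assert not try_acquire(ResourceRequest("T2", "disk", 3), now=2)  # blocked
assert try_acquire(ResourceRequest("T2", "disk", 3), now=5)      # freed at t=5
```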
3.4. Slack Management Model
In real-time system designs, slack management is increasingly applied to reduce power consumption and optimize the system with respect to its performance and time overheads. This slack management technique exploits the idle time and slack time of the system through DVS in order to achieve the highest possible energy savings. In energy-efficient scheduling, the tasks have deadlines before which they should finish their execution, and hence there is always a time gap between the actual execution time and the deadline, called slack time. Conventional real-time systems are usually over-provisioned, scheduling and allocating resources using the wcet. In the average case, real-time tasks rarely execute up to their worst-case execution time (wcet). In many applications, the actual-case execution time (acet) is often a small fraction of the wcet. However, such slack times are only known at runtime through resource reclaimers. This slack is passed to the schedulers to determine whether the next job should utilize the slack time or not.
The main challenge is to obtain and distribute the available slack in order to achieve the highest possible energy savings with minimum overhead. Most existing techniques do not address dynamic task inputs, and the few that attempt to handle dynamic task inputs assume no resource constraints among tasks. In reality, however, some tasks need exclusive access to a resource. In exclusive mode no two real-time tasks are allowed to share a common resource: if a resource is accessed by a real-time task, it is not freed until the task's execution is completed, and other tasks in need of the same resource must wait until the resource is freed. Our proposed algorithm handles this issue through the Restriction Vector (RV) [16] algorithm.
Our slack management algorithm decides when, and at which voltage, each task should be executed in order to reduce the system's energy consumption while meeting the timing and other constraints. Our solution includes two phases: first, we use static power management schemes based on wcet to statically assign a time slot to each task; then we apply a dynamic scheduling algorithm to further reduce energy consumption by exploiting the slack arising from run-time execution time variation. Here a small amount of slack time, called unit slack, is added to all the tasks, and finally we find the subset of tasks that can be allocated this slack time so that total energy consumption is minimized while the deadline constraint is also met.
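The two-phase idea can be sketched as follows: static slots sized by wcet, then reclaimed slack handed out one unit at a time so later tasks can run slower. Everything here (the round-robin unit distribution, the speed bounds) is a schematic assumption, not the paper's exact policy.

```python
def scaled_speed(wcet, allotted_time, s_max=1.0, s_min=0.25):
    """Run a task just fast enough to finish within its allotted slot; the
    speed never drops below the minimum supported scaling factor."""
    return max(s_min, min(s_max, wcet / allotted_time))

def distribute_unit_slack(tasks, slack, unit=1):
    """Hand out the reclaimed dynamic slack one unit at a time, widening
    each task's slot in round-robin order until the slack is exhausted."""
    slots = dict(tasks)          # phase 1: static slots = wcet
    order = list(tasks)
    i = 0
    while slack >= unit and order:  # phase 2: dynamic unit-slack allocation
        slots[order[i % len(order)]] += unit
        slack -= unit
        i += 1
    return slots

slots = distribute_unit_slack({"T1": 4, "T2": 4}, slack=2)
assert slots == {"T1": 5, "T2": 5}
assert scaled_speed(4, 5) == 0.8  # wider slot -> lower speed -> less energy
```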
4. PROPOSED ENERGY EFFICIENT SCHEDULING ALGORITHM
We propose a new algorithm called Weighted FCFS, which increases the efficiency of the system by reducing total energy consumption. In addition, the deadline hit ratio is a major factor to be considered in a soft real-time system for better QoS. Our proposed algorithm increases the deadline hit ratio and thereby improves the quality of the system.
4.1. Weighted FCFS Scheduling Algorithm
In the weighted FCFS method a weight is assigned to each task based on the number of tasks on which it depends. First the tasks are inserted into the queue in non-decreasing order of arrival time. Then the tasks in the queue are sorted in increasing order of their weight. The pseudo code for weighted FCFS is given in Figure 2.
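Since Figure 2 is not reproduced in this text, the ordering just described can be sketched as follows; the task tuples are invented for illustration.

```python
def wfcfs_order(tasks):
    """tasks: list of (name, arrival_time, weight), where weight is the
    number of tasks this task depends on. Queue by arrival time, then sort
    by weight; Python's stable sort keeps FCFS order among equal weights."""
    by_arrival = sorted(tasks, key=lambda t: t[1])      # FCFS insertion
    by_weight = sorted(by_arrival, key=lambda t: t[2])  # weight ordering
    return [t[0] for t in by_weight]

# T3 depends on two tasks (weight 2), so it is served last even though
# it arrived before T2.
tasks = [("T1", 0, 0), ("T3", 1, 2), ("T2", 2, 0)]
assert wfcfs_order(tasks) == ["T1", "T2", "T3"]
```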
4.2. Restriction Vector (RV) Algorithm
Resource reclaiming [17] refers to the problem of utilizing resources left unused by a task when it
executes less than its wcet, because of data-dependent loops and conditional statements in the task
code or architectural features of the system, such as cache hits and branch predictions, or both.
Resource reclaiming is used to adapt dynamically to these unpredictable situations so as to
improve the system’s resource utilization and thereby improve its schedulability.
The resource reclaiming algorithm used is a restriction vector (RV) based algorithm proposed in [16] for tasks having resource and precedence constraints. Two data structures, namely the restriction vector (RV) and the completion bit matrix (CBM), are used in the RV algorithm. Each task Ti has an associated n-component vector RVi[1 . . n], where n is the number of processors. RVi[j] for a task Ti contains the last task in T<i(j) that must be completed before the execution of Ti begins, where T<i(j) denotes the set of tasks assigned to processor Pj that are scheduled in the feasible (pre-run) schedule to finish before Ti starts. CBM is an m × n Boolean matrix indicating whether a task has completed execution, where m is the number of tasks in the feasible schedule.
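The RV and CBM structures can be sketched as below; the dispatch test (every RV entry must have its completion bit set) is our reading of the algorithm, and the task indices are illustrative.

```python
n_procs = 2
# RV[i][j] = index of the last task on processor j that must finish before
# task i may start (None if no such task exists on that processor).
RV = {2: [0, 1]}            # task 2 waits on task 0 (P0) and task 1 (P1)
# CBM is an m x n Boolean matrix: CBM[i][j] marks task i done on processor j.
CBM = [[False] * n_procs for _ in range(3)]

def can_start(task):
    """Task may begin once every predecessor recorded in RV has its
    completion bit set somewhere in CBM."""
    return all(pred is None or any(CBM[pred]) for pred in RV.get(task, []))

assert not can_start(2)
CBM[0][0] = True            # task 0 completes on P0
CBM[1][1] = True            # task 1 completes on P1
assert can_start(2)
```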
Figure 2. Pseudo Code for WFCFS Algorithm
4.3. Dynamic Slack Management Algorithm
The pseudo code of the dynamic slack management algorithm for energy efficiency is given in
Figure 3.
Figure 3. Pseudo Code for Dynamic Slack Management Algorithm
5. SIMULATION AND ANALYSIS OF RESULTS
A simulator was developed to simulate a voltage-scalable processor that dynamically adjusts the processor speed according to the selected algorithm. A continuous voltage-scaling model is used, and hence the processor speed can be adjusted continuously from its maximum speed down to a minimum speed, which is assumed to be 25% of the maximum speed.
For simulation, scheduling task sets and task graphs are generated using the following approach:
• Task sets are randomly generated with parameters such as arrival time, acet, wcet and
resource constraints.
• The wcet is taken randomly and acet is also randomly generated such that it is 40 to 100
percent of wcet.
• The overall deadline is generated such that it is always greater than or equal to the sum of
acet of all the tasks in DAG.
The task graph is randomly generated using an adjacency matrix, where 0 indicates that a task does not depend on another task and 1 indicates a dependency, with varying breadth and depth.
5.1. Evaluation Parameters
The performance parameters considered for evaluating and comparing our system with the
existing systems are Power Consumption Ratio and Deadline Hit Ratio.
The power consumption ratio is the ratio of the power consumed by the system at scaled-down voltage and frequency to the power consumed at the maximum operating voltage and frequency. The deadline hit ratio is defined as the ratio of the number of input tasks that complete execution before their respective deadlines to the total number of tasks admitted to the system for execution.
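Both metrics follow directly from their definitions; the sample values below are illustrative, not results from the paper.

```python
def power_consumption_ratio(p_scaled, p_max):
    """Power at the scaled voltage/frequency relative to power at the
    maximum operating point; lower is better."""
    return p_scaled / p_max

def deadline_hit_ratio(hits, admitted):
    """Fraction of admitted tasks that finish before their deadlines;
    higher is better."""
    return hits / admitted

assert power_consumption_ratio(0.6, 1.0) == 0.6
assert deadline_hit_ratio(45, 50) == 0.9
```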
5.2. Result Analysis
The power consumption ratio of the existing and the proposed algorithms was analyzed. For each task set, varied from 5 to 50 tasks, the number of processors was kept constant and the power consumption ratio for a minimum of ten randomly generated DAGs was calculated. The average of these values was plotted in the chart. This procedure was repeated while changing the number of processors, and the comparison was made between the existing and the proposed algorithms.
Table 1. Comparison of Power Consumption
The average power consumption for the existing and proposed algorithms is shown in Table 1. The value in each row is the average of the values obtained by varying the number of processors from 2 to 10. When the number of processors in the distributed embedded system is two, the algorithms were not able to schedule more than 10 tasks, as all task sets above 10 missed their deadlines. From the table it is seen that the average power consumption of the proposed algorithm is 5 to 6 percent less than that of the existing ones. This comparison is presented as a chart in Figure 4.
Figure 4. Comparison of Power Consumption Ratio
Figure 5. Comparison of Deadline Hit Ratio
For all cases from p = 2 to 10, the proposed WFCFS algorithm with the dynamic slack management technique consumed less power than the existing MFCS, FCFS and WS algorithms with the SRP slack management technique, when a resource constraint is added to the real-time application in addition to the precedence and time constraints.
The deadline hit ratio is also compared among the existing and proposed algorithms by varying the number of tasks from 5 to 50 and the number of processors from 2 to 10. The average deadline hit ratio is shown in Figure 5. The maximum number of tasks is taken to be 50, because real-time applications with more than 50 dependent tasks are very rare. The number of processors is also limited to 10, as even this number is rare for a distributed embedded system. Up to p = 6 the proposed algorithm gives a better deadline hit ratio than the existing ones. For p >= 7, all the algorithms resulted in a 100% deadline hit ratio.
6. CONCLUSION
In this work, an energy efficient real-time scheduling algorithm for distributed embedded systems
is presented. This scheduling algorithm is capable of handling task graphs with precedence and
resource constraints in addition to timing constraints. The major contribution of this work is the development of an improved FCFS scheduling algorithm called Weighted FCFS. The proposed dynamic slack distribution technique efficiently utilizes the available slack and in turn increases
the efficiency of the system. It also exploits idle intervals by putting the processor into power-down mode to reduce power consumption. The proposed algorithm increases the deadline hit ratio compared to the existing ones and thus increases reliability. Simulation results show that the proposed algorithm consumes 5 to 6 percent less power than the existing algorithms.
Fault tolerance issues may be considered in future work for such real-time distributed embedded systems. Another important direction for future work is extending this algorithm to a heterogeneous distributed real-time system with varying-capacity processors and communication links.
REFERENCES
[1] Rabi N. Mahapatra and Wei Zhao, (2005) “An Energy-Efficient Slack Distribution Technique for
Multimode Distributed Real-Time Embedded Systems”, IEEE Transactions on Parallel and
Distributed Systems, vol. 16, no. 7.
[2] Changjiu Xian, Yung-Hsiang Lu and Zhiyuan Li, (2008) Dynamic Voltage Scaling for Multitasking
Real-Time Systems with Uncertain Execution Times, In: IEEE Transactions on Computer-Aided
Design of Integrated Circuits and Systems, vol. 27, no. 8.
[3] Subrata Acharya and Rabi N. Mahapatra, (2008) A Dynamic Slack Management Technique for Real-
Time Distributed Embedded Systems, In: IEEE Transactions on Computers, vol. 57, no. 2.
[4] W. Yuan and K. Nahrstedt, (2003) Energy-efficient soft real-time CPU scheduling for mobile
multimedia systems, In: ACM Symposium on Operating Systems Principles, pp. 149-163.
[5] C. Shen, K. Ramamritham and J.A. Stankovic, (1993) Resource reclaiming in multiprocessor real-
time systems, In: IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 4, pp. 382-397.
[6] T. Ishihara and H. Yasuura, (1998) Voltage Scheduling Problem for Dynamically Variable Voltage
Processors, In: International Symposium on Low Power Electronics and Design, pp. 197-202.
[7] L. Benini, A. Bogliolo, and G. De Micheli, (2000) A Survey of Design Techniques for System-Level
Dynamic Power Management, In: IEEE Transactions on VLSI Systems, pp. 299-316.
[8] P. B. Jorgensen and J. Madsen, (1997) Critical path driven co synthesis for heterogeneous target
architectures, In: International Workshop on Hardware/Software Codesign, pp. 15-19.
[9] M. T. Schmitz and B. M. Al-Hashimi, (2001) Considering power variations of DVS processing
elements for energy minimization in distributed systems, In: International Symposium on System
Synthesis, pp. 250–255.
[10] C. Lu, J.A. Stankovic, G. Tao, and S.H. Son, (2002) Feedback Control Real- Time Scheduling:
Framework, Modeling, and Algorithms, In: Real-Time Systems Journal, Special Issue on Control-
theoretical Approaches to Real-Time Computing, pp. 85-126.
[11] Santhi Baskaran and P. Thambidurai, (2010) Power Aware Scheduling for Resource Constrained
Distributed Real Time Systems, In: International Journal on Computer Science and Engineering Vol.
02, No. 05, pp. 1746 -1753.
[12] C.M. Krishna and K.G. Shin, (1997) Real-Time Systems, Tata McGraw-Hill.
[13] Subrata Acharya and Rabi N. Mahapatra, (2008) A Dynamic Slack Management Technique for Real-
Time Distributed Embedded Systems, In: IEEE Transactions on Computers, Vol. 57, No. 2.
[14] L. Sha, R. Rajkumar, and J.P. Lehoczky, (1990) Priority Inheritance Protocols: An Approach to Real-
Time Synchronization, In: IEEE Transactions on Computers, Vol. 39, No. 9, pp. 1175-1185.
[15] Peng Li, Haisang Wu, Binoy Ravindran and E. Douglas Jensen, (2006) A Utility Accrual Scheduling
Algorithm for Real-Time Activities with Mutual Exclusion Resource Constraints, In: IEEE
Transactions on Computers, Vol. 55, No. 4.
[16] G. Manimaran, C. Siva Ram Murthy, Machiraju Vijay, and K. Ramamritham, (1997) New algorithms
for resource reclaiming from precedence constrained tasks in multiprocessor real-time systems, In:
Journal of Parallel and Distributed Computing, Vol. 44, No. 2, pp. 123-132.
[17] C. Shen, K. Ramamritham and J.A. Stankovic, (1993) Resource reclaiming in multiprocessor real-
time systems, In: IEEE Transactions on Parallel and Distributed Systems, Vol. 4, No. 4, pp. 382-397.