Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). The basic idea of power-aware scheduling is to find the slack available to tasks and use it to reduce the CPU's frequency or lower its voltage. In this paper, we introduce the temporal workload of a system, which specifies how busy its CPU is in completing the tasks at the current time. Analyzing temporal workload provides a sufficient condition for the schedulability of preemptive earliest-deadline-first scheduling and an effective method to identify and distribute the slack generated by early-completed tasks. Simulation results show that the proposed algorithm reduces energy consumption by 10-70% over the existing algorithm, and its complexity is O(n), so a practical on-line scheduler can be devised using the proposed algorithm.
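The slack-reclamation idea in the abstract above can be illustrated with a minimal sketch. This is not the paper's algorithm: it only shows the classical utilization-based bound for preemptive EDF, with hypothetical task parameters.

```python
# Sketch: utilization-based frequency scaling for preemptive EDF.
# Under EDF a periodic task set is schedulable at full speed iff its
# total utilization U <= 1, so (ignoring switching overhead) the CPU
# can run at roughly f = U * f_max and still meet every deadline.

def edf_utilization(tasks):
    """tasks: list of (wcet_at_full_speed, period) pairs."""
    return sum(c / t for c, t in tasks)

def scaled_frequency(tasks, f_max):
    u = edf_utilization(tasks)
    if u > 1.0:
        raise ValueError("task set not schedulable even at f_max")
    return u * f_max

tasks = [(1.0, 4.0), (2.0, 8.0)]       # hypothetical (WCET, period)
print(scaled_frequency(tasks, 1.0e9))  # → 500000000.0, i.e. half of 1 GHz
```

The paper's contribution is a finer-grained, per-instant workload analysis; the static bound above is only the simplest member of that family.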
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Journals
Abstract This paper concentrates on methods which provide efficient processing time and CPU utilization time of a virtual machine. As the number of users increases, performance may be significantly reduced if the tasks are not scheduled in a proper order. In this paper the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured. Satisfactory results are achieved while comparing the two algorithms. This study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: Virtual Machine, DSP algorithm, credit scheduling algorithm
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
DYNAMIC VOLTAGE SCALING FOR POWER CONSUMPTION REDUCTION IN REAL-TIME MIXED TA...cscpconf
The reduction of energy consumption without any deadline miss is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) is a technique that reduces the power consumption of processors by utilizing the various operating points provided by a DVS processor. These operating points consist of pairs of voltage and frequency, and an operating point can be selected based on the load on the system at a particular point in time. In this work DVS is applied to both periodic and sporadic tasks, and energy consumption is reduced by an average of 40%. It is further reduced by 2-10% by reducing the number of pre-emptions and frequency switches.
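As a toy illustration of operating-point selection based on load (the operating-point table and the load model below are invented for the example, not taken from the paper):

```python
# Sketch: selecting a DVS operating point (voltage, frequency) from the
# current load. Points are sorted by increasing frequency, so the first
# match is the slowest (lowest-power) point that can still serve the load.

OPERATING_POINTS = [          # (voltage in V, frequency in GHz)
    (0.9, 0.6), (1.0, 0.8),
    (1.1, 1.0), (1.2, 1.2),
]

def select_operating_point(load, f_max=1.2):
    """load in [0, 1]: fraction of full-speed capacity currently needed."""
    required = load * f_max
    for v, f in OPERATING_POINTS:
        if f >= required:
            return (v, f)
    return OPERATING_POINTS[-1]   # saturated: run at the top point

print(select_operating_point(0.7))  # → (1.1, 1.0): 0.84 GHz is needed
```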
(Paper) Task scheduling algorithm for multicore processor system for minimiz...Naoki Shibata
Shohei Gotoda, Naoki Shibata and Minoru Ito : "Task scheduling algorithm for multicore processor system for minimizing recovery time in case of single node fault," Proceedings of IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2012), pp.260-267, DOI:10.1109/CCGrid.2012.23, May 15, 2012.
In this paper, we propose a task scheduling algorithm for a multicore processor system which reduces the
recovery time in case of a single fail-stop failure of a multicore
processor. Many of the recently developed processors have
multiple cores on a single die, so that one failure of a computing
node results in failure of many processors. In the case of a failure
of a multicore processor, all tasks which have been executed
on the failed multicore processor have to be recovered at once.
The proposed algorithm is based on an existing checkpointing
technique, and we assume that the state is saved when nodes
send results to the next node. If a series of computations that
depends on former results is executed on a single die, we need
to execute all parts of the series of computations again in
the case of failure of the processor. The proposed scheduling
algorithm tries not to concentrate tasks to processors on a die.
We designed our algorithm as a parallel algorithm that achieves
O(n) speedup where n is the number of processors. We evaluated
our method using simulations and experiments with four PCs.
We compared our method with an existing scheduling method; in the simulation, the execution time including recovery time in the case of a node failure was reduced by up to 50%, while the overhead in the case of no failure was a few percent in typical scenarios.
Schedulability of Rate Monotonic Algorithm using Improved Time Demand Analysi...IJECEIAES
The Rate Monotonic Algorithm (RMA) is a widely used static priority scheduling algorithm. Before applying RMA to a system, it is essential to determine the system's feasibility. The existing algorithms perform the analysis by reducing the scheduling points in a given task set. In this paper we propose a schedulability test algorithm which reduces the number of tasks to be analyzed instead of reducing the scheduling points of a given task. This significantly reduces the number of iterations taken to compute feasibility. The algorithm can be used along with the existing algorithms to effectively reduce the high complexity encountered in processing large task sets. We also extend our algorithm to the multiprocessor environment and compare the number of iterations for different numbers of processors. The paper then compares the proposed algorithm with the existing algorithms; the results show that it performs better.
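The time-demand analysis that such schedulability tests build on can be sketched as follows. The task set is hypothetical, and the code shows only the classical per-task fixed-point iteration, not the paper's reduced test.

```python
import math

# Sketch of classic time-demand analysis for fixed-priority (RMA) tasks.
# A task is feasible if the fixed point of its demand equation is reached
# at or before its deadline (taken here as equal to its period).

def tda_feasible(tasks):
    """tasks: list of (wcet, period), highest priority (shortest period) first."""
    for i, (c_i, t_i) in enumerate(tasks):
        w = c_i
        while True:
            # own WCET plus preemption by every higher-priority task
            demand = c_i + sum(math.ceil(w / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if demand > t_i:
                return False      # demand exceeds the deadline: infeasible
            if demand == w:
                break             # fixed point: task i meets its deadline
            w = demand
    return True

print(tda_feasible([(1, 4), (2, 6), (1, 8)]))  # → True for this set
```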
Modified Active Monitoring Load Balancing with Cloud Computingijsrd.com
Cloud computing is internet-based computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Load balancing is essential for efficient operation in distributed environments. As cloud computing grows rapidly and clients demand more services and better results, load balancing for the cloud has become a very interesting and important research area; without a proper load balancing strategy, cloud computing will not grow as predicted. The main focus of this paper is to verify the approach proposed in the model paper [3]. An efficient load balancing algorithm can reduce the data center processing time and overall response time while coping with the dynamic changes of cloud computing environments. The traditional Active Monitoring load balancing algorithm has been modified to achieve better data center processing time and overall response time. The algorithm presented in this paper efficiently distributes requests to all the VMs for execution, considering the CPU utilization of all VMs.
Burton Smith passed away on April 3, 2018. He gave this talk in 2014.
"This talk proposes a scheme for addressing the operating system resource management problem. Resource management is the dynamic allocation and de-allocation by an operating system of processor cores, memory pages, and various types of bandwidth to computations that compete for those resources. The objective is to allocate resources for optimized responsiveness based on the finite resources available."
Read the full story: https://wp.me/p3RLHQ-ihD
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
Task Scheduling Algorithm for Multicore Processor Systems with Turbo Boost an...Naoki Shibata
Yosuke Wakisaka, Naoki Shibata, Keiichi Yasumoto, Minoru Ito, and Junji Kitamichi : Task Scheduling Algorithm for Multicore Processor Systems with Turbo Boost and Hyper-Threading, In Proc. of The 2014 International Conference on Parallel and Distributed Processing Techniques and Applications(PDPTA'14), pp. 229-235
In this paper, we propose a task scheduling algorithm for multiprocessor systems with Turbo Boost and Hyper-Threading technologies. The proposed algorithm minimizes the total computation time, taking into account the dynamic changes of processing speed caused by the two technologies in addition to network contention among the processors. We constructed a clock speed model with which the changes of processing speed under Turbo Boost and Hyper-Threading can be estimated for various processor usage patterns. We then constructed a new scheduling algorithm that minimizes the total execution time of a task graph considering network contention and the two technologies. We evaluated the proposed algorithm by simulations and experiments with a multiprocessor system consisting of 4 PCs. In the experiment, the proposed algorithm produced a schedule that reduces the total execution time by 36% compared to conventional methods, which are straightforward extensions of an existing method.
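A toy model in the spirit of the clock-speed estimation described above: the base frequency gains Turbo Boost headroom that shrinks as more cores are active, and a thread slows down when its Hyper-Threading sibling is busy. All constants are illustrative, not the paper's measured values.

```python
# Toy clock-speed model for a 4-core chip with Turbo Boost and
# Hyper-Threading. turbo_bins[k-1] is the boost with k active cores;
# ht_factor is the per-thread slowdown when both siblings are busy.

def core_speed(active_cores, ht_busy,
               base=2.0, turbo_bins=(0.4, 0.3, 0.2, 0.1), ht_factor=0.6):
    """Estimated effective speed (GHz) of one logical thread."""
    boost = turbo_bins[min(active_cores, len(turbo_bins)) - 1]
    freq = base + boost
    return freq * ht_factor if ht_busy else freq

print(core_speed(1, False))  # lone active core gets the full turbo bin
print(core_speed(4, True))   # 4 cores active, sibling thread busy: slower
```

A scheduler using such a model can predict that packing tasks onto fewer cores trades turbo headroom against Hyper-Threading contention, which is exactly the trade-off the abstract describes.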
A study of load distribution algorithms in distributed schedulingeSAT Publishing House
Multilevel Hybrid Cognitive Load Balancing Algorithm for Private/Public Cloud...IDES Editor
Cloud computing is an emerging computing
paradigm. It aims to share data, resources and services
transparently among users of a massive grid. Although the
industry has started selling cloud-computing products,
research challenges in various areas, such as architectural
design, task decomposition, task distribution, load
distribution, load scheduling, task coordination, etc. are still
unclear. Therefore, we study the methods to reason and model
cloud computing as a step towards identifying fundamental
research questions in this paradigm. In this paper, we propose
a model for load distribution on cloud computing by modeling
them as cognitive systems and using aspects which not only
depend on the present state of the system, but also, on a set of
predefined transitions and conditions. The entirety of this
model is then bundled to cater to the task of job distribution
using the concept of application metadata. Later, we draw a
qualitative and simulation-based summarization of the
proposed model. We finally evaluate the results and draw up
a series of key conclusions in cloud computing for future
exploration.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTINGijccsa
Load balancing techniques in cloud computing can be applied at different levels. There are two main
levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a
physical server is the policy of allocating physical servers to virtual machines, while load balancing on
virtual machines is the policy of allocating resources from a physical server to the virtual machines for the
tasks or applications running on them. Depending on whether the user requests SaaS (Software as a
Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), a suitable load balancing
policy applies. When receiving tasks, the cloud data center has to allocate them efficiently
so that the response time is minimized to avoid congestion. Load balancing should also be performed
between different datacenters in the cloud to ensure minimum transfer time. In this paper, we propose a
virtual machine-level load balancing algorithm that aims to improve the average response time and
average processing time of the system in the cloud environment. The proposed algorithm is compared to the
algorithms of Avoid Deadlocks [5], Maxmin [6] and Throttled [8], and the results show that our
algorithm achieves better response times.
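A minimal sketch of VM-level balancing in this spirit: dispatch each incoming task to the VM with the least outstanding work. This greedy rule is an assumption for illustration, not the paper's algorithm.

```python
import heapq

# Greedy least-loaded dispatch: keep (pending_work, vm_id) pairs in a
# min-heap so the lightest-loaded VM is always popped first.

def dispatch(task_lengths, num_vms):
    """Returns, for each task in order, the VM it is assigned to."""
    load = [(0.0, vm) for vm in range(num_vms)]
    heapq.heapify(load)
    assignment = []
    for length in task_lengths:
        work, vm = heapq.heappop(load)       # lightest VM so far
        assignment.append(vm)
        heapq.heappush(load, (work + length, vm))
    return assignment

print(dispatch([5, 3, 4, 2], 2))  # → [0, 1, 1, 0]
```

Real balancers additionally weigh CPU utilization and transfer time, as the abstract notes; the heap structure above is just the common core.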
Analysis of Multi Level Feedback Queue Scheduling Using Markov Chain Model wi...Eswar Publications
When a process gets the CPU, the scheduler has no idea of the precise amount of CPU time the process will need.
Process scheduling algorithms are used for better utilization of the CPU. Processes arrive at the CPU in large volume, which causes a long waiting queue. In multilevel feedback queue scheduling, the scheduler moves from one queue to another, following a transition mechanism, in order to perform the processing. This paper analyses a general transition scenario for the functioning of a CPU scheduler in a multilevel queue with a feedback mechanism. We propose a Markov chain model to analyze this transition phenomenon with a general class of scheduling schemes. A simulation study is performed to evaluate the comparative behaviour for varying values of α and d in the mathematical model.
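The steady-state analysis of such a transition model can be sketched with plain power iteration; the three-queue transition matrix below is illustrative, not the paper's model.

```python
# Sketch: long-run occupancy probabilities of a small multilevel-feedback-
# queue Markov chain, computed by repeatedly applying the transition matrix.

def steady_state(P, iters=1000):
    """Power iteration on a row-stochastic transition matrix P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# States: queues Q1, Q2, Q3; P[i][j] = probability of moving Qi -> Qj.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.5, 0.3],
     [0.1, 0.0, 0.9]]
pi = steady_state(P)
print([round(x, 3) for x in pi])  # → [0.2, 0.2, 0.6]
```

The vector satisfies πP = π, so in this invented example the lowest-priority queue Q3 absorbs most of the long-run load.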
Developing Scheduler Test Cases to Verify Scheduler Implementations In Time-T...ijesajournal
Although there is a "one-to-many" mapping between scheduling algorithms and scheduler implementations, only a few studies have discussed the challenges and consequences of translating between these two system models. It has been argued that a wide gap exists between scheduling theory and scheduling implementation in practical systems, and that this gap must be bridged to obtain effective validation of embedded systems. In this paper, we introduce a technique called "Scheduler Test Case" (STC) aimed at bridging the gap between scheduling algorithms and scheduler implementations in single-processor embedded systems built on Time-Triggered Co-operative (TTC) architectures. We demonstrate how the STC technique provides a simple and systematic way to document, verify (test) and compare various TTC scheduler implementations on particular hardware. STC is, however, a generic technique that provides a black-box tool for assessing and predicting the behaviour of representative implementation sets of any real-time scheduling algorithm.
TASK SCHEDULING USING AMALGAMATION OF METAHEURISTICS SWARM OPTIMIZATION ALGOR...Journal For Research
Cloud computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to the user while achieving higher server utilization and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimisation Algorithms (MSOA) for the problem of VM placement and task scheduling in the cloud environment. MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm, called MSOACS, and is evaluated using the CloudSim simulator. The results show a reduction in makespan and an increase in utilization ratio for the proposed MSOACS algorithm compared with the SO algorithm and Randomised Allocation (RA).
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
Embedded systems with multi-core processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multi-core systems with effective CPU usage and lower response time. The priority of the tasks is varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core MultiLevel Feedback Queue (MCMLFQ) to manage task execution in the multi-core system.
Scheduling and Allocation Algorithm for an Elliptic Filterijait
A new evolutionary scheduling and allocation algorithm is developed for an elliptic filter. The elliptic filter is scheduled and allocated in the proposed work, which is then compared with different scheduling algorithms such as the As Soon As Possible algorithm, the As Late As Possible algorithm, the Mobility Based Shift algorithm, FDLS, FDS and MOGS. In this paper, execution time and resource utilization are calculated using the different scheduling algorithms for an elliptic filter, and it is reported that the proposed scheduling and allocation increases the speed of operation by reducing the number of control steps. The proposed work also analyses the magnitude, phase and noise responses of the elliptic filter for the different scheduling algorithms.
Stochastic Analysis of a Cold Standby System with Server Failureinventionjournals
This paper throws light on the stochastic analysis of a cold standby system having two identical units. The operative unit may fail directly from the normal mode, and the cold standby unit can fail owing to remaining unused for a longer period of time. There is a single server, who may himself fail while attending to a failed unit; after receiving treatment, the server resumes service efficiently. The times to treatment and repair follow negative exponential distributions, whereas the distributions of unit and server failure are taken as arbitrary with different probability density functions. The expressions of various stochastic measures are analyzed in the steady state using the semi-Markov process and the regenerative point technique. Graphs are sketched for arbitrary values of the parameters to delineate the behavior of some important performance measures and to check the efficacy of the system model under such situations.
Image segmentation by modified map ml estimationsijesajournal
Though numerous algorithms exist to perform image segmentation, there are several issues related to the execution time of these algorithms. Image segmentation is essentially a labeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper this technique is modified in such a way that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with those of existing algorithms. The algorithm executes faster than the existing algorithm, giving automatic segmentation without any human intervention, and its results match image edges closely, in line with human perception.
Task scheduling methodologies for high speed computing systemsijesajournal
High speed computing meets ever increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models which provide parallelism for homogeneous and heterogeneous computing systems, as well as various scheduling methodologies targeted at high speed computing systems. The comparative study of scheduling methodologies for high speed computing systems is carried out based on the attributes of both platform and application: execution time, nature of the task, task handling capability, and type of host and computing platform. Finally, a summary chart is prepared; it demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high speed computing platform for real-time applications.
Burton Smith passed away on April 3, 2018. He gave this talk in 2014.
"This talk proposes a scheme for addressing the operating system resource management problem. Resource management is the dynamic allocation and de-allocation by an operating system of processor cores, memory pages, and various types of bandwidth to computations that compete for those resources. The objective is to allocate resources for optimized responsiveness based on the finite resources available."
Read the full story: https://wp.me/p3RLHQ-ihD
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Task Scheduling Algorithm for Multicore Processor Systems with Turbo Boost an...Naoki Shibata
Yosuke Wakisaka, Naoki Shibata, Keiichi Yasumoto, Minoru Ito, and Junji Kitamichi : Task Scheduling Algorithm for Multicore Processor Systems with Turbo Boost and Hyper-Threading, In Proc. of The 2014 International Conference on Parallel and Distributed Processing Techniques and Applications(PDPTA'14), pp. 229-235
In this paper, we propose a task scheduling algorithm for multiprocessor systems with Turbo Boost and Hyper-Threading technologies. The proposed algorithm minimizes the total computation time taking account of dynamic changes of the processing speed by the two technologies, in addition to the network contention among the processors. We constructed a clock speed model with which the changes of processing speed with Turbo Boost and Hyper-threading can be estimated for various processor usage patterns. We then constructed a new scheduling algorithm that minimizes the total execution time of a task graph considering network contention and the two technologies. We evaluated the proposed algorithm by simulations and experiments with a multiprocessor system consisting of 4 PCs. In the experiment, the proposed algorithm produced a schedule that reduces the total execution time by 36% compared to conventional methods which are straightforward extensions of an existing method.
A study of load distribution algorithms in distributed schedulingeSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Multilevel Hybrid Cognitive Load Balancing Algorithm for Private/Public Cloud...IDES Editor
Cloud computing is an emerging computing paradigm. It aims to share data, resources and services transparently among the users of a massive grid. Although the industry has started selling cloud-computing products, research challenges in various areas, such as architectural design, task decomposition, task distribution, load distribution, load scheduling and task coordination, remain unclear. Therefore, we study methods to reason about and model cloud computing as a step towards identifying fundamental research questions in this paradigm. In this paper, we propose a model for load distribution in cloud computing by modeling such systems as cognitive systems, using aspects that depend not only on the present state of the system but also on a set of predefined transitions and conditions. The entire model is then bundled to cater to the task of job distribution using the concept of application metadata. Later, we draw a qualitative and simulation-based summary of the proposed model. We finally evaluate the results and draw a series of key conclusions about cloud computing for future exploration.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING - ijccsa
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is a policy for allocating physical servers to virtual machines, while load balancing on virtual machines is a policy for allocating resources from physical servers to virtual machines for the tasks or applications running on them. Depending on whether the user's request concerns SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), an appropriate load balancing policy is required. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared with the Avoid Deadlocks [5], Maxmin [6] and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
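As an illustration of the virtual machine-level policy the abstract describes, here is a minimal greedy sketch (not the paper's algorithm; the names and the response-time estimate are hypothetical) that sends each incoming task to the VM with the smallest estimated response time:

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    name: str
    speed: float          # task units processed per second
    pending: float = 0.0  # total work (in task units) already queued

def assign_task(vms, task_size):
    # Estimated response time on a VM = (queued work + new task) / speed;
    # pick the VM that minimizes it, then queue the task there.
    best = min(vms, key=lambda vm: (vm.pending + task_size) / vm.speed)
    best.pending += task_size
    return best.name

vms = [VirtualMachine("vm1", speed=2.0), VirtualMachine("vm2", speed=1.0)]
placements = [assign_task(vms, 4.0) for _ in range(3)]
```

With this estimate, the faster VM absorbs work until its queue makes the slower VM competitive, which is the basic behaviour such response-time-driven balancers aim for.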
Analysis of Multi Level Feedback Queue Scheduling Using Markov Chain Model wi... - Eswar Publications
When a process gets the CPU, the scheduler has no idea of the precise amount of CPU time the process will need. Process scheduling algorithms are used for better utilization of the CPU. Processes often arrive at the CPU in large volumes, causing a long waiting queue. In multilevel feedback queue scheduling, the scheduler moves from one queue to another, following a transition mechanism, in order to perform the processing. This paper analyses a general transition scenario for the functioning of a CPU scheduler in a multilevel queue with a feedback mechanism. We propose a Markov chain model to analyze this transition phenomenon with a general class of scheduling schemes. A simulation study is performed for a comparative evaluation with varying values of α and d in the mathematical model.
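The queue-to-queue transition mechanism described above can be sketched as a plain multilevel feedback queue, where a process that exhausts its time quantum is demoted to the next (lower-priority, longer-quantum) queue. This is a generic illustration, not the paper's Markov chain model:

```python
from collections import deque

def mlfq_run(tasks, quanta):
    # One FIFO queue per level; quanta[i] is the time slice at level i.
    queues = [deque() for _ in quanta]
    for name, burst in tasks:
        queues[0].append((name, burst))   # all processes start at the top level
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty level
        name, burst = queues[level].popleft()
        run = min(burst, quanta[level])
        order.append((name, run))
        if burst - run > 0:
            # Quantum exhausted: demote (the bottom level keeps the process).
            queues[min(level + 1, len(quanta) - 1)].append((name, burst - run))
    return order

order = mlfq_run([("A", 5), ("B", 2)], quanta=[2, 4])
```

Here process A uses its 2-unit quantum, is demoted, B finishes within its quantum at the top level, and A then completes at the second level.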
Developing Scheduler Test Cases to Verify Scheduler Implementations In Time-T... - ijesajournal
Although there is a “one-to-many” mapping between scheduling algorithms and scheduler implementations, only a few studies have discussed the challenges and consequences of translating between these two system models. It has been argued that a wide gap exists between scheduling theory and scheduling implementation in practical systems, and that this gap must be bridged to obtain effective validation of embedded systems. In this paper, we introduce a technique called “Scheduler Test Case” (STC) aimed at bridging the gap between scheduling algorithms and scheduler implementations in single-processor embedded systems implemented using Time-Triggered Co-operative (TTC) architectures. We demonstrate how the STC technique provides a simple and systematic way of documenting, verifying (testing) and comparing various TTC scheduler implementations on particular hardware. Moreover, STC is a generic technique that provides a black-box tool for assessing and predicting the behaviour of representative implementation sets of any real-time scheduling algorithm.
TASK SCHEDULING USING AMALGAMATION OF METAHEURISTIC SWARM OPTIMIZATION ALGOR... - Journal For Research
Cloud Computing is the latest networking technology and a popular archetype for hosting applications and delivering services over the network. The foremost technology of cloud computing is virtualization, which enables building applications, dynamically sharing resources and providing diverse services to cloud users. With virtualization, a service provider can guarantee Quality of Service to users while achieving higher server consolidation and energy efficiency. One of the most important challenges in the cloud computing environment is the VM placement and task scheduling problem. This paper focuses on Metaheuristic Swarm Optimisation Algorithms (MSOA) to deal with the problem of VM placement and task scheduling in a cloud environment. MSOA is a simple parallel algorithm that can be applied in different ways to resolve task scheduling problems. The proposed algorithm, called MSOACS, is an amalgamation of the SO algorithm and the Cuckoo Search (CS) algorithm. It is evaluated using the CloudSim simulator. The results show a reduction in makespan and an increase in utilization ratio for the proposed MSOACS algorithm compared with SOA algorithms and Randomised Allocation (RA).
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... - IJECEIAES
Embedded systems with multi-core processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multi-core systems with effective CPU usage and lower response time. The priority of tasks is varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core MultiLevel Feedback Queue (MCMLFQ) to manage task execution in the multi-core system.
Scheduling and Allocation Algorithm for an Elliptic Filter - ijait
A new evolutionary algorithm for scheduling and allocation is developed for an elliptic filter. The elliptic filter is scheduled and allocated in the proposed work, which is then compared with different scheduling algorithms such as As Soon As Possible, As Late As Possible, Mobility Based Shift, FDLS, FDS and MOGS. In this paper, execution time and resource utilization are calculated using the different scheduling algorithms for an elliptic filter, and it is reported that the proposed scheduling and allocation increases the speed of operation by reducing the number of control steps. The proposed work also analyses the magnitude, phase and noise responses of the elliptic filter under the different scheduling algorithms.
Stochastic Analysis of a Cold Standby System with Server Failure - inventionjournals
This paper throws light on the stochastic analysis of a cold standby system having two identical units. The operative unit may fail directly from normal mode, and the cold standby unit can fail owing to remaining unused for a longer period of time. There is a single server, who may also fail while a unit is undergoing repair. After having been treated, the server may resume performing the service efficiently. The time to take treatment and the repair activity follow negative exponential distributions, whereas the distributions of unit and server failure are taken as arbitrary with different probability density functions. The expressions for various stochastic measures are analyzed in steady state using a semi-Markov process and the regenerative point technique. Graphs are sketched for arbitrary values of the parameters to delineate the behavior of some important performance measures and to check the efficacy of the system model under such situations.
Image segmentation by modified map ml estimations - ijesajournal
Though numerous algorithms exist to perform image segmentation, several issues relate to the execution time of these algorithms. Image segmentation is essentially a label-relabeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper this technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms. The algorithm executes faster than the existing algorithm, giving automatic segmentation without any human intervention, and its results match image edges closely, in line with human perception.
Task scheduling methodologies for high speed computing systems - ijesajournal
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as various scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study of scheduling methodologies for high-speed computing systems is carried out based on the attributes of both platform and application: execution time, nature of the task, task handling capability, and type of host and computing platform. Finally, the summary chart demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
Evaluating the performance and behaviour of rt xen - ijesajournal
Virtualization combined with real-time support is being used in an increasing number of use cases, ranging from embedded systems to enterprise computing. One of the most popular open-source virtualization platforms is Xen, but its current implementation does not qualify it as a candidate for time-critical systems. Researchers and developers have extended it and claim that their versions can be used efficiently in such systems. RT-Xen is one of these versions: a virtualization platform based on Compositional Scheduling that uses a suite of fixed-priority schedulers. This paper evaluates the performance and scheduling behaviour of RT-Xen. The test results show that only two of the proposed schedulers are suitable for soft real-time applications.
Dominant block guided optimal cache size estimation to maximize ipc of embedd... - ijesajournal
Embedded system software is highly constrained from the viewpoints of performance, memory footprint, energy consumption and implementation cost. It is always desirable to obtain better Instructions per Cycle (IPC), and the instruction cache makes a major contribution to improving IPC. Cache memories are realized on the same chip where the processor is running, which considerably increases the system cost as well. Hence, a trade-off must be maintained between cache size and the performance improvement offered. The number of cache lines and the cache line size are important parameters in cache design, and the design space for caches is quite large. It is time-consuming to execute a given application with different cache sizes on an instruction set simulator (ISS) to figure out the optimal cache size. In this paper, a technique is proposed to identify the number of cache lines and the cache line size for the L1 instruction cache that will offer the best or nearly best IPC. Cache size is derived, at a higher abstraction level, from basic block analysis in the Low Level Virtual Machine (LLVM) environment. The cache size estimated from the LLVM environment is cross-validated by simulating the set of benchmark applications with different cache sizes in SimpleScalar’s out-of-order simulator. The proposed method appears superior in terms of estimation accuracy and/or estimation time compared with existing methods for estimating the optimal cache size parameters (cache line size, number of cache lines).
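The two parameters the abstract names (number of cache lines, cache line size) can be explored with a toy direct-mapped cache simulation over an address trace. This is only an illustrative sketch of why the design space matters, not the paper's LLVM-based estimation method:

```python
def hit_rate(trace, num_lines, line_size):
    # Direct-mapped cache: each address maps to line (block % num_lines);
    # a hit occurs when the stored block tag matches.
    tags = [None] * num_lines
    hits = 0
    for addr in trace:
        block = addr // line_size
        idx = block % num_lines
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block  # miss: fill the line
    return hits / len(trace)

# Sequential fetch of 64 byte-addresses with 16-byte lines: 4 cold misses.
rate = hit_rate(list(range(64)), num_lines=4, line_size=16)
```

Sweeping `num_lines` and `line_size` over such a trace shows the accuracy/cost trade-off the paper's estimation technique tries to shortcut without full ISS simulation.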
The presentation was given at a seminar of the InfinIT interest group Højniveausprog til indlejrede systemer (High-Level Languages for Embedded Systems) on 12 March 2014. Read more about the interest group here: http://infinit.dk/dk/interessegrupper/hoejniveau_sprog_til_indlejrede_systemer/hoejniveau_sprog_til_indlejrede_systemer.htm
Compositional Analysis for the Multi-Resource Server - Ericsson
The Multi-Resource Server (MRS) technique has been proposed to enable predictable execution of memory intensive real-time applications on COTS multi-core platforms.
Microcontroller systems is one of the vital subjects taken by students during their course of study at universities and colleges of science, engineering and technology around the world. In this paper, we address the problem of student comprehension and skill development in embedded system design using the microcontroller chip PIC16F887 through demonstration of hands-on laboratory experiments. Development of software code and circuit diagram simulation were also carried out, to help students connect their theoretical knowledge with practical experience. Each of the experiments was carried out using the BK300 development board, the PICKit3 programmer and Proteus 8.0 software. We draw on our years of experience in teaching the microcontroller course and on the active involvement of students, as manifested in complete, in-depth hands-on laboratory projects on real-life problem solving. The laboratory session with the development board and software demonstrated in this article is unambiguous. A future embedded system laboratory session could be designed around the ATMel lines of microcontrollers.
Mixed Criticality Systems and Many-Core Platforms - AdaCore
An increasingly important trend in the design of real-time and embedded systems is the integration of components with different levels of criticality onto a common hardware platform. At the same time, these platforms are migrating from single cores to multi-cores and, in the future, manycore architectures. Criticality is a designation of the level of assurance against failure needed for a system component.
A mixed criticality system (MCS) is one that has two or more distinct levels (for example safety critical, mission critical and non-critical). Perhaps up to five levels may be identified (see, for example, the IEC 61508, DO-178B, DO-254 and ISO 26262 standards). In this talk some of the techniques being developed for MCS will be outlined, as will schemes by which the different assurance methods for each criticality level can be exploited to reduce resource usage.
Timing verification of automotive communication architecture using quantile... - RealTime-at-Work (RTaW)
Slides of a paper at ERTSS'2014 co-authored by Nicolas NAVET (University of Luxembourg), Shehnaz LOUVART (Renault), Jose VILLANUEVA (Renault), Sergio CAMPOY-MARTINEZ (Renault) and Jörn MIGGE (RealTime-at-Work). Early stage timing verification on CAN traditionally relies on simulation and schedulability analysis, also known as worst-case response time (WCRT) analysis. Despite recent progress, the latter technique remains pessimistic, especially in complex networking architectures with gateways and heterogeneous communication stacks. Indeed, there are practical cases where no exact WCRT analysis is available and merely upper bounds on the response times can be derived, on the basis of which unnecessarily conservative design choices may be made. Simulation, on the other hand, does not provide any guarantees per se and, in the context of critical networks, should only be used along with an adequate methodology. In this paper, we argue for the use of quantiles of the response time distribution as performance metrics providing an adjustable trade-off between safety and resource usage optimization. We discuss how the exact value of the quantile to consider should be chosen with regard to the criticality of the frames, and illustrate the approach on two typical automotive use-cases.
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AN... - IJCSEA Journal
Energy consumption is a critical design issue in real-time systems, especially in battery-operated systems. Maintaining high performance while extending the battery life between charges is an interesting challenge for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow us to adjust supply voltage and processor frequency to adapt to the workload demand for better energy management. Usually, higher processor voltage and frequency lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have been developed recently to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application, comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme. This also considers the run-time behaviour of tasks, to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) Slack Distribution Technique. Our proposed algorithm achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
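The voltage/frequency trade-off described above can be illustrated with a simple normalized model: with dynamic power roughly proportional to f·V² and supply voltage scaling with frequency, energy per job scales as wcet·f², so running at the lowest frequency that still meets the deadline saves energy. A hedged sketch (an idealized model, not the paper's WFCFS algorithm; frequency levels are hypothetical):

```python
FREQ_LEVELS = [0.25, 0.5, 0.75, 1.0]  # normalized frequencies (1.0 = full speed)

def pick_frequency(wcet_at_fmax, deadline, levels=FREQ_LEVELS):
    # Lowest frequency whose scaled execution time still meets the deadline.
    for f in sorted(levels):
        if wcet_at_fmax / f <= deadline:
            return f
    return max(levels)  # no slack: run at full speed

def energy(wcet_at_fmax, f):
    # E ~ C * V^2 * f * t with V ~ f and t = wcet/f, hence E ~ wcet * f^2.
    return wcet_at_fmax * f ** 2

f = pick_frequency(wcet_at_fmax=4.0, deadline=10.0)
saving = 1.0 - energy(4.0, f) / energy(4.0, 1.0)
```

With 6 units of slack the job can run at half speed, and under this quadratic model that cuts dynamic energy by 75%, which is the intuition behind slack-reclaiming DVS schedulers.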
SWARM INTELLIGENCE SCHEDULING OF SOFT REAL-TIME TASKS IN HETEROGENEOUS MULTIP... - ecij
In this paper, a hybrid swarm intelligence algorithm (named VNABCSA) is presented for the scheduling of non-preemptive soft real-time tasks in heterogeneous multiprocessor platforms. The method is based on a combination of artificial bee colony and simulated annealing algorithms. The multi-objective function of the VNABCSA algorithm is defined to minimize the total tardiness of all tasks, total number of utilized processors, total completion time, total waiting time for all tasks, and total waiting time for all processors. We introduce a hybrid variable neighborhood search strategy to improve the convergence speed of the algorithm. Simulation results demonstrate the efficiency of the proposed methodology as compared with the
existing scheduling algorithms.
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems - iosrjce
An imprecise computation model is used in a dynamic scheduling algorithm with a heuristic function to schedule task sets. A task is characterized by its ready time, worst-case computation time, deadline and resource requirements. A task that fails to meet its deadline and resource requirements on time is split into a mandatory part and an optional part. These sub-tasks can execute concurrently on multiple cores, thus exploiting the parallelism provided by the multi-core system. The mandatory part produces an acceptable result, while the optional part refines the result further. To study the effectiveness of the proposed scheduling algorithm, extensive simulation studies have been carried out. The performance of the proposed scheduling algorithm is compared with the myopic and improved myopic scheduling algorithms. The simulation studies show that the schedulability of the task-split myopic algorithm is always higher than that of the myopic and improved myopic algorithms.
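The mandatory/optional split of the imprecise computation model can be sketched as follows. This is a generic single-budget illustration under an assumed capacity, not the paper's heuristic multi-core scheduler:

```python
def schedule_imprecise(tasks, capacity):
    # tasks: (name, mandatory, optional) execution times; capacity: CPU time budget.
    executed = {}
    remaining = capacity
    for name, mandatory, _ in tasks:       # mandatory parts: acceptable result
        executed[name] = mandatory
        remaining -= mandatory
    for name, _, optional in tasks:        # optional parts: refine the result
        take = min(optional, max(remaining, 0.0))
        executed[name] += take
        remaining -= take
    return executed

done = schedule_imprecise([("T1", 2.0, 3.0), ("T2", 1.0, 4.0)], capacity=5.0)
```

All mandatory work runs first, so every task yields an acceptable result; leftover budget then refines results in order, which is the essential degradation behaviour of imprecise computation.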
SCHEDULING DIFFERENT CUSTOMER ACTIVITIES WITH SENSING DEVICE - ijait
Most periodic tasks are assigned to processors using a partition scheduling policy after checking feasibility conditions. A new approach is proposed for scheduling different activities together with one periodic task within the system. In this paper, control strategies are identified for allocating different types of tasks (activities) to individual computing elements such as smartphones or microphones. In our simulation model, the aperiodic tasks generated by each periodic task are taken into consideration. Different sets of periodic and aperiodic tasks are scheduled together. This new approach shows that scheduling all the different activities with one periodic task leads to better performance.
We study scheduling algorithms for loading data feeds into real time data warehouses, which are used in applications such as IP network monitoring, online financial trading, and credit card fraud detection. In these applications, the warehouse collects a large number of streaming data feeds that are generated by external sources and arrive asynchronously. We discuss update scheduling in streaming data warehouses, which combine the features of traditional data warehouses and data stream systems. In our setting, external sources push append-only data streams into the warehouse with a wide range of inter-arrival times. While traditional data warehouses are typically refreshed during downtimes, streaming warehouses are updated as new data arrive. In this paper we develop a theory of temporal consistency for stream warehouses that allows for multiple consistency levels. We model the streaming warehouse update problem as a scheduling problem, where jobs correspond to processes that load new data into tables, and whose objective is to minimize data staleness over time.
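The staleness objective can be illustrated with a minimal scheduler that always refreshes the stalest table next, where a table's staleness is the time since its newest loaded data. This is a sketch of the general idea only, not the paper's multi-level consistency model (the table names are hypothetical):

```python
def pick_table_to_update(last_loaded, now):
    # Staleness of a table = time since its newest loaded data;
    # greedily refresh the stalest table next to bound overall staleness.
    return max(last_loaded, key=lambda table: now - last_loaded[table])

last_loaded = {"flows": 3.0, "trades": 7.0, "alerts": 5.0}  # load timestamps
stalest = pick_table_to_update(last_loaded, now=10.0)
```

A real streaming warehouse would weight this choice by table priority and load cost, but the greedy stalest-first rule captures the core of the staleness-minimization framing.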
GENERATIVE SCHEDULING OF EFFECTIVE MULTITASKING WORKLOADS FOR BIG-DATA ANALYT... - IAEME Publication
Scheduling of dynamic and multitasking workloads for big-data analytics is a challenging issue, as it requires a significant amount of parameter sweeping and iterations. Therefore, real-time scheduling becomes essential to increase the throughput of many-task computing. The difficulty lies in obtaining a series of optimal yet responsive schedules. In dynamic scenarios, such as virtual clusters in cloud, scheduling must be processed fast enough to keep pace with the unpredictable fluctuations in the workloads to optimize the overall system performance. In this paper, ordinal optimization using rough models and fast simulation is introduced to obtain suboptimal solutions in a much shorter timeframe.
Task scheduling is an important aspect of improving the utilization of resources in Cloud Computing. This paper proposes a Divide and Conquer based approach to the heterogeneous earliest finish time algorithm. The proposed system works in two phases. In the first phase it assigns ranks to the incoming tasks with respect to their size. In the second phase, it assigns and manages the tasks on virtual machines, taking the idle time of each virtual machine into consideration. This helps to achieve more effective resource utilization in Cloud Computing. Experimental results using the Cybershake Scientific Workflow show that the proposed Divide and Conquer HEFT performs better than HEFT in terms of tasks' finish time and response time, demonstrating the superior performance of the proposed DCHEFT.
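The two phases described above, ranking tasks by size and then placing each task on the virtual machine with the earliest idle time, can be sketched as follows. This is an illustrative simplification (uniform VM speeds, hypothetical inputs), not the full DCHEFT algorithm:

```python
def dcheft_sketch(task_sizes, vm_count):
    idle_at = [0.0] * vm_count                       # time each VM next becomes idle
    placement = []
    for size in sorted(task_sizes, reverse=True):    # phase 1: rank tasks by size
        vm = idle_at.index(min(idle_at))             # phase 2: earliest-idle VM
        placement.append((size, vm))
        idle_at[vm] += size
    return placement, max(idle_at)                   # makespan = latest idle time

placement, makespan = dcheft_sketch([3, 5, 2], vm_count=2)
```

Largest-first placement onto the earliest-idle machine is the classic longest-processing-time heuristic; heterogeneous VM speeds would replace `idle_at[vm] + size` with per-VM finish-time estimates.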
Integrating fault tolerant scheme with feedback control scheduling algorithm... - ijics
In order to provide Quality of Service (QoS) in open and unpredictable environments, the Feedback based Control Scheduling Algorithm (FCSA) is designed to keep processor utilization at the scheduling utilization bound. FCSA controls CPU utilization by assigning task periods that optimize overall control performance, meeting deadlines through a performance control feedback loop even if task execution times are unpredictable. The current FCSA does not ensure Fault Tolerance (FT) while providing QoS in terms of CPU utilization and resource management. In order to ensure that tasks meet their deadlines even in the presence of faults, an FT scheme has to be integrated at the control scheduling co-design level. This paper presents a novel approach to integrating an FT scheme with FCSA for real-time embedded systems. This procedure is especially important for the control scheduling co-design of embedded systems.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL... - ijdpsjournal
Parallel job scheduling is an important field of research in Computer Science. Finding the best-suited processor in high-performance or cluster computing for user-submitted jobs plays an important role in measuring system performance. A new scheduling technique called communication-aware scheduling is devised, capable of handling serial jobs, parallel jobs, mixed jobs and dynamic jobs. This work compares communication-aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication-aware scheduling performs better than the available techniques.
DESIGN OF AN EMBEDDED SYSTEM: BEDSIDE PATIENT MONITOR - ijesajournal
Embedded systems, ranging from tiny microcontroller-based sensor devices to mobile smartphones, have a vast variety of applications. However, the literature offers no up-to-date system-level design of embedded hardware and software; academic publications mainly focus on improving specific features of embedded software/hardware and on embedded system designs for specific applications. Moreover, commercially available embedded systems are not disclosed to researchers in the literature. Therefore, in this paper we first present how to design a state-of-the-art embedded system using emerging hardware and software technologies. Bedside patient monitor devices used in the intensive care units of hospitals are also classified as embedded systems and run sophisticated software and algorithms for better diagnosis of diseases. We reveal the architecture of our commercially available bedside patient monitor to provide a design example of embedded systems relating to these emerging technologies.
PIP-MPU: FORMAL VERIFICATION OF AN MPU-BASED SEPARATION KERNEL FOR CONSTRAINED... - ijesajournal
Pip-MPU is a minimalist separation kernel for constrained devices (scarce memory and power resources). In this work, we demonstrate high assurance of Pip-MPU's isolation property through formal verification. Pip-MPU offers user-defined, on-demand, multiple isolation levels guarded by the Memory Protection Unit (MPU). Pip-MPU derives from the Pip protokernel, with a full code refactoring to adapt to the constrained environment, and targets equivalent security properties. The proofs verify that the memory blocks loaded into the MPU adhere to the global partition tree model. We provide the basis of the MPU formalisation and a demonstration of the formal verification strategy on two representative kernel services. The publicly released proofs have been implemented and checked using the Coq Proof Assistant for three kernel services, representing around 10,000 lines of proof. To our knowledge, this is the first formal verification of an MPU-based separation kernel. The verification process helped discover a critical isolation-related bug.
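The isolation property hinges on the memory blocks loaded into the MPU forming non-overlapping regions within the partition tree. As an illustration only (Pip-MPU's actual invariants are stated and proved in Coq, and are not reproduced here), a minimal Python sketch of the pairwise-disjointness check at the heart of such an invariant:

```python
def regions_disjoint(blocks):
    """True if the given (start, end) address ranges never overlap.

    Pairwise disjointness of MPU regions is the kind of invariant an
    isolation proof must show is preserved by every kernel service.
    """
    spans = sorted(blocks)  # sort by start address
    return all(cur_end <= nxt_start
               for (cur_start, cur_end), (nxt_start, nxt_end)
               in zip(spans, spans[1:]))

# Two separated blocks satisfy the invariant; an overlapping pair violates it.
print(regions_disjoint([(0x000, 0x100), (0x200, 0x300)]))  # True
print(regions_disjoint([(0x000, 0x250), (0x200, 0x300)]))  # False
```

A mechanised proof must establish this for every reachable kernel state, not just sample inputs, which is what makes the Coq development substantial.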
International Journal of Embedded Systems and Applications (IJESA)
International Journal of Embedded Systems and Applications (IJESA) is a quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Embedded Systems and applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Embedded Systems and establishing new collaborations in these areas.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Embedded Systems & applications.
Call for Papers - 15th International Conference on Wireless & Mobile Network (...
15th International Conference on Wireless & Mobile Network (WiMo 2023) is dedicated to addressing the challenges in the areas of wireless & mobile networks. The Conference looks for significant contributions to the Wireless and Mobile computing in theoretical and practical aspects. The Wireless and Mobile computing domain emerges from the integration among personal computing, networks, communication technologies, cellular technology, and the Internet Technology. The modern applications are emerging in the area of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both the design and analysis in the context of mobile, wireless, ad-hoc, and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and Mobile computing concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Call for Papers - International Conference on NLP & Signal (NLPSIG 2023)
Scope & Topics
International Conference on NLP & Signal (NLPSIG 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal and Natural Language Processing (NLP).
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to:
Topics of interest include, but are not limited to, the following
Chunking/Shallow Parsing
Dialogue and Interactive Systems
Deep learning and NLP
Discourse and Pragmatics
Information Extraction, Retrieval, Text Mining
Interpretability and Analysis of Models for NLP
Language Grounding to Vision, Robotics and Beyond
Lexical Semantics
Linguistic Resources
Machine Learning for NLP
Machine Translation
NLP and Signal Processing
NLP Applications
Ontology
Paraphrasing/Entailment/Generation
Parsing/Grammatical Formalisms
Phonology, Morphology
POS tagging
Question Answering
Resources and Evaluation
Semantic Processing
Sentiment Analysis, Stylistic Analysis, and Argument Mining
Speech and Multimodality
Speech Recognition and Synthesis
Spoken Language Processing
Statistical and Knowledge based methods
Summarization
Theory and Formalism in NLP
Signal Processing & NLP
Computer Vision, Image Processing & NLP
NLP, AI & Signal
Paper Submission
Authors are invited to submit papers through the conference Submission System by May 06, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by International Journal on Cybernetics & Informatics (IJCI) (Confirmed).
Selected papers from NLPSIG 2023, after further revisions, will be published in the special issue of the following journals.
International Journal on Natural Language Computing (IJNLC)
International Journal of Ubiquitous Computing (IJU)
International Journal of Data Mining & Knowledge Management Process (IJDKP)
Signal & Image Processing : An International Journal (SIPIJ)
International Journal of Ambient Systems and Applications (IJASA)
International Journal of Grid Computing & Applications (IJGCA)
Important Dates
Submission Deadline : May 06, 2023
Authors Notification : May 25, 2023
Final Manuscript Due : June 08, 2023
11th International Conference on Software Engineering & Trends (SE 2023)
May 27 ~ 28, 2023, Vancouver, Canada
https://acsit2023.org/se/index
Scope & Topics
11th International Conference on Software Engineering & Trends (SE 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of software engineering & applications.
Topics of interest include, but are not limited to, the following
The Software Process
Software Engineering Practice
Web Engineering
Quality Management
Managing Software Projects
Advanced Topics in Software Engineering
Multimedia and Visual Software Engineering
Software Maintenance and Testing
Languages and Formal Methods
Web-based Education Systems and Learning Applications
Software Engineering Decision Making
Knowledge-based Systems and Formal Methods
Search Engines and Information Retrieval
Paper Submission
Authors are invited to submit papers through the conference Submission System by April 08, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by Computer Science Conference Proceedings (H index 35) in Computer Science & Information Technology (CS & IT) series (Confirmed).
Selected papers from SE 2023, after further revisions, will be published in the special issue of the following journals.
The International Journal of Software Engineering & Applications (IJSEA) -ERA indexed
International Journal of Computer Science, Engineering and Applications (IJCSEA)
Important Dates
Submission Deadline : April 08, 2023
Authors Notification : April 29, 2023
Final Manuscript Due : May 06, 2023
PERFORMING AN EXPERIMENTAL PLATFORM TO OPTIMIZE DATA MULTIPLEXING
This article is based on preliminary work on the OSI model management layers to optimize industrial wired data transfer over low-data-rate wireless technology. Our previous contribution dealt with the development of a demonstrator carrying CAN bus transfer frames (1 Mbps) over a low-rate wireless channel provided by Zigbee technology. In order to be compatible with all the other industrial protocols, we describe in this paper our contribution to the design of an innovative Wireless Device (WD) and a software tool, which aim to determine the best architecture (hardware/software) and wireless technology to use, taking into account the wired protocol requirements. To validate the proper functioning of this WD, we will develop an experimental platform to test the different strategies provided by our software tool. We can consequently establish which configuration (hardware/software) is best by supplying the required parameters of the wired protocol as inputs (load, binary rate, acknowledge timeout) and analysing the proposed WD architecture characteristics as outputs (delay introduced by the system, buffer size needed, CPU speed, power consumption) against the input requirements. It will be important to know whether the gain comes from a hardware strategy, e.g. with a hardware accelerator, or a software strategy with a more perf
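To give a feel for the input/output relationship such a tool evaluates, here is a first-order sizing sketch. The rates and burst length below are hypothetical illustration values, not figures from the article: a buffer must absorb the backlog that accumulates while the wired side bursts faster than the radio can drain.

```python
def min_buffer_bits(wired_bps: float, wireless_bps: float, burst_s: float) -> float:
    """First-order buffer size: backlog accumulated during one burst.

    While the wired bus delivers faster than the radio forwards,
    the backlog grows at (wired rate - wireless rate).
    """
    backlog_rate = max(wired_bps - wireless_bps, 0.0)
    return backlog_rate * burst_s

# Hypothetical: a 10 ms CAN burst at 1 Mbps over a 250 kbps Zigbee link.
print(min_buffer_bits(1_000_000, 250_000, 0.010))  # 7500.0 bits
```

A real evaluation would also account for protocol overhead, retransmissions and the acknowledge timeout, which is precisely why an experimental platform is needed.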
GENERIC SOPC PLATFORM FOR VIDEO INTERACTIVE SYSTEM WITH MPMC CONTROLLER
Today, a significant number of embedded systems focus on multimedia applications, with an almost insatiable demand for low-cost, high-performance, and low-power hardware. In this paper, we present a re-configurable and generic hardware platform for image and video processing. The proposed platform uses the benefits offered by the Field Programmable Gate Array (FPGA) to attain this goal. In this context, a prototype system is developed based on the Xilinx Virtex-5 FPGA with the integration of embedded processors, embedded memory, DDR, interface technologies, Digital Clock Managers (DCM) and the MPMC. The MPMC is an essential component for design performance tuning and real-time video processing. We demonstrate the important role of this interface in multi-video applications. In fact, for the successful deployment of DRAM it is mandatory to use a flexible and scalable interface. Our system introduces diverse modules, such as cut video detection and video zoom-in and zoom-out. This demonstrates the utility of the architecture as a universal video processing platform adaptable to different application requirements, and facilitates the development of video and image processing applications.
This paper presents an inverting buck-boost DC-DC converter design. A negative supply voltage is needed in a variety of applications, but only a few suitable DC-DC converters are available on the market. One example is OLED, a new display type especially suited for small digital camera or mobile phone displays. Design challenges that come up when negative voltages have to be handled on chip are discussed, such as continuous/discontinuous mode transition problems, negative voltage feedback and negative over-voltage protection. Both devices operate in a fixed-frequency PWM mode or alternatively in PFM mode. The single-inductor topology is called an inverting buck-boost converter, or simply an inverter. The proposed converter has been implemented in a TSMC 0.13-um 2P4M CMOS process, and the chip area is 325 x 300 um2.
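As background (a textbook relation, not a result from the paper): in continuous conduction mode an ideal inverting buck-boost sets its output via the duty cycle D as V_out = -V_in * D / (1 - D), so a 50% duty cycle mirrors the input negatively and larger D pushes the output further negative.

```python
def inverting_buck_boost_vout(vin: float, duty: float) -> float:
    """Ideal steady-state output of an inverting buck-boost in CCM.

    Vout = -Vin * D / (1 - D); losses and mode transitions are ignored.
    """
    if not 0.0 < duty < 1.0:
        raise ValueError("duty cycle must lie strictly between 0 and 1")
    return -vin * duty / (1.0 - duty)

# A 3.3 V input at 50% duty yields -3.3 V; 75% duty gives roughly -9.9 V.
print(inverting_buck_boost_vout(3.3, 0.5))
print(inverting_buck_boost_vout(3.3, 0.75))
```

The continuous/discontinuous transition problems the paper mentions arise exactly where this idealised relation stops holding, at light load.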
A Case Study: Task Scheduling Methodologies for High Speed Computing Systems
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as various scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study of scheduling methodologies for high-speed computing systems has been carried out based on the attributes of both platform and application: execution time, nature of task, task handling capability, and type of host and computing platform. Finally, the summary chart demonstrates the need for developing scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA in order to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of tasks as well as of computing resources as a cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitter are represented as task graphs, and the tasks are then distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that the MLF algorithm outperforms them in terms of efficient utilization of on-chip resources and also speeds up application execution.
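The core selection rule of minimum-laxity scheduling can be sketched briefly. The Python fragment below is illustrative only; the paper's MLF algorithm additionally weighs HRCS resource attributes in its cost function, and the task values here are hypothetical. It dispatches the ready task with the least laxity, i.e. the least slack remaining before its deadline becomes unreachable.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float   # absolute deadline
    remaining: float  # remaining execution time

def laxity(task: Task, now: float) -> float:
    """Slack left before the task can no longer finish by its deadline."""
    return task.deadline - now - task.remaining

def pick_next(ready: list, now: float) -> Task:
    """Minimum Laxity First: dispatch the ready task with the least slack."""
    return min(ready, key=lambda t: laxity(t, now))

# Hypothetical task set: the OFDM task has less slack, so it runs first.
ready = [Task("jpeg", deadline=10.0, remaining=4.0),
         Task("ofdm", deadline=8.0, remaining=5.0)]
print(pick_next(ready, now=0.0).name)  # ofdm (laxity 3.0 vs jpeg's 6.0)
```

Because laxity shrinks as time passes without execution, the rule must be re-evaluated at each scheduling point, which is what makes MLF a dynamic, list-based scheduler.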
The payment industry is largely aligned in its desire to create embedded payment systems ready for the modern digital age. The trend to embed payments into a software platform is often regarded as the first step towards a broader trend of embedded finance based on digital representations of fiat currencies. It became clear to our research team that there are no technologies and protocols that are protected against attacks by quantum computing and that enable automatic embedded payments, online or offline, with no fear of counterfeit, P2P or device-to-device, made in real time without intermediaries, in any denomination, even continuous payments per time or service, while preserving the privacy of all parties without enabling illicit activities. We therefore decided to utilize the Generic Innovation Engine [1], which is based on Artificial Intelligence Assistance innovation-acceleration methodologies and tools, in order to boost the progress of innovation of the necessary solutions. These methodologies accelerate innovation across the board; they propose a framework for natural and artificial intelligence collaboration in pursuit of an innovative (R&D) objective. The outcome of deploying these Artificial Innovation Assistant (AIA) methodologies was tens of patents that yield solutions, a few of which are described in this paper. We argue that a promising avenue for automated embedded payment systems to fulfil people's desire for privacy when conducting payments, and national security agencies' demand for quantum-safe security, could be based on DeFi and digital currency platforms that do not suffer from the flaws of DLT-based solutions, while introducing real advantages in all aspects: being quantum-resilient; enabling users to decide with whom, if at all, to share information, identity, transaction details, etc., all without trade-offs; complying with AML measures; and accommodating the potential for high transaction volumes. It is not legacy bank accounts, and it is not peer-dependent, nor a self-organizing network.
2nd International Conference on Computing and Information Technology Trends (CCITT 2023)
2nd International Conference on Computing and Information Technology Trends (CCITT 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computing and Information Technology Trends. The Conference looks for significant contributions to all major fields of Computer Science, Computer Engineering, Information Technology and Trends in theoretical and practical aspects.
20 Comprehensive Checklist of Designing and Developing a Website - Pixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Enhancing adoption of Open Source Libraries. A case study on Albumentations.AI - Vladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Climate Impact of Software Testing at Nordic Testing Days - Kari Kakkonen
My slides at Nordic Testing Days, 6.6.2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of global responsibility in addressing climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, which can then be measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Temporal workload analysis and its application to power aware scheduling
International Journal of Embedded Systems and Applications (IJESA) Vol.3, No.3, September 2013
TEMPORAL WORKLOAD ANALYSIS AND ITS APPLICATION TO POWER-AWARE SCHEDULING
Ye-In Seol¹, Jeong-Uk Kim¹ and Young-Kuk Kim²
¹ Green Energy Institute, Sangmyung University, Seoul, South Korea
² Dept. of Computer Sci. & Eng., Chungnam Nat'l University, Daejeon, South Korea
ABSTRACT
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic
voltage scaling (DVS). The basic idea of power-aware scheduling is to find slacks available to tasks and
to reduce the CPU's frequency or lower its voltage using the found slacks. In this paper, we introduce the
temporal workload of a system, which specifies how busy its CPU is to complete the tasks at the current
time. Analyzing temporal workload provides a sufficient condition for the schedulability of preemptive
earliest-deadline-first scheduling and an effective method to identify and distribute the slacks generated
by early-completed tasks. The simulation results show that the proposed algorithm reduces energy
consumption by 10-70% over the existing algorithm while its complexity remains O(n), so a practical
on-line scheduler can be devised using the proposed algorithm.
KEYWORDS
Power-aware Scheduling, Real-time Scheduling, Embedded Systems
1. INTRODUCTION
Energy consumption issues are becoming more important for mobile or battery-operated
embedded systems. Since the energy consumption of CMOS circuits, used in various
microprocessors, has a quadratic dependency on the operating voltage (E ∝ V_dd²)[2], lowering
the operating voltage of the circuits is a very useful method for reducing energy consumption. But
lowering the operating voltage also decreases the clock speed or frequency, so the execution times
of tasks are prolonged. This makes the problem more complex for hard real-time embedded systems,
where the timing constraints of tasks must be met.
There has been significant research effort on Dynamic Voltage Scaling (DVS) for real-time
systems to reduce energy consumption while satisfying timing constraints[1,4-6,8-11,13]. The
chance to lower the voltage occurs when there is slack for the currently executing real-time task.
Generally there are two sources of slack: when the sum of the worst-case execution times of the tasks
is below the CPU's processing capacity, and when a task completes early without consuming its
worst-case execution time. The main concern of DVS algorithms is how to identify those slacks and
how to distribute them.
DVS algorithms also depend on the scheduling policy, task model, and processor architecture. In
this paper, we adopt the Earliest-Deadline-First (EDF) scheduling policy, a periodic or sporadic
task model, and a uniprocessor system. EDF assigns dynamic priorities to ready tasks and is known
to be optimal for uniprocessor systems[7]. The periodic task model assumes tasks are released
periodically and their relative deadlines equal their respective periods. The sporadic task model
allows tasks to be released randomly, but places a restriction on the minimum inter-arrival time of
the same task. In these task models, we require a priori knowledge of the tasks, i.e., period,
worst-case execution time, minimum inter-arrival time, etc. Some works do not assume these kinds of
information[5], or adopt an aperiodic task model[11,12]. But many hard real-time applications are
classified as periodic or sporadic, so considering a periodic and/or sporadic task model is practical.
DOI : 10.5121/ijesa.2013.3301
In this paper, we introduce the notion of temporal workload, which reflects how busy the
system is. We will show that analyzing temporal workload provides a useful method for real-time
scheduling, especially EDF. We analyze the behavior of EDF scheduling using temporal
workload, and present some interesting results through which the features of EDF scheduling can
be understood more deeply.
We also apply the analysis results to power-aware scheduling, and present an algorithm which
combines the results of CC-EDF[9] with temporal workload analysis. The simulation results show
that the proposed algorithm, with affordable algorithmic complexity, reduces energy consumption
more than previous work.
The rest of the paper is organized as follows. In section 2, we present the system model and
notations adopted in this paper and introduce some previous works which motivate this work.
In section 3, we define the temporal workload and analyze EDF scheduling using it.
In section 4, we present a power-aware scheduling algorithm derived from the temporal workload
analysis. In section 5, simulation results are provided, and section 6 concludes and discusses
future directions.
2. MOTIVATION
In this section, we present the system model and summarize the related work that motivates this paper.
2.1. System Model
We consider a preemptive hard real-time system in which all tasks are periodic or sporadic and
mutually independent. The target processor is a DVS-enabled uniprocessor whose supply voltage
and frequency can be varied continuously within [vmin, vmax] and [fmin, fmax], respectively. Let
T = {T_1, ..., T_n} be a set of periodic or sporadic tasks. Each task is represented as
T_i = (P_i, C_i, D_i), where
P_i is the period for a periodic task or the minimum inter-arrival time for a sporadic task;
C_i is the worst-case computation time of task T_i at the maximum frequency;
D_i is the relative deadline of task T_i.
If an instance or job of task T_i is released at time r, then its absolute deadline d_i is r + D_i. We
will consider only tasks with D_i = P_i, so a task can be represented as T_i = (P_i, C_i). Also, the
following notations will be used:
u_i or u(T_i) : the worst-case utilization of task T_i at the maximum frequency, i.e., u_i = C_i / P_i.
U or U(T) : the worst-case total utilization of all tasks in the system, i.e., U = Σ u_i.
cc_i : task T_i's actual computation time, which should be less than or equal to C_i. For
uncompleted tasks, cc_i = C_i.
au_i : the actual utilization of task T_i, i.e., au_i = cc_i / P_i.
AU : the actual total utilization of the system, i.e., AU = Σ au_i.
rc_i : task T_i's remaining computation time.
J_{i,j} : the j-th instance or job of task T_i.
ρ : the current frequency ratio, i.e., ρ = f_cur / f_max.
e_i : task T_i's execution time, i.e., e_i = cc_i / ρ.
re_i : task T_i's remaining execution time, i.e., re_i = rc_i / ρ.
r_{i,j} : the release time of job J_{i,j}.
2.2. Related Work
EDF scheduling has been extensively investigated in the areas of real-time and power-aware
scheduling[1,3-5,7-12]. But some dynamic properties of EDF have not been fully exploited, for
example the dynamic density function introduced in [6]. The dynamic density of a job is defined as
its remaining execution time divided by the time to its deadline. That work deals with the case of
unit execution time on a multi-processor system using the dynamic density function. The temporal
workload introduced in this paper is similar or identical to the dynamic density function, so we can
say that we extend their results to a more general task model, but on a uniprocessor system.
While devising a new power-aware scheduling algorithm based on the temporal workload
analysis, we especially considered the results presented by Pillai and Shin[9]. They introduced a
cycle-conserving method for real-time DVS. This method reduces the operating frequency on each
task completion and increases it on each task release. When a task completes its current invocation
after using cc_i computation time, they treat the task as if its worst-case execution time were cc_i.
So the processor speed can be set to the actual total utilization AU, which is always less than or
equal to the worst-case total utilization U.
Mei et al.[8] integrated the above cycle-conserving method with the result of Qadi et al.[10] for
sporadic task sets. But these methods do not fully utilize the slack generated. Consider the
following figure. If a task completed at time t_c, then during the interval up to t_c the system
operated at a higher frequency than required. This observation provides a clue to slowing the
processor down further when a task completes. We will show later that the amount of slack which
can be used for lowering the processor frequency is related to the temporal workload of the
completed task.
Figure 1. CC-EDF[9] or CC-DVSST[8] Schedule
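The cycle-conserving rule described above can be sketched in a few lines of code. This is a minimal illustration of the idea only, not the authors' implementation; the task representation and function name are our own:

```python
from dataclasses import dataclass

@dataclass
class Task:
    period: float   # P_i (relative deadline D_i = P_i)
    wcet: float     # C_i, worst-case computation time at f_max

def cc_edf_speed(tasks, actual):
    """Cycle-conserving EDF speed: use the actual computation time cc_i
    for completed tasks and the WCET C_i for the others."""
    total = 0.0
    for t, cc in zip(tasks, actual):
        used = cc if cc is not None else t.wcet  # None = not yet completed
        total += used / t.period
    return min(total, 1.0)

tasks = [Task(2, 1), Task(3, 1)]
# Before any completion the speed is the worst-case utilization U = 1/2 + 1/3.
print(cc_edf_speed(tasks, [None, None]))
# After T1 completes using only 0.5 of its WCET, the speed drops to 0.5/2 + 1/3.
print(cc_edf_speed(tasks, [0.5, None]))
```

On completion the frequency drops immediately; on the next release of T1 it rises back to U, which is exactly the saw-tooth behavior the paper criticizes as leaving slack unused.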
3. TEMPORAL WORKLOAD ANALYSIS
In this section, we introduce some definitions and analyze the behavior of EDF scheduling. This
section provides a concrete theoretical basis for the power-aware scheduling algorithm
presented in the next section.
3.1. Temporal Workload
Definition 1: The temporal workload of a task T_i at time t, W_i(t) (or W_i when not confusing), is
defined as W_i(t) = rc_i / (d_i − t), or 0 if the task completed before or at t. The temporal
workload of a system at time t, W(t) or W, is the sum of the temporal workloads of all tasks in the
system at time t.
The temporal workload of a task is similar or identical to the dynamic density of a job[6], but we
introduce the new notion of temporal workload to distinguish remaining execution time from
remaining computation time, making it more applicable to power-aware scheduling. The following
notations are also used throughout this paper.
W_S(t) : the temporal workload of task set S.
W_i(t) : the temporal workload of a task T_i in task set S.
W(t_2)(T_i(δ_1), T_j(δ_2)) : W(t_2) when T_i executes during [t_1, t_1+δ_1) and T_j executes
during [t_1+δ_1, t_2).
For the time being, re_i = rc_i because we schedule tasks at the full speed of the CPU, i.e., ρ = 1.
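Definition 1 can be written directly in code. The sketch below is illustrative only; the dictionary-based task representation is an assumption of ours:

```python
def temporal_workload(tasks, t):
    """W(t) = sum over tasks of rc_i / (d_i - t); completed tasks contribute 0.
    Each task is a dict with remaining computation 'rc' and absolute deadline 'd'."""
    total = 0.0
    for task in tasks:
        if task["rc"] <= 0 or task["d"] <= t:   # completed, or deadline passed
            continue
        total += task["rc"] / (task["d"] - t)
    return total

# Two active jobs at t = 0: rc/d = 1/2 and 1/3, so W(0) = 5/6.
jobs = [{"rc": 1.0, "d": 2.0}, {"rc": 1.0, "d": 3.0}]
print(temporal_workload(jobs, 0.0))
```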
Lemma 1: W is monotonically decreasing during a time interval [t_1, t_2) if no task is released
during that interval, W(t_1) ≤ 1, and we schedule with EDF.
Proof: Let T_k be the task executed during [t, t+Δt) ⊆ [t_1, t_2). Then, by the definition of
temporal workload,
W(t) − W(t+Δt) = Σ_i [ rc_i(t)/(d_i − t) − rc_i(t+Δt)/(d_i − t − Δt) ]
Only the executing task's remaining computation changes, rc_k(t+Δt) = rc_k(t) − Δt, so for an
infinitesimal Δt,
W(t) − W(t+Δt) = Δt [ 1/(d_k − t) − Σ_i W_i(t)/(d_i − t) ]            (1)
By the assumption W(t_1) ≤ 1 (which then holds at every t of the interval by induction), and since
we schedule with EDF, (d_k − t)/(d_i − t) is always less than or equal to 1. Hence
Σ_i W_i(t)/(d_i − t) ≤ W(t)/(d_k − t) ≤ 1/(d_k − t), so Equation (1) is always larger than or equal
to 0, which implies W is monotonically decreasing.
Now we consider how the choice of which task to schedule at time t affects the temporal workload
of the system.
Lemma 2: If d_i < d_j, then W(t+Δt)(T_i(Δt)) ≤ W(t+Δt)(T_j(Δt)).
Proof: W(t+Δt)(T_i(Δt)) and W(t+Δt)(T_j(Δt)) are the temporal workloads of the system at time
t+Δt when we schedule T_i and T_j, respectively, at time t. So,
W(t+Δt)(T_i(Δt)) − W(t+Δt)(T_j(Δt)) = Δt/(d_j − t − Δt) − Δt/(d_i − t − Δt)            (2)
By the assumption d_i < d_j, Eq. (2) is less than 0, which implies Lemma 2 is true.
Lemma 2 provides another clue that the EDF scheduling policy is optimal in preemptive
uniprocessor scheduling: executing the task with the earliest deadline yields the smallest system
temporal workload.
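Lemmas 1 and 2 can be checked numerically. The toy single-step simulation below is our own illustration, not from the paper: it advances two jobs by one time step and compares the resulting system workload when the earlier-deadline job runs versus the later-deadline one.

```python
def workload(jobs, t):
    # W(t) = sum of rc/(d - t) over uncompleted jobs
    return sum(rc / (d - t) for rc, d in jobs if rc > 0)

def step(jobs, run, dt):
    # execute the job with index `run` for dt time units at full speed
    return [(rc - dt if i == run else rc, d) for i, (rc, d) in enumerate(jobs)]

jobs = [(1.0, 2.0), (1.0, 3.0)]   # (rc, absolute deadline); W(0) = 5/6 <= 1
t, dt = 0.0, 0.1
w_edf  = workload(step(jobs, 0, dt), t + dt)   # run the earlier-deadline job
w_late = workload(step(jobs, 1, dt), t + dt)   # run the later-deadline job
print(w_edf <= workload(jobs, t))   # Lemma 1: W decreases under EDF
print(w_edf <= w_late)              # Lemma 2: the EDF choice is never worse
```

Running the later-deadline job here actually makes W grow, which is why Lemma 1 needs the EDF assumption.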
Corollary 1: W(t+Δt)(T_i(Δt)) = W(t+Δt)(T_j(Δt)) if d_i = d_j.
Corollary 1 states that the temporal workload of a system does not depend on which task executes
at time t when the tasks' deadlines are the same.
Lemma 3: W(t_2)(T_i(δ_1), T_j(δ_2)) = W(t_2)(T_j(δ_2), T_i(δ_1)).
Proof: W(t_2)(T_i(δ_1), T_j(δ_2)) is the temporal workload of the system when we schedule T_i
during [t_1, t_1+δ_1) and T_j during [t_1+δ_1, t_2), and W(t_2)(T_j(δ_2), T_i(δ_1)) is the temporal
workload of the system when we schedule T_j during [t_1, t_1+δ_2) and T_i during [t_1+δ_2, t_2).
At time t_2, each task has received the same total amount of execution in the two cases, so every
rc_i is the same, and therefore every W_i(t_2) is the same. So, Lemma 3 holds.
Lemma 3 implies that the order of execution has no effect on the final temporal workload of the
system if there is no deadline miss. Which task executed, and how much it executed during a time
interval, matter only for the calculation of the final temporal workload of the system.
In Lemma 1, we proved the monotonically decreasing property of temporal workload under EDF
scheduling. Further investigation shows that when a task's deadline expires, the temporal workload
of the system decreases at least by the temporal workload of that task, if there is no task release
during the time interval. The following lemma proves this.
Lemma 4: W(d_k) ≤ W(t) − W_k(t) if task T_k has the earliest deadline d_k, there is no task
release during (t, d_k], and we schedule using EDF.
Proof: Consider an ideal CPU that executes every task concurrently, each in proportion to its
initial temporal workload. This can be approximated by making the scheduling quantum δ as small
as possible in Fig. 2.
Figure 2. Ideal CPU execution
Figure 3. Real CPU execution
The execution of the ideal CPU can be relocated as on the right side of Fig. 2. Because only the
amount of execution time contributes to the final temporal workload, by Lemma 3, the temporal
workload on both sides of Fig. 2 is the same. Also, the execution of the real CPU can be relocated
as on the right side of Fig. 3. On the right side of Fig. 3, the areas 'a' and 'b' are equal, for the
total computation time cannot change. By Lemma 2, the temporal workload of the real CPU is less
than or equal to that of the ideal CPU. The temporal workloads of the tasks on the ideal CPU do
not change during the time interval [t, d_k), and the temporal workload of the task whose deadline
is d_k drops to zero at d_k. So,
W(d_k) of the real CPU ≤ W(d_k) of the ideal CPU ≤ W(t) − W_k(t)            (3)
In the process of proving Lemma 4, we obtained a useful upper limit on temporal workload under
EDF scheduling: as Eq. (3) states, the temporal workload of the ideal CPU system provides an
intuitive upper limit.
3.2. Temporal Workload Isomorphic
Now we introduce a new notion to compare the temporal workloads of different systems.
Definition 2: Task systems S_1 and S_2 are temporal-workload-isomorphic if there exists a
constant c such that W_{S_1}(t) = c · W_{S_2}(t) for all time t.
If the temporal workload of system S_1 is always the same as, or a constant multiple of, that of
system S_2, then we can easily assume that the schedulability conditions of the two systems are
very close or the same, and that their scheduling behaviors are very similar to each other.
Lemma 5: The following two periodic task systems, S_1 and S_2, are temporal-workload-isomorphic
under worst-case execution scenarios:
S_1 = {T_i = (P_i, C_i)}, where {P_i} may be a multiset, under EDF scheduling;
S_2 = {T'_j = (P'_j, C'_j)}, where {P'_j} is the distinct set of {P_i} and C'_j is the sum of the
C_i of the tasks with P_i = P'_j, under EDF scheduling.
Proof: Consider T'_j in S_2 and its cousin tasks {T_i} in S_1, i.e., the tasks with P_i = P'_j. The
release times and deadlines of all the above tasks are the same. Under EDF scheduling, the
deadline of a task is the only scheduling criterion, so we can make the following condition hold:
whenever we schedule a job of T'_j, a job of some corresponding T_i is scheduled. If not, there
could be one or more jobs whose deadlines coincide with it; but despite the existence of another
job with the same deadline, and although a different schedule could be made at that moment, the
temporal workloads agree by Lemma 3 and Corollary 1. Also, the last time at which all jobs of the
same deadline are completed is the same for S_1 and S_2. So, Lemma 5 holds.
Now, we consider the schedulability condition of a task system using temporal workload. The
following lemma follows directly from the definition of temporal workload, because the temporal
workload identifies the amount of work to be done until the deadline of each task.
Lemma 6: If two finite task systems S_1 and S_2 are temporal-workload-isomorphic, then the
schedulability conditions of the two systems are identical.
Proof: We prove this lemma by contradiction. Suppose task system S_1 violates its timing
constraints and S_2 does not. Then there exists a task of S_1 whose temporal workload grows to
infinity as its deadline approaches while rc_i > 0. But the temporal workload of system S_2 cannot
reach infinity, because it has finitely many tasks and each task of system S_2 has a finite temporal
workload. Since the two system workloads differ only by a constant factor, this contradicts the
assumption of this lemma.
3.3. Upper Bound of Temporal Workload
Definition 3: A task system is temporal-workload-upper-bounded if W(t) ≤ W_ub for all time t. We
will denote the upper bound by W_ub.
Because the temporal workload of a system denotes the ratio of outstanding work to CPU
capacity, we may schedule all the tasks of the system without violating their timing constraints if
W_ub is below or equal to 1. The following theorem shows that the above discussion is true.
Theorem 1: If W(t) ≤ 1 for all time t, i.e., W_ub ≤ 1, then the task set is schedulable using EDF.
Proof: We prove it by induction on time t. Assume that this theorem holds until time t, i.e., we
successfully scheduled the tasks until time t. At time t, we can still schedule the highest-priority
task, i.e., the task with the shortest deadline d_min, without violating its deadline if there is no
task release during (t, d_min), for W(t) ≤ 1 implies that its remaining computation fits in
(t, d_min). If there is a task release at time t' where t < t' < d_min, then we can insist that this
theorem still holds until t', because there is no deadline during (t, t'). From the assumption, at t',
the temporal workload of the task set is still less than or equal to 1. So we have proved that this
theorem holds until d_min or t', both of which are larger than t, if it holds at t.
Theorem 1 states that if we preserve the upper bound of temporal workload below or equal to 1,
then the system is always schedulable. Thus Theorem 1 provides a sufficient condition for the
schedulability of a system. Now we investigate the temporal workload upper bounds of periodic
task systems.
Theorem 2: For a periodic or sporadic task system T = {(P_i, C_i)} under EDF,
W_ub ≤ Σ C_i / P_i = U.
Proof: We already said that the ideal CPU with the EDF scheduling policy provides an upper
bound on temporal workload. For a periodic or sporadic task system on the ideal CPU, its temporal
workload is clearly always less than or equal to its total utilization. So Theorem 2 holds.
3.4. Temporal Workload at Lower Processing Speed
Now we consider the case of ρ < 1, i.e., the CPU's processing speed is not always 1.
Lemma 7: The following two periodic task systems, S_1 and S_2, are temporal-workload-isomorphic
under worst-case execution scenarios:
S_1 = {T_i = (P_i, C_i)} with η = Σ C_i / P_i, CPU processing speed η, EDF;
S_2 = {T'_i = (P_i, C_i/η)}, CPU processing speed 1, EDF.
Proof: Clearly, at time t = 0, W_{S_1}(0) = η · W_{S_2}(0) by Definition 1. And when we execute
a task T_i of system S_1 and its counterpart T'_i of system S_2 at time t, the following holds:
rc_i(t) = η · rc'_i(t), and therefore W_{S_1,i}(t) = η · W_{S_2,i}(t)            (4)
since T_i consumes computation at rate η while T'_i consumes it at rate 1, starting from
C_i = η · (C_i/η). Eq. (4) states that if we execute tasks of the same deadline at the same time in
systems S_1 and S_2, then the ratio of the temporal workloads of the two systems does not change.
Because S_1 and S_2 both use the EDF scheduling policy, and the execution times (not
computation times) and deadlines of T_i and its corresponding T'_i are identical under worst-case
execution scenarios, Lemma 7 holds. There can be cases where we execute T_i of S_1 and T'_j of
S_2 with j ≠ i, i.e., the deadlines of the two tasks coincide exactly, but by Lemma 3 and
Corollary 1, Lemma 7 still holds, as we argued in Lemma 5.
Corollary 2: For a periodic task system T = {(P_i, C_i)} with U = Σ C_i / P_i ≤ 1, we can
schedule T with constant CPU speed U.
Proof: By Lemma 7, we can construct a temporal-workload-isomorphic system of T whose total
utilization is 1 and whose CPU processing speed is 1. Then, by Theorems 1 and 2, the newly
constructed task set can be successfully scheduled with EDF. By Lemma 6, this shows that the
original task set can be successfully scheduled.
Using Lemma 7 and Corollary 2, we can safely lower the processing speed of a periodic real-time
task system when its total utilization is below 1, as many previous works on power-aware
scheduling have noted.
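Corollary 2 amounts to a one-line computation: run the CPU at the total utilization. A hedged sketch, with helper names of our own choosing:

```python
from fractions import Fraction

def constant_speed(tasks):
    """Corollary 2: a periodic task set with U <= 1 can run at constant speed U.
    `tasks` is a list of (period, wcet) pairs; returns the speed, or None if U > 1."""
    u = sum(Fraction(c) / Fraction(p) for p, c in tasks)
    return u if u <= 1 else None

# U = 1/2 + 1/3 = 5/6, so the CPU can be slowed to 5/6 of f_max.
print(constant_speed([(2, 1), (3, 1)]))
```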
The following theorem is a restatement of Theorem 4 of the DVSST paper[10]. The DVSST
algorithm scales the CPU up whenever a new task releases, and scales it down whenever a task's
deadline expires, by exactly the utilization of that task.
Theorem 3: A sporadic task system scheduled by the DVSST algorithm combined with EDF is
schedulable if and only if U = Σ C_i / P_i ≤ 1.
Proof: "Only if" part: As asserted in [10], if U > 1, then EDF will not find a feasible schedule;
therefore DVSST combined with EDF will not find a feasible schedule.
"If" part: As we stated in Lemma 4, the ideal CPU provides an upper bound on temporal
workload. So, whenever a new task is released, the temporal workload of the ideal CPU system
increases by the utilization of that task for the duration of its activation, and whenever the
deadline of a task expires, it decreases by the same amount. Now suppose that there is neither a
new task release nor a deadline expiration during an interval, and let η be the temporal workload
of the ideal CPU during that interval. We can construct a new task system whose total utilization
is 1, in which each task has the same deadline as its original corresponding task and its
computation time is scaled accordingly. Then, by Theorem 1, we can schedule the new task system
using EDF, and by Lemma 6, we can schedule the original task system with EDF, for the two task
systems are temporal-workload-isomorphic during the interval. And since the two task systems
have identical deadline distributions, we can always make sure that corresponding pairs of tasks
execute at the same time in each system without violating the EDF policy. So, by Lemma 7, the
above process can be repeated over every time interval during which neither a new task releases
nor a deadline expires. This proves the "if" part.
The following theorem states that temporal workload analysis provides another useful
schedulability test.
Theorem 4: Let A = {periodic or sporadic task systems whose total utilizations are less than or
equal to 1}, B = {task systems whose W_ub are less than or equal to 1}, and C = {task systems
whose loading factors[5] (the supremum over intervals of the total demand in the interval divided
by its length) are less than or equal to 1}. Then A ⊂ B ⊂ C.
Proof: By Theorems 2 and 3 and Corollary 2, clearly A ⊆ B. But B assumes neither a minimum
inter-arrival time nor periodicity of tasks, so A ≠ B.
C is the maximum set of task systems which can be scheduled by EDF, because the loading factor
test provides a necessary and sufficient condition[5]. So B ⊆ C. But a task set of N jobs can be
constructed whose loading factor is at most 1 while its W_ub exceeds 1, so B ≠ C. Therefore
Theorem 4 holds.
The temporal workload of the task set used in Theorem 4 grows without bound as N grows, so
there is no finite upper limit on W_ub that provides a necessary condition for the schedulability
test.
4. POWER-AWARE SCHEDULING ALGORITHM
4.1. An On-Line Algorithm
In section 3, we analyzed the scheduling behavior of EDF using temporal workload. Now we
apply the results to power-aware scheduling. As we stated in section 2, previous algorithms such
as CC-EDF, DVSST and CC-DVSST do not fully utilize the slacks generated by early-completed
tasks. But, based on the temporal workload analysis, we can find more slack with which to slow
down the CPU. Before further discussion, let us introduce a new definition.
Definition 4: The temporal idleness I_i(t) of a task T_i at time t is defined as follows:
I_i(t) = 0 until its completion and after its deadline;
I_i(t) = (C_i − cc_i) / (d_i − t_c) if it was completed at time t_c and t_c ≤ t < d_i.
The real value of I_i depends on the status of the system; how to calculate it will be presented
later. Like the definition of the temporal workload of a system, the temporal idleness of a system,
I(t) or I, is defined as the sum of the I_i(t). Note that I_i(t) is the temporal workload that the
unused worst-case computation C_i − cc_i would have contributed had the task not completed.
Now, consider the amount of computation time provided to a task until its completion. In CC-EDF
and CC-DVSST, the processing speed is lowered by a task's saved utilization when it completes.
This means that its pace of computation until then was faster than the actual execution needed, as
stated in section 2. The following figure shows the situation.
Figure 4. Computation time comparison
At time t_c, task T_i was completed, so during [r_i, t_c] its computation time, u_i (t_c − r_i), is
larger than needed, au_i (t_c − r_i), by the amount (u_i − au_i)(t_c − r_i). CC-EDF reclaims only
(u_i − au_i) of speed from t_c to the deadline, while the total unused computation is C_i − cc_i.
The amount of slack that can still be used until the deadline is therefore
(C_i − cc_i) − (u_i − au_i)(d_i − t_c), i.e., [I_i(t_c) − (u_i − au_i)](d_i − t_c). The following
lemma shows this formally.
Lemma 8: In CC-EDF or CC-DVSST scheduling, the amount of computation time exceeding the
task's actual pace until its completion time, (u_i − au_i)(t_c − r_i), is the same as
[I_i(t_c) − (u_i − au_i)](d_i − t_c).
Proof:
[I_i(t_c) − (u_i − au_i)](d_i − t_c) = (C_i − cc_i) − (u_i − au_i)(d_i − t_c)            (5)
But, by definition, u_i − au_i = (C_i − cc_i)/P_i and d_i = r_i + P_i, so
(5) = (C_i − cc_i)[1 − (r_i + P_i − t_c)/P_i] = (C_i − cc_i)(t_c − r_i)/P_i
    = (u_i − au_i)(t_c − r_i).
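The identity in Lemma 8 can be spot-checked with exact arithmetic. The numbers below are our own illustrative choices, not from the paper:

```python
from fractions import Fraction as F

# A task with period P = 2 (deadline d = r + P), WCET C = 1, released at r = 0,
# completing at tc = 1/2 after using only cc = 1/2 of its WCET.
P, C, r, cc, tc = F(2), F(1), F(0), F(1, 2), F(1, 2)
d = r + P
u, au = C / P, cc / P                 # worst-case and actual utilization
I = (C - cc) / (d - tc)               # temporal idleness after completion

lhs = (u - au) * (tc - r)             # excess computation up to completion
rhs = (I - (u - au)) * (d - tc)       # extra slack redistributable until the deadline
print(lhs, rhs)                       # the two sides agree: 1/8 1/8
```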
Using Lemma 8, if a task T_i completed at time t, then we can slow down the processing speed by
the amount I_i when executing tasks of lower priority than T_i until its deadline, whereas CC-EDF
or CC-DVSST slows down the processing speed only by the amount (u_i − au_i). The proposed
algorithm tries to use the slacks of already-completed higher-priority tasks, which are necessarily
generated by assuming worst-case execution scenarios, and follows the result of CC-EDF when
the running task's priority is higher than those of the already-completed tasks. Also, if we cannot
utilize those slacks for some reason, i.e., when executing higher-priority tasks or when the slack
is too large to utilize fully, we evenly distribute the unused slack until the corresponding
deadlines.
One more consideration arises when there is an idle period. Consider a periodic task set of three
tasks with total utilization 1, in which T_1 and T_2 complete at t=1 and t=2, respectively, and
T_3 completes early at t=2.5, giving I(2.5) = 1.5/3.5 = 3/7. If we lower the processing speed by
I(t), then the actual processing capacity during t=[3,6) is (4/7)·3 = 12/7, which is less than the
sum of the worst-case execution times of the tasks released at t=3. A deadline miss occurs
because there is an idle period during t=[2.5,3). During the idle period, the total processing
capacity which should have been consumed under the actual execution scenario is larger than the
slack actually used. So we should reduce future slacks to compensate for the larger processing
capacity. The following figure shows this.
Figure 5. Slack reduction example
The amount of slack which should be reduced is the product of the idle period's length and the
CPU speed in effect during it, i.e., t_idle (1 − I(t)). In the above figure, the area 'a' is the same as
'b' + 'c'. And if the remaining slack exceeds this amount, we can still save some slack, because
only t_idle (1 − I(t)), the work actually skipped during the idle period, has to be compensated by
the CPU.
global variables
    last_idle_time
    last_cpu_speed
    last_calculation_time
    ex_ratio, ex_flag, ex_task
    AU              // current total utilization, i.e., Σ au_i over completed tasks
                    // plus Σ u_i over the others
    TC              // completed task set sorted by deadline

compute_cpu_speed(T_h)      // T_h is the highest-priority ready task, or NULL
    cpu_speed = AU
    for all tasks T_i in TC
        if d_i < d_h or T_h is NULL          // T_i's slack is usable
            if cpu_speed > I_i
                cpu_speed = cpu_speed − I_i
            else                              // more slack than speed to give back
                ex_ratio = cpu_speed
                ex_flag = 1, ex_task = T_i
                cpu_speed = 0
                break
        else if ex_flag == 0 and d_i ≥ d_h   // unusable slack: mark for later distribution
            ex_ratio = cpu_speed
            ex_flag = 1, ex_task = T_i
            break
    last_calculation_time = t_cur
    return cpu_speed

decrease_temporal_idleness()
    idle_time = t_cur − last_idle_time
    idle_work = idle_time * last_cpu_speed
    for all tasks T_i in TC
        if idle_work == 0
            break
        if I_i * (d_i − t_cur) > idle_work    // T_i's remaining slack covers it
            I_i = I_i − idle_work / (d_i − t_cur)
            break
        else
            idle_work = idle_work − I_i * (d_i − t_cur)
            I_i = 0

increase_temporal_idleness()
    for all tasks T_i in TC whose priority is lower than or equal to that of ex_task
        if ex_flag == 1
            busy_ratio = ex_ratio
            ex_flag = 0
        else
            busy_ratio = I_i
        busy_time = t_cur − last_calculation_time
        busy_work = busy_time * busy_ratio
        if d_i > t_cur
            I_i = I_i + busy_work / (d_i − t_cur)   // spread unused slack until the deadline

Figure 6. Temporal idleness management
when task T_i arrives
    if ex_flag
        increase_temporal_idleness()
    else if cpu was idle
        decrease_temporal_idleness()
    last_cpu_speed = compute_cpu_speed(T_h)
    set cpu speed to last_cpu_speed

when task T_i completes
    record cc_i and set au_i = cc_i / P_i
    if ex_flag
        increase_temporal_idleness()
    insert T_i into TC
    last_cpu_speed = compute_cpu_speed(T_h)
    if there is no task to execute
        last_idle_time = t_cur
        set cpu idle
    else
        set cpu speed to last_cpu_speed

on deadline of task T_i
    if ex_flag
        increase_temporal_idleness()
    else if cpu was idle
        decrease_temporal_idleness()
    delete T_i from TC
    last_cpu_speed = compute_cpu_speed(T_h)
    if there is no task to execute
        last_idle_time = t_cur
        set cpu idle
    else
        set cpu speed to last_cpu_speed

cf. t_cur : current time; T_h : current ready task of highest priority, or NULL if no ready task.
Figure 7. Power-aware scheduling algorithm
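To make the speed computation concrete, here is a small, self-contained Python sketch of the core idea: on each completion, record a temporal idleness I_i = (C_i − cc_i)/(d_i − t_c) that stays in effect until d_i, and run the CPU at 1 − ΣI_i. This is our own simplification of Figs. 6-7; it deliberately omits the ex_* excess handling and the idle-period compensation.

```python
from fractions import Fraction

class IdlenessTracker:
    """Track temporal idleness of completed jobs; speed(t) = 1 - sum of live I_i."""
    def __init__(self):
        self.slacks = []   # list of (I_i, absolute deadline d_i)

    def complete(self, wcet, cc, deadline, t):
        # Definition 4: I_i = (C_i - cc_i) / (d_i - t_c), valid on [t_c, d_i)
        self.slacks.append(((wcet - cc) / (deadline - t), deadline))

    def speed(self, t):
        live = sum(i for i, d in self.slacks if d > t)
        return max(1 - live, 0)

trk = IdlenessTracker()
# A job with C=1, d=2 completes at t=1/2 using cc=1/2:
trk.complete(Fraction(1), Fraction(1, 2), Fraction(2), Fraction(1, 2))
print(trk.speed(Fraction(1, 2)))   # 2/3, as in the worked example of section 4.2
# A job with C=1, d=3 completes at t=5/4 using cc=1/2:
trk.complete(Fraction(1), Fraction(1, 2), Fraction(3), Fraction(5, 4))
print(trk.speed(Fraction(5, 4)))   # 8/21
print(trk.speed(Fraction(2)))      # 5/7: the first job's idleness expired at its deadline
```

Each idleness term is a constant once computed, which is what keeps the per-event bookkeeping at O(n).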
Now, the following theorem proves the correctness of our algorithm.
Theorem 5: The algorithm presented in Figs. 6 and 7 schedules every periodic or sporadic task set
if and only if U = Σ C_i / P_i ≤ 1.
Proof: "Only if" part: If U > 1, then EDF will not find a feasible schedule; therefore our
algorithm will not find a feasible schedule, because our algorithm behaves exactly like EDF (or
DVSST) when tasks always follow worst-case execution scenarios, i.e., I_i(t) = 0 and cc_i = C_i
for all t and i.
"If" part: We will show that the temporal workload of a system adopting our algorithm is always
less than or equal to that of another, ideal system which schedules the tasks without violating their
timing constraints. Then, by Lemma 6 and Theorem 1, we can conclude that our scheduling
algorithm also satisfies the real-time constraints.
Consider the ideal CPU of an ideal system which can anticipate each task's actual computation
time when it is released and can execute the active tasks concurrently at speed AU. This ideal
system of Figure 8 resembles the system of Figure 2.
Figure 8. CC-EDF schedule: ideal CPU and ideal execution
Our real system starts executing at speed U and slows down by I_i when a task completes. And we
already said that [I_i(t_c) − (u_i − au_i)](d_i − t_c) is exactly the same as (u_i − au_i)(t_c − r_i).
Therefore we can apply to the scheduling result of our real system the following operation, which
replaces some areas of the real schedule by the same areas of the ideal schedule. The following
figure shows it.
Figure 9. Slack exchange process
During a time interval [t_1, t_2] which has no idle period, we can apply the above operation to all
areas of the real schedule. Then the temporal workload of the final result is less than or equal to
that of the ideal system, because the total computing capacity of the former is larger than or equal
to that of the latter, and the computing capacity of the former is always exhausted by tasks which
have deadlines shorter than or equal to those of the tasks of the latter.
During an idle period, there exist some areas of the ideal system which cannot find counterparts in
the real system. Those areas can be filled up or substituted with the areas of future slacks of
already-completed tasks, as was illustrated in Figure 5, and the above exchange operation can also
be applied to them. The temporal workload of the substitution-and-exchange result is also less
than or equal to that of the ideal system, for the same reason.
Now, the temporal workload of our real system is always less than or equal to that of the
substitution-and-exchange result, because the total computing capacity of the former is always
larger than or the same as that of the latter and the same tasks are executed. This proves the
theorem.
4.2. An Illustrative Example
Let us consider an illustrative example: a periodic task system of four tasks τ1, τ2, τ3, and τ4.
The total utilization of the task system is 1, and TU is also 1. Suppose that the actual computation
times of τ1, τ2, and τ3 are 1/2, 1/2, and 1/3, respectively. Then the following figure shows the
result of our scheduling algorithm.
At time t = 1/2, τ1 completes its execution and we can set the processing speed as
1 - 0.5/1.5 = 2/3.
At time t = 1/2 + 3/4 = 5/4, τ2 completes its execution, and the processing speed becomes
1 - 1/3 - 0.5/(7/4) = 8/21. During [5/4, 2], the computation time of τ3 is (8/21)·(3/4) = 2/7.
The deadline of τ1 is 2 and its next job is released at that time, so the speed becomes
1 - 2/7 = 5/7.
Figure 10. Scheduling example
At t = 2 + 7/10, τ1 completes its execution and the speed becomes
1 - 0.5/(13/10) - 2/7 = 30/91.
During [2 + 7/10, 3], τ3 executes with a speed of 30/91 and its computation time is
(30/91)·(3/10) = 9/91, so the total computation time of τ3 until the next release is
2/7 + 9/91 = 35/91 = 5/13.
At t = 3, τ4 is released, and its deadline is less than that of τ3. So the speed becomes
1 - 5/13 = 8/13.
At time t = 3 + 13/16, τ4 will be completed and τ3 will execute with a speed of approximately
0.387, completing at t ≈ 3.823. Then, during [3.823, 4], there is no ready task, so the slack
reduction process should be done.
At t = 4, the next job will be released and executed with a speed of about 0.722 until t ≈ 4.693. At
t ≈ 4.693, there is no ready task, so the slack reduction process should be done at t = 6, when the
next two jobs are released. At t = 6, execution proceeds with a speed of about 0.889, and so on.
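The speed arithmetic of this example can be re-derived with exact rational arithmetic. The sketch below only checks the quoted values; it is not an implementation of the scheduling algorithm itself, and the variable names are ours:

```python
from fractions import Fraction as F

# t = 1/2: tau1 completes; its slack of 1/2 is spread over the
# 3/2 time units remaining, giving speed 2/3.
s1 = 1 - F(1, 2) / F(3, 2)
assert s1 == F(2, 3)

# t = 5/4: tau2 completes.
s2 = 1 - F(1, 3) - F(1, 2) / F(7, 4)
assert s2 == F(8, 21)

# Work done by tau3 during [5/4, 2] at speed 8/21.
w = s2 * F(3, 4)
assert w == F(2, 7)

# t = 2: a new job is released; speed becomes 1 - 2/7.
s3 = 1 - w
assert s3 == F(5, 7)

# t = 2 + 7/10: another completion.
s4 = 1 - F(1, 2) / F(13, 10) - F(2, 7)
assert s4 == F(30, 91)

# Work during [2 + 7/10, 3]; total work then equals 5/13 (= 245/637).
total = F(2, 7) + s4 * F(3, 10)
assert total == F(5, 13)

# t = 3: a shorter-deadline job arrives.
s5 = 1 - total
assert s5 == F(8, 13)

print(s1, s2, s3, s4, s5)
```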
5. EXPERIMENTAL RESULTS
We evaluated our proposed algorithm using RTSIM [14], a real-time simulator. RTSIM
can simulate the behaviors of dynamic voltage scaling algorithms as well as traditional real-time
scheduling algorithms. In this simulation, it is assumed that a constant amount of energy is
required for each cycle of operation at a given voltage. This quantum is scaled by the square of
the operating voltage, consistent with energy dissipation in CMOS circuits (E ∝ Vdd²) [2,5]. Only
the energy consumed by the CPU was computed; all other sources of energy consumption were
ignored. We also do not consider preemption overheads, task switch overheads, or operating
frequency change overheads. It is further assumed that the CPU consumes no energy during idle
periods and that its operating frequency range is continuous over [fmin = 0, fmax = 1].
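Under these assumptions, the simulation's energy accounting can be sketched as follows. This is a minimal illustration, not RTSIM's API; the function names and the linear voltage-frequency relation are our assumptions:

```python
def cycle_energy(f, e_max=1.0):
    """Normalized per-cycle energy at frequency f in [0, 1].

    Assuming voltage scales linearly with frequency, per-cycle
    energy scales with f**2 (E proportional to Vdd**2 in CMOS).
    e_max is the per-cycle energy at f = 1.
    """
    return e_max * f * f

def task_energy(cycles, f):
    """Energy to execute `cycles` cycles at constant frequency f.
    Idle periods consume nothing, as assumed in the simulation."""
    return cycles * cycle_energy(f)

# Halving the frequency quarters the per-cycle energy, so the same
# amount of work costs a quarter of the energy (at twice the time).
print(task_energy(1000, 1.0))   # 1000.0
print(task_energy(1000, 0.5))   # 250.0
```

This quadratic relation is what makes slack valuable: spreading the same cycles over more time at a lower frequency always reduces total energy under this model.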
We compared our proposed algorithm with CC-EDF [9], DVSST [10], and CC-DVSST [8]. CC-EDF assumes the periodic task model, so we compared against it on periodic task systems. DVSST and CC-DVSST assume the sporadic task model, so we compared against them on sporadic task systems.
To evaluate the effect of the number of tasks in the system, we generated 10 or 20 tasks for each
comparison. Their periods or minimum inter-arrival times were chosen randomly in the interval
[1-1000] ms. We divided each task set into three groups to reflect more realistic environments: one
group of tasks has short periods in the interval [1-10] ms, another has medium periods in the
interval [10-100] ms, and the last has long periods in the interval [100-1000] ms.
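A task-set generator in the spirit of this setup might look as follows. `make_task_set` and its utilization-scaling scheme are our own sketch, not the code used in the experiments:

```python
import random

def make_task_set(n, total_util=1.0, seed=0):
    """Generate n periodic tasks in three period groups, as in the
    experimental setup: short [1, 10] ms, medium [10, 100] ms, and
    long [100, 1000] ms. Utilizations are drawn randomly and scaled
    so that they sum to total_util; WCET = utilization * period."""
    rng = random.Random(seed)
    ranges = [(1, 10), (10, 100), (100, 1000)]
    # Cycle through the three groups so tasks are spread across them.
    periods = [rng.uniform(*ranges[i % 3]) for i in range(n)]
    raw = [rng.random() for _ in range(n)]
    scale = total_util / sum(raw)
    utils = [u * scale for u in raw]
    # Each task is a (period, worst-case execution time) pair.
    return [(p, u * p) for p, u in zip(periods, utils)]

tasks = make_task_set(10)
# Total worst-case utilization sums to total_util (here 1.0).
print(sum(wcet / p for p, wcet in tasks))
```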
Figure 11. Simulation results: for periodic tasks (left) and for sporadic tasks (right). Each panel plots normalized energy against load ratio (0.1-0.9) for 10 and 20 tasks; the periodic-task panels compare non-DVS, CC-EDF, and PAS-TW, and the sporadic-task panels compare non-DVS, DVSST, CC-DVSST, and PAS-TW.
The simulation was also performed while varying the load ratio of tasks, i.e., the ratio of the actual
computation time to the worst-case computation time. For all simulations, the worst-case total
utilization of the system is always 1. The figures above show the simulation results.
Figure 11 (left) shows the results for the periodic task system. In this case, our proposed
algorithm (PAS-TW) always outperforms CC-EDF, and the ratio of energy saving is up to 10%.
As the figures show, the effect of the number of tasks in the system on the results is negligible.
Also, the number of CPU frequency changes was almost the same for the two algorithms, so the
energy savings should carry over to real environments, where frequency changes incur overhead.
Figure 11 (right) shows the results for the sporadic task system. In this case, our proposed
algorithm outperforms both DVSST and CC-DVSST: the ratio of energy saving is up to 70% over
DVSST and up to 10% over CC-DVSST. The effect of the number of tasks is again negligible,
but the number of CPU frequency changes of our algorithm was larger than that of DVSST and
almost the same as that of CC-DVSST. The energy saving over DVSST could therefore shrink
once frequency change overheads are accounted for, but our algorithm's performance gain over
DVSST is large, so it is expected to still outperform DVSST in real environments.
6. CONCLUSION
In this paper, we analyzed the behavior of EDF scheduling and presented a power-aware
scheduling algorithm for periodic and sporadic tasks. Temporal workload analysis provides
another sufficient condition for the schedulability of preemptive real-time task scheduling and
another formal method to prove the correctness of power-aware scheduling algorithms. The
proposed algorithm also adopts the results of the cycle-conserving method (CC-EDF) and of
sporadic task scheduling (DVSST). The simulation results show that the proposed algorithm
outperforms existing algorithms by 10-70% with respect to CPU energy saving.
In the future, we would like to improve the proposed algorithm. This could be done by assigning
all slacks generated by early-completed higher-priority tasks to the highest-priority task among
the uncompleted ready tasks, instead of distributing them evenly until the ends of their deadlines.
This method may lower the processor frequency much more than the proposed algorithm does.
We would also like to apply temporal workload analysis to other areas of real-time task
scheduling, for example, the aperiodic task acceptance problem. If we can maintain the temporal
workload of a system at or below 1, then the real-time constraints of all tasks in the system are
met, so this may provide an effective method for aperiodic task scheduling.
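The acceptance idea can be illustrated with a simplified admission test. Since the formal definition of temporal workload appears earlier in the paper, the sketch below substitutes the classical density condition for EDF as a stand-in; the `admit` helper is hypothetical, not the paper's method:

```python
def admit(jobs, c, d):
    """Sketch of an acceptance test in the spirit of the conclusion:
    admit a new aperiodic job with computation time c and relative
    deadline d only if the total load measure stays at or below 1.

    Total density (sum of c_i / d_i) <= 1 is a classical sufficient
    condition for EDF; the paper's temporal workload would replace
    it in the real test."""
    density = sum(ci / di for ci, di in jobs) + c / d
    if density <= 1.0:
        jobs.append((c, d))
        return True
    return False

active = [(1.0, 4.0), (1.0, 2.0)]   # densities 0.25 and 0.5
print(admit(active, 0.5, 2.0))      # True  (total density 1.0)
print(admit(active, 0.5, 2.0))      # False (would exceed 1)
```

The test is conservative: it may reject admissible jobs, but any accepted set remains schedulable, which matches the "workload at or below 1" guarantee argued for above.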
ACKNOWLEDGEMENTS
This work was supported by the Industrial Strategic Technology Development Program
(10041740, Development of software that provides customized real-time optimal control
monitoring services by integrating equipment in buildings with web services) funded by the
Ministry of Trade, Industry and Energy (MOTIE, Korea).
REFERENCES
[1] H. Aydin, R. Melhem, D. Mosse, and P. Mejia-Alvarez, "Power-aware scheduling for periodic real-time tasks," IEEE Trans. on Computers, Vol. 53, pp. 584-600, 2004.
[2] T. D. Burd and R. W. Brodersen, "Energy efficient CMOS microprocessor design," In Proc. of the Twenty-Eighth Hawaii Int'l Conf. on System Sciences, Vol. 1, 1995.
[3] H. Chetto and M. Chetto, "Some results of the earliest deadline scheduling algorithm," IEEE Transactions on Software Engineering, Vol. 15(10), pp. 1261-1269, 1989.
[4] W. Kim, J. Kim, and S. L. Min, "A dynamic voltage scaling algorithm for dynamic-priority hard real-time systems using slack time analysis," In Proc. of Design, Automation and Test in Europe, pp. 788-794, 2002.
[5] C. H. Lee and K. G. Shin, "On-line dynamic voltage scaling for hard real-time systems using the EDF algorithm," In Proc. of the IEEE Int'l Real-Time Systems Symposium, pp. 319-327, 2004.
[6] J. Lee, A. Easwaran, I. Shin, and I. Lee, "Multiprocessor real-time scheduling considering concurrency and urgency," ACM SIGBED Review, Vol. 7(1), 2010.
[7] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard real-time environment," J. ACM, Vol. 20(1), pp. 46-61, 1973.
[8] J. Mei, K. Li, J. Hu, S. Yin, and E. H.-M. Sha, "Energy-aware preemptive scheduling algorithm for sporadic tasks on DVS platform," Microprocessors & Microsystems, Vol. 37, pp. 99-112, 2013.
[9] P. Pillai and K. G. Shin, "Real-time dynamic voltage scaling for low-power embedded operating systems," ACM SIGOPS Operating Systems Review, Vol. 35(5), pp. 89-102, 2001.
[10] A. Qadi, S. Goddard, and S. Farritor, "A dynamic voltage scaling algorithm for sporadic tasks," In Proc. of the IEEE Int'l Real-Time Systems Symposium, pp. 52-62, 2003.
[11] D. Shin and J. Kim, "Dynamic voltage scaling of periodic and aperiodic tasks in priority-driven systems," In Proc. of the Asia and South Pacific Design Automation Conference, pp. 653-658, 2004.
[12] M. Spuri and G. Buttazzo, "Scheduling aperiodic tasks in dynamic priority systems," Real-Time Systems, Vol. 10(2), pp. 179-210, 1996.
[13] F. Yao, A. Demers, and S. Shenker, "A scheduling model for reduced CPU energy," In Proc. of the IEEE Foundations of Computer Science, pp. 374-382, 1995.
[14] RTSIM: Real-time system simulator, http://rtsim.sssup.it.
AUTHORS
Ye-In Seol received his B.S. degree in Nuclear Engineering from Seoul National
University, Korea, in 1992 and his M.S. degree in Computer Science and Statistics from
Seoul National University in 1994. He is a chief researcher at the Green Energy Institute
of SangMyung University in Seoul. His research interests include embedded systems,
real-time systems, and building automation systems.
Jeong-Uk Kim received his B.S. degree in Control and Instrumentation Engineering
from Seoul National University, Korea, in 1987 and his M.S. and Ph.D. degrees in
Electrical Engineering from the Korea Advanced Institute of Science and Technology in
1989 and 1993, respectively. He is a professor at SangMyung University in Seoul. His
research interests include smart grid demand response, building automation systems,
and renewable energy.
Young-Kuk Kim received the B.S. and M.S. degrees in Computer Science and
Statistics from Seoul National University, Korea, in 1985 and 1987, respectively, and
the Ph.D. degree in Computer Science from the University of Virginia, Charlottesville,
Virginia, in 1995. After his Ph.D., he visited VTT Information Technology, Finland,
and SINTEF Telecom & Informatics, Norway, as an ERCIM research fellow during
1995-1996. He joined Chungnam National University as a faculty member of the
Computer Science Department in March 1996. From August 2002 to July 2003, he
visited the Computer Science Department at the University of California, Davis, as an
exchange scholar. His research interests include real-time systems, database systems,
multimedia, and mobile information systems.