Reducing energy consumption without missing any deadline is one of the main challenges in real-time embedded systems. Dynamic voltage scaling (DVS) reduces processor power consumption by exploiting the various operating points a DVS processor provides, each a pair of supply voltage and frequency. An operating point can be selected according to the system load at a given moment. In this work DVS is applied to both periodic and sporadic tasks, reducing energy consumption by an average of 40%. The processor's energy consumption is further reduced by 2-10% by lowering the number of preemptions and frequency switches.
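As a concrete illustration of load-based operating-point selection as described above, here is a minimal sketch. The voltage/frequency table and the selection rule are assumptions for illustration, not the paper's actual algorithm:

```python
# Hypothetical DVS operating-point selection based on current load.
# The table of (frequency in MHz, voltage in V) pairs is an assumed example.
OPERATING_POINTS = [(150, 0.75), (400, 1.0), (600, 1.3), (800, 1.6)]

def select_operating_point(utilization):
    """Pick the slowest operating point whose relative speed covers the load.

    utilization: fraction of full-speed CPU time the ready tasks require (0..1).
    """
    f_max = OPERATING_POINTS[-1][0]
    for freq, volt in OPERATING_POINTS:
        if freq / f_max >= utilization:
            return freq, volt
    return OPERATING_POINTS[-1]  # fall back to full speed

print(select_operating_point(0.45))  # → (400, 1.0)
```

Running at the lowest frequency that still covers the utilization is what yields the energy savings, since dynamic power scales roughly with V²f.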
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AN...IJCSEA Journal
Energy consumption is a critical design issue in real-time systems, especially in battery-operated systems. Maintaining high performance while extending the battery life between charges is an interesting challenge for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow us to adjust the supply voltage and processor frequency to adapt to the workload demand for better energy management. Usually, higher processor voltage and frequency lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have been developed recently to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application, comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy-efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme. It also considers the run-time behaviour of tasks, to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) Slack Distribution Technique. Our proposed algorithm achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
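The abstract does not spell out the WFCFS dispatch rule; one plausible reading is that tasks are served in arrival order with task weight breaking ties. The sketch below illustrates that assumed interpretation only:

```python
# Illustrative (assumed) weighted-FCFS dispatch order: earlier arrival first,
# higher weight breaking ties among simultaneous arrivals. Not the paper's
# exact scheme.
from collections import namedtuple

Task = namedtuple("Task", "name arrival weight")

def wfcfs_order(tasks):
    # sort key: arrival time ascending, then weight descending
    return sorted(tasks, key=lambda t: (t.arrival, -t.weight))

jobs = [Task("A", 0, 1), Task("B", 0, 3), Task("C", 1, 2)]
print([t.name for t in wfcfs_order(jobs)])  # → ['B', 'A', 'C']
```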
Temporal workload analysis and its application to power aware schedulingijesajournal
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). The basic idea of power-aware scheduling is to find the slack available to tasks and reduce the CPU's frequency or lower its voltage using that slack. In this paper, we introduce the temporal workload of a system, which specifies how busy its CPU is completing the tasks at the current time. Analyzing the temporal workload provides a sufficient condition for the schedulability of preemptive earliest-deadline-first scheduling and an effective method to identify and distribute the slack generated by early-completed tasks. The simulation results show that the proposed algorithm reduces energy consumption by 10-70% over the existing algorithm, and its complexity is O(n), so a practical on-line scheduler could be devised using the proposed algorithm.
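The core mechanism, reclaiming slack from early-completed tasks to slow the next one down, can be sketched as follows. This is a generic illustration of the idea, not the paper's specific slack-distribution method, and all names are assumptions:

```python
# Sketch of slack reclamation under EDF-style DVS: when a task finishes before
# its worst-case budget, the unused time becomes slack that lets the next task
# run at a lower (normalized) frequency.

def scaled_frequency(wcet, actual_time, next_wcet, f_max=1.0):
    """Return a normalized frequency for the next task after early completion.

    wcet:        worst-case execution time budgeted for the finished task
    actual_time: time the task actually used (actual_time <= wcet)
    next_wcet:   worst-case execution time of the next task
    """
    slack = wcet - actual_time
    # stretch the next task over its own budget plus the reclaimed slack
    return f_max * next_wcet / (next_wcet + slack)

# a task budgeted 10 units finishes in 6; the next task (budget 8) can run
# at 8 / (8 + 4) = 2/3 of full speed and still meet its deadline
print(scaled_frequency(wcet=10, actual_time=6, next_wcet=8))
```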
Interplay of Communication and Computation Energy Consumption for Low Power S...ijasuc
The sensor network design approach normally considers communication energy consumption when evaluating a communication protocol. This is true for low-power devices such as MICAz/MICA2, which do not consume much energy for data processing. However, recently developed sensor devices for multimedia applications, such as the iMote2, do consume a considerable amount of energy for data processing. In this article, we consider various scenarios for routing data in wireless multimedia sensor networks by considering the local design parameters of devices such as the PXA27x and BeagleBoard. The proposed routing solution considers node-level optimizations such as data compression and dynamic voltage and frequency scaling (DVFS) when making a routing decision. The proposed approaches have been simulated to demonstrate their effectiveness.
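A node-level decision of the kind described, trading computation energy against radio energy, can be illustrated with a minimal sketch. All energy constants and the break-even rule below are assumptions, not figures from the article:

```python
# Compress before transmitting only when the CPU energy spent on compression
# is outweighed by the radio energy saved. Constants are assumed values.
E_TX_PER_BYTE = 0.6e-6   # J per byte transmitted (assumed)
E_CPU_PER_BYTE = 0.1e-6  # J per byte processed during compression (assumed)

def should_compress(size_bytes, ratio):
    """ratio: compressed size / original size (0 < ratio <= 1)."""
    e_raw = size_bytes * E_TX_PER_BYTE
    e_comp = size_bytes * E_CPU_PER_BYTE + size_bytes * ratio * E_TX_PER_BYTE
    return e_comp < e_raw

print(should_compress(4096, 0.5))  # → True  (radio savings beat CPU cost)
print(should_compress(4096, 0.9))  # → False
```

The same comparison explains the article's premise: on MICA-class nodes `E_CPU_PER_BYTE` is negligible, while on iMote2-class nodes it materially shifts the decision.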
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
A Computational Grid (CG) forms a large, heterogeneous, distributed platform for managing and executing computationally intensive applications. In grid scheduling, tasks are assigned to suitable processors in the grid system for execution, taking into account the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the grid's computational nodes, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution. We propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA II) has been estimated using programs written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, demonstrating the model's effectiveness.
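At the heart of any NSGA-II-style optimizer is the Pareto-dominance test between candidate schedules. A minimal sketch for the two objectives above (the encoding of fault tolerance as a reliability value to maximize is an assumption for illustration):

```python
# Pareto-dominance test for two objectives: minimize makespan, maximize
# node reliability. Objective encoding is assumed for this sketch.

def dominates(a, b):
    """a, b are (makespan, reliability) tuples; lower makespan and higher
    reliability are better. Returns True iff a Pareto-dominates b."""
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

print(dominates((10, 0.99), (12, 0.95)))  # → True
print(dominates((10, 0.90), (12, 0.95)))  # → False (incomparable)
```

NSGA-II sorts the population into fronts using exactly this relation, then breaks ties within a front by crowding distance.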
Optimal Placement and Sizing of Distributed Generation Units Using Co-Evoluti...Radita Apriana
Today, with the increase of distributed generation sources in power systems, it is important to locate these sources optimally. Determining the number, location, size, and type of distributed generation (DG) units in a power system reduces losses and improves the reliability of the system. In this paper, a co-evolutionary particle swarm optimization (CPSO) algorithm is used to determine the optimal values of the listed parameters. Results obtained through simulations in MATLAB are presented in figures and tables, which show how system losses and reliability change with parameters such as the location, size, number, and type of DG. Finally, the results of this method are compared with those of a Genetic Algorithm (GA) to assess the performance of each method.
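CPSO builds on the standard particle swarm update, which moves each candidate solution toward its personal best and the global best. A minimal single-dimension sketch of that update step, with assumed coefficient values:

```python
# One velocity/position update of basic PSO (the building block CPSO extends).
# Inertia w and acceleration coefficients c1, c2 are typical assumed values.
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = random.random(), random.random()
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

random.seed(0)  # deterministic demo
x, v = pso_step(x=2.0, v=0.1, pbest=1.5, gbest=1.0)
print(x)  # moved from 2.0 toward the best-known positions
```

In a co-evolutionary variant, separate swarms would optimize subsets of the DG parameters (location, size, number, type) and exchange their best candidates when evaluating fitness.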
A REVIEW OF SELF HEALING SMART GRIDS USING THE MULTIAGENT SYSTEMijiert bestjournal
This paper reviews different techniques used for self-healing of the smart grid network. The smart grid has taken on great importance over the last ten years or so, and its advancement has become a major focus. One of the most important aspects in the field of smart grids is self-healing of faults, which has attracted researchers. As described in many research papers, one of the main requirements of the electrical grid is to maintain zero gap between generation and distribution [2,3,4]. However, deregulation and decentralized generation have emerged together with information and communication technology (ICT). This paper summarizes the latest available techniques for self-healing smart grids.
A survey on dynamic energy management at virtualization level in cloud data c...csandit
Data centers have become indispensable infrastructure for data storage and for facilitating the development of the diversified network services and applications offered by the cloud. Rapid development of these applications and services imposes various resource demands that result in increased energy consumption. This necessitates the development of efficient energy management techniques in data centers, not only for operational cost but also to reduce the amount of heat released from storage devices. Virtualization is a powerful tool for energy management that achieves efficient utilization of data center resources. Though energy management at data centers can be static or dynamic, virtualization-level energy management techniques contribute more energy conservation than hardware-level ones. This paper surveys various issues related to dynamic energy management at the virtualization level in cloud data centers.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
Embedded systems with multi-core processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multi-core systems with effective CPU usage and lower response time. Task priorities are varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core Multi-Level Feedback Queue (MCMLFQ) to manage task execution in the multi-core system.
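The multilevel feedback queue underlying MCMLFQ can be sketched in a few lines: a task that exhausts its time slice is demoted to a lower-priority level with a longer slice. Level count and quantum values here are illustrative, and the RL-based priority assignment is omitted:

```python
# Toy multilevel feedback queue (single core, for clarity).
from collections import deque

NUM_LEVELS = 3
queues = [deque() for _ in range(NUM_LEVELS)]
quantum = [2, 4, 8]  # time slice grows at lower priority (assumed values)

def enqueue(task, level=0):
    queues[level].append(task)

def dispatch():
    """Run the head task of the highest non-empty level for one quantum."""
    for level, q in enumerate(queues):
        if q:
            name, remaining = q.popleft()
            run = min(quantum[level], remaining)
            remaining -= run
            if remaining > 0:  # used its full slice without finishing: demote
                enqueue((name, remaining), min(level + 1, NUM_LEVELS - 1))
            return name, run
    return None

enqueue(("T1", 5))
print(dispatch())  # → ('T1', 2)  first slice at level 0, then demoted
print(dispatch())  # → ('T1', 3)  finishes at level 1
```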
Discuss on the MTBF Evaluation and Improvement Methods of Power Adapterinventionjournals
The MTBF evaluation methods and reliability improvement technologies of a typical power adapter are discussed in this paper. Firstly, the basic structure of a typical power adapter is introduced; secondly, the reliability model is built and the reliability parameters are evaluated; thirdly, some reliability-improving technologies are proposed to improve reliability and reduce manufacturing cost; lastly, the specific reliability data before and after the improvements are compared. These reliability improvement technologies are effective and can be applied to design and manufacture highly reliable electrical products.
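A common reliability model for such an evaluation treats the adapter as a series system of components with constant failure rates, so the system failure rate is the sum of the component rates and MTBF is its reciprocal. The component names and rates below are illustrative, not the paper's data:

```python
# Series-system MTBF: lambda_sys = sum(lambda_i), MTBF = 1 / lambda_sys.
FAILURE_RATES = {          # failures per hour (assumed values)
    "rectifier": 2e-7,
    "transformer": 1e-7,
    "controller_ic": 3e-7,
    "output_caps": 4e-7,
}

def series_mtbf(rates):
    return 1.0 / sum(rates.values())

print(f"{series_mtbf(FAILURE_RATES):.0f} hours")  # → 1000000 hours
```

Under this model, any improvement that lowers a single component's failure rate directly raises the system MTBF, which is why component-level fixes are compared before and after.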
Load shedding in power system using the AHP algorithm and Artificial Neural N...IJAEMSJORNAL
This paper proposes a load shedding method based on the load importance factor, primary frequency adjustment, secondary frequency adjustment, and a neural network. Considering the processes of primary and secondary frequency control helps to reduce the amount of load shedding power and restore the system's frequency to the permissible range. The amount of power shed at each load bus is distributed based on the load importance factor. A neural network is applied to distribute load shedding strategies in the power system at different load levels. The experimental and simulated results on the IEEE 37-bus system show that the frequency can be restored to the allowed range with less damage than the traditional load shedding method using under-frequency load shedding (UFLS) relays.
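Distributing a required shedding amount by importance factor can be sketched as follows; the inverse-proportional weighting (less important buses shed more) is an assumed interpretation of the rule, and the bus data is illustrative:

```python
# Distribute a total shedding amount across buses in inverse proportion to
# their importance factors (assumed rule; values are illustrative).

def distribute_shedding(total_mw, importance):
    """importance: {bus: factor}, higher factor = more important load."""
    inv = {bus: 1.0 / k for bus, k in importance.items()}
    norm = sum(inv.values())
    return {bus: total_mw * w / norm for bus, w in inv.items()}

shed = distribute_shedding(30.0, {"bus1": 1.0, "bus2": 2.0, "bus3": 2.0})
print(shed)  # bus1 (least important) sheds twice as much as bus2 or bus3
```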
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new-generation advanced method of using Wireless Sensor Networks (WSNs), where application developers can extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task graph representation of the WSN model presents a simplified approach to data collection. However, mapping tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid approach to task mapping for WSNs, a Hybrid Genetic Algorithm, considering multiple optimization objectives: energy consumption, routing delay, and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed, and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
Nowadays there is widespread use of semiconductor devices, mostly implemented as the power switches for converters and inverters. These converters and inverters play a vital role in power systems, in both transmission and distribution. They provide a path for the introduction of harmonics into the power system, which leads to poor power quality. To overcome this, many solutions have been suggested by the research community, each with its own merits and demerits. Of all the suggested solutions, the Dynamic Voltage Restorer (DVR) is one of the most cost-effective for various power quality issues. In this paper the DVR is considered for enhancing power quality by reducing the harmonics generated by sensitive loads. Here power quality is enhanced by controlling the DVR using a neural network controller trained by the Levenberg-Marquardt algorithm. The THD of the voltage is analysed by introducing an unbalanced three-phase fault in the system. The simulation is done using MATLAB/Simulink. The results verify that the harmonics are reduced by the NN-controlled DVR unit, and the simulation results are verified against hardware results.
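The THD metric used in such an analysis is the ratio of the RMS of the harmonic components to the fundamental. A minimal sketch with illustrative voltage values:

```python
# Total harmonic distortion from harmonic magnitudes:
# THD = sqrt(V2^2 + V3^2 + ...) / V1. Example values are illustrative.
import math

def thd(fundamental, harmonics):
    return math.sqrt(sum(v * v for v in harmonics)) / fundamental

# 230 V fundamental with 5th and 7th harmonics of 10 V and 5 V
print(f"{thd(230.0, [10.0, 5.0]) * 100:.2f}%")  # ≈ 4.86%
```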
The softmax function is an integral part of object detection frameworks based on most deep or shallow neural networks. While the configuration of the different operation layers in a neural network can vary widely, the softmax operation is fixed. With the recent advances in object detection approaches, especially the introduction of highly accurate convolutional neural networks, researchers and developers have suggested different hardware architectures to speed up the overall operation of these compute-intensive algorithms. Xilinx, one of the leading FPGA vendors, has recently introduced a deep neural network development kit for exactly this purpose. However, due to the complex nature of softmax arithmetic hardware, which involves the exponential function, this functionality is only available for bigger devices; for smaller devices, the operation must be implemented in software. In this paper, a lightweight hardware implementation of this function is proposed which does not require many logic resources when implemented on an FPGA device. The proposed design is based on an analysis of the statistical properties of a custom convolutional neural network used for classification on a standard dataset, CIFAR-10. Specifically, instead of a brute-force generic full-precision arithmetic circuit for the softmax function over real numbers, an approximate integer-only design is suggested for the limited range of operands encountered in real-world scenarios. The approximate circuit uses fewer logic resources since it computes only a few iterations of the series expansion of the exponential function. Despite using fewer iterations, the function is shown to work as well as the full-precision circuit for classification, introducing only minimal error in the associated probabilities. The circuit has been synthesized using the HDL Coder and Vision HDL toolboxes in Simulink® by MathWorks®, which provide a higher-level abstraction of image processing and machine learning algorithms for quick deployment on a variety of target hardware. The final design has been implemented on a Xilinx FPGA development board, the Zedboard, which contains the necessary hardware components (USB, Ethernet, and HDMI interfaces, etc.) to implement a fully working system capable of processing a machine learning application in real time.
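The central idea, replacing exp(x) with a few terms of its series expansion, can be sketched in software. The term count, the float arithmetic, and the clamp below are assumptions for illustration; the paper's circuit operates on integers only:

```python
# Approximate softmax via a truncated Taylor series for exp(x).

def approx_exp(x, terms=4):
    # first `terms` terms of 1 + x + x^2/2! + x^3/3! + ...
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    # clamp: the truncated series can dip below zero for larger negative inputs
    return max(total, 0.0)

def approx_softmax(logits):
    # shift by the max so operands fall in a small negative range,
    # matching the limited operand range a hardware design exploits
    m = max(logits)
    exps = [approx_exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = approx_softmax([2.0, 1.0, 0.1])
print(max(range(3), key=probs.__getitem__))  # → 0 (argmax preserved)
```

For classification only the argmax matters, which is why a coarse approximation of the exponential can match the full-precision circuit's accuracy.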
Battery Aware Dynamic Scheduling for Periodic Task GraphsNicolas Navet
V. Rao, N. Navet, G. Singhal, A. Kumar, G.S. Visweswaran, "Battery Aware Dynamic Scheduling for Periodic Task Graphs", Proc. of the 14th International Workshop on Parallel and Distributed Real-Time Systems (WPDRTS 2006), Island of Rhodes, Greece, April 25-26, 2006.
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Journals
Abstract: This paper concentrates on methods that provide efficient processing time and CPU utilization time for a virtual machine. As the number of users increases, performance may be significantly reduced if the tasks are not scheduled in a proper order. In this paper the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured, and satisfactory results are achieved in the comparison. This study concludes that the DSP algorithm can perform more efficiently than the credit scheduling algorithm. Keywords: Virtual Machine, DSP algorithm, credit scheduling algorithm
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
A Study on Task Scheduling in Could Data Centers for Energy Efficacy Ehsan Sharifi
Abstract: The increasing energy consumption of Physical Machines (PM) in cloud data centers is nowadays a major problem, it has a negative impact on the environment while at the same time increasing the operational costs of data centers. This fosters the development of more energy-efficient scheduling approaches. In this study, we study the barriers of knowledge in energy efficiency for cloud data centers.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new generation advanced method of using Wireless Sensor Network (WSNs),
where application developers can extract data from sensor nodes through a high level abstraction of the
system. Instead of developing the entire application, task graph representation of the WSN model presents
simplified approach of data collection.
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
APPROXIMATE ARITHMETIC CIRCUIT DESIGN FOR ERROR RESILIENT APPLICATIONSVLSICS Design
When the application context is ready to accept different levels of exactness in solutions and is supported
by human perception quality, then the term ‘Approximate Computing’ tossed before one decade will
become the first priority . Even though computer hardware and software are working to generate exact
results, approximate results are preferred whenever an error is in predefined bound and adaptive. It will
reduce power demand and critical path delay and improve other circuit metrics. When it comes to
traditional arithmetic circuits, those generating correct results with limitations on performance are rapidly
getting replaced by approximate arithmetic circuits which are the need of the hour, and so on about their
design.
APPROXIMATE ARITHMETIC CIRCUIT DESIGN FOR ERROR RESILIENT APPLICATIONSVLSICS Design
When the application context is ready to accept different levels of exactness in solutions and is supported
by human perception quality, then the term ‘Approximate Computing’ tossed before one decade will
become the first priority . Even though computer hardware and software are working to generate exact
results, approximate results are preferred whenever an error is in predefined bound and adaptive. It will
reduce power demand and critical path delay and improve other circuit metrics. When it comes to
traditional arithmetic circuits, those generating correct results with limitations on performance are rapidly
getting replaced by approximate arithmetic circuits which are the need of the hour, and so on about their
design.
Super-linear speedup for real-time condition monitoring using image processi...IJECEIAES
Real-time inspections for the large-scale solar system may take a long time to get the hazard situations for any failures that may take place in the solar panels normal operations, where prior hazards detection is important. Reducing the execution time and improving the system’s performance are the ultimate goals of multiprocessing or multicore systems. Real-time video processing and analysis from two camcorders, thermal and charge-coupling devices (CCD), mounted on a drone compose the embedded system being proposed for solar panels inspection. The inspection method needs more time for capturing and processing the frames and detecting the faulty panels. The system can determine the longitude and latitude of the defect position information in real-time. In this work, we investigate parallel processing for the image processing operations which reduces the processing time for the inspection systems. The results show a super-linear speedup for real-time condition monitoring in large-scale solar systems. Using the multiprocessing module in Python, we execute fault detection algorithms using streamed frames from both video cameras. The experimental results show a superlinear speedup for thermal and CCD video processing, the execution time is efficiently reduced with an average of 3.1 times and 6.3 times using 2 processes and 4 processes respectively.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
The progressive development of Synthetic Aperture Radar (SAR) systems diversify the exploitation of the generated images by these systems in different applications of geoscience. Detection and monitoring surface deformations, procreated by various phenomena had benefited from this evolution and had been realized by interferometry (InSAR) and differential interferometry (DInSAR) techniques. Nevertheless, spatial and temporal decorrelations of the interferometric couples used, limit strongly the precision of analysis results by these techniques. In this context, we propose, in this work, a methodological approach of surface deformation detection and analysis by differential interferograms to show the limits of this technique according to noise quality and level. The detectability model is generated from the deformation signatures, by simulating a linear fault merged to the images couples of ERS1 / ERS2 sensors acquired in a region of the Algerian south.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATIONcscpconf
A novel based a trajectory-guided, concatenating approach for synthesizing high-quality image real sample renders video is proposed . The lips reading automated is seeking for modeled the closest real image sample sequence preserve in the library under the data video to the HMM predicted trajectory. The object trajectory is modeled obtained by projecting the face patterns into an KDA feature space is estimated. The approach for speaker's face identification by using synthesise the identity surface of a subject face from a small sample of patterns which sparsely each the view sphere. An KDA algorithm use to the Lip-reading image is discrimination, after that work consisted of in the low dimensional for the fundamental lip features vector is reduced by using the 2D-DCT.The mouth of the set area dimensionality is ordered by a normally reduction base on the PCA to obtain the Eigen lips approach, their proposed approach by[33]. The subjective performance results of the cost function under the automatic lips reading modeled , which wasn’t illustrate the superior performance of the
method.
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Universities offer software engineering capstone course to simulate a real world-working environment in which students can work in a team for a fixed period to deliver a quality product. The objective of the paper is to report on our experience in moving from Waterfall process to Agile process in conducting the software engineering capstone project. We present the capstone course designs for both Waterfall driven and Agile driven methodologies that highlight the structure, deliverables and assessment plans.To evaluate the improvement, we conducted a survey for two different sections taught by two different instructors to evaluate students’ experience in moving from traditional Waterfall model to Agile like process. Twentyeight students filled the survey. The survey consisted of eight multiple-choice questions and an open-ended question to collect feedback from students. The survey results show that students were able to attain hands one experience, which simulate a real world-working environment. The results also show that the Agile approach helped students to have overall better design and avoid mistakes they have made in the initial design completed in of the first phase of the capstone project. In addition, they were able to decide on their team capabilities, training needs and thus learn the required technologies earlier which is reflected on the final product quality
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
Using social media in education provides learners with an informal way for communication. Informal communication tends to remove barriers and hence promotes student engagement. This paper presents our experience in using three different social media technologies in teaching software project management course. We conducted different surveys at the end of every semester to evaluate students’ satisfaction and engagement. Results show that using social media enhances students’ engagement and satisfaction. However, familiarity with the tool is an important factor for student satisfaction.
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
In real world computing environment with using a computer to answer questions has been a human dream since the beginning of the digital era, Question-answering systems are referred to as intelligent systems, that can be used to provide responses for the questions being asked by the user based on certain facts or rules stored in the knowledge base it can generate answers of questions asked in natural , and the first main idea of fuzzy logic was to working on the problem of computer understanding of natural language, so this survey paper provides an overview on what Question-Answering is and its system architecture and the possible relationship and
different with fuzzy logic, as well as the previous related research with respect to approaches that were followed. At the end, the survey provides an analytical discussion of the proposed QA models, along or combined with fuzzy logic and their main contributions and limitations.
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Human beings generate different speech waveforms while speaking the same word at different times. Also, different human beings have different accents and generate significantly varying speech waveforms for the same word. There is a need to measure the distances between various words which facilitate preparation of pronunciation dictionaries. A new algorithm called Dynamic Phone Warping (DPW) is presented in this paper. It uses dynamic programming technique for global alignment and shortest distance measurements. The DPW algorithm can be used to enhance the pronunciation dictionaries of the well-known languages like English or to build pronunciation dictionaries to the less known sparse languages. The precision measurement experiments show 88.9% accuracy.
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
In education, the use of electronic (E) examination systems is not a novel idea, as Eexamination systems have been used to conduct objective assessments for the last few years. This research deals with randomly designed E-examinations and proposes an E-assessment system that can be used for subjective questions. This system assesses answers to subjective questions by finding a matching ratio for the keywords in instructor and student answers. The matching ratio is achieved based on semantic and document similarity. The assessment system is composed of four modules: preprocessing, keyword expansion, matching, and grading. A survey and case study were used in the research design to validate the proposed system. The examination assessment system will help instructors to save time, costs, and resources, while increasing efficiency and improving the productivity of exam setting and assessments.
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
African Buffalo Optimization (ABO) is one of the most recent swarms intelligence based metaheuristics. ABO algorithm is inspired by the buffalo’s behavior and lifestyle. Unfortunately, the standard ABO algorithm is proposed only for continuous optimization problems. In this paper, the authors propose two discrete binary ABO algorithms to deal with binary optimization problems. In the first version (called SBABO) they use the sigmoid function and probability model to generate binary solutions. In the second version (called LBABO) they use some logical operator to operate the binary solutions. Computational results on two knapsack problems (KP and MKP) instances show the effectiveness of the proposed algorithm and their ability to achieve good and promising solutions.
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
In recent years, many malware writers have relied on Dynamic Domain Name Services (DDNS) to maintain their Command and Control (C&C) network infrastructure to ensure a persistence presence on a compromised host. Amongst the various DDNS techniques, Domain Generation Algorithm (DGA) is often perceived as the most difficult to detect using traditional methods. This paper presents an approach for detecting DGA using frequency analysis of the character distribution and the weighted scores of the domain names. The approach’s feasibility is demonstrated using a range of legitimate domains and a number of malicious algorithmicallygenerated domain names. Findings from this study show that domain names made up of English characters “a-z” achieving a weighted score of < 45 are often associated with DGA. When a weighted score of < 45 is applied to the Alexa one million list of domain names, only 15% of the domain names were treated as non-human generated.
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
The amount of piracy in the streaming digital content in general and the music industry in specific is posing a real challenge to digital content owners. This paper presents a DRM solution to monetizing, tracking and controlling online streaming content cross platforms for IP enabled devices. The paper benefits from the current advances in Blockchain and cryptocurrencies. Specifically, the paper presents a Global Music Asset Assurance (GoMAA) digital currency and presents the iMediaStreams Blockchain to enable the secure dissemination and tracking of the streamed content. The proposed solution provides the data owner the ability to control the flow of information even after it has been released by creating a secure, selfinstalled, cross platform reader located on the digital content file header. The proposed system provides the content owners’ options to manage their digital information (audio, video, speech, etc.), including the tracking of the most consumed segments, once it is release. The system benefits from token distribution between the content owner (Music Bands), the content distributer (Online Radio Stations) and the content consumer(Fans) on the system blockchain.
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
48 Computer Science & Information Technology (CS & IT)
Crusoe, etc. These processors provide a discrete set of voltage levels, each with a corresponding frequency, to switch between. As the number of voltage levels increases, the power consumption of the system decreases, because tasks get more choices of frequency and voltage. The drawback of increasing the number of voltage and frequency levels is the increased chance of frequency switching.
A real-time task can be defined as a set of instructions that executes to perform a function such as handling I/O or peripherals. A periodic task is characterized by its release time, deadline and execution time, whereas the release time and deadline of a sporadic task are undefined. The release time is the time at which a task is ready to run. The execution time is the maximum time required for the task to complete. The deadline is the time by which the task should complete its work. In a realistic scenario, a real-time system has to handle a number of periodic and sporadic tasks without missing their deadlines. In this work we consider a system with both sporadic and periodic tasks. For simulation, the release times of sporadic tasks are generated randomly.
The dynamic voltage scaling technique is especially relevant in real-time scenarios such as wireless sensor networks, which are battery powered. In such systems a sensor node has to monitor environmental changes and send the information to a controlling station. In industries, the sensor node continuously monitors environmental parameters such as temperature and pressure and sends them periodically to a controlling station; such an application can be modelled as a periodic task. If a fire suddenly breaks out in the plant, the node has to send that information with higher priority; such a task can be modelled as a sporadic one. A DVS processor can handle all these tasks with minimum power consumption.
When dynamic voltage scaling is applied to real-time tasks, reducing the frequency of the processor also reduces its supply voltage, and a task takes longer to complete its execution. The power consumption thus drops, under the assumption that the task set meets all of its deadlines when executed at maximum frequency.
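The quadratic dependence of power on supply voltage is what makes DVS pay off despite the longer execution times. A minimal Python sketch of this trade-off, using an assumed effective switched capacitance and illustrative operating points (not values from any specific processor):

```python
# Sketch of CMOS dynamic power, P ≈ C_eff * V^2 * f. The capacitance
# and the operating points below are illustrative assumptions.
C_EFF = 1e-9  # effective switched capacitance in farads (assumed)

def dynamic_power(voltage, frequency):
    """Dynamic power in watts at a given (V, f) operating point."""
    return C_EFF * voltage ** 2 * frequency

def energy_for_cycles(cycles, voltage, frequency):
    """Energy (joules) to execute `cycles` processor cycles at (V, f).
    Execution time stretches as f drops, but power falls with V^2 * f,
    so the net energy scales with V^2."""
    exec_time = cycles / frequency
    return dynamic_power(voltage, frequency) * exec_time

# Running the same 1e8-cycle task at half voltage and half frequency
# takes twice as long but consumes only a quarter of the energy.
e_full = energy_for_cycles(1e8, 1.8, 600e6)
e_half = energy_for_cycles(1e8, 0.9, 300e6)
assert abs(e_half / e_full - 0.25) < 1e-9
```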
The rest of this paper is organized as follows. The next section surveys the contributions of other researchers in the field. The third section explains the computational model. The fourth section describes the dynamic voltage scaling algorithm given by Pillai et al., which has been implemented with modifications. The fifth section deals with the handling of sporadic tasks along with periodic tasks. The sixth section presents the simulation results.
2. RELATED WORKS
Over the last years, DVS has been adopted as one of the most effective techniques for reducing energy consumption in embedded systems. An introduction to DPM and DVS is given in [1], where the authors discuss the trade-off between energy consumption and performance. The paper by Johan Pouwelse et al. describes the importance of DVS in wearable computers; an implementation was done on an embedded StrongARM 1100 processor that supports frequency scaling, and their experiments show that the energy per instruction at minimal speed is one fifth of the energy required at full speed [2].
One of the disadvantages of dynamic voltage scaling is an increased number of pre-emptions. To reduce that overhead, pre-emption-aware DVS for hard real-time systems has been implemented [3]. Two pre-emption control methods are proposed in that paper: an accelerated-completion-based technique and a delayed-pre-emption-based technique. In that work pre-emption is reduced by postponing high-priority tasks and forcing low-priority tasks to complete their execution. The integration of pre-emption threshold scheduling and dynamic voltage scaling is given by Jejurikar et al. [4]. They present an algorithm to compute pre-emption threshold levels for tasks with given static slowdown factors. The proposed algorithm improves upon known algorithms in terms of time complexity. Experimental results show that pre-emption threshold scheduling reduces context switches by 90% on average, even in the presence of task slowdown. The system model for that experiment consists of periodic tasks only.
Transition time is another overhead of DVS processors: it is the time taken to switch from one frequency to another. Two methods for transition elimination, one offline and one online, are given in [5]. Another effort to reduce frequency switching is made by Muhammad et al. [6]; frequency switching not only increases power consumption but also wastes processor cycles, and their algorithm reduces frequency switching by an average of 30%. The implementation and testing of dynamic voltage scaling algorithms is done by Pillai et al. in [7], where the authors propose three voltage scaling algorithms and implement them in the Linux 2.2.16 kernel on an AMD K6-2+ processor.
Efficient task set generation is addressed by Bini et al. [8]. The main algorithms proposed in that work are UUniFast, UFitting, UScaling and UUniSort, of which UUniFast is the most widely accepted. These algorithms provide a utilization for each task in the task set, and from these utilizations a set of execution times and deadlines is generated.
Sporadic task handling is another feature of dynamic voltage scaling systems. A DVS algorithm called DVSST is introduced by Ala′ Qadi et al., who handle sporadic tasks in conjunction with the pre-emptive EDF scheduling algorithm [9]. An acceptance test for sporadic task instances is given in [10]. The test has two steps, a pre-processing step and a slack-updating step, and the acceptance of a sporadic instance is based on the slack available in the system. Generally the parameters of sporadic tasks are undefined. In [11], the authors provide the Total Bandwidth Server algorithm for calculating virtual deadlines for aperiodic tasks.
3. SYSTEM MODEL
3.1. Task Model
The task set consists of both periodic and sporadic tasks. Let Ri(ri, ci, di) represent a periodic task with release time ri, worst-case computation time ci and relative deadline di. A sporadic task is represented by Si(Ai, Ci, Di), where Ai is the arrival time of the task, Ci is the worst-case execution time and Di is the deadline. For simulation, the arrival time of a sporadic task is generated randomly.
3.2. Task set Generation
The task set is generated using the UUniFast algorithm [8]: a task set with the required number of tasks can be generated for a given utilization. The generated task set is scheduled under an EDF scheduler, and hence the maximum utilization is taken as 1.
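A small Python sketch of the UUniFast procedure described above; the deadline range used to turn the utilizations into (execution time, deadline) pairs is an illustrative assumption, not a parameter from the paper:

```python
import random

def uunifast(n, total_util, rng=random):
    """UUniFast (Bini & Buttazzo [8]): draw n task utilizations that
    sum to total_util, uniformly over the valid utilization space."""
    utils = []
    remaining = total_util
    for i in range(1, n):
        next_remaining = remaining * rng.random() ** (1.0 / (n - i))
        utils.append(remaining - next_remaining)
        remaining = next_remaining
    utils.append(remaining)
    return utils

def make_task_set(n, total_util, rng=random):
    """Hypothetical task-set construction: pick a deadline (= period)
    for each task and derive its execution time from the utilization.
    The deadline range [5, 50] is an assumption for illustration."""
    tasks = []
    for u in uunifast(n, total_util, rng):
        deadline = rng.randint(5, 50)
        tasks.append((u * deadline, deadline))  # (exec time, deadline)
    return tasks
```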
3.3. Task scheduling Algorithm
The task scheduling policy is Earliest Deadline First (EDF), a dynamic scheduling algorithm. The necessary and sufficient condition for scheduling a task set under EDF is that the utilization should be less than or equal to 1:

U = Σi=1..n Ci/Di, (2)

where in equation (2) U denotes the utilization, Ci the computation time of the ith task, and Di the relative deadline of the ith task. The assumptions taken for EDF are that the period of each task is equal to its relative deadline and that all tasks are independent.
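The EDF utilization test of equation (2) is straightforward to express in code; the sketch below checks the task set of Table 1 as an example:

```python
def edf_schedulable(tasks):
    """Necessary and sufficient EDF test for independent tasks with
    period equal to relative deadline: sum(C_i / D_i) <= 1 (eq. 2)."""
    return sum(c / d for c, d in tasks) <= 1.0

# Task set of Table 1 as (execution time, deadline) pairs.
table1 = [(1, 4), (1, 5), (2, 10), (4, 12)]
assert edf_schedulable(table1)  # U = 0.25 + 0.2 + 0.2 + 0.333... <= 1
```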
3.4. Processor Model
The processor selected for this work is a DVS-enabled processor with a set of operating points, where each operating point is a frequency-voltage pair. Let f1, f2, f3, …, fn be the frequencies and v1, v2, v3, …, vn the corresponding voltages. Based on the load at a particular point of time, one of these operating points can be selected, provided that no task misses its deadline even when it executes for its worst-case execution time.
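Operating-point selection can be sketched as picking the slowest frequency that still covers the required speed; the (frequency, voltage) table below is an assumed example, not a real processor's datasheet values:

```python
# Assumed operating points, (normalized frequency, voltage), ascending.
OPERATING_POINTS = [
    (0.25, 0.9), (0.5, 1.1), (0.75, 1.4), (1.0, 1.8),
]

def select_operating_point(required_freq):
    """Return the slowest (f, v) pair with f >= required_freq, so that
    no task misses its deadline even at worst-case execution time."""
    for f, v in OPERATING_POINTS:
        if f >= required_freq:
            return f, v
    return OPERATING_POINTS[-1]  # saturate at the maximum point

assert select_operating_point(0.6) == (0.75, 1.4)
```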
4. DYNAMIC VOLTAGE SCALING
The dynamic voltage scaling algorithm used in this work is a modification of the look-ahead RT-DVS algorithm proposed by Pillai et al. [7]. The look-ahead algorithm calculates the number of cycles that must run within an interval and, from that information, the frequency required to run the task. It tries to run high-priority tasks at low frequency and low-priority tasks at high frequency in order to meet deadlines in time. The algorithm activates only at scheduling points and utilizes slack to reduce the frequency. The actual execution time is assumed to be less than the worst-case execution time; if a task finishes earlier than its worst case, subsequent tasks can run at a frequency below the maximum. In the original paper the algorithm is applied only to periodic tasks, and pre-emption and frequency-switching overheads are considered negligible.
At an instant when a low-priority task is being pre-empted by a high-priority task, the new algorithm checks whether the sum of the remaining computation time of the pre-empted task and the computation time of the current task is less than or equal to the difference between the current task's deadline and the current time; if so, the pre-emption can be avoided. In most cases pre-emption causes frequency switching. If the frequency of the current task differs from the frequency of the previous task, the algorithm checks whether any task in the ready queue has the same frequency as the previous task. If such a task exists, the algorithm checks whether it can be executed in the current instance without causing a deadline miss; if it can, that task is taken for execution.
The algorithm can be briefly described by the code given below:
if previous_task ≠ current_task
    for i = 1 : no_of_tasks
        if frequency(i) = frequency(previous_task)
            if computation(current_task) + computation(i) + t < deadline(current_task)
                current_task = i
                break
            end
        end
    end
    if computation(previous_task) ≠ 0
        if computation(current_task) + computation(previous_task) <= deadline(current_task) - current_tick
            current_task = previous_task
        end
    end
end
Figure 1. Pseudo code for frequency switching
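The pre-emption-avoidance check in the second half of the pseudo code can be sketched as a small Python predicate; the function name and the numeric values in the example are illustrative assumptions, not taken from the paper:

```python
def can_avoid_preemption(rem_prev, comp_curr, deadline_curr, t):
    """Pre-emption-avoidance test of the modified algorithm: if the
    remaining computation of the pre-empted task plus the current
    task's computation fits before the current task's deadline,
    the pre-emption (and its frequency switch) can be skipped."""
    return rem_prev + comp_curr <= deadline_curr - t

# Illustrative values: 1.5 time units left on the pre-empted task,
# the new task needs 1 unit, and its deadline is 4 units away.
assert can_avoid_preemption(1.5, 1.0, 10.0, 6.0)      # fits: skip
assert not can_avoid_preemption(3.0, 2.0, 10.0, 6.0)  # does not fit
```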
4.1 Example task set for modified algorithm
Table 1. Task set for modified algorithm
Task no.   Execution time   Deadline
1          1                4
2          1                5
3          2                10
4          4                12
In the given example task set, the first instances of Task 1 and Task 2 run at normalized frequency 0.5, and Task 3 runs at frequency 0.75. At time t = 4.2 ms, Task 1 is ready to run at a frequency of 0.5, and another task, Task 4, has the same frequency as the task that has just run (Task 3); the algorithm therefore checks whether Task 4 can run without missing the deadline of Task 1. The algorithm selects Task 4, which executes until t = 6.4 ms. The same algorithm also checks for pre-emption: at t = 12, Task 3 would be pre-empted by Task 1 according to the DVS algorithm, but the new algorithm verifies that this pre-emption can be avoided and continues the execution of Task 3.
Figure 2. Example of frequency switching reduction
5. SPORADIC TASK HANDLING
In a mixed task model, periodic tasks are handled along with sporadic tasks. Sporadic tasks occur at random intervals and do not have a predefined release time or deadline. A sporadic task can be represented by three parameters:

Ti = (ei, gi, di), (3)

where ei is the worst-case execution time of the task, gi denotes the minimum separation between two consecutive instances of the task, and di is the relative deadline. The minimum separation gi between two successive instances implies that once an instance of a sporadic task occurs, the next instance cannot occur before gi time units have elapsed.
If an instance of a sporadic task receives the highest priority under the earliest-deadline-first policy, the algorithm conducts an acceptance test based on the execution time, the available slack and the deadline of the sporadic job, and decides whether to accept or reject it. Accepting a task implies that the task is placed in the ready queue, taken for execution according to its priority, and completed without causing any periodic task to miss its deadline. Acceptance of a sporadic task directly relates to the amount of slack remaining in that interval. In this work, higher priority is given to periodic tasks.
The deadline for a sporadic task is obtained using the Total Bandwidth Server (TBS) algorithm, which is based on an assigned bandwidth and offers excellent performance while retaining the optimality of dynamic-priority scheduling [11]. When a sporadic task arrives, the server assigns it a virtual deadline; once the virtual deadline is assigned, the task is scheduled with the other periodic tasks according to the EDF algorithm. The virtual deadline of a sporadic task is determined from the remaining utilization of the server. Suppose the server has bandwidth Usrv, and the kth sporadic task Sk arrives at time ak with execution time Ck; its virtual deadline Vk is then given by the equation below, where V0 = 0 by definition [11]:

Vk = max{ak, Vk-1} + (Ck/Usrv), (4)
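A minimal Python sketch of the TBS deadline assignment of equation (4); the bandwidth and job parameters in the example mirror the worked example of Section 5.2 (a server bandwidth of 0.25 and 2-unit jobs arriving at t = 3 and t = 13):

```python
class TotalBandwidthServer:
    """Total Bandwidth Server: assigns each arriving sporadic job k a
    virtual deadline V_k = max(a_k, V_{k-1}) + C_k / U_srv (eq. 4)."""

    def __init__(self, u_srv):
        assert 0 < u_srv <= 1, "server bandwidth must be in (0, 1]"
        self.u_srv = u_srv
        self.last_deadline = 0.0  # V_0 = 0 by definition

    def assign_deadline(self, arrival, exec_time):
        """Virtual deadline for a job arriving at `arrival` with
        worst-case execution time `exec_time`."""
        v = max(arrival, self.last_deadline) + exec_time / self.u_srv
        self.last_deadline = v
        return v

# U_srv = 1 - U_periodic = 0.25; jobs of 2 units arrive at t = 3, 13.
tbs = TotalBandwidthServer(0.25)
assert tbs.assign_deadline(3, 2) == 11.0   # max(3, 0) + 8
assert tbs.assign_deadline(13, 2) == 21.0  # max(13, 11) + 8
```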
5.1. Sporadic Acceptance Test
Consider a system with n periodic tasks and m sporadic tasks, where a sporadic task is represented by Si(Ai, Ci, Di) and a periodic task by Ri(ri, ci, di). Whenever a sporadic request occurs, the acceptance test for the sporadic task is performed. The acceptance test has two main steps: a pre-processing step and a slack-updating step [10].
During the pre-processing step, the system computes the available slack to incorporate the sporadic request [10]. Before the system begins execution, the scheduler computes the slack δi for each periodic request Ri(ri, ci, di), which is the maximum amount of time available before di to execute a sporadic request without causing Ri to miss its deadline. Let Hi denote the set of all periodic requests whose deadlines are before di. Then

δi = di − ci − Σj∈Hi cj, (5)

where the sum runs over the computation times cj of all periodic requests Rj whose deadlines are before di.
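The initial slack computation of equation (5) can be sketched in Python; the example below uses the two periodic requests of Table 2:

```python
def initial_slacks(periodic):
    """Slack δ_i = d_i − c_i − Σ c_j over all periodic requests R_j
    with deadlines before d_i (eq. 5). `periodic` holds
    (exec_time, deadline) pairs."""
    slacks = []
    for c_i, d_i in periodic:
        earlier = sum(c_j for c_j, d_j in periodic if d_j < d_i)
        slacks.append(d_i - c_i - earlier)
    return slacks

# Periodic requests of Table 2: (2, 4) and (2, 8).
# δ_1 = 4 - 2 = 2; δ_2 = 8 - 2 - 2 = 4.
assert initial_slacks([(2, 4), (2, 8)]) == [2, 4]
```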
The acceptance test relies on the slack information of each request in the system and has two steps [10]:

1. When a sporadic request Sk arrives, first determine the amount of slack available in the system before Dk. If the amount of slack is less than the computation time of Sk, the request is rejected, since there is not enough time to schedule Sk before its deadline.

2. If there is enough slack, check whether accepting Sk would cause any request in the system whose deadline is after Dk to miss its deadline. Sk is accepted if no other deadline is missed, and rejected otherwise.

Let k be the slack; if k is nonnegative, the sporadic request at that instant can be accepted, and otherwise it is rejected:

k = δp + (Dk − dp) − I − Ck − SP − ΣDi<Dk Ci − ΣDk<di Fi, (6)

where
δp is the initial slack of the periodic request Rp whose deadline is closest to, and earlier than, the deadline Dk of the sporadic request;
Dk is the deadline of the currently requested sporadic task;
dp is the deadline of the periodic request Rp;
I is the amount of idle time before the current sporadic request;
Ck is the worst-case execution time of the currently requested sporadic task;
SP is the amount of time consumed by sporadic requests that have already completed;
ΣDi<Dk Ci is the amount of time reserved for sporadic requests that have not yet completed and have deadlines before Dk;
ΣDk<di Fi is the amount of time consumed by requests that have not yet completed and have deadlines after Dk, where Fi covers current requests of periodic tasks that, at the arrival time tc of the sporadic request, have deadlines later than dp yet started executing before tc without completing.
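Step 1 of the acceptance test reduces to comparing the available slack against the sporadic job's worst-case execution time; the sketch below covers only this first step (step 2, re-checking later deadlines, is omitted), and the function name is an illustrative assumption:

```python
def accept_sporadic(slack_available, exec_time):
    """Step 1 of the acceptance test: reject the sporadic request
    outright when the slack before its deadline cannot cover its
    worst-case execution time. Step 2 (checking that no request with
    a later deadline is made to miss) is omitted in this sketch."""
    return slack_available >= exec_time

assert accept_sporadic(4, 2)        # enough slack: accept
assert not accept_sporadic(1, 2)    # not enough slack: reject
```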
5.2. Example for calculation of virtual deadline
Consider the two periodic tasks and the sporadic task given in Table 2: Task 1 and Task 2 are periodic tasks and Task 3 is a sporadic task. Task 3.1 denotes the first instance of the sporadic request and Task 3.2 the second. The minimum inter-arrival times for the first and second sporadic instances are randomly taken as 8 and 7 respectively. In this example the arrival time of the first instance is obtained randomly as 3, and the server term evaluates to 8, so the deadline of the first instance of the sporadic task is 3 + 8 = 11. The virtual deadline of the first instance is computed as

Vd1 = max{3, 0} + execution_sporadic/(1 − utilization_of_periodic) = 3 + 8 = 11. (7)

The arrival time of the next instance is randomly taken as 13, so its virtual deadline is obtained as

Vd2 = max{13, 11} + execution_sporadic/(1 − utilization_of_periodic_tasks) = 13 + 8 = 21. (8)
5.3 Example for Sporadic Task Handling
Table 2. Periodic and sporadic task parameters

Task no.   Arrival time   Execution time   Relative deadline
Task 1     0              2                4
Task 2     0              2                8
Task 3     8              2                13
Task 4     3              1                17
In the table above, Task 1 and Task 2 are periodic tasks and Task 3 and Task 4 are sporadic tasks. The arrival times of the sporadic tasks are 8 and 3 milliseconds respectively, and their minimum inter-arrival times are 7 and 8 respectively. Although these parameters are undefined for sporadic tasks, in the simulation they are generated randomly. Figure 3 shows the trace for the task set scheduled under EDF: all tasks are scheduled without missing a deadline, and the calculated power is 215 mW. Figure 4 shows the schedule after applying DVS: most of the slack is utilized, and the measured power is 125.65 mW. Figure 5 gives the output trace of the modified algorithm, in which pre-emptions are avoided and the power consumption is reduced to 120 mW; the frequency switch at t = 12 that occurs after applying DVS, caused by the pre-emption of Task 3, is avoided in the modified algorithm.
Figure 3. Output trace for task set scheduled under EDF
Figure 4. Output trace for task set scheduled under DVS
Figure 5. Output trace for task set scheduled under modified algorithm
6. SIMULATION RESULTS
The simulation is done with five different task sets containing both periodic and sporadic tasks, and the average power saving calculated from the simulation results is 40%.
Figure 6. Normalized energy plot for the mixed task model (normalized energy vs. task set number, with and without DVS)
Task pre-emption requires about 0.2 millijoules of energy for context switching [4]. Figure 6 plots the normalized energy for each task set; from this figure it is clear that power consumption is further reduced with the modified algorithm, by about 2-10% compared to the previous algorithm. Figure 7 shows the normalized energy against utilization after reducing pre-emption; this simulation is done for periodic tasks only.
Figure 7. Plot of normalized energy vs. utilization (without DVS, with DVS, and with DVS plus reduced pre-emption)
7. CONCLUSIONS

The dynamic voltage scaling technique is applied to processors to alleviate excessive power consumption. In real-time systems, sporadic tasks may occur in addition to periodic tasks, and handling sporadic tasks without any deadline miss is of prime importance. A modified dynamic voltage scaling algorithm for such a mixed task set is discussed above. An average of 40% of the power consumption is saved in the mixed task model by reducing pre-emption and frequency switching, and frequency switching in the system is reduced by 35% by the new algorithm; in particular, frequency switching due to pre-emption is avoided in the improved algorithm.
ACKNOWLEDGEMENTS
We would like to thank God Almighty for giving us this opportunity to complete this work. We also extend our gratitude to Dr. T. N. Padmanabhan Nambiar for his valuable support and help in this work.
REFERENCES
[1] Niraj K. Jha,” Low Power System Scheduling and Synthesis”, IEEE/ACM International Conference
on Computer Aided Design, 2001.
[2] Johan Pouwelse, Koen Langendoen and Henk Sips, "Dynamic Voltage Scaling on a Low-Power
Microprocessor", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 11,
No. 5, pp. 812-826, Oct. 2003.
[3] Woonseok Kim, Jihong Kim, Sang Lyul Min,” Preemption-Aware Dynamic Voltage Scaling in Hard
Real-Time Systems”, Proceedings of the International Symposium on Low Power Electronics and
Design (ISLPED’04), 2004.
[4] Ravindra Jejurikar and Rajesh Gupta, “Integrating Preemption Threshold Scheduling and Dynamic
Voltage Scaling for Energy Efficient Real-Time Systems”, Proceedings of the Ninth International
Conference on Real-Time Computing Systems and Applications, 2004.
[5] Bren Mochocki, Xiaobo Sharon Hu, Gang Quan, “Transition-Overhead-Aware Voltage Scheduling
for Fixed-Priority Real-Time Systems”, ACM Transactions on Design Automation of Electronic
Systems, Vol. 12,No. 2, Article 11, April 2007.
[6] Farooq Muhammad, Bhatti M. Khurram, Fabrice Muller, Cecile Belleudy, Michel Auguin,
“Precognitive DVFS: Minimizing Switching Points to Further Reduce the Energy Consumption”,
Proceedings of the 14th Real-Time and Embedded Technology and Applications Symposium, 2008.
[7] Padmanabhan Pillai and Kang G. Shin,“Real-Time Dynamic Voltage Scaling for Low Power
Embedded Operating Systems”, Symposium on Operating Systems Principles’01, 2001.
[8] Enrico Bini, Giorgio C. Buttazzo, "Biasing Effects in Schedulability Measures", Proceedings of the
16th Euromicro Conference on Real-Time Systems (ECRTS'04), IEEE, 2004.
[9] Ala′ Qadi, Steve Goddard, Shane Farritor, “A Dynamic Voltage Scaling Algorithm for Sporadic
Tasks”, 24th IEEE International Real-Time Systems Symposium (RTSS'03) 2003.
[10] Too-Seng Tia, Jane W. S. Liu, Jun Sun, Rhan Ha, "A Linear-Time Optimal Acceptance Test for
Scheduling of Hard Real-Time Tasks", IEEE Transactions on Software Engineering, November
1994.
[11] Shinpei Kato and Nobuyuki Yamasaki, “Scheduling Aperiodic Tasks Using Total Bandwidth Server
on Multiprocessors”, Proceedings of the 2008 IEEE/IFIP International Conference on Embedded and
Ubiquitous Computing - Volume 01, 2008.