High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when a computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, together with scheduling methodologies targeted at high-speed computing systems. The comparative study of these scheduling methodologies is carried out on the attributes of both platform and application: execution time, nature of the task, task-handling capability, and type of host and computing platform. Finally, a summary chart is prepared; it demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... (IJECEIAES)
Embedded systems with multi-core processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multi-core systems with effective CPU usage and lower response time. Task priorities are varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core Multi-Level Feedback Queue (MCMLFQ) to manage task execution in the multi-core system.
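The multilevel feedback queue underlying MCMLFQ can be illustrated with a minimal single-core sketch; the function name, the quanta, and the demotion rule below are our own assumptions, and the paper's reinforcement-learning priority assignment is omitted:

```python
from collections import deque

def mlfq_run(tasks, quanta=(2, 4, 8)):
    """Simulate a single-core multilevel feedback queue.
    tasks: dict name -> CPU burst length; all tasks arrive at t = 0.
    quanta: time slice per priority level (level 0 is highest).
    Returns the completion time of each task."""
    queues = [deque() for _ in quanta]
    remaining = dict(tasks)
    for name in tasks:
        queues[0].append(name)              # every task starts at top priority
    t, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        name = queues[level].popleft()
        ran = min(quanta[level], remaining[name])
        t += ran
        remaining[name] -= ran
        if remaining[name] == 0:
            finish[name] = t
        else:                               # exhausted its quantum: demote
            queues[min(level + 1, len(queues) - 1)].append(name)
    return finish

print(mlfq_run({"A": 3, "B": 6}))           # {'A': 5, 'B': 9}
```

A real MCMLFQ would keep one such queue set per core and let the learner adjust entry levels or quanta instead of fixing them.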
Optimized Assignment of Independent Task for Improving Resources Performance ... (ijgca)
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as security of resources, heterogeneity of resources, fault tolerance, resource discovery, and job scheduling. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous distributed computing resources and is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed that optimizes two important criteria for improving resource performance: makespan time and resource utilization. In addition, we classify various task scheduling heuristics in the grid on the basis of their characteristics.
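The abstract does not spell out the proposed algorithm, but a standard baseline for the stated criteria (makespan and utilization) on independent tasks is the Min-Min heuristic; a sketch, with an assumed expected-time-to-compute (ETC) matrix:

```python
def min_min(etc):
    """Min-Min heuristic for independent tasks on heterogeneous machines.
    etc[i][j] = estimated time to compute task i on machine j.
    Repeatedly assigns the task with the smallest minimum completion time.
    Returns (assignment task -> machine, makespan)."""
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines              # when each machine becomes free
    unassigned = set(range(n_tasks))
    assignment = {}
    while unassigned:
        task, machine, ct = min(
            ((t, m, ready[m] + etc[t][m])
             for t in unassigned for m in range(n_machines)),
            key=lambda x: x[2])
        assignment[task] = machine
        ready[machine] = ct
        unassigned.remove(task)
    return assignment, max(ready)

# task 1 runs fastest on machine 1, so it is placed first; makespan is 8
print(min_min([[3, 5], [4, 2], [6, 6]]))
```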
A novel methodology for task distribution (ijesajournal)
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e., a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application to the HRCS. An on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are then distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that the MLF algorithm outperforms them in efficient utilization of on-chip resources and also speeds up application execution.
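A minimum-laxity ordering of the kind MLF builds on can be sketched in a few lines; the task names and numbers below are illustrative, not taken from the paper's JPEG/OFDM task graphs:

```python
def mlf_order(tasks, t=0):
    """Order ready tasks by laxity = deadline - t - execution_time.
    The task with the least slack runs first; a dynamic scheduler would
    recompute laxities whenever the ready list changes."""
    return sorted(tasks, key=lambda task: task[2] - t - task[1])

# (name, execution_time, deadline): laxities are 6, 3 and 9 at t = 0
jobs = [("dct", 4, 10), ("quant", 2, 5), ("huffman", 3, 12)]
print([name for name, _, _ in mlf_order(jobs)])   # ['quant', 'dct', 'huffman']
```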
Comparative Analysis of Various Grid Based Scheduling Algorithms (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
VIRTUAL MACHINE SCHEDULING IN CLOUD COMPUTING ENVIRONMENT (ijmpict)
Cloud computing is an emerging technology in distributed computing that facilitates a pay-per-use model according to user demand and need. A cloud incorporates a set of virtual machines comprising both storage and computational facilities. The fundamental goal of cloud computing is to offer effective access to remote and geographically distributed resources. The cloud is growing every day and faces numerous problems, such as scheduling. Scheduling refers to a collection of policies to regulate the order of tasks to be executed by a computer system. A good scheduler derives its scheduling plan according to the type of work and the varying environment. This paper demonstrates a generalized precedence algorithm for effective performance of work and contrasts it with Round Robin and FCFS scheduling. The algorithm is tested with the CloudSim toolkit, and the outcome illustrates that it performs well compared to some customary scheduling algorithms.
Effective Load Balance Scheduling Schemes for Heterogeneous Distributed System (IJECEIAES)
The importance of distributed systems for distributing workload across processors is globally accepted, and it is an agreed fact that divide and conquer is an effective strategy for load balancing problems. Today, load balancing is a major issue for scheduling algorithms in parallel and distributed systems, including grid and cloud computing. Load balancing is the phenomenon of spreading the workload among processors so that all processors stay busy at all times, in order to prevent processor idle time. This work presents a load balancing algorithm for a heterogeneous distributed system (HeDS) with the aim of minimizing the load imbalance factor (LIF). The proposed algorithm uses optimization techniques such as the Max-Max and Min-Max strategies and is applied on a Folded Crossed Cube (FCC) network. Makespan, speedup, and average resource utilization are also evaluated as performance metrics. The experimental results of the proposed algorithm show improvement over previous work under various test conditions.
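The load imbalance factor is not defined in the abstract; one common formulation (assumed here, and possibly different from the paper's) compares the heaviest node to the mean load:

```python
def load_imbalance_factor(loads):
    """LIF = (max load - mean load) / mean load (one common definition,
    assumed here). 0 means the processors are perfectly balanced."""
    mean = sum(loads) / len(loads)
    return (max(loads) - mean) / mean

print(load_imbalance_factor([10, 10, 10, 30]))   # 1.0: one node is overloaded
print(load_imbalance_factor([15, 15, 15, 15]))   # 0.0 after rebalancing
```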
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO... (ijait)
Heterogeneous machines can be significantly better than homogeneous machines, but an effective workload distribution policy is required. Maximum performance is realized when the system designer overcomes the load imbalance condition within the system. Load distribution and load balancing policies together can reduce total execution time and increase system throughput. In this paper, we provide an algorithmic analysis of a threshold-based job allocation and load balancing policy for a heterogeneous system where all incoming jobs are judiciously and transparently distributed among sharing nodes on the basis of job requirements and processor capability, to maximize performance and reduce execution time. A brief discussion of the job allocation, transfer, and location policies is given, with an explanation of how the load imbalance condition is resolved within the system. A flow of the scheme is given with the essential code, and an analysis of the present algorithm shows how it is better.
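The threshold policy described above can be sketched roughly as follows; the job sizes, node capacities, and the 0.8 threshold are illustrative assumptions, not values from the paper:

```python
def allocate(jobs, capacities, threshold=0.8):
    """Threshold-based allocation sketch: place each job on the least-utilized
    node whose utilization after the assignment stays within the threshold;
    if every node would exceed it, fall back to the least-utilized node."""
    used = [0.0] * len(capacities)
    placement = []
    for job in jobs:
        def util(i):
            return (used[i] + job) / capacities[i]
        under = [i for i in range(len(capacities)) if util(i) <= threshold]
        best = min(under or range(len(capacities)), key=util)
        used[best] += job
        placement.append(best)
    return placement, used

print(allocate([4, 4, 4], [10, 10]))   # ([0, 1, 0], [8.0, 4.0])
```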
A survey of various scheduling algorithms in cloud computing environments (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTING (cscpconf)
Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share the algorithm and processing units as computing resources, which leads to heavy inter-process communication. The main part of the proposed algorithm is the resource management unit, which performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time and non-preemptive systems. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks. The goal of the computation model is to provide a realistic representation of the costs of programming. The results show the benefit of the task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling, and communication. Some important models for task partitioning are also discussed in the paper. We target an algorithm for task partitioning that improves inter-process communication between tasks and uses the resources of the system efficiently. The proposed algorithm contributes to minimizing the inter-process communication cost amongst the executing processes.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL... (ijdpsjournal)
Parallel job scheduling is an important field of research in computer science. Finding the most suitable processor in high-performance or cluster computing for user-submitted jobs plays an important role in measuring system performance. A new scheduling technique called communication-aware scheduling is devised, capable of handling serial, parallel, mixed, and dynamic jobs. This work focuses on comparing communication-aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication-aware scheduling performs better.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer reviewed journal. For more detail or to submit your article, please visit www.ijera.com
In the era of big data, even though we have large infrastructure, stored data varies in size, format, variety, and volume across several platforms such as Hadoop and the cloud, so an application faces the problem of how to process data that varies in size and format. Data whose application and resource needs vary at run time constitute a dynamic workflow. Using large infrastructure and a huge amount of resources for data analysis is time consuming and wasteful; it is better to use a scheduling algorithm to execute the given data set efficiently, and to evaluate which scheduling algorithm is best suited to it. We evaluate different data sets to understand which algorithm is most suitable for efficient execution, and store the data after analysis.
Learning scheduler parameters for adaptive preemption (csandit)
An operating system scheduler is expected not to let the processor stay idle if any process is ready or waiting for execution. This problem gains more importance as the number of processes always outnumbers the processors by large margins. It is in this regard that schedulers are given the ability to preempt a running process, following some scheduling algorithm, and give us the illusion of several processes running simultaneously. A process is allowed to utilize CPU resources for a fixed quantum of time (termed the timeslice for preemption) and is then preempted for another waiting process. Each of these process preemptions leads to considerable overhead of CPU cycles, which are a valuable resource for runtime execution. In this work we try to utilize the historical performance of a scheduler and predict the nature of the currently running process, thereby trying to reduce the number of preemptions. We propose a machine-learning module to predict a better-performing timeslice, calculated from a static knowledge base and an adaptive reinforcement-learning-based suggestive module. Results for an "adaptive timeslice parameter" for preemption show good savings in CPU cycles and efficient throughput time.
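The flavour of an adaptive timeslice can be shown with a far simpler predictor than the paper's learning module: an exponentially weighted average of past CPU bursts (all constants here are our assumptions, not the paper's parameters):

```python
def next_timeslice(history, base=10.0, alpha=0.5):
    """Predict the next timeslice from past CPU-burst lengths with an
    exponentially weighted average. A CPU-bound process with long bursts
    gets a longer slice and so suffers fewer preemptions; the paper replaces
    this fixed rule with a reinforcement-learning-based predictor."""
    pred = base
    for burst in history:
        pred = alpha * burst + (1 - alpha) * pred
    return pred

print(next_timeslice([8, 8, 8]))   # 8.25: converging toward the 8-tick bursts
```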
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Published papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AN... (IJCSEA Journal)
Energy consumption is a critical design issue in real-time systems, especially battery-operated ones. Maintaining high performance while extending the battery life between charges is an interesting challenge for system designers. Dynamic Voltage Scaling and Dynamic Frequency Scaling allow us to adjust the supply voltage and processor frequency to adapt to the workload demand for better energy management. Usually, higher processor voltage and frequency lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have been developed recently to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy-efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme. It also considers the run-time behaviour of tasks, to further exploit the idle periods of processors for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS), and Weighted Scheduling (WS) algorithms, which use the Service-Rate-Proportionate (SRP) Slack Distribution Technique. Our proposed algorithm achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
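The voltage/frequency trade-off this abstract relies on follows from the standard dynamic-power model P = C·V²·f; a sketch with an assumed switched capacitance:

```python
def dvfs_energy(cycles, volt, freq, cap=1e-9):
    """Dynamic energy for `cycles` cycles at supply voltage `volt` and
    frequency `freq`: P = C*V^2*f and t = cycles/f, so E = C*V^2*cycles.
    Energy scales with V^2, not with f directly; lowering V (which forces a
    lower f) trades execution time for energy."""
    return cap * volt ** 2 * cycles, cycles / freq

# halving voltage and frequency: 4x less energy, 2x longer execution
print(dvfs_energy(1e9, 1.2, 1e9))     # ~ (1.44 J, 1.0 s)
print(dvfs_energy(1e9, 0.6, 0.5e9))   # ~ (0.36 J, 2.0 s)
```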
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
Hackers Portfolio and its Impact on Society
Dr. Adnan Omar and Terrance Sanchez, M.S.
Ontology Based Multi-Viewed Approach for Requirements Engineering
R. Subha and S. Palaniswami
Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem
Hojjat Emami and Parvaneh Hasanzadeh
Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology
V. K. Narendira Kumar and B. Srinivasan
Comparative Study of WLAN, WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech and Kittidech Impaiboon
A New Design to Improve the Security Aspects of RSA Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition
P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa... (Editor IJCATR)
Grid computing provides a framework and deployment environment that enables resource sharing, access, aggregation, and management. It allows coordinated use of various resources in a dynamic, distributed virtual organization. Grid scheduling is responsible for resource discovery, resource selection, and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups. Two algorithms have accordingly been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols, such as the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), which are used to distribute each task's messages to grid resources, is also considered.
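The primary-backup constraint (two copies of each task on two different processors) can be sketched as a simple placement; the round-robin offset below is our illustration, not either of the paper's two algorithms:

```python
def place_primary_backup(tasks, processors):
    """Place a primary and a backup copy of each task on two different
    processors (assumes at least two processors). A real scheduler would
    also respect precedence constraints and backup-overloading rules."""
    plan = {}
    for i, task in enumerate(tasks):
        primary = processors[i % len(processors)]
        backup = processors[(i + 1) % len(processors)]
        plan[task] = (primary, backup)
    return plan

plan = place_primary_backup(["t1", "t2", "t3"], ["P0", "P1", "P2"])
print(plan)   # each task's two copies land on distinct processors
```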
Evaluating the performance and behaviour of RT-Xen (ijesajournal)
Virtualization, together with real-time support, is emerging in an increasing number of use cases, varying from embedded systems to enterprise computing. One of the most popular open-source virtualization packages is Xen. Its current implementation does not qualify it as a candidate for time-critical systems. Researchers and developers have extended it and claim the efficient usage of their versions in such systems. RT-Xen is one of these versions: a virtualization platform based on compositional scheduling that uses a suite of fixed-priority schedulers. This paper evaluates the performance and scheduling behaviour of RT-Xen. The test results show that only two of the proposed schedulers are suitable for soft real-time applications.
Image segmentation by modified MAP-ML estimations (ijesajournal)
Though numerous algorithms exist for image segmentation, several issues relate to their execution time. Image segmentation is a label relabeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternately carries out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms, while executing faster and giving automatic segmentation without any human intervention. Its results match image edges very close to human perception.
Dominant block guided optimal cache size estimation to maximize ipc of embedd... (ijesajournal)
Embedded system software is highly constrained from the viewpoints of performance, memory footprint, energy consumption, and implementation cost. It is always desirable to obtain better Instructions per Cycle (IPC), and the instruction cache makes a major contribution to improving IPC. Cache memories are realized on the same chip where the processor runs, which considerably increases the system cost as well; hence, a trade-off must be maintained between cache size and the performance improvement offered. The number of cache lines and the cache line size are important parameters in cache design, and the design space for caches is quite large. Executing the given application with different cache sizes on an instruction set simulator (ISS) to figure out the optimal cache size is time consuming. In this paper, a technique is proposed to identify the number of cache lines and the cache line size for the L1 instruction cache that will offer the best, or nearly the best, IPC. The cache size is derived, at a higher abstraction level, from basic block analysis in the Low Level Virtual Machine (LLVM) environment, and is cross-validated by simulating a set of benchmark applications with different cache sizes in SimpleScalar's out-of-order simulator. The proposed method seems superior in estimation accuracy and/or estimation time to the existing methods for estimating the optimal cache size parameters (cache line size, number of cache lines).
DYNAMIC TASK PARTITIONING MODEL IN PARALLEL COMPUTINGcscpconf
Parallel computing systems employ task partitioning strategies in a true multiprocessing manner. Such systems share algorithms and processing units as computing resources, which leads to intensive inter-process communication. The core of the proposed algorithm is a resource management unit that performs task partitioning and co-scheduling. In this paper, we present a technique for integrated task partitioning and co-scheduling on a privately owned network, focusing on real-time, non-preemptive systems. The goal of the computation model is to provide a realistic representation of the costs of programming. A large variety of experiments have been conducted on the proposed algorithm using synthetic and real tasks, and the results show the benefit of task partitioning. The main characteristics of our method are optimal scheduling and a strong link between partitioning, scheduling and communication. Some important models for task partitioning are also discussed in the paper. We target a task partitioning algorithm that improves inter-process communication between tasks and uses system resources efficiently; the proposed algorithm minimizes the inter-process communication cost among executing processes.
EFFICIENT SCHEDULING STRATEGY USING COMMUNICATION AWARE SCHEDULING FOR PARALL...ijdpsjournal
In computer science, parallel job scheduling is an important field of research. Finding the most suitable processor on a high-performance or cluster computing system for user-submitted jobs plays an important role in system performance. A new scheduling technique called communication-aware scheduling is devised, capable of handling serial jobs, parallel jobs, mixed jobs and dynamic jobs. This work compares communication-aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication-aware scheduling performs better than the available techniques.
IJERA (International Journal of Engineering Research and Applications)
In the era of big data, even with large infrastructure, stored data varies in size, format, variety and volume across platforms such as Hadoop and the cloud, so an application faces the problem of how to process data that varies in size and format. A workflow whose data and available resources vary at run time is called a dynamic workflow. Using large infrastructure and a huge amount of resources to analyse the data is time consuming and wasteful; it is better to use a scheduling algorithm for efficient execution of the given data set. We evaluate different data sets to understand which scheduling algorithm is most suitable for efficient execution and analysis of a given data set, and store the data after analysis.
Learning scheduler parameters for adaptive preemptioncsandit
An operating system scheduler is expected not to let a processor stay idle if any process is ready or waiting for execution. This problem gains importance because processes always outnumber processors by large margins. It is in this regard that schedulers are given the ability to preempt a running process, following some scheduling algorithm, giving us the illusion of several processes running simultaneously. A process is allowed to utilize CPU resources for a fixed quantum of time (termed the timeslice for preemption) and is then preempted for another waiting process. Each such process preemption incurs considerable overhead in CPU cycles, which are a valuable resource for runtime execution. In this work we try to utilize the historical performance of a scheduler and predict the nature of the currently running process, thereby trying to reduce the number of preemptions. We propose a machine-learning module to predict a better-performing timeslice, calculated from a static knowledge base and an adaptive reinforcement-learning-based suggestive module. Results for an "adaptive timeslice parameter" for preemption show good savings in CPU cycles and efficient throughput time.
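The paper's module combines a static knowledge base with reinforcement learning; the toy epsilon-greedy sketch below only illustrates the core idea of learning a timeslice that avoids unnecessary preemptions. The candidate quanta, the reward shape and the burst distribution are all invented for illustration.

```python
# Hedged sketch of learning an "adaptive timeslice": an epsilon-greedy
# bandit picks a quantum, is rewarded when the quantum matches the
# process's CPU burst (no preemption, little wasted tail), and converges
# on the best-fitting timeslice.
import random

TIMESLICES = [10, 20, 40, 80]                    # candidate quanta (ms), assumed

class TimesliceLearner:
    def __init__(self, eps=0.1):
        self.eps = eps
        self.value = {t: 0.0 for t in TIMESLICES}   # running reward estimates
        self.count = {t: 0 for t in TIMESLICES}

    def pick(self):
        if random.random() < self.eps:               # explore occasionally
            return random.choice(TIMESLICES)
        return max(TIMESLICES, key=self.value.get)   # otherwise exploit

    def update(self, t, burst):
        # Reward quanta close to the actual burst length; a too-short
        # quantum means a preemption, a too-long one wastes the tail.
        reward = 1.0 - abs(t - burst) / max(t, burst)
        self.count[t] += 1
        self.value[t] += (reward - self.value[t]) / self.count[t]

random.seed(0)
learner = TimesliceLearner()
for _ in range(2000):                  # bursts cluster around 40 ms
    t = learner.pick()
    learner.update(t, burst=random.gauss(40, 5))
print(max(TIMESLICES, key=learner.value.get))
```

With bursts clustered near 40 ms, the learner settles on the 40 ms quantum, which is the behaviour the abstract's "adaptive timeslice parameter" aims for.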
ENERGY EFFICIENT SCHEDULING FOR REAL-TIME EMBEDDED SYSTEMS WITH PRECEDENCE AN...IJCSEA Journal
Energy consumption is a critical design issue in real-time systems, especially battery-operated ones. Maintaining high performance while extending battery life between charges is an interesting challenge for system designers. Dynamic voltage scaling and dynamic frequency scaling allow us to adjust supply voltage and processor frequency to adapt to the workload demand for better energy management: higher processor voltage and frequency usually lead to higher system throughput, while energy reduction can be obtained using lower voltage and frequency. Many real-time scheduling algorithms have recently been developed to reduce energy consumption in portable devices that use voltage-scalable processors. For a real-time application comprising a set of real-time tasks with precedence and resource constraints executing on a distributed system, we propose a dynamic energy-efficient scheduling algorithm with a weighted First Come First Served (WFCFS) scheme, which also considers the run-time behaviour of tasks to further exploit processor idle periods for energy saving. Our algorithm is compared with the existing Modified Feedback Control Scheduling (MFCS), First Come First Served (FCFS) and Weighted Scheduling (WS) algorithms that use the Service-Rate-Proportionate (SRP) slack distribution technique, and achieves about 5 to 6 percent more energy savings and increased reliability over the existing ones.
Comparative Analysis of Job Scheduling for Grid Environment
Neeraj Pandey, Ashish Arya and Nitin Kumar Agrawal
Hackers Portfolio and its Impact on Society
Dr. Adnan Omar and Terrance Sanchez, M.S.
Ontology Based Multi-Viewed Approach for Requirements Engineering
R. Subha and S. Palaniswami
Modified Colonial Competitive Algorithm: An Approach for Graph Coloring Problem
Hojjat Emami and Parvaneh Hasanzadeh
Security and Privacy in E-Passport Scheme using Authentication Protocols and Multiple Biometrics Technology
V. K. Narendira Kumar and B. Srinivasan
Comparative Study of WLAN, WPAN, WiMAX Technologies
Prof. Mangesh M. Ghonge and Prof. Suraj G. Gupta
A New Method for Web Development using Search Engine Optimization
Chutisant Kerdvibulvech and Kittidech Impaiboon
A New Design to Improve the Security Aspects of RSA Cryptosystem
Sushma Pradhan and Birendra Kumar Sharma
A Hybrid Model of Multimodal Approach for Multiple Biometrics Recognition
P. Prabhusundhar, V.K. Narendira Kumar and B. Srinivasan
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa...Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource sharing, accessing, aggregation and management, allowing the coordinated use of various resources in dynamic, distributed virtual organizations. Grid scheduling is responsible for resource discovery, resource selection and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups; two algorithms have been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is also considered: protocols such as the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used to distribute each task's messages to grid resources.
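The primary-backup idea above rests on one invariant: a task's primary and backup copies must sit on different processors, so a single failure cannot lose both. The sketch below illustrates only that invariant with a simple least-loaded placement; it is not the paper's algorithm, and the task set and placement policy are assumptions.

```python
# Illustrative sketch of primary-backup placement: put each task's primary
# copy on the least-loaded processor and its backup on a *different*
# processor, so no single processor failure loses both copies.

def schedule_primary_backup(tasks, num_procs):
    """tasks: dict name -> execution time. Returns name -> (primary, backup)."""
    load = [0.0] * num_procs
    placement = {}
    for name, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        by_load = sorted(range(num_procs), key=load.__getitem__)
        primary, backup = by_load[0], by_load[1]   # two distinct processors
        load[primary] += cost                      # backup runs only on failure
        placement[name] = (primary, backup)
    return placement

plan = schedule_primary_backup({"t1": 4, "t2": 3, "t3": 2, "t4": 1}, 3)
assert all(p != b for p, b in plan.values())       # fault-tolerance invariant
print(plan)
```

A real scheduler would additionally handle the precedence constraints and backup overloading the abstract mentions, which this sketch deliberately omits.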
Evaluating the performance and behaviour of rt xenijesajournal
Virtualization, together with real-time support, is being used in an increasing number of use cases, varying from embedded systems to enterprise computing. One of the most popular open-source virtualization packages is Xen. Its current implementation does not qualify it as a candidate for time-critical systems; researchers and developers have extended it and claim that their versions can be used efficiently in such systems. RT-Xen is one of these versions: a virtualization platform based on compositional scheduling that uses a suite of fixed-priority schedulers. This paper evaluates the performance and scheduling behaviour of RT-Xen. The test results show that only two of the proposed schedulers are suitable for soft real-time applications.
Image segmentation by modified map ml estimationsijesajournal
Though numerous algorithms exist to perform image segmentation, several issues relate to their execution time. Image segmentation can be cast as a label relabeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented that alternately carries out the maximum a posteriori (MAP) estimation and the maximum likelihood (ML) estimation. In this paper, the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with existing algorithms, while the algorithm executes faster than the existing one and gives automatic segmentation without any human intervention. Its results match image edges very closely to human perception.
Microcontroller systems are among the vital subjects taken by students during their course of study in universities and other colleges of science, engineering and technology around the world. In this paper, we address student comprehension and skill development in embedded system design using the PIC16F887 microcontroller through demonstration of hands-on laboratory experiments, together with development of software code and circuit diagram simulation, to help students connect their theoretical knowledge with practical experience. Each experiment was carried out using the BK300 development board, the PICKit3 programmer and Proteus 8.0 software. Our years of experience teaching the microcontroller course, and the active involvement of students, are manifested in complete, in-depth hands-on laboratory projects on real-life problem solving. The laboratory session with the development board and software demonstrated in this article is unambiguous, and future embedded system laboratory sessions could be designed around the ATMel lines of microcontrollers.
Temporal workload analysis and its application to power aware schedulingijesajournal
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). The basic idea of power-aware scheduling is to find slack available to tasks and reduce the CPU's frequency, or lower its voltage, using the found slack. In this paper, we introduce the temporal workload of a system, which specifies how busy its CPU is in completing the tasks at the current time. Analyzing the temporal workload provides a sufficient condition for schedulability of preemptive earliest-deadline-first scheduling and an effective method to identify and distribute slack generated by early-completed tasks. The simulation results show that the proposed algorithm reduces energy consumption by 10-70% over the existing algorithm, and its complexity is O(n), so a practical on-line scheduler could be devised using the proposed algorithm.
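The slack idea behind DVS scheduling can be made concrete with a small calculation: when a predecessor finishes early, the reclaimed time lets the next task run slower and still meet its window, and dynamic power falls roughly with f·V². The sketch below is a simplification for illustration, not the paper's temporal-workload analysis; the clamp value and task numbers are assumptions.

```python
# Simplified sketch of slack-based frequency scaling in power-aware DVS
# scheduling: stretch a task over its window plus any reclaimed slack,
# running at the lowest frequency that still finishes the worst-case work.

def scaled_frequency(wcet, deadline_window, reclaimed_slack, f_max=1.0):
    """Lowest normalised frequency that still meets the task's window."""
    available = deadline_window + reclaimed_slack
    return max(wcet / available, 0.2) * f_max   # clamp to an assumed minimum f

# Task needs 4 ms of worst-case work in a 5 ms window; a predecessor
# finished 3 ms early, so the task may be stretched over 8 ms.
f = scaled_frequency(wcet=4, deadline_window=5, reclaimed_slack=3)
print(f)   # 0.5: half speed still finishes within the window
```

Without the reclaimed slack the same task would need f = 4/5 = 0.8, so the early completion directly buys a lower operating point.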
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements, so an efficient task distribution methodology is essential for obtaining high performance in modern embedded systems. In this paper, we present a novel task distribution methodology called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as the cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex 5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitter are represented as task graphs, and the tasks are distributed, both statically and dynamically, to the HRCS platform to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms; the comparison shows that MLF outperforms them in efficient utilization of on-chip resources and also speeds up application execution.
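The Minimum Laxity First rule named above has a compact core: at each dispatch point, run the ready task whose laxity (deadline minus current time minus remaining execution time) is smallest. The sketch below shows only that selection rule; the task names and attributes are invented, and the paper's full cost function over HRCS resources is not reproduced.

```python
# Minimal sketch of the Minimum Laxity First (MLF) selection rule:
# laxity = deadline - now - remaining work; the smallest laxity is the
# most urgent task and is dispatched first.

def laxity(task, now):
    return task["deadline"] - now - task["remaining"]

def pick_mlf(ready, now):
    """Return the ready task with minimum laxity (most urgent)."""
    return min(ready, key=lambda t: laxity(t, now))

ready = [
    {"name": "jpeg_dct", "deadline": 20, "remaining": 6},   # laxity 14
    {"name": "ofdm_fft", "deadline": 12, "remaining": 9},   # laxity 3
    {"name": "jpeg_vlc", "deadline": 30, "remaining": 4},   # laxity 26
]
print(pick_mlf(ready, now=0)["name"])   # ofdm_fft
```

A task whose laxity reaches zero must run immediately or miss its deadline, which is why laxity is a natural urgency measure for real-time task graphs like the JPEG and OFDM examples.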
Optimized Assignment of Independent Task for Improving Resources Performance ...Ricardo014
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as security of resources, heterogeneity of resources, fault tolerance, resource discovery and job scheduling. Scheduling is one of the core steps in efficiently exploiting the capabilities of heterogeneous distributed computing resources, and is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed that optimizes two important criteria to improve resource performance: makespan time and resource utilization. In addition, we classify various task scheduling heuristics in the grid on the basis of their characteristics.
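Makespan-oriented grid scheduling is often illustrated with the classic min-min heuristic: repeatedly assign the task whose earliest completion time is smallest. The sketch below shows min-min as one representative of the heuristics the abstract classifies; it is not claimed to be the paper's proposed algorithm, and the expected-time matrix is invented.

```python
# Hedged sketch of the min-min heuristic for grid task scheduling.
# etc[i][j] = expected time to compute task i on resource j.

def min_min(etc):
    num_tasks, num_res = len(etc), len(etc[0])
    ready = [0.0] * num_res                  # when each resource frees up
    unassigned, schedule = set(range(num_tasks)), {}
    while unassigned:
        # Among all unassigned tasks, pick the one with the smallest
        # earliest completion time, and the resource achieving it.
        finish, task, res = min((ready[j] + etc[i][j], i, j)
                                for i in unassigned for j in range(num_res))
        schedule[task] = res
        ready[res] = finish
        unassigned.remove(task)
    return schedule, max(ready)              # assignment and makespan

etc = [[3, 5], [4, 2], [6, 7], [2, 8]]       # 4 tasks on 2 resources
schedule, makespan = min_min(etc)
print(schedule, makespan)
```

Min-min tends to keep fast resources busy with short tasks, which improves makespan but can starve slower resources; that trade-off against resource utilization is exactly the pair of criteria the abstract targets.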
A survey of various scheduling algorithm in cloud computing environmenteSAT Journals
Abstract: Cloud computing is known as a provider of dynamic services using very large, scalable and virtualized resources over the Internet. Due to the novelty of the cloud computing field, there are not many standard task scheduling algorithms used in cloud environments; in particular, the high communication cost in the cloud prevents well-known task schedulers from being applied in large-scale distributed environments. Today, researchers attempt to build job scheduling algorithms that are compatible with and applicable in the cloud computing environment. Job scheduling is among the most important tasks in a cloud environment because users pay for resources based on usage time; hence efficient utilization of resources is important, and scheduling plays a vital role in getting maximum benefit from the resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing. Index Terms: cloud computing, scheduling, algorithm
SURVEY ON SCHEDULING AND ALLOCATION IN HIGH LEVEL SYNTHESIScscpconf
This paper presents a detailed survey of the scheduling and allocation techniques in High Level Synthesis (HLS) presented in the research literature, along with methodologies and techniques to improve speed, (silicon) area and power in High Level Synthesis.
A developer needs to evaluate software performance metrics such as power consumption at an early stage of the design phase to make a device or its software efficient, especially in real-time embedded systems. Constructing performance models and evaluation techniques for a given system requires significant effort. This paper presents a framework that bridges a functional modeling approach, such as FSM or UML, and an analytical (mathematical) modeling approach, such as Hierarchical Performance Modeling (HPM), as a technique to find the expected average power consumption for different layers of abstraction. A Hierarchical Generic FSM (HGFSM) is developed to estimate the expected average power, and a case study illustrates how the framework is used to estimate the average power and energy produced.
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computingijsrd.com
Cloud computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, including both computational and storage facilities, that can be provisioned on demand depending on users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing, which is developing rapidly, it is performed to gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is derived from the ratio of job to resource; to calculate the priority of a job we use the analytic hierarchy process. We also compare the results with other algorithms such as first come first served and round robin.
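The ratio idea in the abstract above can be sketched in a few lines: give each job a priority from the ratio of what it requests to what the resource offers, then dispatch in priority order. This is a loose illustration under stated assumptions; the paper's actual analytic-hierarchy-process weighting is not reproduced, and the job set and ranking rule are invented.

```python
# Hedged sketch of job/resource-ratio priority scheduling: jobs that
# demand a smaller fraction of the resource's capacity are served first.

def priority(job, resource):
    # Sum of per-dimension demand ratios; smaller -> higher priority.
    return job["cpu"] / resource["cpu"] + job["mem"] / resource["mem"]

resource = {"cpu": 8.0, "mem": 16.0}
jobs = [
    {"name": "render", "cpu": 6, "mem": 8},    # ratio 1.25
    {"name": "etl",    "cpu": 2, "mem": 4},    # ratio 0.50
    {"name": "train",  "cpu": 8, "mem": 16},   # ratio 2.00
]
order = sorted(jobs, key=lambda j: priority(j, resource))
print([j["name"] for j in order])   # etl first, train last
```

Dispatching light jobs first is the same intuition as shortest-job-first, but expressed relative to the target resource's capacity rather than absolute runtime.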
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
In this paper we present a reliable scheduling algorithm for the cloud computing environment, created by means of a new technique that classifies jobs and considers their request and acknowledgement times in a qualification function. Evaluating the previous algorithms, we find that job scheduling has been performed using parameters associated with a failure rate; therefore, in the proposed algorithm, some other important parameters are used in addition to the previous ones, so jobs can be scheduled differently based on these parameters. The work is associated with a mechanism in which a major job is divided into sub-jobs. To balance the jobs, the request and acknowledgement times are calculated separately, and the schedule of each job is created by calculating the request and acknowledgement times in the form of a shared job. As a result, the efficiency of the system is increased, the real time of this algorithm is improved in comparison with the other algorithms, and, through the presented mechanism, the total processing time in cloud computing is improved as well.
Comparative analysis of the essential CPU scheduling algorithmsjournalBEEI
CPU scheduling algorithms have a significant function in multiprogramming operating systems: when CPU scheduling is effective, a high rate of computation can be done correctly and the system remains in a stable state. CPU scheduling algorithms are also the main service in operating systems for achieving maximum CPU utilization. This paper compares the characteristics of CPU scheduling algorithms to determine which is the best algorithm for gaining higher CPU utilization. The comparison covers ten scheduling algorithms across different parameters, such as performance, algorithm complexity, algorithm problems, average waiting time, advantages and disadvantages, allocation method, etc. The main purpose of the article is to analyze the CPU scheduler in a way that suits the scheduling goals, showing which algorithm type is most suitable for a particular situation based on its full properties.
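Average waiting time, one of the comparison parameters listed above, is easy to compute for the two simplest policies. The toy comparison below pits FCFS against round robin on one made-up workload (all jobs arriving at time 0); real comparisons would also cover SJF, priority and multilevel-queue schedulers.

```python
# Toy comparison of average waiting time: FCFS versus round robin (RR)
# for jobs that all arrive at time 0. Burst times are invented.

def fcfs_wait(bursts):
    wait, clock = [], 0
    for b in bursts:
        wait.append(clock)          # each job waits for all earlier jobs
        clock += b
    return sum(wait) / len(wait)

def rr_wait(bursts, quantum):
    n = len(bursts)
    remaining = list(bursts)
    finish, clock = [0] * n, 0
    queue = list(range(n))          # FIFO ready queue, all arrive at t=0
    while queue:
        i = queue.pop(0)
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)         # preempted: back of the queue
        else:
            finish[i] = clock
    return sum(finish[i] - bursts[i] for i in range(n)) / n

bursts = [24, 3, 3]
print(fcfs_wait(bursts), rr_wait(bursts, quantum=4))
```

On this workload FCFS averages 17 time units of waiting while RR with a quantum of 4 averages about 5.7, showing the convoy effect that such surveys use to motivate preemptive policies.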
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ...Editor IJCATR
The aim of cloud computing is to share a large number of resources and pieces of equipment for computing and storing knowledge and information from large scientific sources; the scheduling algorithm is therefore regarded as one of the most important challenges and problems in the cloud. To solve the task scheduling problem in this study, the ant colony optimization (ACO) algorithm was adapted from social theories with a fair and accurate resource allocation approach based on machine performance and capacity. This study was intended to decrease runtime and execution costs, optimize the use of machines and reduce their idle time. Finally, the proposed method was compared with the Berger and greedy algorithms. The simulation results indicate that the proposed algorithm reduced the makespan and execution cost as tasks were added, increased fairness and load balancing, made optimal use of machines possible and increased user satisfaction. According to the evaluations, the proposed algorithm improved the makespan by 80%.
SCHEDULING DIFFERENT CUSTOMER ACTIVITIES WITH SENSING DEVICEijait
Most periodic tasks are assigned to processors using a partition scheduling policy after checking feasibility conditions. A new approach is proposed for scheduling different activities together with one periodic task within the system. In this paper, control strategies are identified for allocating different types of tasks (activities) to individual computing elements such as smartphones or microphones. In our simulation model, the aperiodic tasks generated by each periodic task are taken into consideration, and different sets of periodic and aperiodic tasks are scheduled together. This new approach shows that scheduling all the different activities with one periodic task leads to better performance.
Similar to Task scheduling methodologies for high speed computing systems
DESIGN OF AN EMBEDDED SYSTEM: BEDSIDE PATIENT MONITORijesajournal
Embedded systems, ranging from tiny microcontroller-based sensor devices to mobile smartphones, have a vast variety of applications. However, the literature contains no up-to-date system-level design of embedded hardware and software; academic publications mainly focus on improving specific features of embedded software/hardware and on embedded system designs for specific applications. Moreover, commercially available embedded systems are not disclosed for researchers' view in the literature. Therefore, in this paper we first present how to design a state-of-the-art embedded system using emerging hardware and software technologies. Bedside patient monitor devices used in intensive care units of hospitals are also classified as embedded systems and run sophisticated software and algorithms for better diagnosis of diseases. We reveal the architecture of our commercially available bedside patient monitor to provide a design example of embedded systems relating to emerging technologies.
PIP-MPU: FORMAL VERIFICATION OF AN MPU-BASED SEPARATION KERNEL FOR CONSTRAINED...ijesajournal
Pip-MPU is a minimalist separation kernel for constrained devices (scarce memory and power resources).
In this work, we demonstrate high-assurance of Pip-MPU’s isolation property through formal verification.
Pip-MPU offers user-defined on-demand multiple isolation levels guarded by the Memory Protection Unit
(MPU). Pip-MPU derives from the Pip protokernel, with a full code refactoring to adapt to the constrained
environment and targets equivalent security properties. The proofs verify that the memory blocks loaded in
the MPU adhere to the global partition tree model. We provide the basis of the MPU formalisation and the
demonstration of the formal verification strategy on two representative kernel services. The publicly
released proofs have been implemented and checked using the Coq Proof Assistant for three kernel
services, representing around 10,000 lines of proof. To our knowledge, this is the first formal verification of an MPU-based separation kernel. The verification process helped discover a critical isolation-related bug.
International Journal of Embedded Systems and Applications (IJESA)ijesajournal
International Journal of Embedded Systems and Applications (IJESA) is a quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Embedded Systems and applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Embedded Systems and establishing new collaborations in these areas.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Embedded Systems & applications.
Call for papers -15th International Conference on Wireless & Mobile Network (...ijesajournal
15th International Conference on Wireless & Mobile Network (WiMo 2023) is dedicated to addressing the challenges in the areas of wireless & mobile networks. The Conference looks for significant contributions to the Wireless and Mobile computing in theoretical and practical aspects. The Wireless and Mobile computing domain emerges from the integration among personal computing, networks, communication technologies, cellular technology, and the Internet Technology. The modern applications are emerging in the area of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both the design and analysis in the context of mobile, wireless, ad-hoc, and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and Mobile computing concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Call for Papers -International Conference on NLP & Signal (NLPSIG 2023)ijesajournal
Scope & Topics
International Conference on NLP & Signal (NLPSIG 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal and Natural Language Processing (NLP).
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to:
Topics of interest include, but are not limited to, the following
Chunking/Shallow Parsing
Dialogue and Interactive Systems
Deep learning and NLP
Discourse and Pragmatics
Information Extraction, Retrieval, Text Mining
Interpretability and Analysis of Models for NLP
Language Grounding to Vision, Robotics and Beyond
Lexical Semantics
Linguistic Resources
Machine Learning for NLP
Machine Translation
NLP and Signal Processing
NLP Applications
Ontology
Paraphrasing/Entailment/Generation
Parsing/Grammatical Formalisms
Phonology, Morphology
POS tagging
Question Answering
Resources and Evaluation
Semantic Processing
Sentiment Analysis, Stylistic Analysis, and Argument Mining
Speech and Multimodality
Speech Recognition and Synthesis
Spoken Language Processing
Statistical and Knowledge based methods
Summarization
Theory and Formalism in NLP
Signal Processing & NLP
Computer Vision, Image Processing & NLP
NLP, AI & Signal
Paper Submission
Authors are invited to submit papers through the conference Submission System by May 06, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by International Journal on Cybernetics & Informatics (IJCI) (Confirmed).
Selected papers from NLPSIG 2023, after further revisions, will be published in the special issue of the following journals.
International Journal on Natural Language Computing (IJNLC)
International Journal of Ubiquitous Computing (IJU)
International Journal of Data Mining & Knowledge Management Process (IJDKP)
Signal & Image Processing : An International Journal (SIPIJ)
International Journal of Ambient Systems and Applications (IJASA)
International Journal of Grid Computing & Applications (IJGCA)
Important Dates
Submission Deadline : May 06, 2023
Authors Notification : May 25, 2023
Final Manuscript Due : June 08, 2023
11th International Conference on Software Engineering & Trends (SE 2023)ijesajournal
11th International Conference on Software Engineering & Trends (SE 2023)
May 27 ~ 28, 2023, Vancouver, Canada
https://acsit2023.org/se/index
Scope & Topics
11th International Conference on Software Engineering & Trends (SE 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of software engineering & applications. Topics of interest include, but are not limited to, the following.
Topics of interest include, but are not limited to, the following
The Software Process
Software Engineering Practice
Web Engineering
Quality Management
Managing Software Projects
Advanced Topics in Software Engineering
Multimedia and Visual Software Engineering
Software Maintenance and Testing
Languages and Formal Methods
Web-based Education Systems and Learning Applications
Software Engineering Decision Making
Knowledge-based Systems and Formal Methods
Search Engines and Information Retrieval
Paper Submission
Authors are invited to submit papers through the conference Submission System by April 08, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by Computer Science Conference Proceedings (H index 35) in Computer Science & Information Technology (CS & IT) series (Confirmed).
Selected papers from SE 2023, after further revisions, will be published in the special issue of the following journals.
The International Journal of Software Engineering & Applications (IJSEA) -ERA indexed
International Journal of Computer Science, Engineering and Applications (IJCSEA)
Important Dates
Submission Deadline : April 08, 2023
Authors Notification : April 29, 2023
Final Manuscript Due : May 06, 2023
PERFORMING AN EXPERIMENTAL PLATFORM TO OPTIMIZE DATA MULTIPLEXINGijesajournal
This article is based on preliminary work on the OSI model management layers to optimized industrial
wired data transfer on low data rate wireless technology. Our previous contribution deal with the
development of a demonstrator providing CAN bus transfer frames (1Mbps) on a low rate wireless channel
provided by Zigbee technology. In order to be compatible with all the other industrial protocols, we
describe in this paper our contribution to design an innovative Wireless Device (WD) and a software tool,
which will aim to determine the best architecture (hardware/software) and wireless technology to be used
taking in account of the wired protocol requirements. To validate the proper functioning of this WD, we
will develop an experimental platform to test different strategies provided by our software tool. We can
consequently prove which is the best configuration (hardware/software) compared to the others by the
inclusion (inputs) of the required parameters of the wired protocol (load, binary rate, acknowledge
timeout) and the analysis of the WD architecture characteristics proposed (outputs) as the delay introduced
by system, buffer size needed, CPU speed, power consumption, meeting the input requirement. It will be
important to know whether gain comes from a hardware strategy with hardware accelerator e.g or a
software strategy with a more perf
GENERIC SOPC PLATFORM FOR VIDEO INTERACTIVE SYSTEM WITH MPMC CONTROLLERijesajournal
Today, a significant number of embedded systems focus on multimedia applications with an almost insatiable demand for low-cost, high-performance, and low-power hardware. In this paper, we present a re-configurable and generic hardware platform for image and video processing. The proposed platform uses the benefits offered by the Field Programmable Gate Array (FPGA) to attain this goal. In this context, a prototype system is developed based on the Xilinx Virtex-5 FPGA with the integration of embedded processors, embedded memory, DDR, interface technologies, Digital Clock Managers (DCM) and MPMC. The MPMC is an essential component for design performance tuning and real-time video processing. We demonstrate the important role of this interface in multi-video applications. In fact, for the successful deployment of DRAM it is mandatory to use a flexible and scalable interface. Our system introduces diverse modules, such as video cut detection and video zoom-in and zoom-out. This demonstrates the utility of this architecture as a universal video processing platform for different application requirements. The platform facilitates the development of video and image processing applications.
This paper presents an inverting buck-boost DC-DC converter design. A negative supply voltage is needed in a variety of applications, but only a few suitable DC-DC converters are available on the market. One such application is OLED, a new display type especially suited for small digital camera or mobile phone displays. Design challenges that come up when negative voltages have to be handled on chip are discussed, such as continuous/discontinuous mode transition problems, negative voltage feedback, and negative over-voltage protection. Both devices operate in a fixed-frequency PWM mode or alternatively in PFM mode. The single-inductor topology is called an inverting buck-boost converter, or simply an inverter. The proposed converter has been implemented in a TSMC 0.13-um 2P4M CMOS process, and the chip area is 325 x 300 um2.
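For reference, the ideal steady-state conversion ratio of the inverting buck-boost topology described above follows from inductor volt-second balance: Vout = -D/(1-D) · Vin, where D is the PWM duty cycle. The small helper below (illustrative, not from the paper, and ignoring conduction and switching losses) computes it:

```python
def inverting_buckboost_vout(vin, duty):
    """Ideal steady-state output of an inverting buck-boost converter.

    Vout = -D / (1 - D) * Vin: |Vout| < Vin for D < 0.5 (buck region),
    |Vout| > Vin for D > 0.5 (boost region). Losses are ignored.
    """
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return -duty / (1 - duty) * vin
```

For example, a 3.3 V input at 50% duty yields an ideal -3.3 V output, while 20% duty yields -0.25 × Vin.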
The payment industry is largely aligned in its desire to create embedded payment systems ready for the modern digital age. The trend to embed payments into a software platform is often regarded as the first step towards a broader trend of embedded finance based on digital representations of fiat currencies. It became clear to our research team that there are no technologies and protocols that are protected against attacks by quantum computing and that enable automatic embedded payments, online or offline, with no fear of counterfeiting: P2P or device-to-device payments made in real time without intermediaries, in any denomination, even continuous payments per time or service, while preserving the privacy of all parties without enabling illicit activities. We therefore decided to utilize the Generic Innovation Engine [1], which is based on Artificial Intelligence Assistance innovation-acceleration methodologies and tools, in order to boost the progress of innovation towards the necessary solutions. These methodologies accelerate innovation across the board; they propose a framework for natural and artificial intelligence collaboration in pursuit of an innovative (R&D) objective. The outcome of deploying these Artificial Innovation Assistant (AIA) methodologies was tens of patents that yield solutions, a few of which are described in this paper. We argue that a promising avenue for automated embedded payment systems to fulfil people's desire for privacy when conducting payments, and national security agencies' demand for quantum-safe security, could be based on DeFi and digital currency platforms that do not suffer from the flaws of DLT-based solutions, while introducing real advantages in all aspects: being quantum-resilient; enabling users to decide with whom, if at all, to share information, identity, transaction details, etc., all without trade-offs; complying with AML measures; and accommodating the potential for high transaction volumes. It is not legacy bank accounts, and it is not peer-dependent, nor a self-organizing network.
2nd International Conference on Computing and Information Technology ijesajournal
2nd International Conference on Computing and Information Technology Trends (CCITT 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Computing and Information Technology Trends. The Conference looks for significant contributions to all major fields of Computer Science, Computer Engineering, Information Technology and Trends in theoretical and practical aspects.
TIME CRITICAL MULTITASKING FOR MULTICORE MICROCONTROLLER USING XMOS® KITijesajournal
This paper presents research work on multicore microcontrollers using parallel and time-critical programming for embedded systems. Due to the high complexity and the limitations of such architectures, the application development phase is very hard. The experimental results mentioned in the paper are based on the xCORE multicore microcontroller from XMOS®. The paper also demonstrates multi-tasking and parallel programming for the same platform. The tasks assigned to multiple cores are executed simultaneously, which saves time and energy. The comparative study of a multicore processor and a multicore controller concludes that a microarchitecture-based controller with multiple cores shows better performance in a time-critical multi-tasking environment. The research work mentioned here not only illustrates the functionality of the multicore microcontroller, but also presents a novel technique for programming, profiling and optimization on such platforms in real-time environments.
4th International Conference on Signal Processing and Machine Learning (SIGM...ijesajournal
4th International Conference on Signal Processing and Machine Learning (SIGML 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal Processing and Machine Learning. The aim of the conference is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
DESIGN CHALLENGES IN WIRELESS FIRE SECURITY SENSOR NODES ijesajournal
A simple hardware circuit design with different kinds of fire sensors enables every user to use this wireless fire security system. The challenges in designing nodes with various types of fire sensors are discussed, and the methods to overcome the design problems are also analyzed. The circuit is interfaced with different types of sensors to sense different fire sources such as gas leakage, smoke, and heat. The cost, circuit components, design requirements, and power requirements of the sensor node are minimized. Methods to improve the system's fire-detection quality are analyzed. The system is fully controlled by a PIC microcontroller.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, as a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
International Journal of Embedded Systems and Applications (IJESA) Vol.4, No.4, December 2014
DOI : 10.5121/ijesa.2014.4401
A Case Study: Task Scheduling Methodologies for
High Speed Computing Systems
Mahendra Vucha and Arvind Rajawat
Department of Electronics & Communication Engineering,
Maulana Azad National Institute of Technology, Bhopal, India
ABSTRACT
High speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when a computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in Heterogeneous Computing Systems (HCS). In this paper, we review various scheduling methodologies targeted at high speed computing systems, covering application scheduling models that provide parallelism for both homogeneous and heterogeneous platforms, and we prepare a summary chart. The comparative study is carried out on attributes of both platform and application: execution time, nature of task, task handling capability, and type of host and computing platform. Finally, the summary chart demonstrates the need for developing scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high speed computing platform for real-time applications.
KEYWORDS
High Speed Computing Systems, Heterogeneous Computing System, Homogeneous Computing System, Reconfigurable Hardware, Scheduling, Soft Core Processor, Hard Core Processor
1. INTRODUCTION
Microprocessors are at the core of high performance computing systems, but they provide flexible computing at the expense of performance [1]. Application Specific Integrated Circuits (ASICs) support fixed functionality and superior performance for an application, but they restrict architectural flexibility. A newer computing paradigm [2], Reconfigurable Systems (RS), promises greater flexibility without compromising performance. Complex applications such as MIMO, OFDM and image processing have been accelerated by reconfigurable architectures, achieving higher performance by reducing the instruction fetch, decode and execute bottleneck [1][2][3]. RS brings the ability to configure custom digital circuits dynamically and modify them via software. Creating and modifying digital logic circuits without physically altering the hardware provides a flexible, low cost solution for real-time applications. This dynamic reconfiguration of an application is enabled by the availability of high density programmable logic chips called Field Programmable Gate Arrays (FPGAs). High Speed Computing Systems (HSCS) should therefore have one or more such resources (e.g. a Reconfigurable System on Chip (RSoC) [15] or the MOLEN architecture [26]) as Processing Elements (PEs) to enhance the speed of real-time applications. The computing platforms described in [26][27][28][29] are made by integrating similar resources through a high speed network to support the execution of parallel applications, and are called Homogeneous Computing Systems. The
efficiency of a homogeneous computing system critically depends on the methods used in [22][23][24][26] to schedule the tasks of parallel applications. On the other hand, a diverse set of resources interconnected by a high speed network provides a new computing platform [20][21] called a Heterogeneous Computing System, which supports the execution of computationally intensive parallel and distributed applications. An emerging platform that integrates an array of programmable logic resources and soft core processors on a single chip [15][16][17][18] is called a Heterogeneous Reconfigurable Computing System (HRCS). HRCS is an emerging research paradigm that offers cost effective solutions for computationally intensive applications through hardware reuse, and many multimedia applications [2] have been accelerated on HRCS.
In real time, the tasks of a parallel application must share the resources of the HSCS effectively in order to enhance the execution speed of the application, and this is achieved through an effective scheduling mechanism. Many researchers have presented techniques for mapping multiple tasks to HSCS with the aims of minimizing the execution time of an application and of utilizing resources efficiently. In this paper, we review various existing scheduling methodologies for HSCS. Task scheduling models are of two basic types: static and dynamic. Static scheduling: all information needed for scheduling, such as the structure of the parallel application, the execution time of individual tasks and the communication cost between tasks, must be known in advance [10][12][13][14]. Dynamic scheduling: scheduling decisions are made at runtime, aiming not only to enhance execution time but also to minimize communication overheads [8][20][24][26]. The review of static and dynamic scheduling heuristics for HSCS is described thoroughly in the next chapters. In general, scheduling heuristics are classified into four categories: list scheduling algorithms [20], clustering algorithms [11], duplication algorithms [22], and genetic algorithms. Among them, list scheduling algorithms provide a good quality of schedule and their performance is compatible with all categories of applications [20]. So, in this paper, we concentrate more on list scheduling, which has three steps: task selection, processor selection and status update. The remainder of the paper is organized as follows: task scheduling for homogeneous computing systems in chapter 2, heterogeneous computing systems in chapter 3, reconfigurable computing systems in chapter 4, heterogeneous reconfigurable computing systems in chapter 5, the review summary chart in chapter 6, and finally the conclusion in chapter 7.
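The three list scheduling steps named above (task selection, processor selection, status update) can be sketched as follows. This is an illustrative outline of the generic technique, not any specific algorithm from the surveyed papers; the task fields and the rank values are assumptions for the example.

```python
# Generic list-scheduling skeleton: task selection by a rank (cost
# function), greedy processor selection by earliest finish time, then
# status update. Task/processor fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    exec_time: float
    rank: float          # cost function value, e.g. b-level or URank
    preds: list = field(default_factory=list)

def list_schedule(tasks, n_procs):
    ready_at = [0.0] * n_procs           # when each processor is free
    finish = {}                          # task name -> finish time
    schedule = []
    # 1. Task selection: process tasks in descending rank order.
    for t in sorted(tasks, key=lambda t: -t.rank):
        # A task may start only after all its predecessors finish.
        data_ready = max((finish[p] for p in t.preds), default=0.0)
        # 2. Processor selection: pick the earliest finish time.
        p = min(range(n_procs),
                key=lambda i: max(ready_at[i], data_ready) + t.exec_time)
        start = max(ready_at[p], data_ready)
        # 3. Status update: record the placement and new free time.
        ready_at[p] = start + t.exec_time
        finish[t.name] = ready_at[p]
        schedule.append((t.name, p, start))
    return schedule

tasks = [Task("A", 3, rank=10), Task("B", 2, rank=7, preds=["A"]),
         Task("C", 4, rank=6, preds=["A"])]
print(list_schedule(tasks, n_procs=2))
```

Any of the cost functions surveyed below (frequency, deadline, laxity, upward rank) can be plugged in as `rank` without changing the skeleton.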
2. SCHEDULING MODELS FOR HOMOGENEOUS COMPUTING
SYSTEMS
Homogeneous computing refers to systems built from multiple similar soft core processors, which bring parallelism to application execution. The parallelism is achieved by effective task scheduling. Static and dynamic list scheduling techniques for microprocessor based systems are summarized in [5], where task scheduling is based on a cost function that is an attribute of the tasks of an application. Several list scheduling algorithms have been proposed for microprocessors [5][27][28][29], as follows. Rate Monotonic Algorithm: the Rate Monotonic (RM) algorithm [27] is a static priority based scheduling algorithm which assigns the highest priority to the most frequent task and the lowest priority to the least frequent task in the system. RM selects the highest priority task to execute first, and the remaining tasks then execute in priority sequence. RM can only be used in statically defined systems, and its schedulable bound is less than 100%, which moved researchers towards dynamic priority based scheduling algorithms. Earliest Deadline First Algorithm: the Earliest Deadline First (EDF) algorithm [28] uses the deadline of a task as the cost function. The task with the earliest deadline has the
highest priority, whereas the task with the longest deadline has the lowest priority. The major advantages of EDF are that priorities are dynamic, so the periods of tasks can be changed dynamically, and that the schedulable bound for any task set is 100%; however, there is no control over which tasks fail during a transient overload. Minimum Laxity First Algorithm: the Minimum Laxity First (MLF) algorithm [29][4] follows dynamic scheduling: it assigns a laxity to each task in the system and selects the task with minimum laxity to execute next. Laxity is a measure of the flexibility of a task to be scheduled and is defined as: Laxity = deadline time − current time − execution time. Like EDF, MLF has a 100% schedulable bound, and again there is no way to control which tasks are guaranteed to execute during a transient overload. Maximum Urgency First Algorithm: the Maximum Urgency First (MUF) algorithm [29] combines static and dynamic priority scheduling. In MUF, each task is given an urgency, which is a combination of two fixed priorities and one dynamic priority. The static priorities are defined once and do not change during execution, whereas the dynamic priority is assigned at runtime and is inversely proportional to the laxity of the task. The MUF scheduler looks first at the static priority and then at the dynamic priority. A low cost task scheduling approach for Distributed Memory Machines (DMM) based on heuristics such as EDF and MLF is described in [26], which states that List Scheduling with Dynamic Priorities (LSDP) gives better results than List Scheduling with Static Priority (LSSP). Task Duplication based Scheduling (TDS) [22] for distributed memory machines is designed to reduce inter processor communication, and a Modified TDS (MTDS) described in [23] generates a shorter schedule than TDS [22]. A Dynamic Critical Path (DCP) scheduling algorithm [24] is proposed for multiprocessors; DCP finds the critical path of a task graph and rearranges the schedule on each processor dynamically. A duplication based scheduling strategy called the Selective Duplication (SD) algorithm is developed in [25] for multiprocessor systems, with the aim of exploiting the available scheduling holes effectively without sacrificing efficiency. In [25], the application is visualized as a DAG and the target machine is represented as M = (P, [L], [h]), where P = {p1, p2, ..., pP} is a set of P homogeneous processors, [L] is a P × P matrix describing the interconnection network topology, and [h] is a P × P matrix giving the minimum distance, in number of hops, between processors. The SD algorithm [25] is compared with the duplication based TDS [22] and MTDS [23] and with non-duplication scheduling algorithms with respect to Normalized Schedule Length (NSL) and efficiency.
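The dynamic-priority heuristics above can be made concrete in a few lines: EDF picks the task with the earliest deadline, while MLF evaluates the laxity formula quoted above and picks the minimum. The task tuples below are assumptions for the example, not data from any surveyed paper.

```python
# Dynamic-priority task selection: EDF picks the earliest deadline,
# MLF picks the minimum laxity, where
#   laxity = deadline time - current time - execution time.

def edf_pick(tasks):
    """tasks: list of (name, deadline, exec_time) tuples."""
    return min(tasks, key=lambda t: t[1])[0]

def laxity(task, now):
    _, deadline, exec_time = task
    return deadline - now - exec_time

def mlf_pick(tasks, now):
    """Select the task with the least scheduling flexibility."""
    return min(tasks, key=lambda t: laxity(t, now))[0]

tasks = [("t1", 10, 2), ("t2", 12, 9), ("t3", 15, 4)]
print(edf_pick(tasks))         # earliest deadline wins
print(mlf_pick(tasks, now=0))  # smallest laxity wins
```

Note how the two heuristics can disagree: a task with a later deadline but a long execution time can have the smaller laxity and therefore be chosen by MLF but not by EDF.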
3. SCHEDULING MODELS FOR HETEROGENEOUS COMPUTING SYSTEMS
Heterogeneous computing refers to systems that have more than one kind of processing element and that gain performance when an application requires multifarious execution. The application scheduling algorithms Heterogeneous Earliest Finish Time (HEFT) and Critical-Path-On-a-Processor (CPOP) are formulated in [20] for a bounded number of heterogeneous processors. HEFT has two phases: a task prioritizing phase, which uses the HEFT rank as the cost function, and a processor selection phase, which assigns each task to its best processor. HEFT has time complexity O(e × q) for e edges and q processors. CPOP uses the critical path, the sum of computation time and inter task communication time, as the cost function and has time complexity O(e × p) for e edges and p processors. The HEFT algorithm outperforms other algorithms in terms of SLR and speedup, whereas the CPOP algorithm outperforms related work in terms of average SLR. On average, the HEFT algorithm [20] is faster than the CPOP algorithm by 10 percent, the Mapping Heuristic (MH) algorithm by 32 percent, the Dynamic Level Scheduling (DLS) algorithm by 84 percent, and the Levelized-Min Time (LMT) algorithm by 48 percent. A high performance static scheduling algorithm called the Longest Dynamic Critical Path (LDCP) algorithm is presented in [21] for Heterogeneous Distributed Computing Systems (HeDCS). To compute the LDCP, the HeDCS is formulated with m heterogeneous processors, and the application is represented as a Directed Acyclic Graph for each processor Pj (DAGPj), with the size of each task set to its
computation cost on that processor Pj. The DAGPj nodes are assigned an upward rank (URank) [21], which acts as the cost function to prioritize them for scheduling; the URank is the summation of the execution time on the processor and the communication cost between adjacent tasks. The LDCP scheduling algorithm outperforms both the HEFT [20] and DLS algorithms in terms of Normalized Schedule Length (NSL) and speedup. A generalized fixed priority CPU scheduling model with the notion of a pre-emption threshold is developed in [10]; it bridges the gap between pre-emptive and non-pre-emptive scheduling models in real time. The model [10] addresses the problem of finding an optimal priority ordering and pre-emption threshold assignment for tasks that are independent and do not suspend themselves, where the overheads due to context switching are negligible. It introduces only as much pre-emptability as is needed to achieve feasibility, and ensures optimum schedulability by reducing scheduling overheads through a minimum number of pre-emptions.
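The upward rank that drives HEFT-style prioritization can be computed recursively over the task DAG: a task's rank is its computation cost plus the maximum over its successors of (communication cost + successor's rank), with exit tasks ranked at their own cost. This is a minimal sketch of that common formulation; the example graph and costs are assumptions for illustration.

```python
# Upward-rank prioritization over a task DAG: a task's rank is its
# computation cost plus the maximum of (communication cost + rank)
# over its successors; exit tasks rank at their own cost. Tasks are
# then scheduled in decreasing rank order.
from functools import lru_cache

def upward_ranks(cost, succ, comm):
    """cost: task -> computation cost; succ: task -> successor list;
    comm: (task, successor) -> communication cost."""
    @lru_cache(maxsize=None)
    def rank(t):
        if not succ.get(t):          # exit task: no successors
            return cost[t]
        return cost[t] + max(comm[(t, s)] + rank(s) for s in succ[t])
    return {t: rank(t) for t in cost}

cost = {"A": 2, "B": 3, "C": 1, "D": 2}
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
comm = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 2, ("C", "D"): 1}
ranks = upward_ranks(cost, succ, comm)
order = sorted(cost, key=lambda t: -ranks[t])  # scheduling priority
print(ranks, order)
```

Because the rank of every node already accounts for the longest downstream path, scheduling in decreasing rank order guarantees a topological order of the DAG.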
4. SCHEDULING MODELS FOR RECONFIGURABLE COMPUTING SYSTEMS
Reconfigurable computing is an emerging paradigm that satisfies the simultaneous demand for application flexibility and performance. The ability to customize its architecture to support concurrent computation and parallel application execution gives RCS performance benefits over the general purpose processor. A Parameterized Module Scheduling (PMS) algorithm for RCS [8][4] addresses the problem of scheduling and mapping non-preemptive tasks of an application task graph onto a platform with variable Reconfigurable Logic Units (RLUs), using the concepts of parameterized modules and variable silicon area. The scheduling system [8] follows Dynamic Programming (DP) to schedule the tasks and is described in three parts: the application in the form of a task graph, the computing environment, and the performance criterion that defines the scheduling goal. Here the performance criterion is the schedule length L, i.e. the actual Finish Time (FT) of the exit task vexit (L = FT(vexit)), and the goal is to minimize the schedule length L of the application. The scheduling algorithm [8] uses the b-level of a task as the rank function to prioritize the tasks of an application, where the b-level of a task node Vi is the length of the longest path from Vi to the exit task node. Loop Kernel Pipelining Mapping (LKPM) [2] is addressed to Coarse Grained Reconfigurable Architectures (CGRA) to optimize Data Intensive Applications (DIA). In [2], the Program Information Aided Control Dataflow Task Graph (PIA-CDTG) represents the functionality and behaviour of the DIA, the Virtual Instruction Dataflow Graph (Vi-DFG) represents the behaviour of critical loop kernels, and the Reconfigurable Architecture Graph (RAG) represents the loop self-pipelining and loop iteration behaviour of the CGRA. An M × N CGRA is represented by RAG = (PE, C), where each PEij, 0 ≤ i ≤ M, 0 ≤ j ≤ N, consists of a memory PE (mPE) and a computation PE (cPE), and C describes the data dependency. LKPM maps the control conditions of a loop to mPEs and the body of the loop to cPEs to increase the throughput of the DIA. A dynamic scheduling and placement algorithm [11] has been proposed for RS based on the finishing time mobility of tasks. The model in [11] integrates an online placement algorithm with the scheduling model to support FPGA clusters: the FPGA is divided into slots or clusters, and arriving tasks are placed inside one of the clusters depending on their execution end time. To enhance device efficiency [11], the width of the clusters varies at runtime when needed, and the host processor controls the mapping of hardware task code as an executable circuit to the FPGA. Online scheduling of real time tasks to reconfigurable computing systems [12][13][14] is formalized with the objectives of reducing configuration overheads through resource reuse and of minimizing total execution time while decreasing the task rejection ratio. The model in [12] combines a window based stuffing algorithm with the KAMER [11] placement algorithm. The model in [13] focuses on real time independent tasks, defined by the tuple T = {w, h, e, a, d, r}, where w, h, e, a, d and r represent the width, height, execution time, arrival time, deadline and reconfiguration time of a task respectively. The schedulable bound of these algorithms [12][13] is less than 100%. A heuristic
approach to schedule periodic real time tasks on RH [14] formalizes two scheduling algorithms, EDF-Next Fit (EDF-NF) and Merge Server Distribute Load (MSDL), for preemptive periodic tasks. MSDL constructs a set of servers by properly merging sets of tasks for parallel execution, and the resulting servers are then scheduled for sequential execution on the FPGA with EDF-NF.
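An online scheduler over the task tuple from [13] must check both an area constraint and a timing constraint before admitting a task. The following is a deliberately simplified admission test: the single free rectangle and the structure of the check are our assumptions for illustration, not the actual stuffing or KAMER placement algorithms.

```python
# Simplified online admission test for a reconfigurable-hardware task
# described by the tuple (w, h, e, a, d, r) of [13]: the task is
# admitted only if its rectangle fits the free region and it can meet
# its deadline after waiting for the device and paying the
# reconfiguration time. Single free rectangle is an assumption.
from collections import namedtuple

HwTask = namedtuple("HwTask", "w h e a d r")

def admissible(task, free_w, free_h, device_ready):
    # Area constraint: the task rectangle must fit the free region.
    if task.w > free_w or task.h > free_h:
        return False
    # Timing constraint: start no earlier than arrival and device
    # readiness, then pay reconfiguration + execution before deadline.
    start = max(task.a, device_ready)
    finish = start + task.r + task.e
    return finish <= task.d

t = HwTask(w=4, h=3, e=10, a=0, d=20, r=3)
print(admissible(t, free_w=8, free_h=8, device_ready=2))  # meets deadline
print(admissible(t, free_w=8, free_h=8, device_ready=9))  # would miss it
```

Tasks failing this test are rejected, which is exactly the task rejection ratio that the models in [12][13] aim to decrease by reusing already-loaded configurations (i.e. setting r to zero when the bitstream is resident).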
5. SCHEDULING ALGORITHMS FOR HETEROGENEOUS RECONFIGURABLE
COMPUTING SYSTEMS
A computing platform called the MOLEN polymorphic processor is presented in [26]; it incorporates both general purpose and custom computing processing elements, along with an arbitrary number of programmable units to support both hardware and software tasks. An efficient multi task scheduler for runtime reconfigurable systems [9] proposes a new parameter called Time-Improvement as the cost function for a compiler assisted scheduling algorithm. The Time-Improvement heuristic is defined from reduction-in-task-execution-time and distance-to-next-call. The scheduling system in [9] targets the MOLEN polymorphic processor [26]: it assigns less CPU intensive tasks and control tasks to the General Purpose Processor (GPP), whereas computation intensive tasks are assigned to the FPGA. The task scheduler in [9] outperforms previous algorithms and accelerates task execution by 4% up to 20%. Online scheduling of Software Tasks (ST), Hardware Tasks (HT) and Hybrid Tasks (HST) is proposed in [6] for a CPU-FPGA platform, where STs execute only on the CPU, HTs execute only on the FPGA, and HSTs execute on both CPU and FPGA. The scheduling model [6] uses the reserved time of tasks as the cost function and integrates task allocation, placement and task migration modules. An online HW/SW partitioning and co-scheduling algorithm [3] is proposed for a GPP and Reconfigurable Processing Unit (RPU) environment in which the Hardware Earliest Finish Time (HEFT) and Software Earliest Finish Time (SEFT) are calculated for the tasks of an application. The difference between HEFT and SEFT drives task partitioning, and the EFT is used to define the scheduled task lists for both GPP and RPU. An overview of task co-scheduling for µP and FPGA environments is described in [7][31] from the perspectives of different communities: Embedded Computing (EC), Heterogeneous Computing (HC) and Reconfigurable Hardware (RH). The Reconfigurable Computing Co-scheduler (ReCoS) [7] integrates the strengths of HC and RH scheduling to handle RC system constraints such as the number of FFs, LUTs, multiplexers and CLBs, communication overheads, reconfiguration overheads, throughput and power constraints. Compared with EC, HC and RH scheduling algorithms, ReCoS shows improvement in optimal schedule search time and in the execution time of an application. Hardware supported task scheduling for dynamically reconfigurable SoC [15] is described to effectively utilize RSoC resources for multi task applications. Task systems in [15] are represented as a modified Directed Acyclic Graph (DAG), defined as a tuple G = (V, Ed, Ec, P), where V is the set of nodes, Ed and Ec are the sets of directed data edges and control edges respectively, and P represents the set of probabilities associated with Ec. The RSoC architecture in [15] comprises a general purpose embedded processor with L1 data and instruction caches and a number of reconfigurable logic units on a single chip. The summary of [15] states that Dynamic Scheduling (DS) does not degrade as the complexity of the problem increases, whereas the performance of Static Scheduling (SS) declines; DS outperforms SS when both task system complexity and the degree of dynamism increase. A compiler assisted runtime scheduler [16] is designed for the MOLEN architecture, where the compiler describes the runtime system as a Configuration Call Graph (CCG). The CCG in [16] provides two parameters, the distance to the next call and the frequency of future calls to a task, which act as the cost function for the scheduler. Communication aware online task scheduling for partially reconfigurable systems [17] distributes tasks over a 2D area based on the data communication times of the tasks. The scheduler in [17]
runs on the host processor, and a task's expected end time is t_end = t_latest + t_config + t_read + t_exec + t_write, where t_latest is the completion time of the already scheduled tasks, t_config is the task configuration time, t_read is the data/memory read time, t_exec is the task execution time, and t_write is the data/memory write time. HW/SW co-design techniques [18] are described for dynamically reconfigurable architectures with the aim of deciding the execution order of events at runtime based on their EDF. The authors demonstrate a HW/SW partitioning algorithm, a co-design methodology with dynamic scheduling for discrete event systems, and a dynamically reconfigurable multi-context scheduling algorithm. These three co-design techniques [18] minimize application execution time by parallelizing event execution, controlled by a host processor, for both shared memory and local memory based Dynamic Reconfigurable Logic (DRL) architectures. When the number of DRL cells is three or more, the techniques in [18] bring better optimization for shared memory architectures than for local memory architectures. A HW/SW partitioning algorithm [30] is presented to partition tasks into software tasks and hardware tasks based on their waiting time. A layer model [20] provides systematic use of dynamically reconfigurable hardware and also reduces the error-proneness of system components. The layer model comprises six layers (bottom to top): the first, the Hardware Layer, represents the reconfigurable hardware; the second, the Configuration Layer, interfaces with the configuration port of the FPGA; the third, the Positioning Layer, assigns positions to partial bitstreams; the fourth, the Allocation Layer, manages the resources for incoming modules on the FPGA; the fifth, the Module Management Layer, provides access to all modules (tasks) loaded into the system; and the sixth, the Application Layer, represents the application as a task graph. Such layer models help in designing efficient operating systems for HRCS.
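The expected end time formula from [17] quoted above is a straight sum of the pipeline stages a hardware task passes through; a minimal sketch, with the parameter names being our transliteration of the paper's symbols:

```python
# Expected end time of a hardware task on a partially reconfigurable
# system, per the formula from [17]:
#   t_end = t_latest + t_config + t_read + t_exec + t_write

def expected_end_time(t_latest, t_config, t_read, t_exec, t_write):
    """t_latest: completion time of the already scheduled tasks;
    t_config: task configuration time; t_read / t_write: data/memory
    read and write times; t_exec: task execution time."""
    return t_latest + t_config + t_read + t_exec + t_write

print(expected_end_time(t_latest=12, t_config=3, t_read=1,
                        t_exec=8, t_write=2))  # 26
```

Since the scheduler in [17] places arriving tasks by their expected end times, a configuration that is already resident (t_config = 0) directly shortens t_end and improves the task's placement.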
6. SUMMARY CHART FOR SCHEDULING MODELS FOR HIGH SPEED
COMPUTING SYSTEMS
The summary chart of scheduling methodologies, shown in Table 1, lists the paper reference in the first column; the nature of the scheduling algorithm (static or dynamic) in the second; the task handling behaviour (single or multiple tasks supported) in the third; the nature of the computing resources in the targeted platform (microprocessor, FPGA, or an integration of µP and FPGA) in the fourth; the host platform where the scheduling methodology executes (µP or FPGA) in the fifth; the targeted performance metrics (schedulable bound, execution speed enhancement and resource optimization) in the sixth; the cost function used to prioritize the tasks of an application and prepare the scheduled task list in the seventh; and finally the future scope and remarks on the methodology in the eighth. The overview and summary chart of the various scheduling methodologies described for HSCS follow in Table 1.
Table 1: Summary chart of scheduling methodologies
ApproachReferencepaper
Nature
Task
support
Target
Platform
Host
Platform
Performanc
e
Evaluation COST
function
Scope / Remark
Static
Dynamic
Single
Multiple
Microprocessor(µP)
FPGA
µP+FPGA
µP
FPGA
Schedulablebound
SpeedEnhancement
ResourceOptimization
RM – Rate Monotonic
EDF – Earliest deadline
First
MLF – Maximum Laxity
First
MUF – Maximum
Urgency First
EFT – Earliest Finish
Time
ALAP – As Late As
Possible
AET – Average
Execution Time
HEFT - Heterogeneous
Earliest Finish Time
CPOP: Critical-Path-
On-a- Processor
[5]
[27]
× × × × × × Frequenc
y of the
task
Schedulable bound is
less than 100%
[5]
[ 8] × × × × × EDF
There is no control on
which tasks fails during
transient overload
[5]
[29]
× × × × × MLF &
MUF
MLF: There is no
control on which tasks
fails during transient
overload
MUF: Combination of
fixed and dynamic
nature in order to avoid
the drawback of MLF
& EDF
. [26]
× × × × × × EDF
List Scheduling with
Dynamic Priorities
(LSDP) gives optimum
results than List
Scheduling with Static
Priority (LSSP)
[22]
[23]
[25]
× × × × × × Selective
Task
Duplicati
on
Designed for
Distributed Memory
Machines to reduce
inter processor
communication
[24] × × × × × × × Dynamic
Critical
Path of
task
It determines the critical
path of the task graph
and selects the next
node to be scheduled in
a dynamic fashion
[20] × × × × × × HEFT &
HEFT algorithm is
faster than the CPOP
algorithm by 10 %, the
8. International Journal of Embedded systems and Applications(IJESA) Vol.4, No.4, December 2014
8
CPOP Mapping Heuristic
(MH) algorithm by
32%, the Dynamic Level
Scheduling (DLS)
algorithm by 84 %,
Levelized-Min Time
(LMT) algorithms by 48
%.
.
[21]
× × × × × ×
upward
rank
LDCP algorithm
outperforms the both
HEFT and DLS
algorithms in terms of
Normalized Schedule
Length (NSL) and
speedup.
(Urank is summation of
execution time on processor and communication cost between the adjacent tasks)
[10] Pre-emption threshold: reduces scheduling overheads through a minimum number of pre-emptions; at most 18% improvement in execution time.
[8] Level of task graph: minimization of schedulable length.
[2] Level of task graph & task dependencies: a Program Information Aided Control Dataflow Task Graph (PIA-CDTG) represents the functionality and behaviour of the data-intensive application, and a Virtual Instruction Dataflow Graph (Vi-DFG) represents the behaviour of critical loop kernels.
[11] EDF: 15-20% improvement in task execution time by incorporating a placement algorithm.
[12] Frequency of tasks: reduction of configuration overheads and improvement of total execution time by decreasing the task rejection ratio; the schedulable bound of these algorithms is less than 100%.
[13] Probability of task recurrence in future: increase in the probability of configuration reuse and reduction in the overall execution time of tasks.
International Journal of Embedded Systems and Applications (IJESA), Vol. 4, No. 4, December 2014
[14] EDF-Next Fit (EDF-NF) and Merge Server Distribute Load (MSDL): designed for pre-emptive periodic tasks, where MSDL constructs a set of servers for parallel execution and the servers are then scheduled based on EDF-NF for sequential execution.
[9] Distance to next call: reduction in execution time; introduces a new parameter called Time-Improvement (a combination of cost functions 1 and 2) that accelerates task execution from 4% up to 20%.
[6] Reserved time of task: an integration of task allocation, placement and task migration modules; minimizes the task rejection ratio and increases resource utilization.
[15] Level of task graph & task dependencies: dynamic scheduling outperforms static scheduling when both task system complexity and degree of dynamism increase.
[16] Distance to the next call and frequency of calls in future: the compiler provides viable information to the scheduler in the form of a Configuration Call Graph (CCG).
[18] 1. AET difference between SW and HW; 2. required DRL area and memory size of tasks: proposes HW/SW partitioning algorithms for shared-memory and local-memory target architectures; execution time is minimized by parallelizing task execution on DRL architectures.
[17] Data dependency between tasks and hardware area: handles communication constraints to utilize the free area between subsequent invocations of periodic tasks.
[30] Task waiting time: the partitioning algorithm migrates tasks from HW to SW and vice versa based on the waiting time of tasks.
[3] EFT: 8-38% improvement in execution time.
[7][31] ALAP: enhancement to discover the area utilized on the FPGA by introducing parallelism.
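Several of the surveyed schedulers ([11], [12], [14]) rank ready tasks by deadline using EDF. As an illustrative aid only, and not a reconstruction of any one surveyed algorithm, the core EDF selection rule can be sketched as below; the class and field names are hypothetical:

```python
import heapq

class EDFScheduler:
    """Minimal Earliest-Deadline-First selector: among all ready tasks,
    always dispatch the one whose absolute deadline is nearest."""

    def __init__(self):
        self._ready = []  # min-heap of (absolute deadline, task id)

    def release(self, task_id, deadline):
        # A task becomes ready; its priority is its absolute deadline.
        heapq.heappush(self._ready, (deadline, task_id))

    def dispatch(self):
        # Pop the ready task with the earliest deadline (highest priority).
        if not self._ready:
            return None
        _, task_id = heapq.heappop(self._ready)
        return task_id

# Usage: three tasks released with absolute deadlines 12, 5 and 9.
s = EDFScheduler()
s.release("T1", 12)
s.release("T2", 5)
s.release("T3", 9)
order = [s.dispatch(), s.dispatch(), s.dispatch()]
print(order)  # T2 (deadline 5) runs first, then T3, then T1
```

The surveyed reconfigurable variants layer placement, configuration reuse or server construction on top of this basic deadline ordering.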
7. CONCLUSION AND FUTURE SCOPE
Real-time applications can be optimized only when the HSCS has multifarious resources to support parallel processing, and scheduling algorithms play a crucial role in distributing tasks to the HSCS resources. In this paper, we have reviewed various high speed computing systems that support the runtime requirements of applications and prepared a summary chart of existing scheduling methodologies. Homogeneous computing systems provide parallel processing at the expense of the number of resources; heterogeneous computing systems support distributed applications at the expense of communication between resources; reconfigurable systems bring dynamic reconfiguration at runtime at the expense of soft core processor efficiency; and finally, the HRCS provides an optimal solution for computing real-time applications by integrating both soft core and hard core processors as computing elements. The summary chart clearly shows that dynamic, multitask scheduling methodologies are effective in speeding up real-time applications on HSCS, but the scheduling model itself always runs on soft core processors, which degrades its efficiency at runtime. There is therefore a demand for researchers to develop a scheduling model that runs on a hard core processor, which would enhance the efficiency (speed) of the scheduler at runtime.
REFERENCES
[1] Reiner Hartenstein, "Microprocessor Is No More General Purpose: Why Future Reconfigurable Platforms Will Win", invited paper, International Conference on Innovative Systems in Silicon (ISIS'97), Texas, USA, pp. 1-10, October 8-10, 1997.
[2] D. Wang, S. Li and Y. Dou, "Loop Kernel Pipelining Mapping onto Coarse-Grained Reconfigurable Architecture for Data-Intensive Applications", Journal of Software, vol. 4, no. 1, pp. 81-89, 2009.
[3] Maisam M. Bassiri and Hadi Sh. Shahhoseini, "Online HW/SW Partitioning and Co-Scheduling in Reconfigurable Computing Systems", 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009), pp. 557-562, 2009.
[4] Mahendra Vucha and Arvind Rajawat, "An Effective Dynamic Scheduler for Reconfigurable Computing Systems", IEEE International Advanced Computing Conference (IACC), pp. 766-773, 2014.
[5] David B. Stewart and Pradeep K. Khosla, "Real Time Scheduling of Dynamically Reconfigurable Systems", Proceedings of the IEEE International Conference on Systems Engineering, Dayton, Ohio, pp. 139-142, August 1991.
[6] Liang Liang, Xue-Gong Zhou, Ying Wang and Cheng-Lian Peng, "Online Hybrid Task Scheduling in Reconfigurable Systems", Proceedings of the 11th International Conference on Computer Supported Cooperative Work in Design, pp. 1072-1077, 2007.
[7] Proshanta Saha and Tarek El-Ghazawi, "Software/Hardware Co-Scheduling for Reconfigurable Computing Systems", International Symposium on Field-Programmable Custom Computing Machines, pp. 299-300, 2007.
[8] Solomon Raju Kota, Chandra Shekhar, Archana Kokkula, Durga Toshniwal, M. V. Kartikeyan and R. C. Joshi, "Parameterized Module Scheduling Algorithm for Reconfigurable Computing Systems", 15th International Conference on Advanced Computing and Communications, pp. 473-478, 2007.
[9] Mahmood Fazlali, Mojtaba Sabeghi, Ali Zakerolhosseini and Koen Bertels, "Efficient Task Scheduling for Runtime Reconfigurable Systems", Journal of Systems Architecture, vol. 56, no. 11, pp. 623-632, November 2010.
[10] Yun Wang and Manas Saksena, "Scheduling Fixed-Priority Tasks with Preemption Threshold", Sixth International Conference on Real-Time Computing Systems and Applications (RTCSA '99), pp. 328-335, 1999.
[11] Ali Ahmadinia, Christophe Bobda and Jurgen Teich, "A Dynamic Scheduling and Placement Algorithm for Reconfigurable Hardware", ARCS 2004, LNCS 2981, pp. 125-139, 2004.
[12] Xue-Gong Zhou, Ying Wang, Xun-Zhang Huang and Cheng-Lian Peng, "On-Line Scheduling of Real Time Tasks for Reconfigurable Computing System", International Conference on Computer Engineering and Technology, pp. 59-64, 2010.
[13] Maisam Mansub Bassiri and Hadi Shahriar Shahhoseini, "A New Approach in On-Line Task Scheduling for Reconfigurable Computing Systems", Proceedings of the 2nd International Conference on Computer Engineering and Technology, pp. 321-324, April 2010.
[14] Klaus Danne and Marco Platzner, "A Heuristic Approach to Schedule Real-Time Tasks on Reconfigurable Hardware", Proceedings of the International Conference on Field Programmable Logic and Applications, pp. 568-578, 2005.
[15] Zexin Pan and B. Earl Wells, "Hardware Supported Task Scheduling on Dynamically Reconfigurable SoC Architectures", IEEE Transactions on VLSI Systems, vol. 16, no. 11, November 2008.
[16] Mojtaba Sabeghi, Vlad-Mihai Sima and Koen Bertels, "Compiler Assisted Runtime Task Scheduling on a Reconfigurable Computer", International Conference on Field Programmable Logic and Applications (FPL 2009), pp. 44-50, September 2009.
[17] Yi Lu, Thomas Marconi, Koen Bertels and Georgi Gaydadjiev, "A Communication Aware Online Task Scheduling Algorithm for FPGA-Based Partially Reconfigurable Systems", 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines, pp. 65-68, September 2010.
[18] Juanjo Noguera and Rosa M. Badia, "HW/SW Co-Design Techniques for Dynamically Reconfigurable Architectures", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 10, no. 4, pp. 399-415, August 2002.
[19] Boris Kettelhoit and Mario Porrmann, "A Layer Model for Systematically Designing Dynamically Reconfigurable Systems", International Conference on Field Programmable Logic and Applications, pp. 1-6, August 2006.
[20] Haluk Topcuoglu, Salim Hariri and Min-You Wu, "Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing", IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp. 260-274, March 2002.
[21] Mohammad I. Daoud and Nawwaf Kharma, "A High Performance Algorithm for Static Task Scheduling in Heterogeneous Distributed Computing Systems", Journal of Parallel and Distributed Computing, vol. 68, no. 4, pp. 299-309, April 2008.
[22] S. Darbha and D. P. Agrawal, "Optimal Scheduling Algorithm for Distributed Memory Machines", IEEE Transactions on Parallel and Distributed Systems, vol. 9, no. 1, pp. 87-95, January 1998.
[23] C. I. Park and T. Y. Choe, "An Optimal Scheduling Algorithm Based on Task Duplication", IEEE Transactions on Computers, vol. 51, no. 4, pp. 444-448, April 2002.
[24] Y. K. Kwok and I. Ahmad, "Dynamic Critical-Path Scheduling: An Effective Technique for Allocating Task Graphs to Multiprocessors", IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 5, pp. 506-521, May 1996.
[25] "An Improved Duplication Strategy for Scheduling Precedence Constrained Graphs in Multiprocessor Systems", IEEE Transactions on Parallel and Distributed Systems, vol. 14, no. 6, June 2003.
[26] S. Vassiliadis, S. Wong, G. N. Gaydadjiev, K. L. M. Bertels, G. K. Kuzmanov and E. M. Panainte, "The MOLEN Polymorphic Processor", IEEE Transactions on Computers, vol. 53, no. 11, pp. 1363-1375, November 2004.
[27] John Lehoczky, Lui Sha and Ye Ding, "The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior", Proceedings of the Real-Time Systems Symposium, pp. 166-171, December 1989.
[28] C. L. Liu and James W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment", Journal of the ACM, vol. 20, no. 1, pp. 46-61, 1973.
[29] Wei Zhao, Krithi Ramamritham and John A. Stankovic, "Scheduling Tasks with Resource Requirements in Hard Real-Time Systems", IEEE Transactions on Software Engineering, vol. SE-13, no. 5, May 1987.
[30] Maisam M. Bassiri and Hadi Sh. Shahhoseini, "A HW/SW Partitioning Algorithm for Multitask Reconfigurable Embedded Systems", IEEE International Conference on Microelectronics, 2008.
[31] Proshanta Saha and Tarek El-Ghazawi, "Extending Embedded Computing Scheduling Algorithms for Reconfigurable Systems", 2007 3rd Southern Conference on Programmable Logic, pp. 87-92, February 2007.
AUTHORS
Mahendra Vucha received his B. Tech in Electronics & Communication Engineering from JNTU, Hyderabad in 2007 and his M. Tech degree in VLSI and Embedded System Design from MANIT, Bhopal in 2009. He is currently working towards his PhD degree at MANIT, Bhopal and also working as Asst. Prof. in the Dept. of Electronics and Communication Engineering, Christ University, Bangalore (K.A), India. His areas of interest are Hardware Software Co-Design, Analog Circuit Design, Digital System Design and Embedded System Design.
Arvind Rajawat received his B. Tech in Electronics & Communication Engineering from Govt. Engineering College in 1989, his M. Tech degree in Computer Science Engineering from SGSITS, Indore in 1991 and his Ph.D degree from MANIT, Bhopal. He is currently working as Professor in the Dept. of Electronics and Communication, MANIT, Bhopal (M.P), India. His areas of interest are Hardware Software Co-Design, Embedded System Design and Digital System Design.