Parallel job scheduling is an important research area in computer science. Finding the most suitable processor for user-submitted jobs on high-performance or cluster computing systems plays an important role in overall system performance. A new scheduling technique called communication-aware scheduling is devised; it is capable of handling serial, parallel, mixed, and dynamic jobs. This work compares communication-aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication-aware scheduling performs better than the available techniques.
A GUI-driven prototype for synthesizing self-adaptation decision - journalBEEI
The ability to ensure an optimal decision is significant for self-adaptive systems, especially when dealing with uncertainty. For this reason, a synthesis-driven approach can be used to capture and synthesize a decision that aims to satisfy multi-objective properties. Assessing the quality of the synthesis-driven approach is challenging, since it involves a set of activities spanning modeling, simulation, and analysis of the outcomes. This paper presents the design and implementation of a graphical user interface (GUI)-based prototype for assessing the synthesis outcome and performance of an adaptation decision. The prototype is designed and developed following a component-based development approach that integrates existing libraries: the PRISM-games model checker for the synthesis engine, JFreeChart for chart presentation, and the Java Universal Network/Graph Framework for graph visualization. This paper also presents an implementation of the proposed prototype for a cloud application deployment scenario to illustrate its applicability. This work provides a foundation for automated synthesis in self-adaptive systems.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys... - IJECEIAES
Embedded systems with multi-core processors are increasingly popular because of the diversity of applications that can run on them. In this work, a reinforcement learning based scheduling method is proposed to handle real-time tasks in multi-core systems with effective CPU usage and lower response time. Task priorities are varied dynamically to ensure fairness, using reinforcement learning based priority assignment and a Multi Core Multilevel Feedback Queue (MCMLFQ) to manage task execution in the multi-core system.
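As a rough illustration only (the paper's actual MCMLFQ design is not given here), a per-core multilevel feedback queue can be sketched as follows; all names and the demotion rule are illustrative assumptions, with the reinforcement-learning priority assignment omitted:

```python
from collections import deque

class MCMLFQ:
    """Toy multi-core multilevel feedback queue: each core keeps several
    priority queues; a task that exhausts its quantum is demoted one level."""
    def __init__(self, cores=2, levels=3):
        self.levels = levels
        self.queues = [[deque() for _ in range(levels)] for _ in range(cores)]

    def submit(self, core, task, level=0):
        self.queues[core][level].append(task)

    def next_task(self, core):
        # Dispatch from the highest non-empty priority level.
        for level, q in enumerate(self.queues[core]):
            if q:
                return level, q.popleft()
        return None

    def demote(self, core, task, level):
        # Quantum expired: push the task one priority level down.
        self.submit(core, task, min(level + 1, self.levels - 1))
```

In a full RLBMCS-style scheduler, the learning agent would choose the submission level instead of the fixed demotion rule used here.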
ANALYSIS OF THRESHOLD BASED CENTRALIZED LOAD BALANCING POLICY FOR HETEROGENEO... - ijait
Heterogeneous machines can significantly outperform homogeneous machines, but an effective workload distribution policy is required. Maximum performance is realized when the system designer overcomes load imbalance within the system. Together, load distribution and load balancing policies can reduce total execution time and increase system throughput.
In this paper, we provide an algorithmic analysis of a threshold-based job allocation and load balancing policy for a heterogeneous system, where all incoming jobs are judiciously and transparently distributed among the sharing nodes on the basis of job requirements and processor capability, maximizing performance and reducing execution time. We briefly discuss the job allocation, transfer, and location policies, explaining how load imbalance is resolved within the system. A flow of the scheme is given with essential code, and an analysis of the present algorithm shows how it improves on existing approaches.
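As a minimal sketch (not the paper's algorithm; the function name, threshold value, and cost model are assumptions), a threshold-based allocation step might pick the least-utilized node that stays below an overload threshold:

```python
def assign_job(job_size, loads, capacities, threshold=0.8):
    """Assign a job to the node with the lowest resulting utilization,
    skipping any node whose utilization would cross the threshold.
    loads/capacities are per-node lists; returns a node index or None."""
    best, best_util = None, None
    for i, (load, cap) in enumerate(zip(loads, capacities)):
        util = (load + job_size) / cap
        if util > threshold:
            continue  # node would become overloaded: not a valid target
        if best_util is None or util < best_util:
            best, best_util = i, util
    return best
```

A centralized dispatcher would call this for each arriving job; returning None signals that the job must wait (or be queued) because every node is above threshold.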
Task scheduling methodologies for high speed computing systems - ijesajournal
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study is carried out based on attributes of both the platform and the application: execution time, nature of task, task handling capability, and type of host and computing platform. Finally, the summary chart demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
A FUZZY MATHEMATICAL MODEL FOR PERFORMANCE TESTING IN CLOUD COMPUTING USING US... - ijseajournal
Software testing is an integral part of the software product development life cycle. Conventional testing requires dedicated infrastructure and resources that are expensive and only used sporadically. With the growing complexity of business applications, it is harder to build and maintain in-house testing facilities that mimic real-time environments. Cloud computing, by nature, provides effectively unlimited resources along with the flexibility, scalability, and availability of a distributed testing environment, and has thus opened up new opportunities for software testing. It leads to cost-effective solutions by reducing the execution time of large application tests. As part of the infrastructure resource, cloud testing attains its efficiency by taking care of parameters such as network traffic, disk storage, and RAM speed. In this paper we propose a new fuzzy mathematical model to attain better scope for these parameters.
Learning scheduler parameters for adaptive preemption - csandit
An operating system scheduler is expected not to allow the processor to stay idle if any process is ready or waiting for execution. This problem gains importance as processes always outnumber processors by large margins. It is in this regard that schedulers are given the ability to preempt a running process, following some scheduling algorithm, giving the illusion that several processes run simultaneously. A process is allowed to utilize CPU resources for a fixed quantum of time (termed the timeslice for preemption) and is then preempted in favor of another waiting process. Each such preemption incurs considerable overhead in CPU cycles, which are a valuable resource for runtime execution. In this work we utilize the historical performance of a scheduler to predict the nature of the currently running process, thereby reducing the number of preemptions. We propose a machine-learning module to predict a better-performing timeslice, calculated from a static knowledge base and an adaptive reinforcement-learning-based suggestive module. Results for an "adaptive timeslice parameter" for preemption show good savings in CPU cycles and efficient throughput time.
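The core idea of learning a timeslice can be sketched as a tabular, bandit-style reinforcement learner; this is an illustrative toy only (the candidate timeslices, the simulated reward, and all parameter values are assumptions, not the paper's module):

```python
import random

def learn_timeslice(episodes=2000, slices=(10, 20, 40, 80),
                    true_best=40, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy value learning over a few candidate timeslices.
    The simulated reward penalizes distance from the workload's 'true'
    best quantum, standing in for observed preemption overhead."""
    rng = random.Random(seed)
    q = {s: 0.0 for s in slices}
    for _ in range(episodes):
        # explore occasionally, otherwise pick the best-known timeslice
        if rng.random() < eps:
            s = rng.choice(slices)
        else:
            s = max(q, key=q.get)
        # stand-in reward: fewer wasted cycles near the ideal quantum
        reward = -abs(s - true_best) / true_best + rng.gauss(0, 0.05)
        q[s] += alpha * (reward - q[s])  # incremental value update
    return max(q, key=q.get)
```

In a real scheduler the reward would come from measured context-switch counts and throughput rather than this synthetic signal.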
A survey of various scheduling algorithms in cloud computing environments - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
An Implementation on Effective Robot Mission under Critical Environmental Co... - IJERA Editor
Software engineering is a field of engineering concerned with designing and writing programs for computers and other electronic devices. A software engineer, or programmer, writes software (or changes existing software) and compiles it using methods that yield better quality; it is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic way. Nowadays robots also play an important role in automation, but they face several challenges when operated in critical environments. Motion planning and task planning are two fundamental problems in robotics that have been addressed from different perspectives. To address them, temporal-logic-based approaches that automatically generate controllers have been shown to be useful for mission-level planning of motion, surveillance and navigation, among others. These approaches critically rely on the validity of the environment models used for synthesis, yet simplifying assumptions are inevitable to reduce complexity and provide mission-level guarantees; no plan can guarantee results in a model of a world in which everything can go wrong. In this paper, we show how our approach, which reduces reliance on a single model by introducing a stack of models, can endow systems with incremental guarantees based on increasingly strengthened assumptions, supporting graceful degradation when the environment does not behave as expected and progressive enhancement when it does.
A novel methodology for task distribution - ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel task distribution methodology called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex 5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are distributed, statically as well as dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms; the comparison shows that MLF outperforms them in terms of efficient utilization of on-chip resources and also speeds up application execution.
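The ordering rule behind minimum-laxity-first scheduling can be sketched in a few lines; this is a generic illustration of the laxity criterion, not the paper's full cost function (the task tuple layout is an assumption):

```python
def mlf_order(tasks, now=0):
    """Order ready tasks by laxity = deadline - now - remaining execution
    time; the task that can least afford to wait is dispatched first.
    tasks: list of (name, deadline, exec_time) tuples."""
    def laxity(task):
        _, deadline, exec_time = task
        return deadline - now - exec_time
    return sorted(tasks, key=laxity)
```

A list scheduler would repeatedly take the head of this ordering and map it to whichever computing element (FPGA region or soft-core processor) the cost function favors.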
Modeling enterprise architecture using timed colored petri net single process... - ijmpict
The purpose of modeling and analyzing enterprise architecture is to ease decision making about the architecture of information systems. Planning is one of the most important tasks in an organization and has a major role in increasing its productivity. The scope of this paper is scheduling processes in the enterprise architecture. Scheduling is deciding the execution start times of processes used in manufacturing and service systems. Different methods and tools have been proposed for modeling enterprise architecture. The colored Petri net is an extension of the traditional Petri net whose modeling capability has grown dramatically. A model developed with colored Petri nets is suitable for verifying operational aspects and evaluating the performance of information systems. With their ability for hierarchical modeling, colored Petri nets permit the use of predesigned modules for smaller parts of the system, and with a general algorithm, any kind of enterprise architecture can be modeled. In this paper, a two-level hierarchical model is presented as a building block for modeling the architecture of Transaction Processing Systems (TPS). This model schedules and runs processes based on a predetermined non-preemptive scheduling method. The model supports four non-preemptive methods: priority-based (PR), shortest job first (SJF), first come first served (FCFS) and highest response ratio next (HRRN). The presented model is designed so that it can be used as one of the main components in modeling any type of enterprise architecture; most enterprise architectures can be modeled by putting together an appropriate number of these modules in a proper composition.
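Of the four non-preemptive methods named above, HRRN has the least obvious selection rule, so a small sketch may help; this is the standard textbook formula, independent of the Petri net model itself (the job tuple layout is an assumption):

```python
def hrrn_pick(jobs, now=0):
    """Highest Response Ratio Next: ratio = (waiting + service) / service.
    jobs: list of (name, arrival, service); returns the name of the job
    to dispatch. Long waits inflate the ratio, so short jobs are favored
    without starving long ones."""
    def ratio(job):
        _, arrival, service = job
        waiting = now - arrival
        return (waiting + service) / service
    return max(jobs, key=ratio)[0]
```

In the Petri net model, a transition implementing HRRN would fire for the token whose ratio is highest at the scheduling instant.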
Bragged Regression Tree Algorithm for Dynamic Distribution and Scheduling of ... - Editor IJCATR
In the past few years, grid computing has emerged as a next-generation computing platform: a combination of heterogeneous computing resources joined by a network across dynamic and geographically separated organizations. It thus provides the perfect computing environment to solve large-scale computational demands. Grid computing demands are still increasing day by day due to the rise in the number of complex jobs worldwide, so jobs may take much longer to complete due to poor distribution of batches or groups of jobs to inappropriate CPUs. There is therefore a need to develop an efficient dynamic job scheduling algorithm that assigns jobs to appropriate CPUs dynamically. The main problem dealt with in this paper is how to distribute jobs when the payload, importance, urgency, flow time, etc. keep changing dynamically as the grid expands or is flooded with job requests from different machines within the grid.
In this paper, we present a scheduling strategy that takes advantage of a decision tree algorithm to make dynamic decisions based on current scenarios, and which automatically incorporates factor analysis when considering the distribution of jobs.
Scalable Distributed Job Processing with Dynamic Load Balancing - ijdpsjournal
We present a cost-effective framework for a robust, scalable and distributed job processing system that adapts easily to dynamic computing needs, with efficient load balancing for heterogeneous systems. The design is such that the components are self-contained and do not depend on each other; yet they are interconnected through an enterprise message bus to ensure safe, secure and reliable communication, with transactional features to avoid duplication as well as data loss. Load balancing, fault tolerance and failover recovery are built into the system through a health-check facility and queue-based load balancing. The system has a centralized repository with central monitors to keep track of the progress of job executions as well as the status of processors in real time. The basic requirement of assigning a priority and processing in priority order is built into the framework. Its most important aspect is that it avoids the need for job migration by computing the target processors based on the current load and the various cost factors. The framework can scale horizontally as well as vertically to achieve the required performance, thus effectively minimizing the total cost of ownership.
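The migration-avoiding placement decision described above can be sketched as an earliest-finish-time choice; this is an illustrative guess at the idea, with the cost model (a single per-processor speed factor) being an assumption, not the framework's actual cost factors:

```python
def pick_processor(job_cost, loads, speed_factors):
    """Choose the processor that would finish the new job earliest, given
    its currently queued load and a per-processor speed factor (time to
    run one unit of work). Deciding up front avoids later migration."""
    finish_times = [(load + job_cost) * speed
                    for load, speed in zip(loads, speed_factors)]
    return finish_times.index(min(finish_times))
```

A fast but busy processor can lose to a slow but idle one, which is exactly the trade-off a load-and-cost-aware dispatcher has to make.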
A vm scheduling algorithm for reducing power consumption of a virtual machine... - eSAT Journals
Abstract: This paper concentrates on methods that provide efficient processing time and CPU utilization time for a virtual machine. As the number of users increases, performance may be significantly reduced if tasks are not scheduled in a proper order. In this paper the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured, and satisfactory results are achieved in comparing the two algorithms. This study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: virtual machine, DSP algorithm, credit scheduling algorithm.
TEMPORALLY EXTENDED ACTIONS FOR REINFORCEMENT LEARNING BASED SCHEDULERS - ijscai
Temporally extended actions have been shown to enhance the performance of reinforcement learning agents. The broader framework of 'options' gives us a flexible way of representing such extended courses of action in Markov decision processes. In this work we adapt the options framework to model an operating system scheduler, which is expected not to allow the processor to stay idle if any process is ready or waiting for execution. A process is allowed to utilize CPU resources for a fixed quantum of time (the timeslice), and each subsequent context switch incurs considerable overhead. We utilize the historical performance of a scheduler to reduce the number of redundant context switches, and propose a machine-learning module, based on a temporally extended reinforcement-learning agent, to predict a better-performing timeslice. We measure the importance of states in the options framework by evaluating the impact of their absence, and propose an algorithm to identify such checkpoint states. We present an empirical evaluation of our approach in maze-world navigation; its implications for an "adaptive timeslice parameter" show efficient throughput time.
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa... - Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource sharing, accessing, aggregation and management. It allows the coordinated use of various resources in a dynamic, distributed virtual organization. Grid scheduling is responsible for resource discovery, resource selection and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups. Two algorithms have been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments. Fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is considered: protocols such as the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used to distribute the messages of each task to grid resources.
Improving the Performance of Mapping based on Availability- Alert Algorithm U... - AM Publications
Mapping performance can be improved, and the need for this arises in several fields of science and engineering. Applications can be parallelized in master-worker fashion, and relevant programming methods have been proposed to reduce their cost. In the existing system, application performance is considered only for homogeneous systems, for simplicity. Here we use an Availability-Alert algorithm with Poisson arrivals to extend the approach to heterogeneous multi-core architecture systems. Our proposed algorithm also considers the requirements an application needs for execution on heterogeneous multi-core architecture systems while maintaining good performance. Performance prediction errors at the end of execution are minimized by this approach. We present simulation results to quantify the benefits of our approach.
International Journal of Engineering and Science Invention (IJESI) - inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected for publication through double peer review to ensure originality, relevance and readability. The articles published in our journal can be accessed online.
An Improved Parallel Activity scheduling algorithm for large datasets - IJERA Editor
Parallel processing is capable of executing a large number of tasks on a multiprocessor in the same time period, and it is one of the emerging concepts. Complex computational problems can be resolved efficiently with the help of parallel processing. Parallel processing systems can be divided into two categories depending on the nature of tasks: homogeneous parallel systems and heterogeneous parallel processing systems. In a homogeneous environment, the processors executing different tasks are similar in capacity. In heterogeneous environments, tasks are allocated to various processors with different capacities and speeds. The main objective of parallel processing is to optimize execution speed and shorten the duration of task execution independently of the environment. In this proposed work, an optimized parallel project selection method was implemented to find optimal resource utilization and project scheduling. The execution speed of the tasks increases and the overall average execution time decreases by allocating different tasks to various processors with the task scheduling algorithm.
Target Detection System (TDS) for Enhancing Security in Ad hoc Network - ijdpsjournal
The idea of an ad hoc network is a new paradigm that allows mobile hosts (nodes) to communicate without relying on a predefined communication infrastructure to keep the network connected. Most nodes are assumed to be mobile and communication is assumed to be wireless. Ad hoc networks are collaborative in the sense that each node is assumed to relay packets for other nodes, which will in return relay its packets; thus all nodes in an ad hoc network form part of the network's routing infrastructure. The mobility of nodes in an ad hoc network means that both the population and the topology of the network are extremely dynamic. It is very difficult to design a once-for-all target detection system; instead, an incremental enrichment strategy may be more feasible. A safe and sound protocol should at least include mechanisms against known attack types, and in addition it should provide a way to easily add new security features in the future. Due to the significance of MANET routing protocols, we focus on the recognition of attacks targeted at MANET routing protocols.
Intrusion detection techniques for the cooperation of nodes in a MANET have been chosen as the security parameter. This includes the Watchdog and Pathrater approach. It also covers reputation-based schemes, in which the reputation of every node is measured and propagated to every node in the network. Reputation is defined as a node's contribution to network operation. The CONFIDANT [23], CORE [25] and OCEAN [24] schemes are analyzed and compared based on various parameters.
Survey comparison estimation of various routing protocols in mobile ad hoc ne... - ijdpsjournal
A MANET is an autonomous system of mobile nodes connected by wireless links. It represents a complex and dynamic distributed system consisting of mobile wireless nodes that can freely self-organize into an ad-hoc network topology. The devices in the network may have limited transmission range, so multiple hops may be needed for one node to transfer data to another node in the network. This leads to the need for an effective routing protocol. In this paper we study various classifications of routing protocols and their types for wireless mobile ad-hoc networks, such as DSDV, GSR, AODV, DSR, ZRP, FSR, CGSR, LAR, and Geocast protocols. We also compare the different routing protocols on a given set of parameters: scalability, latency, bandwidth, control overhead, and mobility impact.
SURVEY ON QOE/QOS CORRELATION MODELS FOR MULTIMEDIA SERVICESijdpsjournal
This paper presents a brief review of some existing correlation models which attempt to map Quality of
Service (QoS) to Quality of Experience (QoE) for multimedia services. The term QoS refers to deterministic
network behaviour, so that data can be transported with a minimum of packet loss, delay and maximum
bandwidth. QoE is a subjective measure that involves human dimensions; it ties together user perception,
expectations, and experience of the application and network performance. The Holy Grail of subjective
measurement is to predict it from the objective measurements; in other words predict QoE from a given set
of QoS parameters or vice versa. Whilst there are many quality models for multimedia, most of them are
only partial solutions to predicting QoE from a given QoS. This contribution analyses a number of previous attempts and optimisation techniques that can reliably compute the weighting coefficients for the QoS/QoE mapping.
Crypto multi tenant an environment of secure computing using cloud sqlijdpsjournal
Today's most active research area in computing is cloud computing, owing to its ability to diminish the costs associated with virtualization, its high availability and dynamic resource pools, and the way it increases the efficiency of computing. But it still has drawbacks, such as privacy and security. This paper focuses on the security of multi-tenant data arising from the virtualization feature of cloud computing. We use the AES-128 bit algorithm and Cloud SQL to protect sensitive data before storing it in the cloud. When an authorized customer requests the data, it is first decrypted and then provided to the customer. The multi-tenant infrastructure is supported by Google, which prefers pushing content in short iteration cycles. As customers are distributed and their demands can arise anywhere and anytime, data cannot be stored at one particular site; it must also be available at different sites, and for such faster access by different users from different places Google is the best fit. To obtain high reliability and availability, data is encrypted before being stored in the database and updated after every use. The system is very easy to use without requiring any additional software. An authenticated user can recover encrypted and decrypted data, affording efficient and secure data storage in the cloud.
An Implementation on Effective Robot Mission under Critical Environemental Co...IJERA Editor
Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic way: a software engineer, or programmer, writes software (or changes existing software) and compiles it using methods that improve its quality. Nowadays robotics also plays an important role in automation, but robots face several challenges when operated in critical environments. Motion planning and task planning are two fundamental problems in robotics that have been addressed from different perspectives. To address them, temporal-logic-based approaches that automatically generate controllers have been shown to be useful for mission-level planning of motion, surveillance and navigation, among others. These approaches critically rely on the validity of the environment models used for synthesis. Yet simplifying assumptions are inevitable to reduce complexity and provide mission-level guarantees; no plan can guarantee results in a model of a world in which everything can go wrong. In this paper, we show how our approach, which reduces reliance on a single model by introducing a stack of models, can endow systems with incremental guarantees based on increasingly strengthened assumptions, supporting graceful degradation when the environment does not behave as expected, and progressive enhancement when it does.
A novel methodology for task distributionijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as the computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as the cost function for distributing the tasks of an application to the HRCS. An on-chip HRCS computing platform is configured on a Virtex 5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitters are represented as task graphs, and the tasks are then distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that the MLF algorithm outperforms them in efficient utilization of on-chip resources and also speeds up application execution.
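To make the minimum-laxity idea concrete, here is a minimal software sketch of MLF dispatch, where laxity = deadline - current time - remaining execution time and the least-lax ready task runs next. The task fields and the single-resource model are illustrative assumptions, not the paper's exact HRCS cost function.

```python
# Minimal Minimum Laxity First (MLF) sketch: run the ready task whose
# laxity (deadline - now - exec_time) is smallest, recomputing laxities
# as simulated time advances. Single resource, non-preemptive, for clarity.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    laxity: int
    name: str = field(compare=False)
    exec_time: int = field(compare=False)
    deadline: int = field(compare=False)

def mlf_schedule(tasks, now=0):
    """Return the execution order chosen by minimum-laxity-first dispatch."""
    heap, order = [], []
    for t in tasks:
        t.laxity = t.deadline - now - t.exec_time
        heapq.heappush(heap, t)
    while heap:
        t = heapq.heappop(heap)          # least laxity runs next
        order.append(t.name)
        now += t.exec_time
        rest = []
        while heap:                      # time moved: refresh remaining laxities
            u = heapq.heappop(heap)
            u.laxity = u.deadline - now - u.exec_time
            rest.append(u)
        for u in rest:
            heapq.heappush(heap, u)
    return order
```

With tasks A (2 units, deadline 5), B (1, deadline 2) and C (3, deadline 10), B is the most urgent and runs first, then A, then C.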
Modeling enterprise architecture using timed colored petri net single process...ijmpict
The purpose of modeling and analyzing enterprise architecture is to ease decision making about the architecture of information systems. Planning is one of the most important tasks in an organization and has a major role in increasing its productivity. The scope of this paper is scheduling processes in the enterprise architecture. Scheduling is deciding on the execution start time of processes used in manufacturing and service systems. Different methods and tools have been proposed for modeling enterprise architecture. The colored Petri net is an extension of the traditional Petri net whose modeling capability has grown dramatically; a model developed with colored Petri nets is suitable for verifying operational aspects and evaluating the performance of information systems. With their ability for hierarchical modeling, colored Petri nets permit predesigned modules for smaller parts of the system, so that, with a general algorithm, any kind of enterprise architecture can be modeled. A two-level hierarchical model is presented in this paper as a building block for modeling the architecture of Transaction Processing Systems (TPS). This model schedules and runs processes based on a predetermined non-preemptive scheduling method; it supports four non-preemptive methods: priority based (PR), shortest job first (SJF), first come first served (FCFS) and highest response ratio next (HRRN). The presented model is designed so that it can be used as one of the main components in modeling any type of enterprise architecture; most enterprise architectures can be modeled by putting together an appropriate number of these modules in a proper composition.
Bragged Regression Tree Algorithm for Dynamic Distribution and Scheduling of ...Editor IJCATR
In the past few years, Grid computing emerged as a next-generation computing platform: heterogeneous computing resources combined by a network across dynamic and geographically separated organizations. It provides an ideal computing environment for solving large-scale computational demands. Grid computing demands keep increasing day by day due to the rise in the number of complex jobs worldwide, so jobs may take much longer to complete because of poor distribution of batches or groups of jobs to inappropriate CPUs. There is therefore a need for an efficient dynamic job scheduling algorithm that assigns jobs to appropriate CPUs dynamically. The main problem dealt with in this paper is how to distribute jobs when the payload, importance, urgency, flow time, etc. keep changing dynamically as the grid expands or is flooded with job requests from different machines within the grid.
In this paper, we present a scheduling strategy that takes advantage of a decision tree algorithm to make dynamic decisions based on the current scenario, and which automatically incorporates factor analysis when considering the distribution of jobs.
Scalable Distributed Job Processing with Dynamic Load Balancingijdpsjournal
We present a cost-effective framework for a robust, scalable and distributed job processing system that adapts easily to dynamic computing needs, with efficient load balancing for heterogeneous systems. The design is such that the components are self-contained and do not depend on each other; yet they are interconnected through an enterprise message bus to ensure safe, secure and reliable communication, with transactional features to avoid duplication as well as data loss. Load balancing, fault tolerance and failover recovery are built into the system through a health-check facility and queue-based load balancing. The system has a centralized repository with central monitors to keep track of the progress of job executions as well as the status of processors in real time. The basic requirement of assigning a priority and processing per priority is built into the framework. The most important aspect of the framework is that it avoids the need for job migration by computing the target processors based on the current load and the various cost factors. The framework can scale horizontally as well as vertically to achieve the required performance, thus effectively minimizing the total cost of ownership.
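The "compute the target processor up front" idea can be sketched as a scoring rule: weight each processor's current load against its cost factors and assign the job to the cheapest one, so no later migration is needed. The weights and field names below are illustrative assumptions, not the framework's actual cost model.

```python
# Hypothetical target-processor selection: score = weighted load + weighted
# network cost; the job is bound to the winner once, avoiding migration.

def pick_processor(processors, job, w_load=1.0, w_net=0.5):
    """processors: dicts with 'load' (queued work) and 'net_cost' to the job's data."""
    def cost(p):
        return w_load * p["load"] + w_net * p["net_cost"]
    target = min(processors, key=cost)
    target["load"] += job["size"]        # job stays here: no later migration
    return target["name"]
```

A lightly loaded processor wins even with a slightly higher network cost, because the load term dominates under these weights.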
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Journals
Abstract This paper concentrates on methods which provide efficient processing time and CPU utilization time for a virtual machine. As the number of users increases, performance may be significantly reduced if tasks are not scheduled in a proper order. In this paper the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured, and satisfactory results are achieved when comparing the two algorithms. This study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: Virtual Machine, DSP algorithm, credit scheduling algorithm
A vm scheduling algorithm for reducing power consumption of a virtual machine...eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
TEMPORALLY EXTENDED ACTIONS FOR REINFORCEMENT LEARNING BASED SCHEDULERSijscai
Temporally extended actions have been shown to enhance the performance of reinforcement learning agents. The broader framework of 'options' gives us a flexible way of representing such extended courses of action in Markov decision processes. In this work we adapt the options framework to model an operating system scheduler, which is expected not to let the processor stay idle if any process is ready or waiting for execution. A process is allowed to utilize CPU resources for a fixed quantum of time (timeslice), and each subsequent context switch incurs considerable overhead. We utilize the historical performance of a scheduler to reduce the number of redundant context switches, and propose a machine-learning module, based on a temporally extended reinforcement-learning agent, to predict a better-performing timeslice. We measure the importance of states in the options framework by evaluating the impact of their absence, and propose an algorithm to identify such checkpoint states. We present an empirical evaluation of our approach on maze-world navigation; its implications for an adaptive timeslice parameter show efficient throughput time.
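The core learning loop ("predict a better-performing timeslice from past runs") can be sketched, under strong simplifying assumptions, as a bandit-style agent over a few candidate quanta, rewarded by fewer context switches. The candidate values, the reward signal, and the flat (non-option) state space are all illustrative, not the paper's actual formulation.

```python
# Hypothetical epsilon-greedy learner for a timeslice parameter: each arm is
# a candidate quantum; reward is the negative context-switch count observed.
import random

class TimesliceAgent:
    def __init__(self, candidates=(10, 20, 40, 80), eps=0.1):
        self.q = {c: 0.0 for c in candidates}   # running value estimates
        self.n = {c: 0 for c in candidates}
        self.eps = eps

    def choose(self):
        if random.random() < self.eps:          # explore occasionally
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)      # else exploit best estimate

    def update(self, timeslice, context_switches):
        reward = -context_switches              # fewer switches -> higher reward
        self.n[timeslice] += 1
        self.q[timeslice] += (reward - self.q[timeslice]) / self.n[timeslice]
```

After a few observed runs, the greedy choice converges on the quantum with the lowest switch count.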
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa...Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource sharing, accessing, aggregation and management. It allows the coordinated use of various resources in a dynamic, distributed virtual organization. Grid scheduling is responsible for resource discovery, resource selection and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, the precedence constraints among tasks must be considered when scheduling backup copies and overloading backups, and two algorithms have been developed to schedule the backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is also considered: the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used to distribute the messages of each task to grid resources.
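The primary-backup invariant (each task's two copies on two different processors, so one processor failure never loses both) can be sketched as a placement rule; the least-loaded selection below is an illustrative assumption, not the paper's scheduling algorithm.

```python
# Sketch of primary-backup placement: the backup is always placed on a
# processor other than the primary's, so a single failure leaves one copy.

def place(tasks, processors):
    load = {p: 0 for p in processors}
    plan = {}
    for t in tasks:
        primary = min(load, key=load.get)                       # least loaded
        load[primary] += t["cost"]
        backup = min((p for p in load if p != primary), key=load.get)
        load[backup] += t["cost"]        # pessimistic: reserve backup time too
        plan[t["name"]] = (primary, backup)
    return plan
```

By construction the two copies never share a processor, which is the property the fault-tolerance argument rests on.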
Improving the Performance of Mapping based on Availability- Alert Algorithm U...AM Publications
Improving the performance of mapping is needed in several fields of science and engineering. Applications can be parallelized in a master-worker fashion, and relevant programming methods have been proposed to reduce their cost. In the existing system, application performance is considered only for homogeneous systems, for simplicity. Here we use an Availability-Alert algorithm with Poisson arrivals to extend the approach to heterogeneous, multi-core architecture systems. Our proposed algorithm also considers the requirements an application needs for execution on such systems while maintaining good performance. Performance prediction errors at the end of execution are minimized by this approach. We present simulation results to quantify the benefits of our approach.
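The Poisson-arrival assumption mentioned above is usually simulated by drawing exponentially distributed inter-arrival gaps; the following small generator sketches that, with the rate and horizon as example parameters.

```python
# Generate a Poisson arrival stream of rate `lam` over [0, horizon] by
# accumulating Exp(lam) inter-arrival times (the standard construction).
import random

def poisson_arrivals(lam, horizon, seed=0):
    rng = random.Random(seed)            # seeded for reproducible simulation
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam)        # inter-arrival gap ~ Exp(lam)
        if t > horizon:
            return times
        times.append(t)
```

Over a horizon of 100 time units at rate 2, the stream contains on the order of 200 strictly increasing arrival times.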
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
An Improved Parallel Activity scheduling algorithm for large datasetsIJERA Editor
Management of context aware software resources deployed in a cloud environmen...ijdpsjournal
In cloud computing environments, context information is continuously created by context providers and consumed by applications on mobile devices. An important characteristic of cloud-based context-aware services is meeting the service level agreements (SLAs) to deliver a certain quality of service (QoS), such as guarantees on response time or price. The response time to a request of context-aware software is affected by loading extensive context data from multiple resources on the chosen server; the speed of such software therefore decreases during execution. Hence, proper scheduling of such services is indispensable, because the customers face time constraints. In this research, a new scheduling algorithm for context-aware services is proposed, based on classifying similar context consumers and dynamically scoring the requests, to improve the performance of a server hosting highly requested context-aware software while reducing the cloud provider's costs. The approach is evaluated via simulation and comparison with the gi-FIFO scheduling algorithm. Experimental results demonstrate the efficiency of the proposed approach.
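One plausible reading of "classify similar consumers and dynamically score the requests" is a priority queue keyed by a class-popularity score, so requests sharing context data are served together; the scoring formula and field names below are hypothetical illustrations, not the paper's algorithm.

```python
# Hypothetical sketch: requests are classified by the context type they ask
# for; popular classes score higher (their context loads once and is reused),
# and tighter deadlines also raise the score.
import heapq

def enqueue(queue, request, class_counts):
    cls = request["context_type"]
    class_counts[cls] = class_counts.get(cls, 0) + 1
    score = class_counts[cls] / (1 + request["deadline"])   # illustrative formula
    heapq.heappush(queue, (-score, request["id"]))          # max-score first

def next_request(queue):
    return heapq.heappop(queue)[1]
```

An urgent request in a fresh class can still outrank a popular class, because the deadline term dominates when it is small.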
Spectrum requirement estimation for imt systems in developing countriesijdpsjournal
In this paper we analyze the methodology developed by the International Telecommunication Union (ITU) for estimating the spectrum requirement for International Mobile Telecommunications (IMT) systems. The ITU estimates spectrum requirements by following ITU-R Rec. M.1768. Although this methodology is adopted by ITU-R, there are discrepancies when estimating the spectrum requirement for developing countries. The ITU estimates the spectrum requirement by considering technical and market parameters that were provided by the most developed countries, with high incomes and high development indices. Developed countries have a very rapidly expanding telecom market due to their high level of penetration, dominant user density and usage of high-volume multimedia services. In contrast, developing countries use less bandwidth-intensive services such as voice communication, low-rate data, and low and medium multimedia. So while the input parameters are adequate for developed countries, they do not reflect the status of developing countries; for this reason the ITU spectrum estimation overestimates the exact spectrum requirements of IMT systems for developing countries. This paper presents an approach based on technical and market-related parameters which is thought to be applicable for overcoming the shortcomings of the current ITU methodology in estimating the spectrum requirement for developing countries like Bangladesh.
A LIGHT-WEIGHT DISTRIBUTED SYSTEM FOR THE PROCESSING OF REPLICATED COUNTER-LI...ijdpsjournal
In order to increase availability in a distributed system some or all of the data items are replicated and
stored at separate sites. This is an issue of key concern especially since there is such a proliferation of
wireless technologies and mobile users. However, the concurrent processing of transactions at separate
sites can generate inconsistencies in the stored information. We have built a distributed service that
manages updates to widely deployed counter-like replicas. There are many heavy-weight distributed
systems targeting large information critical applications. Our system is intentionally, relatively lightweight
and useful for the somewhat reduced information critical applications. The service is built on our
distributed concurrency control scheme which combines optimism and pessimism in the processing of
transactions. The service allows a transaction to be processed immediately (optimistically) at any
individual replica as long as the transaction satisfies a cost bound. All transactions are also processed in a
concurrent pessimistic manner to ensure mutual consistency.
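The optimistic path described above (apply a counter transaction immediately at one replica as long as it satisfies a cost bound) can be sketched as follows; the bound arithmetic and field names are an illustrative simplification of the paper's scheme.

```python
# Sketch of optimistic counter processing: a replica applies a delta locally
# if the tentative (not yet globally agreed) drift stays within its bound;
# otherwise the transaction is deferred to the pessimistic (global) round.

class CounterReplica:
    def __init__(self, value=0, bound=5):
        self.value = value       # last globally agreed value
        self.tentative = 0       # optimistic local delta
        self.bound = bound       # cost bound on local drift

    def try_optimistic(self, delta):
        if abs(self.tentative + delta) <= self.bound:
            self.tentative += delta
            return True          # applied immediately at this replica
        return False             # defer to the pessimistic pass

    def read(self):
        return self.value + self.tentative
```

Once the local drift reaches the bound, further updates wait for the global round, which is how the scheme caps inconsistency between replicas.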
Design Of Elliptic Curve Crypto Processor with Modified Karatsuba Multiplier ...ijdpsjournal
ECDSA stands for "Elliptic Curve Digital Signature Algorithm"; it is used to create a digital signature of data (a file, for example) in order to allow you to verify its authenticity without compromising its security. This paper presents an architecture for finite field multiplication; the proposed multiplier used in this processor is a hybrid Karatsuba multiplier. For the multiplicative inverse we choose the Itoh-Tsujii Algorithm (ITA). This work presents the design of a high-performance elliptic curve crypto processor (ECCP) for an elliptic curve over the finite field GF(2^233). The curve we choose is the standard curve for digital signatures. The processor is synthesized for a Xilinx FPGA.
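The hardware multiplier above is a hybrid Karatsuba design over GF(2^233); as a plain-software analogue, here is classic Karatsuba on integers, which shows the three-multiplication recurrence the hardware exploits (the GF(2^m) version replaces additions with XORs).

```python
# Classic Karatsuba multiplication: split each operand in half and compute
# the product with three recursive multiplies instead of four.

def karatsuba(x, y):
    if x < 16 or y < 16:                    # small operands: direct multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)     # high/low halves of x
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)                   # high product
    b = karatsuba(xl, yl)                   # low product
    c = karatsuba(xh + xl, yh + yl)         # cross terms via one multiply
    return (a << (2 * n)) + ((c - a - b) << n) + b
```

The recurrence gives roughly O(n^1.585) bit operations instead of the schoolbook O(n^2), which is why the hardware hybridizes it with direct multiplication below a size threshold.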
HIGHLY SCALABLE, PARALLEL AND DISTRIBUTED ADABOOST ALGORITHM USING LIGHT WEIG...ijdpsjournal
AdaBoost is an important algorithm in machine learning and is being widely used in object detection.
AdaBoost works by iteratively selecting the best amongst weak classifiers, and then combines several weak
classifiers to obtain a strong classifier. Even though AdaBoost has proven to be very effective, its learning
execution time can be quite large depending upon the application e.g., in face detection, the learning time
can be several days. Due to its increasing use in computer vision applications, the learning time needs to
be drastically reduced so that an adaptive near real time object detection system can be incorporated. In
this paper, we develop a hybrid parallel and distributed AdaBoost algorithm that exploits the multiple
cores in a CPU via light weight threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web services based distributed
architecture and achieve nearly linear speedup up to the number of processors available to us. In
comparison with the previously published work, which used a single level master-slave parallel and
distributed implementation [1] and only achieved a speedup of 2.66 on four nodes, we achieve a speedup of
95.1 on 31 workstations each having a quad-core processor, resulting in a learning time of only 4.8
seconds per feature.
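The parallelized unit of work in the abstract above is the standard AdaBoost round: evaluate the weak classifiers on the weighted training set, pick the best, compute its weight, and reweight the examples. The sketch below shows one such sequential round with precomputed weak-classifier predictions; the dict-based input layout is an assumption for illustration.

```python
# One AdaBoost round: select the weak classifier with least weighted error,
# compute its vote weight alpha, and reweight examples (mistakes up-weighted).
import math

def adaboost_round(preds, labels, w):
    """preds: {name: [+1/-1 per example]}; labels: [+1/-1]; w: example weights."""
    def err(name):
        return sum(wi for wi, p, y in zip(w, preds[name], labels) if p != y)
    best = min(preds, key=err)
    e = err(best)
    alpha = 0.5 * math.log((1 - e) / e)     # weak classifier's vote weight
    # up-weight examples the chosen classifier got wrong, down-weight the rest
    w = [wi * math.exp(-alpha * p * y) for wi, p, y in zip(w, preds[best], labels)]
    z = sum(w)
    return best, alpha, [wi / z for wi in w]
```

The parallel/distributed algorithm in the paper farms the `err` evaluations out across cores and machines, since they are independent per weak classifier.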
Derivative threshold actuation for single phase wormhole detection with reduc...ijdpsjournal
Communication in mobile ad hoc networks is accomplished via multi-hop paths. Owing to the distributed architecture and restricted resources of nodes, a MANET is very prone to wormhole attacks: wormhole attacks pose severe threats to every ad hoc routing protocol and to several security enhancements. Thus, different techniques are in use to detect wormholes. In all those techniques the threshold is fixed merely by trial and error, or at random. Moreover, wormhole detection is done in two phases: the nodes above the threshold are placed in a suspicious set, and a node is then predicted to be a wormhole by using other algorithms. Our aim in this paper is to deduce the traffic threshold level by a derivational approach, for identifying wormholes in a single phase in a relay network having dissimilar characteristics.
On the fly porn video blocking using distributed multi gpu and data mining ap...ijdpsjournal
Preventing users from accessing adult videos and at the same time allowing them to access good
educational videos and other materials through campus wide network is a big challenge for organizations.
Major existing web filtering systems are textual content or link analysis based. As a result, potential users
cannot access qualitative and informative video content which is available online. Adult content detection
in video based on motion features or skin detection requires significant computing power and time.
Judgment to identify pornography videos is taken based on processing of every chunk from video,
consisting specific number of frames, sequentially one after another. This solution is not feasible in real
time when user has started watching the video and decision about blocking needs to be taken within few
seconds.
In this paper, we propose a model in which the user is allowed to start watching any video; at the backend, a porn-detection process using extracted video and image features runs on distributed nodes with multiple GPUs (Graphics Processing Units). The video is processed on a parallel and distributed platform in the shortest time, and the decision about filtering the video is taken in real time. A track record of blocked content and websites is also cached. For every new video download, the cache is checked to prevent repetitive content analysis. On-the-fly blocking is feasible thanks to the latest GPU architectures, CUDA (Compute Unified Device Architecture) and CUDA-aware MPI (Message Passing Interface). It is possible to achieve coarse-grained as well as fine-grained parallelism: video chunks are processed in parallel on distributed nodes, and the porn-detection algorithm on the frames of each chunk can also exploit parallelism using the GPUs on a single node. This ultimately results in blocking porn videos on the fly while allowing educational and informative videos.
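The chunk-level parallelism described above can be sketched in Python, with a thread pool standing in for the distributed GPU nodes. Note this is only an illustration: `classify_chunk`, its feature scores and its thresholds are hypothetical placeholders, not the paper's actual detector.

```python
from concurrent.futures import ThreadPoolExecutor

def classify_chunk(chunk):
    """Placeholder per-chunk detector: flags a chunk when its mean
    feature score is high. A real system would extract motion and
    skin-tone features here, likely on a GPU."""
    return sum(chunk) / len(chunk) > 0.8

def should_block(chunks, threshold=0.5):
    """Classify all chunks in parallel; block the video when the
    fraction of flagged chunks exceeds `threshold`."""
    with ThreadPoolExecutor() as pool:
        flags = list(pool.map(classify_chunk, chunks))
    return sum(flags) / len(flags) > threshold

# Toy "videos": each chunk is a list of per-frame feature scores in [0, 1].
benign = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.2]]
flagged = [[0.9, 0.95], [0.85, 0.9], [0.1, 0.2]]
print(should_block(benign))   # False
print(should_block(flagged))  # True
```

Because each chunk is classified independently, the same map-over-chunks structure scales from one thread pool to many GPU-equipped nodes.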
Implementing database lookup method in mobile wimax for location management a...ijdpsjournal
Mobile WiMAX plays a vital role in accessing delay-sensitive audio, video streaming and mobile IPTV. To minimize the handover delay, a Location Management Area (LMA) based Multicast and Broadcast Service (MBS) zone is established. The handover delay increases with the size of the MBS zone. In this paper, the Location Management Area is easily identified by using a Database Lookup Method to obtain efficient bandwidth utilization along with reduced handover delay and increased throughput. The handover delay and throughput are calculated by implementing this scenario in the OPNET tool.
Advanced delay reduction algorithm based on GPS with Load Balancingijdpsjournal
A Mobile Ad-Hoc Network (MANET) is a self-configuring network of mobile nodes connected by wireless links, forming an arbitrary topology. The nodes are free to move arbitrarily, so the network's wireless topology may be random and may change quickly. An ad hoc network may be formed by sensor networks consisting of sensing, data processing, and communication components. Congested links occur frequently in such a network because wireless links inherently have significantly lower capacity than hardwired links and are therefore more prone to congestion. Here we propose an algorithm that reduces delay with the help of a Request_set created on the basis of the location information of the destination node. The load is distributed equally across the paths found in the Route Reply (RREP) packets.
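The equal distribution of load across RREP paths can be illustrated with a simple round-robin assignment. This is only a sketch under stated assumptions: the path names are made up, and a real MANET load balancer would also weigh path quality, which is omitted here.

```python
from itertools import cycle

def distribute_load(packets, paths):
    """Assign packets to the paths returned in RREP replies in
    round-robin order, so the load is spread equally."""
    assignment = {p: [] for p in paths}
    for packet, path in zip(packets, cycle(paths)):
        assignment[path].append(packet)
    return assignment

routes = ["A-B-D", "A-C-D"]          # hypothetical paths from RREPs
load = distribute_load(range(6), routes)
print({p: len(q) for p, q in load.items()})  # {'A-B-D': 3, 'A-C-D': 3}
```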
BREAST CANCER DIAGNOSIS USING MACHINE LEARNING ALGORITHMS –A SURVEYijdpsjournal
Breast cancer has become common nowadays. However, not all general hospitals have the facilities to diagnose breast cancer through mammograms. Waiting a long time for a breast cancer diagnosis may increase the possibility of the cancer spreading. Therefore, computerized breast cancer diagnosis has been developed to reduce the time taken to diagnose breast cancer and to reduce the death rate. This paper summarizes a survey on breast cancer diagnosis using various machine learning algorithms and methods, which are used to improve the accuracy of predicting cancer. This survey also gives an overview of the papers that implement breast cancer diagnosis.
STUDY OF VARIOUS FACTORS AFFECTING PERFORMANCE OF MULTI-CORE PROCESSORSijdpsjournal
Advances in integrated circuit processing allow for more microprocessor design options. As the chip multiprocessor (CMP) becomes the predominant topology for leading microprocessors, critical components of the system are now integrated on a single chip. This enables sharing of computation resources that was not previously possible. In addition, the virtualization of these computation resources exposes the system to a mix of diverse and competing workloads. On-chip cache memory is a resource of primary concern, as it can be dominant in controlling overall throughput. This paper presents an analysis of various parameters affecting the performance of multi-core architectures: varying the number of cores, changing the L2 cache size, and varying the directory size from 64 to 2048 entries on 4-node, 8-node, 16-node and 64-node chip multiprocessors. This in turn presents an open area of research on multicore processors with private/shared last-level caches, as the future trend seems to be towards tiled architectures executing multiple parallel applications with optimized silicon area utilization and excellent performance.
Comparative analysis of the essential CPU scheduling algorithmsjournalBEEI
CPU scheduling algorithms have a significant function in multiprogramming operating systems. When CPU scheduling is effective, a high rate of computation can be done correctly and the system will remain in a stable state. Moreover, CPU scheduling algorithms are the main service in operating systems that fulfills maximum utilization of the CPU. This paper compares the characteristics of CPU scheduling algorithms to determine which is the best algorithm for gaining higher CPU utilization. The comparison has been done between ten scheduling algorithms, presenting different parameters such as performance, algorithm complexity, algorithm problems, average waiting times, advantages and disadvantages, allocation method, etc. The main purpose of the article is to analyze the CPU scheduler in such a way that it suits the scheduling goals, and to show which algorithm type is most suitable for a particular situation by presenting its full properties.
LEARNING SCHEDULER PARAMETERS FOR ADAPTIVE PREEMPTIONcscpconf
An operating system scheduler is expected not to let the processor stay idle if there is any process ready or waiting for execution. This problem gains more importance as the number of processes always outnumbers the processors by a large margin. It is in this regard that schedulers are given the ability to preempt a running process, following some scheduling algorithm, and give us the illusion of several processes running simultaneously. A process is allowed to utilize CPU resources for a fixed quantum of time (termed the timeslice for preemption) and is then preempted for another waiting process. Each of these process preemptions incurs considerable overhead in CPU cycles, which are a valuable resource for runtime execution. In this work we utilize the historical performance of a scheduler and predict the nature of the currently running process, thereby trying to reduce the number of preemptions. We propose a machine-learning module that predicts a better-performing timeslice, calculated from a static knowledge base and an adaptive reinforcement-learning-based suggestive module. Results for an "adaptive timeslice parameter" for preemption show good savings in CPU cycles and efficient throughput time.
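The adaptive-timeslice idea can be sketched as follows. This is a toy stand-in for the paper's learning module: the default slice, learning rate and margin are illustrative assumptions, and the update is a simple exponential moving average of observed CPU bursts rather than the paper's knowledge base plus reinforcement learner.

```python
class TimesliceLearner:
    """Toy adaptive-timeslice module: keeps a running estimate of each
    process's CPU burst and proposes a slice slightly above it, so the
    process usually finishes before being preempted."""

    def __init__(self, default=10.0, alpha=0.5, margin=1.2):
        self.default = default  # initial burst guess for unseen processes
        self.alpha = alpha      # learning rate for the burst estimate
        self.margin = margin    # headroom above the predicted burst
        self.estimate = {}

    def suggest(self, pid):
        return self.margin * self.estimate.get(pid, self.default)

    def observe(self, pid, burst):
        old = self.estimate.get(pid, self.default)
        # Exponential moving average, like SJF's classic burst prediction.
        self.estimate[pid] = (1 - self.alpha) * old + self.alpha * burst

learner = TimesliceLearner()
for burst in [4.0, 4.0, 4.0]:   # process 1 keeps using ~4 time units
    learner.observe(1, burst)
print(round(learner.suggest(1), 2))  # 5.7, settling toward the ~4-unit burst
```

As more bursts are observed, the suggested slice shrinks toward the real burst length, so fewer timer preemptions fire mid-burst.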
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), where reconfigurable hardware, i.e. a Field Programmable Gate Array (FPGA), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel methodology for task distribution called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of the FPGA in order to utilize the available resources effectively. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as a cost function to distribute the tasks of an application to the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitter are represented as task graphs, and the tasks are then distributed, statically as well as dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that the MLF algorithm outperforms them in terms of efficient utilization of on-chip resources and also speeds up application execution.
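As a rough illustration of the MLF ordering rule (not the authors' implementation; the task names, deadlines and execution times below are made up), laxity is the slack a task has before it must start to meet its deadline, and the task least able to wait is dispatched first:

```python
def minimum_laxity_first(tasks, now=0):
    """Dispatch-order sketch: laxity = deadline - now - remaining
    execution time; the task least able to wait runs first."""
    return sorted(tasks, key=lambda t: t["deadline"] - now - t["exec_time"])

# Hypothetical task-graph nodes from a JPEG/OFDM-style pipeline.
tasks = [
    {"name": "DCT",   "deadline": 20, "exec_time": 5},  # laxity 15
    {"name": "FFT",   "deadline": 12, "exec_time": 8},  # laxity 4
    {"name": "Quant", "deadline": 15, "exec_time": 6},  # laxity 9
]
order = [t["name"] for t in minimum_laxity_first(tasks)]
print(order)  # ['FFT', 'Quant', 'DCT']
```

In a dynamic list scheduler, `now` advances and laxities are recomputed each time a computing element (FPGA region or soft core) becomes free.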
A survey of various scheduling algorithm in cloud computing environmenteSAT Journals
Abstract Cloud computing is known as a provider of dynamic services using very large, scalable and virtualized resources over the Internet. Due to the novelty of the cloud computing field, there are not many standard task scheduling algorithms used in cloud environments, especially since clouds have a high communication cost that prevents well-known task schedulers from being applied in large-scale distributed environments. Today, researchers attempt to build job scheduling algorithms that are compatible and applicable in the cloud computing environment. Job scheduling is the most important task in a cloud computing environment because users have to pay for the resources used based upon time. Hence efficient utilization of resources is important, and scheduling plays a vital role in getting maximum benefit from the resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing. Index Terms: cloud computing, scheduling, algorithm
COMPARATIVE ANALYSIS OF FCFS, SJN & RR JOB SCHEDULING ALGORITHMSijcsit
One of the primary roles of the operating system is job scheduling. Oftentimes, what makes the difference between the performance of one operating system over the other could be the underlying implementation of its job scheduling algorithm. This paper therefore examines, under identical conditions and parameters, the comparative performances of First Come First Serve (FCFS), Shortest Job Next (SJN) and Round Robin (RR) scheduling algorithms. Simulation results presented in this paper serve to stimulate further research into the subject area.
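A minimal sketch of the kind of comparison such a simulation runs, in Python rather than a full simulator: all jobs are assumed to arrive at time zero, and RR is omitted for brevity since it additionally needs a quantum parameter.

```python
def fcfs_waits(bursts):
    """Average waiting time under First Come First Serve."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)  # each job waits for all jobs ahead of it
        clock += b
    return sum(waits) / len(waits)

def sjn_waits(bursts):
    """Average waiting time under Shortest Job Next
    (with all jobs arriving at time 0, SJN = FCFS on sorted bursts)."""
    return fcfs_waits(sorted(bursts))

bursts = [24, 3, 3]            # the classic textbook example
print(fcfs_waits(bursts))      # 17.0  -> (0 + 24 + 27) / 3
print(sjn_waits(bursts))       # 3.0   -> (0 + 3 + 6) / 3
```

The gap between 17.0 and 3.0 on the same workload is exactly the kind of effect that makes such comparative studies worthwhile.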
Heuristics based multi queue job scheduling for cloud computing environmenteSAT Journals
Abstract With the influx of a large number of users, delivering services to them while meeting their quality requirements is still challenging. Cloud computing is built upon the advancements of virtualization technology and distributed computing. Cloud computing is itself grid computing with the additional properties of elasticity and scalability. Since a cloud usually contains a heterogeneous pool of resources, scheduling plays a vital role in cloud computing. Scheduling in cloud computing denotes the execution of current jobs under given constraints. QoS is inevitably needed to deal with scheduling in the cloud computing environment. An efficient scheduling mechanism should meet the QoS parameters and should enhance resource utilization. Traditional scheduling algorithms such as FCFS, SJF, EASY, round robin and backfilling possess a fragmentation problem. The proposed heuristics-based multi-queue job scheduling method overcomes the problem of fragmentation by providing an on-demand strategy for scheduling. The on-demand service is provided by introducing priority. Priority is a comprehensive concept, computed based on QoS parameters. By considering the priority concept, the proposed system reduces the problem of fragmentation and also overcomes the "starving to death" problem in scheduling. The proposed method can thus efficiently execute the user requirements and can manifest better performance. Keywords: Multi-Queue Scheduling; Heuristics
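The priority idea, computing a composite key from QoS parameters and serving jobs from a heap, might be sketched like this. The weights and QoS fields are illustrative assumptions, not the paper's actual formula.

```python
import heapq

def qos_priority(job, w_deadline=0.6, w_cost=0.4):
    """Hypothetical composite priority from QoS parameters: a smaller
    key means higher priority, so tight deadlines and larger budgets
    push a job toward the front of the queue."""
    return w_deadline * job["deadline"] - w_cost * job["budget"]

def schedule(jobs):
    """Serve jobs in priority order; no job starves forever because
    every job eventually reaches the heap top as others drain."""
    heap = [(qos_priority(j), j["name"]) for j in jobs]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

jobs = [
    {"name": "batch",  "deadline": 100, "budget": 1},
    {"name": "urgent", "deadline": 5,   "budget": 10},
    {"name": "normal", "deadline": 30,  "budget": 5},
]
print(schedule(jobs))  # ['urgent', 'normal', 'batch']
```

A production scheme would additionally age waiting jobs (boosting their priority over time) to guarantee the anti-starvation property the abstract claims.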
EMPIRICAL APPLICATION OF SIMULATED ANNEALING USING OBJECT-ORIENTED METRICS TO...ijcsa
This work uses a Simulated Annealing algorithm to optimize the parameters of an effort estimation model, which can reduce the difference between the actual and estimated effort used in model development. The model has been tested using an OOP dataset obtained from NASA for research purposes. The dataset-based model equation parameters have been found; the model consists of two independent variables, viz. Lines of Code (LOC) along with one more attribute, and a dependent variable related to software development effort (DE). The results have been compared with the author's earlier work on Artificial Neural Networks (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), and it has been observed that the developed SA-based model provides better estimates of software development effort than ANN and ANFIS.
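A generic simulated-annealing parameter search of this flavor can be sketched as follows. This is a toy linear effort model fitted to synthetic data; the cooling schedule, step size and iteration count are arbitrary choices, not the paper's configuration.

```python
import math
import random

def sa_fit(data, iters=20000, seed=1):
    """Search (a, b) in effort ~= a*LOC + b by simulated annealing,
    accepting worse moves with probability exp(-delta/temp)."""
    random.seed(seed)

    def cost(a, b):
        # Summed squared error between estimated and actual effort.
        return sum((a * loc + b - effort) ** 2 for loc, effort in data)

    a, b = 0.0, 0.0
    current = cost(a, b)
    for i in range(iters):
        temp = (1.0 - i / iters) + 1e-9                  # linear cooling
        na = a + random.gauss(0, 0.1)                    # random neighbor
        nb = b + random.gauss(0, 0.1)
        delta = cost(na, nb) - current
        if delta < 0 or random.random() < math.exp(-delta / temp):
            a, b, current = na, nb, current + delta
    return a, b

# Synthetic "dataset": effort = 2*LOC + 1, no noise.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]
a, b = sa_fit(data)
print(f"a={a:.2f} b={b:.2f}")   # close to a=2, b=1
```

Early on, the high temperature lets the search accept worse parameter sets and escape local minima; as the temperature cools, it settles into the best basin found.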
TASK-DECOMPOSITION BASED ANOMALY DETECTION OF MASSIVE AND HIGH-VOLATILITY SES...ijdpsjournal
The Science Information Network (SINET) is a Japanese academic backbone network for more than 800 universities and research institutions. The characteristic of SINET traffic is that it is enormous and highly variable. In this paper, we present a task-decomposition based anomaly detection of massive and high-volatility session data of SINET. Three main features are discussed: task scheduling, traffic discrimination, and histogramming. We adopt a task-decomposition based dynamic scheduling method to handle the massive session data stream of SINET. In the experiment, we analysed SINET traffic from 2/27 to 3/8 and detected some anomalies by LSTM-based time-series data processing.
In a multiprogramming system, multiple processes exist concurrently in main memory. Each process alternates between using a processor and waiting for some event to occur, such as the completion of an I/O operation. The processor or processors are kept busy by executing one process while the others wait. The key to multiprogramming is scheduling. CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. By switching the CPU among processes, the operating system can make the computer more productive. Scheduling affects the performance of the system because it determines which processes will wait and which will progress. In this paper, a simulation of various scheduling algorithms, First Come First Served (FCFS), Round Robin (RR), Shortest Process Next (SPN) and Shortest Remaining Time (SRT), is done in C. Daw Khin Po, "Simulation of Process Scheduling Algorithms", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume 3, Issue 4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25124.pdf
Paper URL: https://www.ijtsrd.com/computer-science/operating-system/25124/simulation-of-process-scheduling-algorithms/daw-khin-po
Sharing of cluster resources among multiple Workflow Applicationsijcsit
Many computational solutions can be expressed as workflows. A cluster of processors is a shared resource among several users, hence the need for a scheduler that deals with multi-user jobs presented as workflows. The scheduler must find the number of processors to be allotted to each workflow and schedule tasks on the allotted processors. In this work, a new method to find the optimal and maximum number of processors that can be allotted to a workflow is proposed. Regression analysis is used to find the best possible way to share the available processors among a suitable number of submitted workflows. An instance of a scheduler is created for each workflow, which schedules tasks on the allotted processors. Towards this end, a new framework to receive online submissions of workflows, allot processors to each workflow and schedule tasks is proposed and evaluated using a discrete-event based simulator. This space-sharing of processors among multiple workflows shows better performance than the other methods found in the literature. Because of space-sharing, an instance of a scheduler must be used for each workflow within the allotted processors. Since the number of processors for each workflow is known only at runtime, a static schedule cannot be used. Hence a hybrid scheduler that tries to combine the advantages of static and dynamic schedulers is proposed. The proposed framework is thus a promising solution to scheduling multiple workflows on a cluster.
A Case Study: Task Scheduling Methodologies for High Speed Computing Systems ijesajournal
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, as well as various scheduling methodologies targeted at high-speed computing systems. The comparative study of scheduling methodologies for high-speed computing systems has been carried out based on the attributes of both platform and application: execution time, nature of task, task handling capability, and type of host and computing platform. Finally, a summary chart has been prepared; it demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which the participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
International Journal of Distributed and Parallel Systems (IJDPS) Vol.4, No.5, September 2013
DOI: 10.5121/ijdps.2013.4501
EFFICIENT SCHEDULING STRATEGY USING
COMMUNICATION AWARE SCHEDULING FOR
PARALLEL JOBS IN CLUSTERS
A.Neela madheswari1 and R.S.D.Wahida Banu2
1 Department of Information Technology, KMEA Engineering College, Aluva, India
2 Principal, Government College of Engineering, Salem, India
ABSTRACT
Parallel job scheduling is an important field of research in Computer Science. Finding the most suitable processor for user-submitted jobs on high performance or cluster computing systems plays an important role in overall system performance. A new scheduling technique called communication aware scheduling is devised; it is capable of handling serial jobs, parallel jobs, mixed jobs and dynamic jobs. This work compares communication aware scheduling with the available parallel job scheduling techniques, and the experimental results show that communication aware scheduling performs better than those techniques.
KEYWORDS
Scheduling, simulation, communication aware scheduling, local scheduling, explicit coscheduling, implicit
coscheduling
1. INTRODUCTION
Along with the power of parallelization in supercomputers and massively parallel processors
comes the problem of deciding how to utilize this increased processing capability efficiently. In a
parallel computing environment, multiple processors are available to execute submitted jobs collectively [7]. In this work, we concentrate on jobs submitted by users.
When scheduling user jobs in high performance computing environments such as clusters or supercomputers, great care must be taken in selecting the scheduling strategy, since scheduling plays an important role in improving system performance. A poor choice can degrade the system through under-utilization of existing resources, load imbalance, longer job completion times, etc.
User jobs can be classified into two forms according to the nature of their execution, namely serial jobs and parallel jobs. A serial job contains a single process, whereas a parallel job contains a set of processes. Serial jobs are independent and can be scheduled to any of the available processors. Parallel jobs are further classified into two types, namely batch and coscheduled. The processes of a batch job are independent, whereas the processes of a coscheduled job depend on one another. Batch jobs can be scheduled using any of the existing techniques such as [1], [3], [6]; but if the same techniques are applied to coscheduled jobs, system performance can be slowed down by communication overhead.
2. MOTIVATION
Traditional scheduling policies for parallel systems treat interactive and batch jobs differently in order to maximize the utilization of an expensive system, because doing so reduces resource fragmentation and increases system utilization. Users are expected to provide nearly accurate estimates of job execution times [14].
Peter Strazdins and John Uhlmann compared local scheduling with gang scheduling. Experiments comparing the two were carried out on a Beowulf cluster with 100 Mb fast Ethernet switches. Results for communication-intensive numerical applications on 16 nodes reveal that gang scheduling produces slowdowns up to a factor of two greater for 8 simultaneous jobs. The application studies showed that, when considering scheduling policies, both memory usage and communication patterns have an important effect on overall throughput; these factors should not be neglected [2].
A generic framework for deploying coscheduling techniques has been proposed by Saurabh et al. It covers the explicit coscheduling techniques, such as dynamic coscheduling, spin block and periodic boost, as well as a coordinated coscheduling technique. For all the coscheduling techniques, mixtures of jobs are considered: CPU-intensive plus parallel jobs, and I/O-intensive plus parallel jobs. For performance evaluation, execution time, overhead and slowdown were considered. The scheduling techniques perform well for CPU-intensive and parallel jobs, but for I/O-intensive and parallel jobs performance degrades [1].
In general, the workload for job scheduling techniques is divided into two main categories, namely serial jobs and parallel jobs. Parallel jobs are again subdivided into batch jobs and coscheduled jobs. A batch job contains a set of independent processes, whereas a coscheduled job contains a set of dependent processes. The dependencies may arise from communication, I/O or synchronization requirements.
The processes of a coscheduled job are related to one another. The workflow of the processes of a parallel job can be of any of the types mentioned in [12], [5]. This work considers the flow of processes within a coscheduled job to be of the workflow-chain type [5], as shown in Figure 1: the first process must complete its execution before the next process starts, and this continues until the last process of that coscheduled job, at which point the execution of the coscheduled job is complete.
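Under this workflow-chain assumption, the completion time of a coscheduled job is simply the sum of its processes' runtimes, since no two processes ever overlap. A minimal sketch (the class name `WorkflowChain` is ours, not from the paper's simulator):

```java
import java.util.List;

// Completion time of a coscheduled job whose processes form a workflow
// chain: each process starts only after the previous one finishes.
class WorkflowChain {
    static int completionTime(List<Integer> runtimesSec) {
        int total = 0;
        for (int r : runtimesSec) {
            total += r;  // processes run strictly one after another
        }
        return total;
    }
}
```

For example, a coscheduled job with four 1-second processes completes 4 seconds after it starts.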
This form of workflow chain is common in certain kinds of workflows. Consider the workflow available in High Energy Physics [8]: the HEPTrails workflow contains modules that support streaming workflows, which process data as it becomes available and make the data available for further processing. Since each module's individual operations are fine-grained, the module's communication overhead can exceed the operations themselves.
Many parallel job scheduling techniques evaluate performance on workloads of serial jobs [9], serial and I/O-intensive jobs [1], or batch jobs [3], [10]; communication-intensive jobs, however, have received little attention. This work concentrates on scheduling communication-intensive jobs on clusters in order to reduce the communication overhead among the various processors and thus achieve better system utilization.
Figure 1. Workflow Chain
If parallel jobs need to communicate among themselves, then under the existing techniques [1], [3] there will be performance degradation due to communication overhead. To avoid this overhead and to improve the performance of a cluster or any other high performance computing environment, communication aware scheduling (CAS) was devised [15]. In CAS, the coscheduled job is taken as the model of a parallel job, but comparisons with the existing techniques were not made. This paper compares communication aware scheduling with local scheduling (LS), extended implicit coscheduling (ICS) and extended explicit coscheduling (ECS).
3. SYSTEM DESIGN
An important goal of any parallel system scheduler is to promote the productivity of its users. To achieve high productivity, the scheduler has to keep its users satisfied and motivate them to submit more jobs. Due to the high costs involved in deploying a new scheduler, it is uncommon to experiment with new designs directly in production. Instead, whenever a new scheduler is proposed, it is first evaluated in simulation, and only if it demonstrates a significant improvement in performance does it become a candidate for actual deployment. The role of simulation is thus crucial for the choices made in reality [11]. Here the scheduling system is simulated and monitored using Java (JDK 1.6).
3.1. System Setup
A cluster system of 124 nodes is considered for the system environment, in which one node acts as the head node while the others are compute nodes. The head node is responsible for scheduling jobs onto the compute nodes; users submit their jobs to the head node. It is assumed that the scheduler can schedule one job per second to the available nodes. The entire cluster is assumed to be fully connected, with no communication delay. The selection of a node is based on the scheduling algorithm used. The architecture is shown in Figure 2.
Figure 2. Architecture of a typical Cluster
3.2. Workload Consideration
The performance of a computer system depends not only on its design and implementation, but also on the workload to which it is subjected. Different workloads may lead to different performance, and in some cases to different relative rankings of systems or designs. Using representative workloads is therefore crucial in order to obtain reliable performance evaluation results. One way to obtain representative workloads is to use real workloads from production systems [4]. Here the real workload from Grid5000 is used for the performance evaluation of the system [13].
4. SCHEDULING ALGORITHMS
Various works in the literature describe different scheduling algorithms, distinguished chiefly by the workloads they consider. The communication aware scheduling algorithm is devised by considering different kinds of workloads, such as serial and parallel jobs [15]. Performance is evaluated using two parameters, namely the idle time of nodes and the wait time of jobs. There are three important parallel job scheduling algorithms, namely local scheduling, explicit coscheduling and implicit coscheduling [3].
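The two evaluation parameters above can be computed directly from a simulated schedule. A minimal sketch under our own simplifying model (the class and parameter names are ours; each node is assumed either busy or idle over the interval from time 0 to the makespan):

```java
// Evaluation metrics for a simulated schedule.
class Metrics {
    // Average idle time per node over [0, makespan], given each node's
    // total busy time in seconds.
    static double avgIdleTimeSec(int[] busySec, int makespanSec) {
        double idle = 0;
        for (int b : busySec) {
            idle += makespanSec - b;  // a node is idle whenever it is not busy
        }
        return idle / busySec.length;
    }

    // Average wait time per job: the gap between submission and start.
    static double avgWaitTimeSec(int[] submitSec, int[] startSec) {
        double wait = 0;
        for (int j = 0; j < submitSec.length; j++) {
            wait += startSec[j] - submitSec[j];
        }
        return wait / submitSec.length;
    }
}
```

For instance, two nodes busy for 5 and 3 seconds of a 6-second makespan are idle for 1 and 3 seconds, an average of 2 seconds per node.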
4.1. Sample Scenario of Workload
To explain the algorithms clearly, let us consider a sample workload of 6 jobs, of which jobs 1, 3 and 6 are serial and jobs 2, 4 and 5 are parallel; their dependencies and runtimes are shown in Table 1. Cosched_id gives the order in which the processes of a parallel job must run. Assume 4 compute nodes, n1 through n4, to which the given jobs are to be allocated.
Table 1. Sample jobs with runtimes

Job_id   Job type   Cosched_id   Job_id for graph   Runtime (seconds)
1        Serial     -            1                  5
2        Parallel   1            21                 1
2        Parallel   2            22                 1
2        Parallel   3            23                 1
2        Parallel   4            24                 1
3        Serial     -            3                  5
4        Parallel   1            41                 2
4        Parallel   2            42                 2
5        Parallel   1            51                 1
5        Parallel   2            52                 1
5        Parallel   3            53                 1
6        Serial     -            6                  5
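For simulation purposes, Table 1 can be encoded directly as data. The record type `Proc` below is a hypothetical representation of one table row (one process), not part of the original simulator; serial jobs carry a Cosched_id of 0 where the table shows "-":

```java
import java.util.List;

// One row of Table 1: a process belonging to a serial or parallel job.
record Proc(int jobId, boolean parallel, int coschedId, int runtimeSec) {}

class SampleWorkload {
    static final List<Proc> PROCS = List.of(
        new Proc(1, false, 0, 5),
        new Proc(2, true, 1, 1), new Proc(2, true, 2, 1),
        new Proc(2, true, 3, 1), new Proc(2, true, 4, 1),
        new Proc(3, false, 0, 5),
        new Proc(4, true, 1, 2), new Proc(4, true, 2, 2),
        new Proc(5, true, 1, 1), new Proc(5, true, 2, 1),
        new Proc(5, true, 3, 1),
        new Proc(6, false, 0, 5));
}
```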
4.2. Local Scheduling
In this strategy, jobs are scheduled on a First Come First Serve basis: any incoming job is assigned to the available nodes in serial order, i.e. nodes 1 through 123 are filled according to which nodes are free. Any kind of job can be scheduled using this strategy; thus serial, parallel, mixed and dynamic jobs are scheduled, and their performance is evaluated by finding the idle time of nodes and the wait time of jobs. For the sample workload of Table 1, the schedule produced by local scheduling is given in Figure 3, under the assumption that the scheduler takes one second to submit each job to a worker node.
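This dispatch loop can be sketched as follows, under the paper's assumptions (one job dispatched per second, jobs placed on free nodes). The class name and the earliest-free-node tie-break are our own simplifications:

```java
// First-come-first-serve dispatch: job j is submitted at second j and
// placed on the node that becomes free earliest.
class LocalScheduler {
    static int totalWaitTimeSec(int[] runtimesSec, int numNodes) {
        int[] freeAt = new int[numNodes];  // time node i next becomes free
        int wait = 0;
        for (int j = 0; j < runtimesSec.length; j++) {
            int submit = j;  // one job dispatched per second
            int best = 0;
            for (int n = 1; n < numNodes; n++) {
                if (freeAt[n] < freeAt[best]) best = n;  // earliest-free node
            }
            int start = Math.max(submit, freeAt[best]);
            wait += start - submit;           // time the job sat in the queue
            freeAt[best] = start + runtimesSec[j];
        }
        return wait;
    }
}
```

With two nodes and three 5-second jobs, for example, the third job must wait 3 seconds for a node to free up.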
Figure 3. Local scheduling for sample workload
Figure 4. Explicit coscheduling for sample workload
4.3. Extended explicit coscheduling
This technique is developed specifically for scheduling parallel jobs. Each parallel job is split into a number of processes, and every process is assigned to a processor at the same time. After the completion of one parallel job, the next parallel job enters the system for execution. Here we consider a system with 123 compute nodes. A slight modification is made to the algorithm for this work, and the modified algorithm is called extended explicit coscheduling: instead of scheduling a single parallel job at a time, the system is assumed to schedule n jobs at once whenever it can accommodate n jobs. For the sample workload of Table 1, the schedule produced by explicit coscheduling is given in Figure 4 and that produced by extended explicit coscheduling in Figure 5. With plain explicit coscheduling, many time slots are left idle (refer Figure 4); hence the extension, in which fewer time slots are idle than with plain explicit coscheduling (refer Figure 5).
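The gain from the extension can be illustrated by counting scheduling passes: plain explicit coscheduling uses one pass per parallel job, while the extended version packs whole jobs together as long as they fit on the nodes. A sketch under our own simplifying assumptions (greedy packing in arrival order; `widths` holds the process count of each parallel job):

```java
// Number of scheduling passes when whole parallel jobs are packed
// greedily, in arrival order, onto a fixed pool of nodes.
class ExtendedExplicit {
    static int passes(int[] widths, int nodes) {
        int passes = 0;
        int used = nodes;  // forces a new pass for the first job
        for (int w : widths) {
            if (used + w > nodes) {  // job does not fit: open a new pass
                passes++;
                used = 0;
            }
            used += w;
        }
        return passes;
    }
}
```

Jobs of widths 2, 2 and 3 on 4 nodes need only 2 passes this way, versus 3 passes when each job runs alone as in plain explicit coscheduling.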
Figure 5. Extended explicit coscheduling for sample workload
4.4. Extended implicit coscheduling
This technique is also developed specifically for scheduling parallel jobs. Each parallel job is split into a number of processes, and every process is assigned to a processor, but with a small time difference between them. Here too a modification is carried out for this work: instead of scheduling a single parallel job, the system can handle n jobs whenever n jobs can be accommodated at a time. The modified algorithm is called extended implicit coscheduling. For the sample workload of Table 1, the schedule produced by implicit coscheduling is given in Figure 6 and that produced by extended implicit coscheduling in Figure 7. With plain implicit coscheduling, many time slots are left idle (refer Figure 6); hence the extension, in which fewer time slots are idle than with plain implicit coscheduling (refer Figure 7).
Figure 6. Implicit coscheduling for sample workload
Figure 7. Extended implicit coscheduling for sample workload
4.5. Communication aware scheduling
This technique is developed to handle different kinds of workloads: serial jobs, parallel jobs, mixed jobs and dynamic jobs [15]. For communication aware scheduling, the schedule for the sample workload of Table 1 is given in Figure 8.
Figure 8. Communication aware scheduling for sample workload
According to the nature of the scheduling algorithms, their applicability to the various workloads is summarized in Table 2: 'A' indicates that the scheduling algorithm is applicable to the specified workload, and 'NA' that it is not.
Table 2. Scheduling algorithms usage

Types of Workloads   LS   ECS   ICS   CAS
Serial               A    NA    NA    A
Parallel             A    A     A     A
Mixed                A    NA    NA    A
Dynamic              A    NA    NA    A
5. EXPERIMENTAL RESULT
The performance of communication aware scheduling is evaluated by comparison with local scheduling, extended explicit coscheduling and extended implicit coscheduling, based on the wait time of jobs and the idle time of nodes. The experiments are organized by the type of jobs considered.
5.1. Scheduling Serial Jobs
Here the first 300 serial jobs from the Grid5000 workload are considered, and the scheduler is filled with these 300 jobs. At the end of scheduling, the average idle time of nodes and the average wait time of jobs are calculated for communication aware scheduling and local scheduling; the results are shown in Figure 9 and Figure 10 respectively.
Figure 9. Average idle time of nodes at the end of scheduling serial jobs
Figure 10. Average wait time of jobs at the end of scheduling serial jobs
5.2. Scheduling Parallel Jobs
Here the first 300 parallel jobs from Grid5000 are considered, and the scheduler is filled with these 300 parallel jobs. At the end of scheduling, the average idle time of nodes and the average wait time of jobs are calculated for communication aware scheduling, local scheduling, extended explicit coscheduling and extended implicit coscheduling; the results are shown in Figure 11 and Figure 12 respectively.
Figure 11. Average idle time of nodes at the end of scheduling parallel jobs
Figure 12. Average wait time of jobs at the end of scheduling parallel jobs
5.3. Scheduling Mixed Jobs
Here an equal mixture of 150 serial jobs and 150 parallel jobs is considered. The scheduler is designed so that it schedules the serial and parallel jobs alternately, in equivalent groups of 25. At the end of scheduling, the average wait time of jobs and the average idle time of nodes are calculated for communication aware scheduling and local scheduling; the results are shown in Figure 13 and Figure 14 respectively.
Figure 13. Average wait time of jobs at the end of scheduling mixed jobs
Figure 14. Average idle time of nodes at the end of scheduling mixed jobs
5.4. Scheduling dynamic jobs
If the user is not sure how many serial or parallel jobs will arrive, the incoming jobs can be classified under this category. The scheduler can handle any job combination: only serial, only parallel, an equal mixture, or any mixture of serial and parallel job types. For this kind of workload, the scheduler is filled with 300 random jobs from the Grid5000 workload. At the end of scheduling, the average wait time of jobs and the average idle time of nodes are calculated for communication aware scheduling and local scheduling; the results are shown in Figure 15 and Figure 16 respectively.
Figure 15. Average wait time of jobs at the end of scheduling dynamic jobs
Figure 16. Average idle time of nodes at the end of scheduling dynamic jobs
We can now compare the performance by job type. For serial jobs, Figure 9 shows that the average idle time of nodes is 100 seconds for communication aware scheduling, and local scheduling obtains the same value. From Figure 10, the average wait time of jobs is 118 seconds for communication aware scheduling, and again local scheduling obtains the same value.
For parallel jobs, Figure 11 shows that the average idle time of nodes is 88 seconds for communication aware scheduling, against 6844 seconds for local scheduling, 13212 seconds for extended explicit coscheduling and 7502 seconds for extended implicit coscheduling. From Figure 12, the average wait time of jobs is 30 seconds for communication aware scheduling, against 251 seconds for local scheduling, 191 seconds for extended explicit coscheduling and 146 seconds for extended implicit coscheduling.
For mixed jobs, Figure 14 shows that the average idle time of nodes is 88 seconds for communication aware scheduling, against 476 seconds for local scheduling. From Figure 13, the average wait time of jobs is 1798 seconds for communication aware scheduling, against 125 seconds for local scheduling.
For dynamic jobs, Figure 16 shows that the average idle time of nodes is 152 seconds for communication aware scheduling and 156 seconds for local scheduling. From Figure 15, the average wait time of jobs is 0 seconds for communication aware scheduling and 3 seconds for local scheduling.
6. CONCLUSION
Communication aware scheduling performs equally to local scheduling for serial jobs. For parallel jobs, communication aware scheduling performs better than local scheduling, extended explicit coscheduling and extended implicit coscheduling. For mixed jobs, communication aware scheduling gives a higher wait time than local scheduling, whereas for dynamic jobs it performs better than local scheduling. It is therefore concluded that communication aware scheduling is well suited for parallel job scheduling compared to local scheduling, extended explicit coscheduling and extended implicit coscheduling, and that it is able to handle different kinds of job types.
REFERENCES
[1] Saurabh Agarwal, Gyu Sang Choi, Chita R. Das, “Co-ordinated Coscheduling in Time-Sharing
Clusters through a Generic Framework”, Proceedings of the IEEE International Conference on
Cluster Computing CLUSTER, pp.84-91, 2003.
[2] Peter Strazdins, John Uhlmann, “A Comparison of Local and Gang Scheduling on a Beowulf
Cluster”, In Proceedings of IEEE International conference on Cluster computing, pp.55-62, 2004.
[3] Eitan Frachtenberg, Dror G. Feitelson, Fabrizio Petrini, Juan Fernandez, "Adaptive parallel job
scheduling with flexible coscheduling", IEEE Transactions on Parallel and Distributed Systems, Vol.
16, pp. 1066-1077, 2005.
[4] Dan Tsafrir, Dror G.Feitelson, “Instability in Parallel Job Scheduling Simulation: The role of
Workload Flurries”, IEEE, 2006.
[5] L.Schley, "Prospects of Co-allocation Strategies for a LightWeight Middleware in Grid Computing",
Proceedings of the 20th International Conference on Parallel and Distributed Computing and Systems
(PDCS), ACTA Press, pp. 198-205, 2008.
[6] Georgios L.Stavrinides, Helen D.Karatza, "Performance Evaluation of Gang scheduling in Distributed
Real-Time Systems with Possible Software faults", IEEE SPECTS, pp. 1-7, 2008.
[7] Richard A.Dutton, Weizhen Mao, Jie Chen, William Watson III, “Parallel Job Scheduling with
Overhead: A Benchmark Study”, IEEE, 2008.
[8] Andrew Dolgert, Lawrence Gibbons, Christopher D.Jones, Valentin Kuznetsov, Mirek Riedewald,
Daniel Riley, Gregory J.Sharp, Peter Wittich, “Provenance in High-Energy Physics Workflows”,
Computing in Science and Engineering (CiSE), Issue 3, pp. 22-29, May/June, 2008.
[9] Sudha S.V., Thanushkodi K, “An Approach for Parallel Job Scheduling using Nimble Algorithm”,
IEEE International Conference on Computing, Communication and Networking, 2008.
[10] Zafeirios C.Papazachos, Helen D.Karatza, "Performance Evaluation of Gang Scheduling in a
Two-Cluster System with Migrations", IEEE International Parallel and Distributed Processing Symposium
(IPDPS), pp. 1-8, 2009.
[11] Edi Shmueli, Dror G.Feitelson, "On Simulation and Design of Parallel-Systems Schedulers: Are We
Doing the Right Thing?", IEEE Transactions on Parallel and Distributed Systems, Vol. 20, No. 7,
pp. 983-996, July 2009.
[12] Adan Hirales, Jose-Luis Gonzalez-Garcia, Andrei Tchemykh, “Workload Generation for Trace based
Grid Simulations”, First International SuperComputing Conference, ISUM, March, 2010.
[13] Grid’5000 team, Dr.Franck Cappello, http://gwa.ewi.tudelf.nl/pmwiki/, 2010.
[14] M.Balajee, B.Suresh, M.Suneetha, V.Vasudha Rani, G.Veerraju, “Preemptive Job Scheduling with
Priorities and Starvation cum Congestion Avoidance in Clusters”, IEEE International Conference on
Machine Learning and Computing, pp.255-259, 2010.
[15] A.Neela madheswari, R.S.D.Wahida banu, “Static and dynamic job scheduling using communication
aware policy in cluster computing”, International Journal of Computers and Electrical Engineering,
Vol 39, Issue 2, pp. 690-696, 2013.
Authors
A.Neela madheswari received her B.E. in 2000 and her M.E. in 2006, both in Computer
Science and Engineering. She is pursuing her Ph.D. in Computer Science and Engineering
at Anna University, Tamil Nadu, India. Her research interests include parallel
computing and Web technologies.
R.S.D.Wahida Banu received her B.E. in 1981 and her M.E. in 1985. She completed her
Ph.D. in 1998 and is currently working as Principal, Government College of Engineering,
Salem, Tamil Nadu, India. Her research interests include distributed and parallel
computing, networks and communications, and biomedical engineering.