This summary provides the key details from the document in three sentences:
The document proposes a genetic algorithm approach combined with a local search algorithm inspired by binary gravitational attraction to solve scheduling problems in grid computing. The algorithm aims to minimize task completion time and costs by optimizing resource selection and load balancing. Experimental results showed that the proposed algorithm achieved better optimization of time and costs and selection of resources compared to other algorithms.
Job Scheduling on the Grid Environment using Max-Min Firefly Algorithm (Editor IJCATR)
Grid computing is widely seen as the next generation of distributed systems; its goal is to create a powerful, large, autonomous virtual computer out of countless heterogeneous resources, with the purpose of sharing them. Scheduling is one of the main steps in exploiting the capabilities of emerging computing systems such as the grid. Because of resource heterogeneity, job scheduling in computational grids is known to be NP-complete. Grid resources belong to different management domains, each applying its own management policies. Since the grid is heterogeneous and dynamic by nature, techniques used in traditional systems cannot be applied to grid scheduling, so new methods must be found. This paper proposes a new algorithm that combines the firefly algorithm with the Max-Min algorithm for scheduling jobs on the grid. The firefly algorithm is a recent swarm-based technique inspired by the social behavior of fireflies in nature: fireflies move through the problem's search space to find optimal or near-optimal solutions. The goal of this paper is to minimize the makespan and flowtime of job completion simultaneously. Experiments and simulation results show that the proposed method is more efficient than the algorithms it is compared against.
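The Max-Min half of such a hybrid is easy to pin down. Below is a minimal sketch of the generic Max-Min heuristic, not the paper's exact combination with fireflies; `etc` is an assumed expected-time-to-compute matrix:

```python
def max_min_schedule(etc, n_machines):
    """Max-Min heuristic: repeatedly take the unscheduled job whose best
    (minimum) completion time is LARGEST and assign it to the machine that
    achieves that minimum; big jobs are placed first so they do not end up
    stranded late in the schedule. etc[j][m] = time of job j on machine m."""
    ready = [0.0] * n_machines            # time at which each machine frees up
    unscheduled = set(range(len(etc)))
    assignment = {}
    while unscheduled:
        # best completion time (and machine) for every remaining job
        best = {j: min((ready[m] + etc[j][m], m) for m in range(n_machines))
                for j in unscheduled}
        job = max(unscheduled, key=lambda j: best[j][0])   # Max over the Mins
        finish, machine = best[job]
        assignment[job] = machine
        ready[machine] = finish
        unscheduled.remove(job)
    return assignment, max(ready)
```

The firefly component would then perturb such a schedule in search of better makespan/flowtime trade-offs.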
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... (Editor IJCATR)
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to resources; its aim is to create a supercomputer from free resources. One of the challenges in grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling in the grid is a non-deterministic problem, deterministic algorithms cannot be used to improve it. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the problem of independent task scheduling in a grid environment, with the aim of reducing makespan and energy. Experimental results compare ICA with other algorithms and show that ICA achieves a shorter makespan and lower energy than the others. Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
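The two objectives named here can be computed directly from a candidate schedule. A sketch under an assumed busy-time-times-power energy model (the paper does not state its exact model):

```python
def makespan_energy(assignment, etc, power):
    """The two objectives such a scheduler trades off.
    assignment[j] = machine chosen for job j; etc[j][m] = run time of
    job j on machine m; power[m] = assumed active power draw of machine m."""
    busy = [0.0] * len(power)
    for j, m in enumerate(assignment):
        busy[m] += etc[j][m]
    makespan = max(busy)                                   # latest finish time
    energy = sum(b * p for b, p in zip(busy, power))       # busy time x power
    return makespan, energy
```

Any metaheuristic (ICA included) can then rank candidate assignments by these two values.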
Multiprocessor scheduling of dependent tasks to minimize makespan and reliabi... (ijfcstjournal)
Algorithms developed for scheduling applications on heterogeneous multiprocessor systems typically focus on a single objective such as execution time, cost, or total data transmission time. However, if more than one objective is considered (e.g. execution cost and time, which may conflict), the problem becomes more challenging. This project develops a multiobjective scheduling algorithm that uses evolutionary techniques to schedule a set of dependent tasks on the available resources of a multiprocessor environment while minimizing makespan and reliability cost. A Non-dominated Sorting Genetic Algorithm-II (NSGA-II) procedure has been developed to obtain the Pareto-optimal solutions. NSGA-II is an elitist evolutionary algorithm: it carries the parental solutions unchanged into every iteration, which prevents the loss of Pareto-optimal solutions, and it uses the crowding-distance concept to maintain diversity among the solutions.
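The crowding-distance concept mentioned above has a standard form. A sketch of the textbook NSGA-II computation for a single non-dominated front:

```python
def crowding_distance(objs):
    """NSGA-II crowding distance for one non-dominated front.
    objs is a list of objective vectors, e.g. [makespan, reliability_cost].
    Boundary solutions get infinite distance so they are always kept;
    interior solutions accumulate the normalised gap between neighbours."""
    n = len(objs)
    if n == 0:
        return []
    dist = [0.0] * n
    for k in range(len(objs[0])):                       # per objective
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                                    # degenerate objective
        for r in range(1, n - 1):
            i = order[r]
            dist[i] += (objs[order[r + 1]][k] - objs[order[r - 1]][k]) / (hi - lo)
    return dist
```

Selection then prefers larger crowding distance within the same front, spreading solutions along the Pareto frontier.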
A vm scheduling algorithm for reducing power consumption of a virtual machine... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
A vm scheduling algorithm for reducing power consumption of a virtual machine... (eSAT Journals)
Abstract: This paper concentrates on methods that improve the processing time and CPU utilization time of a virtual machine. As the number of users increases, performance may degrade significantly if tasks are not scheduled in a proper order. In this paper the performance of two existing algorithms, the DSP (Dependency Structural Prioritization) algorithm and the credit scheduling algorithm, is analyzed and compared. A single virtual machine's processing time and CPU utilization time are measured, and satisfactory results are achieved in the comparison. This study concludes that the DSP algorithm performs more efficiently than the credit scheduling algorithm. Keywords: virtual machine, DSP algorithm, credit scheduling algorithm.
A HYBRID CLUSTERING ALGORITHM FOR DATA MINING (cscpconf)
Data clustering is the process of arranging similar data into groups. A clustering algorithm partitions a data set into several groups such that similarity within a group is higher than similarity between groups. This paper describes a hybrid clustering algorithm based on K-means and K-harmonic means (KHM), focused on fast and accurate clustering. The proposed algorithm is tested on five different datasets, and its performance is compared with the traditional K-means and KHM algorithms; the results obtained are considerably better than those of either traditional algorithm.
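The KHM half of the hybrid optimizes a harmonic-mean objective rather than the K-means nearest-centre objective. A sketch of that objective (textbook KHM with an assumed epsilon guard, not the paper's exact hybrid):

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def khm_objective(points, centers, p=2):
    """K-harmonic-means objective: for each point, K divided by the sum of
    inverse p-th-power distances to ALL centres, summed over points. Every
    centre influences every point, which makes KHM less sensitive to
    initialisation than K-means. The 1e-12 term guards division by zero."""
    k = len(centers)
    total = 0.0
    for x in points:
        inv = sum(1.0 / (dist2(x, c) ** (p / 2) + 1e-12) for c in centers)
        total += k / inv
    return total
```

A hybrid can, for instance, run K-means steps but score candidate centre sets with this smoother objective.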
Presenting an Algorithm for Tasks Scheduling in Grid Environment along with I... (Editor IJCATR)
Nowadays we face huge volumes of data: with the expansion of computer technology and detectors, terabytes of data are produced. In response to this demand, grid computing is considered one of the most important research fields. Grid technology and concepts were introduced to enable resource sharing between scientific units, the purpose being to use the resources of a grid environment to solve complex problems.
In this paper, a new algorithm based on the Mamdani fuzzy system is proposed for task scheduling in a computing grid. The Mamdani fuzzy approach measures criteria using membership functions; the criterion considered here is response time. The results of the proposed algorithm, implemented on grid systems, show its superiority in terms of standard validation criteria for scheduling algorithms, such as task completion time, and efficiency increases considerably.
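Membership functions are the building block of any Mamdani system. A sketch with triangular memberships over a hypothetical response-time input; the set names and boundaries are invented for illustration, since the paper's actual functions are not given:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def response_time_sets(rt_ms):
    """Hypothetical fuzzy sets for a response-time input in milliseconds."""
    return {
        "low":    tri(rt_ms, -1, 0, 50),
        "medium": tri(rt_ms, 25, 75, 125),
        "high":   tri(rt_ms, 100, 150, 301),
    }
```

A Mamdani scheduler would feed such memberships through if-then rules and defuzzify the result into a task priority.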
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... (Editor IJCATR)
Scheduling jobs onto resources in grid computing is complicated by the distributed and heterogeneous nature of the resources. The purpose of job scheduling in a grid environment is to achieve high system throughput and to minimize the execution time of applications. The complexity of the scheduling problem grows with the size of the grid and becomes very difficult to solve effectively, so finding good and efficient methods for grid scheduling is an active area of research. In this paper, a job scheduling algorithm is proposed to assign jobs to the available resources in a grid environment. The proposed algorithm is based on Ant Colony Optimization (ACO) combined with Suffrage, one of the best-known scheduling heuristics: the result of Suffrage is used inside the proposed ACO algorithm. The main contribution of this work is minimizing the makespan of a given set of jobs. Experimental results show that the proposed algorithm achieves significant performance in a grid environment.
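The Suffrage heuristic itself is well defined: a job's suffrage is how much it would lose by being denied its best machine. A generic sketch of the standalone heuristic, not its combination with ACO:

```python
def suffrage_schedule(etc):
    """Suffrage heuristic: the job that would 'suffer' most if denied its
    best machine (largest gap between best and second-best completion time)
    is scheduled first, on its best machine. etc[j][m] = time of job j on m."""
    n_jobs, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines
    unscheduled = set(range(n_jobs))
    assignment = {}
    while unscheduled:
        pick, pick_suffrage, pick_machine, pick_ct = None, -1.0, None, None
        for j in unscheduled:
            cts = sorted((ready[m] + etc[j][m], m) for m in range(n_machines))
            suffrage = cts[1][0] - cts[0][0] if n_machines > 1 else 0.0
            if suffrage > pick_suffrage:
                pick, pick_suffrage = j, suffrage
                pick_ct, pick_machine = cts[0]
        assignment[pick] = pick_machine
        ready[pick_machine] = pick_ct
        unscheduled.remove(pick)
    return assignment, max(ready)
```

In the hybrid described above, such Suffrage assignments would seed or bias the ants' pheromone trails.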
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, and educators to publish their original research results, to exchange new ideas, and to disseminate information on innovative designs, engineering experiences, and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Proposing a Scheduling Algorithm to Balance the Time and Energy Using an Impe... (Editor IJCATR)
Computational grids have become an appealing research area because they solve compute-intensive problems within the scientific community and in industry. A grid's computational power is aggregated from a huge set of distributed heterogeneous workers, so it is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid schedulers suffer from the haste problem: the scheduler's inability to successfully allocate all input tasks. As a result, some tasks fail to complete execution because they are allocated to unsuitable workers, while others never start because suitable workers have already been allocated to other peers. This paper presents an imperialist competition algorithm (ICA) method for solving grid scheduling problems, with the objective of minimizing the grid's makespan and energy. Simulation results show that the grid scheduling problem can be solved efficiently by the proposed method.
Load shedding in power system using the AHP algorithm and Artificial Neural N... (IJAEMSJORNAL)
This paper proposes a load shedding method that considers the load importance factor, primary frequency adjustment, secondary frequency adjustment, and a neural network. Taking the primary and secondary frequency control processes into account helps reduce the amount of load shedding power and restores the system frequency to the permissible range. The shed power at each load bus is distributed based on the load importance factor, and a neural network is applied to select load shedding strategies in the power system at different load levels. Experimental and simulation results on the IEEE 37-bus system show that the frequency can be restored to the allowed range with less damage than the traditional under-frequency load shedding (UFLS) relay method.
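As one concrete reading of importance-factor-based distribution: a minimal sketch that sheds inversely proportionally to importance, so less important buses shed more. The weighting formula is an assumption for illustration; the paper's actual formula may differ:

```python
def distribute_shedding(total_shed_mw, importance):
    """Split the required shed power across load buses in inverse proportion
    to each bus's importance factor (assumed scheme): a bus with twice the
    importance sheds half as much. importance[i] > 0 for every bus."""
    inv = [1.0 / w for w in importance]
    s = sum(inv)
    return [total_shed_mw * v / s for v in inv]
```

The neural-network stage described above would then pick which such strategy to apply at a given load level.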
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETS (csandit)
The ability to automatically mine and extract useful information from large datasets has been a common concern for organizations over the last few decades. Data on the internet is increasing rapidly, and with it the capacity to collect and store very large datasets. Existing clustering algorithms are not always efficient and accurate for large datasets, and the development of accurate and fast clustering algorithms for very large-scale datasets remains a challenge. In this paper, various algorithms and techniques, especially an approach using a non-smooth optimization formulation of the clustering problem, are proposed for solving the minimum sum-of-squares clustering problem in very large datasets. This research also develops an accurate and real-time L2-DC algorithm based on the incremental approach to solve the minimum sum-of-squares clustering problem.
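The minimum sum-of-squares clustering objective the paper targets is, in its basic discrete form:

```python
def mssc_objective(points, centers):
    """Minimum sum-of-squares clustering objective: each point contributes
    its squared Euclidean distance to its NEAREST centre. The min over
    centres is what makes the objective non-smooth in the centre positions."""
    return sum(
        min(sum((x - c) ** 2 for x, c in zip(p, ctr)) for ctr in centers)
        for p in points
    )
```

Non-smooth optimization formulations work directly with this min-of-quadratics structure instead of the alternating K-means steps.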
DGBSA: A BATCH JOB SCHEDULING ALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ... (IJCSEA Journal)
This paper provides a scheduler for batch jobs that uses a GA together with a threshold detector. The proposed algorithm handles batches of independent jobs with a new technique so that their schedule can be optimized: a threshold detector selects jobs, and processing resources then process the selected batch jobs by priority. The hierarchy of tasks within each batch is determined using the DGBSA algorithm. Building on previous work, specific parameters are added to the fitness functions of earlier algorithms to develop the optimized fitness function used in the proposed algorithm. In the reported assessment, DGBSA outperforms similar algorithms: the parameters used in the proposed algorithm reduce the total wasted time compared with previous algorithms, and the algorithm improves on earlier approaches to batch processing.
An Effective PSO-inspired Algorithm for Workflow Scheduling (IJECEIAES)
The Cloud is a computing platform that provides on-demand access to a shared pool of configurable resources, such as networks, servers, and storage, that can be rapidly provisioned and released with minimal management effort from clients. At its core, Cloud computing focuses on maximizing the effectiveness of the shared resources. Workflow scheduling is therefore one of the challenges the Cloud must tackle, especially when a large number of tasks are executed on geographically distributed servers, and an effective scheduling algorithm is needed to minimize task completion time (makespan). Although workflow scheduling has been the focus of many researchers, only a handful of efficient solutions have been proposed for Cloud computing. In this paper, we propose LPSO, a novel algorithm for the workflow scheduling problem based on the Particle Swarm Optimization method. Our proposed algorithm not only ensures fast convergence but also prevents getting trapped in local extrema. We ran realistic scenarios using CloudSim and found that LPSO is superior to previously proposed algorithms, with a negligible deviation between the solution found by LPSO and the optimal solution.
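LPSO's specific modifications are not detailed here, but the PSO core it builds on is standard. A textbook sketch of the inertia-weighted velocity update, demonstrated on a continuous test function rather than a workflow encoding:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=150,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Textbook PSO (not the paper's LPSO variant): each particle's velocity
    blends inertia, a pull toward its personal best, and a pull toward the
    swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # and global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Workflow-scheduling PSO variants keep this update but map particle positions onto task-to-server assignments.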
Extended PSO algorithm for improvement problems of the k-means clustering algorithm (IJMIT JOURNAL)
Clustering is an unsupervised process and one of the most common data mining techniques. Its purpose is to group similar data together, so that instances within a cluster are as similar as possible to each other and as different as possible from instances in other clusters. In this paper we focus on partitional k-means clustering, which, thanks to its ease of implementation and high speed on large data sets, remains very popular after 30 years of clustering-algorithm development. To address the problem of k-means getting stuck in local optima, we propose an extended PSO algorithm named ECPSO. Our new algorithm is able to escape local optima and produces the problem's optimal answer with high probability. The results show that the proposed algorithm performs better than other clustering algorithms, especially on two indices: clustering accuracy and clustering quality.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Effects of The Different Migration Periods on Parallel Multi-Swarm PSO (csandit)
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem: multiple processors work concurrently, and the program is divided into different tasks to be solved simultaneously. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. Particle swarm optimization (PSO) is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm, based on PSO, has multiple swarms organized in a master-slave paradigm that work cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used well-known benchmark functions in the experiments and analysed the performance of PCLPSO under different migration periods.
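Migration is the mechanism the migration period controls. A minimal ring-topology sketch (a generic scheme for illustration, not necessarily PCLPSO's master-slave exchange):

```python
def migrate_ring(swarms, fitness):
    """Ring-topology migration: every `migration period` iterations, each
    swarm sends a copy of its best individual to the next swarm in the ring,
    which replaces its own worst individual if the migrant is better
    (minimisation). A short period spreads good solutions quickly but costs
    communication and diversity; a long period keeps swarms independent."""
    bests = [min(s, key=fitness) for s in swarms]   # snapshot senders first
    for i, s in enumerate(swarms):
        incoming = bests[(i - 1) % len(swarms)]     # migrant from previous swarm
        worst = max(range(len(s)), key=lambda j: fitness(s[j]))
        if fitness(incoming) < fitness(s[worst]):
            s[worst] = incoming
    return swarms
```

Tuning how often this routine runs is exactly the migration-period trade-off the study above measures.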
An enhanced adaptive scoring job scheduling algorithm with replication strate... (eSAT Publishing House)
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV... (IJCSEA Journal)
A hybrid learning automata-genetic algorithm (HLGA) is proposed to solve the QoS routing optimization problem of next-generation networks, an NP-complete problem. The algorithm combines the advantages of learning automata (LA) and genetic algorithms (GA): it first uses the good global search capability of LA to generate the initial population needed by GA, and then uses GA, with new crossover and mutation operators, to improve the quality of service (QoS) and obtain the optimized tree. In the proposed algorithm, the connectivity matrix of edges is used for genotype representation, and novel heuristics are proposed for mutation, crossover, and the creation of random individuals. We evaluate the performance and efficiency of the proposed HLGA-based algorithm by simulation, in comparison with existing heuristic and GA-based algorithms. Simulation results demonstrate that the proposed algorithm not only calculates quickly and accurately but also improves the efficiency of QoS routing in next-generation networks, outperforming the previous algorithms in the literature.
A HYBRID CLUSTERING ALGORITHM FOR DATA MININGcscpconf
Data clustering is a process of arranging similar data into groups. A clustering algorithm
partitions a data set into several groups such that the similarity within a group is better than
among groups. In this paper a hybrid clustering algorithm based on K-mean and K-harmonic
mean (KHM) is described. The proposed algorithm is tested on five different datasets. The research is focused on fast and accurate clustering. Its performance is compared with the traditional K-means & KHM algorithm. The result obtained from proposed hybrid algorithm is much better than the traditional K-mean & KHM algorithm
Presenting an Algorithm for Tasks Scheduling in Grid Environment along with I...Editor IJCATR
Nowadays, human faces with huge data. With regard to expansion of computer technology and detectors, some terabytes are
produced. In order to response to this demand, grid computing is considered as one of the most important research fields. Grid technology
and concepts were used to provide resource subscription between scientific units. The purpose was using resources of grid environment
to solve complex problems.
In this paper, a new algorithm based on Mamdani fuzzy system has been proposed for tasks scheduling in computing grid. Mamdani
fuzzy algorithm is a new technique measuring criteria by using membership functions. In this paper, our considered criterion is response
time. The results of proposed algorithm implemented on grid systems indicate priority of the proposed method in terms of validation
criteria of scheduling algorithms like ending time of the task and etc. Also, efficiency increases considerably.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat...Editor IJCATR
Scheduling jobs to resources in grid computing is complicated due to the distributed and heterogeneous nature of the resources.
The purpose of job scheduling in grid environment is to achieve high system throughput and minimize the execution time of applications.
The complexity of scheduling problem increases with the size of the grid and becomes highly difficult to solve effectively.
To obtain a good and efficient method to solve scheduling problems in grid, a new area of research is implemented. In this paper, a job
scheduling algorithm is proposed to assign jobs to available resources in grid environment. The proposed algorithm is based on Ant
Colony Optimization (ACO) algorithm. This algorithm is combined with one of the best scheduling algorithm, Suffrage. This paper uses
the result of Suffrage in proposed ACO algorithm. The main contribution of this work is to minimize the makespan of a given set of
jobs. The experimental results show that the proposed algorithm can lead to significant performance in grid environment.
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
Proposing a Scheduling Algorithm to Balance the Time and Energy Using an Impe...Editor IJCATR
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific
community and in industry. A grid computational power is aggregated from a huge set of distributed heterogeneous workers; hence, it
is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid
schedulers suffer from the haste problem, which is the schedule inability to successfully allocate all input tasks. Accordingly, some tasks
fail to complete execution as they are allocated to unsuitable workers. Others may not start execution as suitable workers are previously
allocated to other peers. This paper presents an imperialist competition algorithm (ICA) method to solve the grid scheduling problems.
The objective is to minimize the makespan and energy of the grid. Simulation results show that the grid scheduling problem can be
solved efficiently by the proposed method
Load shedding in power system using the AHP algorithm and Artificial Neural N...IJAEMSJORNAL
This paper proposes the load shedding method based on considering the load importance factor, primary frequency adjustment, secondary frequency adjustment and neuron network. Consideration the process of primary frequency control, secondary frequency control helps to reduce the amount of load shedding power and restore the system’s frequency to the permissible range. The amount of shedding power of each load bus is distributed based on the load importance factor. Neuron network is applied to distribute load shedding strategies in the power system at different load levels. The experimental and simulated results on the IEEE 37- bus system present the frequency can restore to allowed range and reduce the damage compared to the traditional load shedding method using under frequency relay- UFLS.
FAST ALGORITHMS FOR UNSUPERVISED LEARNING IN LARGE DATA SETScsandit
The ability to mine and extract useful information automatically, from large datasets, is a
common concern for organizations (having large datasets), over the last few decades. Over the
internet, data is vastly increasing gradually and consequently the capacity to collect and store
very large data is significantly increasing.
Existing clustering algorithms are not always efficient and accurate in solving clustering
problems for large datasets.
However, the development of accurate and fast data classification algorithms for very large
scale datasets is still a challenge. In this paper, various algorithms and techniques especially,
approach using non-smooth optimization formulation of the clustering problem, are proposed
for solving the minimum sum-of-squares clustering problems in very large datasets. This
research also develops accurate and real time L2-DC algorithm based with the incremental
approach to solve the minimum
DGBSA : A BATCH JOB SCHEDULINGALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ...IJCSEA Journal
In this paper , we will provide a scheduler on batch jobs with GA regard to the threshold detector. In The algorithm proposed in this paper, we will provide the batch independent jobs with a new technique ,so we can optimize the schedule them. To do this, we use a threshold detector then among the selected jobs, processing resources can process batch jobs with priority. Also hierarchy of tasks in each batch, will be determined with using DGBSA algorithm. Now , with the regard to the works done by previous ,we can provide an algorithm that by adding specific parameters to fitness function of the previous algorithms ,develop a optimum fitness function that in the proposed algorithm has been used. According to assessment done on DGBSA algorithm, in compare with the similar algorithms, it has more performance. The effective parameters that used in the proposed algorithm can reduce the total wasting time in compare with previous algorithms. Also this algorithm can improve the previous problems in batch processing with a new technique.
An Effective PSO-inspired Algorithm for Workflow Scheduling IJECEIAES
The Cloud is a computing platform that provides on-demand access to a shared pool of configurable resources such as networks, servers and storage that can be rapidly provisioned and released with minimal management effort from clients. At its core, Cloud computing focuses on maximizing the effectiveness of the shared resources. Therefore, workflow scheduling is one of the challenges that the Cloud must tackle especially if a large number of tasks are executed on geographically distributed servers. This entails the need to adopt an effective scheduling algorithm in order to minimize task completion time (makespan). Although workflow scheduling has been the focus of many researchers, a handful efficient solutions have been proposed for Cloud computing. In this paper, we propose the LPSO, a novel algorithm for workflow scheduling problem that is based on the Particle Swarm Optimization method. Our proposed algorithm not only ensures a fast convergence but also prevents getting trapped in local extrema. We ran realistic scenarios using CloudSim and found that LPSO is superior to previously proposed algorithms and noticed that the deviation between the solution found by LPSO and the optimal solution is negligible.
Extended pso algorithm for improvement problems k means clustering algorithmIJMIT JOURNAL
Clustering is an unsupervised process and one of the most common data mining techniques. The purpose of clustering is to group similar data together, so that instances within a cluster are as similar as possible to each other and as different as possible from instances in other clusters. In this paper we focus on partitional k-means clustering, which, thanks to its ease of implementation and high-speed performance on large data sets, is still very popular after 30 years of clustering algorithm development. To address the problem of k-means getting stuck in local optima, we propose an extended PSO algorithm named ECPSO. Our new algorithm is able to escape local optima and produces the problem's optimal answer with high probability. The experimental results show that the proposed algorithm has better performance than other clustering algorithms, especially on two indices: the accuracy of clustering and the quality of clustering.
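The ECPSO idea above, using a particle swarm to steer k-means centroids out of local optima, can be sketched roughly as follows. This is a generic PSO-over-centroids illustration, not the paper's actual ECPSO: the function names, parameters, and the 1-D data are invented for the example.

```python
import random

def kmeans_sse(points, centroids):
    # Sum of squared distances from each point to its nearest centroid
    # (the k-means objective the swarm tries to minimize).
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def pso_kmeans(points, k, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(0)
    lo, hi = min(points), max(points)
    # Each particle encodes one candidate set of k centroids (1-D data here).
    pos = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(n_particles)]
    vel = [[0.0] * k for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [kmeans_sse(points, p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(k):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = kmeans_sse(points, pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return sorted(gbest), gbest_f
```

Unlike plain k-means, a poor initial centroid guess is not fatal here: the swarm's social term pulls particles toward the best centroid set found so far.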
Effects of The Different Migration Periods on Parallel Multi-Swarm PSO csandit
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem. There are multiple processors that will work concurrently and the program is divided into different tasks to be simultaneously solved. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm based on PSO has multiple swarms based on the master-slave paradigm and works cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the experiments and analysed the performance of PCLPSO using different migration periods.
An enhanced adaptive scoring job scheduling algorithm with replication strate...eSAT Publishing House
USING LEARNING AUTOMATA AND GENETIC ALGORITHMS TO IMPROVE THE QUALITY OF SERV...IJCSEA Journal
A hybrid learning automata–genetic algorithm (HLGA) is proposed to solve the QoS routing optimization problem of next generation networks, which is NP-Complete. The algorithm complements the advantages of the Learning Automata algorithm (LA) and the Genetic Algorithm (GA). It first uses the good global search capability of LA to generate the initial population needed by GA, then uses GA to improve the Quality of Service (QoS) and acquire the optimized tree through new crossover and mutation operators. In the proposed algorithm, the connectivity matrix of edges is used for genotype representation. Some novel heuristics are also proposed for mutation, crossover, and creation of random individuals. We evaluate the performance and efficiency of the proposed HLGA-based algorithm in comparison with other existing heuristic and GA-based algorithms through simulation. Simulation results demonstrate that the proposed algorithm not only has fast calculation speed and high accuracy but also improves the efficiency of QoS routing in Next Generation Networks, outperforming all of the previous algorithms in the literature.
Center for informational, methodological, and psychological-pedagogical support of the activities of educational institutions in Moscow and of their staff in social-pedagogical work with children prone to deviant behavior in an educational institution setting.
How to optimize the keywords that appear in your LinkedIn profile: how the LinkedIn search algorithm works, and some factors that influence organic ranking.
Contently London Salon: 5 Steps to Building a Content Marketing Powerhousecontently
Everything you need to know to get started in content marketing—from getting the rest of your company on board to measuring and optimizing your advanced operation.
MULTIPROCESSOR SCHEDULING AND PERFORMANCE EVALUATION USING ELITIST NON DOMINA...ijcsa
Task scheduling plays an important part in the improvement of parallel and distributed systems. The problem of task scheduling has been shown to be NP-hard, and deterministic techniques are too time-consuming to solve it. Algorithms have been developed to schedule tasks in distributed environments, but they focus on a single objective; the problem becomes more complex when two objectives are considered. This paper presents a bi-objective independent task scheduling algorithm using the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) to minimize the makespan and flowtime. The algorithm generates Pareto-optimal solutions for this bi-objective task scheduling problem. NSGA-II is evaluated using a set of benchmark instances, and the experimental results show that it generates efficient optimal schedules.
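The two objectives named above, makespan and flowtime, can be computed for an independent-task schedule as below. This is a minimal sketch of the objective evaluation only, not of NSGA-II itself; the assignment encoding is illustrative.

```python
def evaluate_schedule(assignment, runtimes):
    """Makespan and flowtime of an independent-task schedule.

    assignment[i] is the machine task i runs on; runtimes[i] its runtime.
    Tasks assigned to one machine run back-to-back in index order.
    """
    finish = {}          # machine -> time its queue finishes
    flowtime = 0.0       # sum of all task completion times
    for task, machine in enumerate(assignment):
        finish[machine] = finish.get(machine, 0.0) + runtimes[task]
        flowtime += finish[machine]
    makespan = max(finish.values())  # latest machine finish time
    return makespan, flowtime
```

In a multi-objective setting such as NSGA-II, this pair would be the fitness vector of one individual; the two objectives typically conflict, which is why a Pareto front rather than a single optimum is reported.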
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements which keep on varying. This dynamic cloud environment demands the necessity of complex algorithms to resolve the trouble of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic property of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and the Shortest Job First algorithms. The Round Robin method reduces starvation, and the Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
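A toy sketch of how the two policies described above might be combined is shown below: the ready queue is admitted in Shortest-Job-First order and then served Round-Robin with a time quantum. The paper's actual hybrid may differ; the function name and quantum are hypothetical.

```python
from collections import deque

def sjf_round_robin(bursts, quantum):
    """Round Robin over an SJF-ordered ready queue.

    Returns the completion time of each task, keyed by original index.
    """
    # Shortest Job First: admit tasks to the RR queue in burst-time order.
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    remaining = dict(enumerate(bursts))
    queue = deque(order)
    t = 0
    completion = {}
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for at most one quantum
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            completion[i] = t
        else:
            queue.append(i)                # Round Robin: requeue unfinished task
    return completion
```

Short tasks finish early (lowering average waiting time, the SJF benefit), while long tasks still get regular slices (avoiding starvation, the RR benefit).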
SOLVING OPTIMAL COMPONENTS ASSIGNMENT PROBLEM FOR A MULTISTATE NETWORK USING ...ijmnct
The optimal components assignment problem subject to system reliability, total lead-time, and total cost constraints is studied in this paper. The problem is formulated as a fuzzy linear problem using fuzzy membership functions, and an approach based on a genetic algorithm with fuzzy optimization is proposed to solve it. The optimal solution found by the proposed approach is characterized by maximum reliability, minimum total cost, and minimum total lead-time. The proposed approach is tested on different examples taken from the literature to illustrate its efficiency in comparison with other previous methods.
Applying Genetic Algorithms to Information Retrieval Using Vector Space ModelIJCSEA Journal
Genetic algorithms are usually used in information retrieval (IR) systems to enhance the retrieval process and to increase the efficiency of optimal information retrieval, in order to meet users' needs and help them find exactly what they want among the growing volume of available information. Adaptive genetic algorithms help retrieve the information the user needs accurately, reduce the number of relevant files missed, and exclude irrelevant files. In this study, the researcher explored the problems embedded in this process and attempted to find solutions, such as the choice of mutation probability and fitness function. The Cranfield English corpus test collection on mathematics, compiled by Cyril Cleverdon at the University of Cranfield in 1960 and containing 1400 documents and 225 queries, was chosen for simulation purposes. The researcher also used cosine and Jaccard similarity to compute the similarity between the query and documents, and used two proposed adaptive fitness functions, mutation operators, and adaptive crossover. The process aimed at evaluating the effectiveness of the results according to the measures of precision and recall. Finally, the study concluded that several improvements can be obtained when using adaptive genetic algorithms.
Max Min Fair Scheduling Algorithm using In Grid Scheduling with Load Balancing IJORCS
This paper shows the importance of fair scheduling in a grid environment, such that all tasks get an equal amount of execution time and none is starved. Load balancing of the available resources in the computational grid is another important factor; this paper considers a uniform load given to the resources, and to achieve this, load balancing is applied after the jobs are scheduled. Execution cost and bandwidth cost are also considered for the algorithms used here, because in a grid environment the resources are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
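The Max-Min heuristic named in the title can be sketched as follows, assuming the usual expected-time-to-compute (ETC) matrix formulation; the fairness, load-balancing, and cost terms the paper adds on top are omitted from this minimal sketch.

```python
def max_min_schedule(etc):
    """Max-Min heuristic sketch.

    etc[t][m] is the expected time to compute task t on machine m.
    Repeatedly take, among all unscheduled tasks, the one whose best
    (minimum) completion time is LARGEST, and assign it to the machine
    that gives that minimum.
    """
    n_tasks, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines          # machine availability times
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # For each task: the machine giving its minimum completion time.
        best = {t: min(range(n_machines),
                       key=lambda m: ready[m] + etc[t][m])
                for t in unscheduled}
        # Max-Min: pick the task whose minimum completion time is largest.
        t = max(unscheduled,
                key=lambda t: ready[best[t]] + etc[t][best[t]])
        m = best[t]
        ready[m] += etc[t][m]
        assignment[t] = m
        unscheduled.discard(t)
    return assignment, max(ready)       # assignment and makespan
```

Scheduling the "biggest" tasks first tends to spread long jobs across machines early, which is why Max-Min often balances load better than Min-Min on heterogeneous resources.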
SWARM INTELLIGENCE SCHEDULING OF SOFT REAL-TIME TASKS IN HETEROGENEOUS MULTIP...ecij
In this paper, a hybrid swarm intelligence algorithm (named VNABCSA) is presented for the scheduling of non-preemptive soft real-time tasks in heterogeneous multiprocessor platforms. The method is based on a combination of artificial bee colony and simulated annealing algorithms. The multi-objective function of the VNABCSA algorithm is defined to minimize the total tardiness of all tasks, total number of utilized processors, total completion time, total waiting time for all tasks, and total waiting time for all processors. We introduce a hybrid variable neighborhood search strategy to improve the convergence speed of the algorithm. Simulation results demonstrate the efficiency of the proposed methodology as compared with the existing scheduling algorithms.
Feature selection in high-dimensional datasets is considered to be a complex and time-consuming problem. To enhance the accuracy of classification and reduce the execution time, Parallel Evolutionary Algorithms (PEAs) can be used. In this paper, we review the most recent works that use PEAs for feature selection in large datasets. We have classified the algorithms in these papers into four main classes: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Scatter Search (SS), and Ant Colony Optimization (ACO). Accuracy is adopted as the measure for comparing the efficiency of these PEAs. Parallel Genetic Algorithms (PGAs) prove to be the most suitable algorithms for feature selection in large datasets, since they achieve the highest accuracy. On the other hand, we found that parallel ACO is time-consuming and less accurate compared with the other PEAs.
This paper discusses possible applications of particle swarm optimization (PSO) in power systems. One of the problems in power systems is Economic Load Dispatch (ED). The discussion is carried out in view of the cost savings, computational speed-up, and expandability that can be achieved by using the PSO method. The general approach of this paper is the dynamic programming method coupled with PSO. The feasibility of the proposed method is demonstrated, and it is compared with the lambda-iteration method in terms of solution quality and computational efficiency. The experimental results show that the proposed PSO method is indeed capable of efficiently obtaining higher-quality solutions for ED problems.
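For reference, the lambda-iteration baseline the paper compares against can be sketched as a bisection on the system incremental cost. The quadratic cost model and the lambda bracket below are assumptions of this sketch, and generator limits are ignored.

```python
def lambda_dispatch(units, demand, tol=1e-6):
    """Classic lambda-iteration for economic dispatch.

    Each unit has quadratic cost a + b*P + c*P^2; at the optimum every
    unit runs at equal incremental cost lambda, so P_i = (lambda - b_i)/(2*c_i).
    The constant term a does not affect the dispatch. Unit limits are
    ignored in this minimal sketch.
    """
    lo, hi = 0.0, 1000.0   # assumed bracket for lambda (problem-specific)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        # Total generation at this incremental cost; increases with lambda.
        total = sum((lam - b) / (2 * c) for (a, b, c) in units)
        if total < demand:
            lo = lam
        else:
            hi = lam
    return [(lam - b) / (2 * c) for (a, b, c) in units]
```

PSO attacks the same minimization directly on candidate dispatch vectors, which is what lets it handle non-smooth cost curves (valve points, prohibited zones) where lambda iteration's closed form no longer applies.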
Similar to Proposing a scheduling algorithm to balance the time and cost using a genetic algorithm (20)
Text Mining in Digital Libraries using OKAPI BM25 ModelEditor IJCATR
The emergence of the internet has made vast amounts of information available and easily accessible online. As a result, most libraries have digitized their content in order to remain relevant to their users and to keep pace with the advancement of the internet. However, these digital libraries have been criticized for using inefficient information retrieval models that do not rank retrieved results by relevance. This paper proposes the use of the Okapi BM25 model in text mining as a means of improving the relevance ranking of digital libraries. Okapi BM25 was selected because it is a probability-based relevance ranking algorithm. A case study was conducted, and the model design was based on information retrieval processes. The performance of the Boolean, vector space, and Okapi BM25 models was compared for data retrieval; relevant ranked documents were retrieved and displayed on the OPAC framework search page. The results revealed that Okapi BM25 outperformed both the Boolean model and the vector space model. Therefore, this paper proposes using the Okapi BM25 model to reward terms according to their relative frequencies in a document, so as to improve the performance of text mining in digital libraries.
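The Okapi BM25 ranking function the paper builds on can be sketched as a minimal in-memory scorer. The smoothed IDF variant and the default k1/b values below are common choices, not necessarily those used in the paper.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Minimal Okapi BM25 sketch; doc and corpus entries are token lists.

    Rewards term frequency with saturation (k1) and normalizes for
    document length (b), unlike a plain Boolean or raw-TF match.
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N      # average document length
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)  # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
        tf = doc.count(term)
        score += idf * (tf * (k1 + 1)) / (
            tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Because of the saturation term, a document mentioning a query term twice scores higher than one mentioning it once, but not twice as high; this is the "probability-based relevance ranking" behavior the abstract credits for BM25 outperforming the Boolean and vector space models.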
Green Computing, eco trends, climate change, e-waste and eco-friendlyEditor IJCATR
This study focused on the practice of using computing resources more efficiently while maintaining or increasing overall performance. Sustainable IT services require the integration of green computing practices such as power management, virtualization, improving cooling technology, recycling, electronic waste disposal, and optimization of the IT infrastructure to meet sustainability requirements. Studies have shown that costs of power utilized by IT departments can approach 50% of the overall energy costs for an organization. While there is an expectation that green IT should lower costs and the firm’s impact on the environment, there has been far less attention directed at understanding the strategic benefits of sustainable IT services in terms of the creation of customer value, business value and societal value. This paper provides a review of the literature on sustainable IT, key areas of focus, and identifies a core set of principles to guide sustainable IT service design.
Policies for Green Computing and E-Waste in NigeriaEditor IJCATR
Computers today are an integral part of individuals’ lives all around the world, but unfortunately these devices are toxic to the environment given the materials used, their limited battery life and technological obsolescence. Individuals are concerned about the hazardous materials ever present in computers, even if the importance of various attributes differs, and that a more environment -friendly attitude can be obtained through exposure to educational materials. In this paper, we aim to delineate the problem of e-waste in Nigeria and highlight a series of measures and the advantage they herald for our country and propose a series of action steps to develop in these areas further. It is possible for Nigeria to have an immediate economic stimulus and job creation while moving quickly to abide by the requirements of climate change legislation and energy efficiency directives. The costs of implementing energy efficiency and renewable energy measures are minimal as they are not cash expenditures but rather investments paid back by future, continuous energy savings.
Performance Evaluation of VANETs for Evaluating Node Stability in Dynamic Sce...Editor IJCATR
Vehicular ad hoc networks (VANETs) are a promising area of research that enables interconnection among moving vehicles and between vehicles and road side units (RSUs). In VANETs, mobile vehicles can be organized into clusters to promote interconnection links, and the cluster arrangement, according to size and geographical extent, has a serious influence on the quality of communication. VANETs are a subclass of mobile ad hoc networks involving more complex mobility patterns; because of this mobility the topology changes very frequently, which raises a number of technical challenges, including the stability of the network. There is therefore a need for a cluster configuration that leads to a more stable, realistic network. This paper investigates various simulation scenarios in which clusters are generated using the k-means algorithm and their number is varied to find the more stable configuration in a realistic road scenario.
Optimum Location of DG Units Considering Operation ConditionsEditor IJCATR
The optimal sizing and placement of Distributed Generation units (DG) are becoming very attractive to researchers these days. In this paper a two stage approach has been used for allocation and sizing of DGs in distribution system with time varying load model. The strategic placement of DGs can help in reducing energy losses and improving voltage profile. The proposed work discusses time varying loads that can be useful for selecting the location and optimizing DG operation. The method has the potential to be used for integrating the available DGs by identifying the best locations in a power system. The proposed method has been demonstrated on 9-bus test system.
Analysis of Comparison of Fuzzy Knn, C4.5 Algorithm, and Naïve Bayes Classifi...Editor IJCATR
Early detection of diabetes mellitus (DM) can prevent or inhibit complication. There are several laboratory test that must be done to detect DM. The result of this laboratory test then converted into data training. Data training used in this study generated from UCI Pima Database with 6 attributes that were used to classify positive or negative diabetes. There are various classification methods that are commonly used, and in this study three of them were compared, which were fuzzy KNN, C4.5 algorithm and Naïve Bayes Classifier (NBC) with one identical case. The objective of this study was to create software to classify DM using tested methods and compared the three methods based on accuracy, precision, and recall. The results showed that the best method was Fuzzy KNN with average and maximum accuracy reached 96% and 98%, respectively. In second place, NBC method had respective average and maximum accuracy of 87.5% and 90%. Lastly, C4.5 algorithm had average and maximum accuracy of 79.5% and 86%, respectively.
Web Scraping for Estimating new Record from Source SiteEditor IJCATR
Research in the field of competitive intelligence and research in the field of web scraping have a mutually symbiotic relationship. In today's information age, websites serve as a main data source. This research focuses on how to get data from websites and how to reduce the download intensity. One problem is that the source websites are autonomous, so the structure of their content is liable to change at any time; another is that the Snort intrusion detection system installed on the server may detect the crawler bot. The researchers therefore propose the use of the Mining Data Records (MDR) method and the exponential smoothing method, so that the crawler adapts to changes in the content structure and browses or fetches automatically, following the pattern of news occurrences. In the tests, with a threshold of 0.3 for MDR and a similarity threshold score of 0.65 for STM, the recall and precision values produce an average f-measure of 92.6%. The exponential smoothing estimate with α = 0.5 produces an MAE of 18.2 duplicate data records, slowing the fixed download/fetch schedule down from 21.8 to 3.6 duplicate data records in the average time between news occurrences.
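The exponential smoothing estimator mentioned above has the standard form s_t = αx_t + (1-α)s_{t-1}; a minimal sketch follows, with the paper's α = 0.5 as the default (the function name is illustrative).

```python
def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing of a numeric series.

    Each smoothed value blends the latest observation with the previous
    smoothed value: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.
    Used here to estimate how many new records a source will publish,
    so the crawler can pace its fetches accordingly.
    """
    s = series[0]            # initialize with the first observation
    forecasts = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        forecasts.append(s)
    return forecasts
```

A higher α tracks bursts of new records quickly; a lower α smooths out noise at the cost of reacting slowly, which is the trade-off behind choosing α = 0.5.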
Evaluating Semantic Similarity between Biomedical Concepts/Classes through S...Editor IJCATR
Most of the existing semantic similarity measures that use ontology structure as their primary source can measure semantic similarity between concepts/classes within a single ontology. The ontology-based semantic similarity techniques, including structure-based techniques (the Path Length measure, Wu and Palmer's measure, and Leacock and Chodorow's measure), information content-based techniques (Resnik's measure, Lin's measure), and biomedical domain ontology techniques (Al-Mubaid and Nguyen's measure, SemDist), were evaluated against human experts' ratings and compared on sets of concepts using the ICD-10 "V1.0" terminology within the UMLS. The experimental results validate the efficiency of the SemDist technique in a single ontology and demonstrate that, compared with the existing techniques, SemDist gives the best overall correlation with the experts' ratings.
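Two of the structure-based measures named above have simple closed forms; the sketch below counts path lengths in edges, which is one common convention (the papers and implementations vary on nodes vs. edges).

```python
import math

def path_similarity(path_len):
    """Path Length measure: similarity is the inverse of the number of
    edges on the shortest is-a path between two concepts."""
    return 1.0 / path_len

def leacock_chodorow(path_len, max_depth):
    """Leacock-Chodorow measure: -log(path / (2 * D)) for a taxonomy of
    maximum depth D; shorter paths in deeper taxonomies score higher."""
    return -math.log(path_len / (2.0 * max_depth))
```

Both depend only on the is-a graph, which is why they need no corpus statistics, unlike the information content-based measures (Resnik, Lin) also evaluated in the paper.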
Semantic Similarity Measures between Terms in the Biomedical Domain within f...Editor IJCATR
Techniques and tests are tools used to define how to measure the goodness of an ontology or its resources. Measuring the similarity between biomedical classes/concepts is an important task for biomedical information extraction and knowledge discovery, and most semantic similarity techniques can be adapted for use in the biomedical domain (UMLS). Many experiments have been conducted to check the applicability of these measures. In this paper, we measure the semantic similarity between two terms within a single ontology or across multiple ontologies in ICD-10 "V1.0" as the primary source, and compare the results to human experts' scores using the correlation coefficient.
A Strategy for Improving the Performance of Small Files in Openstack Swift Editor IJCATR
Adding an aggregated storage module is an effective way to improve the storage access performance of small files in OpenStack Swift. Because Swift incurs too many disk operations when querying metadata, its transfer performance for large numbers of small files is low. In this paper, we propose an aggregated storage strategy (ASS) and implement it in Swift. ASS comprises two parts: merge storage and index storage. In the first stage, ASS arranges the write request queue in chronological order and then stores objects in volumes; these volumes are the large files that are actually stored in Swift. In the second stage, the object-to-volume mapping information is stored in a key-value store. The experimental results show that ASS can effectively improve Swift's small-file transfer performance.
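The merge-plus-index idea of ASS can be illustrated with a toy in-memory version: small objects are appended into fixed-size volumes, and a key-value index maps each object to its (volume, offset, length). The class and parameter names are invented for illustration and are not Swift's real interfaces.

```python
class AggregatedStore:
    """Toy sketch of an aggregated storage strategy.

    A read needs one index lookup plus one ranged read from a volume,
    instead of per-object metadata I/O.
    """
    def __init__(self, volume_size=1024):
        self.volume_size = volume_size
        self.volumes = [bytearray()]   # each bytearray stands in for a large file
        self.index = {}                # object name -> (volume_no, offset, length)

    def put(self, name, data):
        vol = self.volumes[-1]
        # Roll over to a fresh volume when the current one would overflow.
        if len(vol) + len(data) > self.volume_size:
            self.volumes.append(bytearray())
            vol = self.volumes[-1]
        self.index[name] = (len(self.volumes) - 1, len(vol), len(data))
        vol.extend(data)

    def get(self, name):
        vol_no, off, length = self.index[name]
        return bytes(self.volumes[vol_no][off:off + length])
```

In a real deployment the volumes would be the large files stored in Swift and the index would live in a persistent key-value store, as the abstract describes; the trade-off is that deleting or updating a small object requires compaction of its volume.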
Integrated System for Vehicle Clearance and RegistrationEditor IJCATR
Efficient management and control of a government's cash resources rely on government banking arrangements. Nigeria, like many low-income countries, employed fragmented systems in handling government receipts and payments until 2016, when it implemented a unified structure as recommended by the IMF: collecting all government funds in one account would reduce borrowing costs, extend credit, and improve the government's fiscal policy, among other benefits. This situation motivated us to design and implement an integrated system for vehicle clearance and registration. The system complies with the new Treasury Single Account policy to enable proper interaction and collaboration among the five agencies (NCS, FRSC, SBIR, VIO and NPF) charged with vehicular administration and activities in Nigeria. Since the system is web based, the Object Oriented Hypermedia Design Methodology (OOHDM) is used, with tools such as PHP, JavaScript, CSS, HTML, AJAX and other web development technologies. The result is a web-based system that gives proper information about a vehicle, from the exact date of importation to registration and license renewal. Vehicle owner information, customs duty information, plate number registration details, and similar records can be efficiently retrieved from the system by any of the agencies without contacting another agency. The number plate will also no longer be the only means of vehicle identification, as is presently the case in Nigeria, because the unified system automatically generates and assigns a Unique Vehicle Identification Pin Number (UVIPN) to the vehicle on payment of duty, and the UVIPN is linked to the various agencies in the management information system.
Assessment of the Efficiency of Customer Order Management System: A Case Stu...Editor IJCATR
The Supermarket Management System deals with the automation of buying and selling of goods and services; it includes both the sale and purchase of items. The Supermarket Management System project is to be developed with the objective of making the system reliable, easier to use, faster, and more informative.
Energy-Aware Routing in Wireless Sensor Network Using Modified Bi-Directional A*Editor IJCATR
Energy is a key component in a Wireless Sensor Network (WSN)[1]: the system cannot run as intended without adequate power, and limited energy is one of the defining characteristics of wireless sensor networks[2]. Much research has been done to develop strategies to overcome this problem, one of which is clustering. A popular clustering technique is Low Energy Adaptive Clustering Hierarchy (LEACH)[3], in which clustering is used to determine Cluster Heads (CHs) that are then assigned to forward packets to the Base Station (BS). In this research, we propose another clustering technique that uses Betweenness Centrality (BC), a concept from Social Network Analysis, in the setup phase; in the steady-state phase, a heuristic search algorithm, Modified Bi-Directional A* (MBDA*), is applied. In the experiment, 100 nodes were deployed statically in a 100x100 area with one Base Station at coordinates (50,50), and the system was run for 5000 rounds to assess its reliability. The performance of the designed routing protocol was evaluated by network lifetime, throughput, and residual energy. The results show that BC-MBDA* outperforms LEACH. This is due to how LEACH determines the CH dynamically: the CH changes with every data transmission, which wastes energy because the CH computation is repeated in every transmission round. In contrast, BC-MBDA* determines the CH statically, which decreases energy usage.
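For context, the standard LEACH cluster-head election that BC-MBDA* is compared against uses the threshold T(n) = p / (1 - p·(r mod 1/p)). A minimal sketch follows; the eligibility filtering (excluding nodes that already served as CH in the current epoch) is assumed done by the caller.

```python
import random

def leach_cluster_heads(node_ids, p, round_no, rng=None):
    """Standard LEACH cluster-head election sketch.

    Each eligible node becomes a cluster head with probability
    T(n) = p / (1 - p * (r mod 1/p)), so roughly a fraction p of the
    nodes lead each round, and the threshold rises through the epoch
    until every eligible node has served once.
    """
    rng = rng or random.Random()
    t = p / (1 - p * (round_no % int(1 / p)))
    return [n for n in node_ids if rng.random() < t]
```

This per-round randomized election is exactly the repeated computation the abstract blames for LEACH's energy cost: every node re-rolls the threshold every round, whereas the proposed BC-based setup computes cluster heads once, statically.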
Security in Software Defined Networks (SDN): Challenges and Research Opportun...Editor IJCATR
In networks, the rapidly changing traffic patterns of search engines, Internet of Things (IoT) devices, Big Data and data centers has thrown up new challenges for legacy; existing networks; and prompted the need for a more intelligent and innovative way to dynamically manage traffic and allocate limited network resources. Software Defined Network (SDN) which decouples the control plane from the data plane through network vitalizations aims to address these challenges. This paper has explored the SDN architecture and its implementation with the OpenFlow protocol. It has also assessed some of its benefits over traditional network architectures, security concerns and how it can be addressed in future research and related works in emerging economies such as Nigeria.
Measure the Similarity of Complaint Document Using Cosine Similarity Based on...Editor IJCATR
Report handling on "LAPOR!" (Laporan, Aspirasi dan Pengaduan Online Rakyat) system depending on the system administrator who manually reads every incoming report [3]. Read manually can lead to errors in handling complaints [4] if the data flow is huge and grows rapidly, it needs at least three days to prepare a confirmation and it sensitive to inconsistencies [3]. In this study, the authors propose a model that can measure the identities of the Query (Incoming) with Document (Archive). The authors employed Class-Based Indexing term weighting scheme, and Cosine Similarities to analyse document similarities. CoSimTFIDF, CoSimTFICF and CoSimTFIDFICF values used in classification as feature for K-Nearest Neighbour (K-NN) classifier. The optimum result evaluation is pre-processing employ 75% of training data ratio and 25% of test data with CoSimTFIDF feature. It deliver a high accuracy 84%. The k = 5 value obtain high accuracy 84.12%
Hangul Recognition Using Support Vector MachineEditor IJCATR
The recognition of Hangul Image is more difficult compared with that of Latin. It could be recognized from the structural arrangement. Hangul is arranged from two dimensions while Latin is only from the left to the right. The current research creates a system to convert Hangul image into Latin text in order to use it as a learning material on reading Hangul. In general, image recognition system is divided into three steps. The first step is preprocessing, which includes binarization, segmentation through connected component-labeling method, and thinning with Zhang Suen to decrease some pattern information. The second is receiving the feature from every single image, whose identification process is done through chain code method. The third is recognizing the process using Support Vector Machine (SVM) with some kernels. It works through letter image and Hangul word recognition. It consists of 34 letters, each of which has 15 different patterns. The whole patterns are 510, divided into 3 data scenarios. The highest result achieved is 94,7% using SVM kernel polynomial and radial basis function. The level of recognition result is influenced by many trained data. Whilst the recognition process of Hangul word applies to the type 2 Hangul word with 6 different patterns. The difference of these patterns appears from the change of the font type. The chosen fonts for data training are such as Batang, Dotum, Gaeul, Gulim, Malgun Gothic. Arial Unicode MS is used to test the data. The lowest accuracy is achieved through the use of SVM kernel radial basis function, which is 69%. The same result, 72 %, is given by the SVM kernel linear and polynomial.
Application of 3D Printing in EducationEditor IJCATR
This paper provides a review of literature concerning the application of 3D printing in the education system. The review identifies that 3D Printing is being applied across the Educational levels [1] as well as in Libraries, Laboratories, and Distance education systems. The review also finds that 3D Printing is being used to teach both students and trainers about 3D Printing and to develop 3D Printing skills.
Survey on Energy-Efficient Routing Algorithms for Underwater Wireless Sensor ...Editor IJCATR
In underwater environment, for retrieval of information the routing mechanism is used. In routing mechanism there are three to four types of nodes are used, one is sink node which is deployed on the water surface and can collect the information, courier/super/AUV or dolphin powerful nodes are deployed in the middle of the water for forwarding the packets, ordinary nodes are also forwarder nodes which can be deployed from bottom to surface of the water and source nodes are deployed at the seabed which can extract the valuable information from the bottom of the sea. In underwater environment the battery power of the nodes is limited and that power can be enhanced through better selection of the routing algorithm. This paper focuses the energy-efficient routing algorithms for their routing mechanisms to prolong the battery power of the nodes. This paper also focuses the performance analysis of the energy-efficient algorithms under which we can examine the better performance of the route selection mechanism which can prolong the battery power of the node
Comparative analysis on Void Node Removal Routing algorithms for Underwater W...Editor IJCATR
The designing of routing algorithms faces many challenges in underwater environment like: propagation delay, acoustic channel behaviour, limited bandwidth, high bit error rate, limited battery power, underwater pressure, node mobility, localization 3D deployment, and underwater obstacles (voids). This paper focuses the underwater voids which affects the overall performance of the entire network. The majority of the researchers have used the better approaches for removal of voids through alternate path selection mechanism but still research needs improvement. This paper also focuses the architecture and its operation through merits and demerits of the existing algorithms. This research article further focuses the analytical method of the performance analysis of existing algorithms through which we found the better approach for removal of voids
Decay Property for Solutions to Plate Type Equations with Variable CoefficientsEditor IJCATR
In this paper we consider the initial value problem for a plate type equation with variable coefficients and memory in
1 n R n ), which is of regularity-loss property. By using spectrally resolution, we study the pointwise estimates in the spectral
space of the fundamental solution to the corresponding linear problem. Appealing to this pointwise estimates, we obtain the global
existence and the decay estimates of solutions to the semilinear problem by employing the fixed point theorem
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
The Art of the Pitch: WordPress Relationships and Sales
Proposing a scheduling algorithm to balance the time and cost using a genetic algorithm
International Journal of Computer Applications Technology and Research
Volume 4, Issue 1, 11-17, 2015, ISSN: 2319-8656
www.ijcat.com
Proposing a scheduling algorithm to balance the time and cost using a genetic algorithm

Ali Akbar Faraj
Department of Computer Science and Research Branch, Islamic Azad University, Kish, Iran

Ali Haroon Abadi
Member of the Science Board of the Computer Group, Islamic Azad University of Tehran Center
Abstract: Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to resources. Its aim is to create a supercomputer out of free resources. One of the challenges in grid computing is the scheduling problem, which is regarded as a tough issue. Since scheduling in the grid is a non-deterministic problem, deterministic algorithms cannot be used to improve it.
In this paper, a combination of a genetic algorithm and binary gravitational attraction is used to solve the scheduling problem, where reducing task execution time and the cost-effective use of resources are investigated simultaneously. The user determines the execution-time parameter and the cost of using resources. The algorithm also uses a new approach to resource selection that leads to a balanced load on resources. Experimental results reveal that, in terms of cost, time, and selection of the best resource, our proposed algorithm achieves better results than the other algorithms compared.
Keywords: Grid computing, Static timing strategies, Genetic algorithms, Local search algorithm following the binary gravitational attraction, Optimizing Time
1. INTRODUCTION
The grid is a form of distributed computing system that is available on a network and appears to the user as one large virtual computing system [4]. Among the most important services in the grid environment are grid scheduling and load balancing; a correct and reliable scheduler is therefore needed to increase the efficiency of the grid. Unfortunately, the dynamic nature of grid resources and of users' demands makes the grid scheduling problem, and the optimal assignment of tasks to resources, complex. This has prompted researchers to use heuristic algorithms to solve the challenge. A scheduler in the grid environment should provide a time- and cost-effective schedule for users based on the parameters they supply, without engaging users in the complexity of the environment.
The genetic algorithm is one of the best heuristic algorithms because it generally investigates the problem from several different directions at once, and it is therefore used to solve many optimization problems [7]. In this paper, after reviewing the strengths and weaknesses of previous approaches, a combination of a genetic algorithm and a local search following the gravitational force is examined with the aim of simultaneously reducing the execution time and cost of tasks. The algorithm also uses a new approach to the selection of resources.
The rest of this paper is organized as follows: Section 2 describes the background. Section 3 introduces related work on scheduling, and Section 4 is dedicated to the proposed algorithm. The performance evaluation of the proposed algorithm is presented in Section 5, and Section 6 presents the conclusions.
2. BACKGROUND
2.1 Scheduling Method
The task scheduling problem is a tough challenge composed of n tasks and m resources. Each task must be processed by a machine and runs without interruption until it finishes. We use the ETC matrix model described in [2]: the system assumes that the expected execution time of each task i on every resource j is known in advance and stored in the entry ETC[i, j]. Here, the makespan is the maximum completion time over all Completion-Time[i, j], calculated in equation (1) [10]:

Makespan = Max(Completion-Time[i, j]), 1 ≤ i ≤ N, 1 ≤ j ≤ M (1)

In the above equation, Completion-Time[i, j] is the time at which task i completes on resource j, calculated in equation (2), where Ready[j] is the time at which resource j becomes free:

Completion-Time[i, j] = Ready[j] + ETC[i, j] (2)
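As a minimal sketch of this model, the completion times and makespan of a schedule can be computed directly from the ETC matrix; the function and variable names below are ours, not the paper's:

```python
import numpy as np

def makespan(etc, assignment):
    """Compute the makespan of a schedule.

    etc[i, j]  -- expected time to compute task i on resource j
    assignment -- assignment[i] = index of the resource task i runs on
    Tasks mapped to the same resource run sequentially, so a resource's
    finish time is its ready time plus each new task's ETC entry.
    """
    etc = np.asarray(etc, dtype=float)
    ready = np.zeros(etc.shape[1])          # Ready[j] for each resource j
    for i, j in enumerate(assignment):
        ready[j] += etc[i, j]               # Completion-Time[i,j] = Ready[j] + ETC[i,j]
    return ready.max()                      # makespan = latest finishing resource

# 3 tasks, 2 resources
etc = [[4.0, 2.0],
       [3.0, 6.0],
       [5.0, 5.0]]
print(makespan(etc, [1, 0, 0]))  # tasks 2 and 3 on resource 0: 3+5=8; task 1 on resource 1: 2
```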
2.2 Genetic Algorithm
Inspired by the evolution of organisms in nature, where species survive by evolving, the genetic algorithm is an optimization method. It selects chromosomes at random. Crossover (combination) is a process that exchanges determined sequences between chromosomes. Mutation is a process that changes determined sequences within a chromosome, adding new individuals to the current population. Crossover and mutation are applied randomly. After these operations, a new population is generated; the population is then evaluated, and the process is repeated until the stopping criteria are satisfied [5].
2.3 Local search algorithm following binary gravitational attraction
Voudouris et al. proposed the GELS algorithm for the first time in 1995 to traverse the search space of tough problems [11]. In 2007, Webster presented it as a powerful algorithm and called it the Gravitational Attraction Algorithm. This algorithm is based on the principle of the gravitational force and imitates this natural process to move within a search space; it assumes that only the laws of gravity and motion govern. Each solution has different neighbors, which can be grouped according to a problem-specific criterion; the neighbors obtained in each group are called the neighbors in that dimension. An initial velocity is defined for each dimension, and the greater the velocity of a dimension, the better the solution it is expected to provide to the problem. In each dimension, neighboring solutions are obtained from the current solution in a similar way; this is called obtaining a neighbor's solution from the current solution in a specified dimension [5].
The mass of an object is determined by its fitness according to equations (3) and (4), where fit_i(t) is the fitness of mass i, worst(t) is the worst fitness at time t, and best(t) is the best fitness at time t:

M_i(t) = m_i(t) / Σ_{j=1..N} m_j(t) (3)

m_i(t) = (fit_i(t) - worst(t)) / (best(t) - worst(t)) (4)

The mass in dimension d is accelerated in proportion to the force exerted on the mass in that direction (F_i^d) divided by the inertial mass (M_ii), expressed in equation (5):

a_i^d(t) = F_i^d(t) / M_ii(t) (5)

The elitist gravitational search algorithm was introduced for solving continuous problems in the search space. In the binary version, V_id gives the probability of X_id taking zero or one instead of indicating a speed. Equation (6) is used to update the position of each dimension [10]:

X_i^d(t+1) = { 1 - X_i^d(t) if rand < S(V_i^d(t+1)); X_i^d(t) otherwise } (6)

Each mass's velocity is updated as the sum of a fraction of its current velocity and its acceleration, according to equation (7). Note that because a range is fixed for the initial velocity, if adding momentum to an entry of the velocity vector would push its value outside the specified range, the value is clamped to that range [9]:

V_i^d(t+1) = rand_i × V_i^d(t) + a_i^d(t) (7)
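A single binary-gravitational update step, in the spirit of the velocity and position rules above, can be sketched as follows. The transfer function S(v) = |tanh(v)| and the clamp bound v_max are our assumptions; the paper does not spell them out:

```python
import numpy as np

rng = np.random.default_rng(0)

def bgsa_update(x, v, a, v_max=6.0):
    """One binary-GSA step, as a sketch of the update rules.

    x -- binary position vector of one mass
    v -- velocity vector; a -- acceleration vector
    Velocity: v(t+1) = rand * v(t) + a(t), clamped to [-v_max, v_max].
    Position: each bit flips with probability S(v) = |tanh(v)| (assumed).
    """
    v = rng.random(v.shape) * v + a
    v = np.clip(v, -v_max, v_max)           # keep velocity in the allowed range
    flip = rng.random(x.shape) < np.abs(np.tanh(v))
    x = np.where(flip, 1 - x, x)            # complement the selected bits
    return x, v

x = np.array([0, 1, 1, 0])
v = np.zeros(4)
a = np.array([0.5, -0.2, 0.1, 0.9])
x, v = bgsa_update(x, v, a)
print(x, v)
```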
3. PREVIOUS RESEARCH
Depending on the size and dynamics of the grid, deterministic algorithms are not useful for solving its scheduling problems. This has led researchers to investigate heuristic algorithms for these problems.
A valuable technique for the design of effective scheduling has been proposed using a genetic algorithm, aiming to minimize task response time [12]; the results show that the scheduler is very fast and decreases response time. Grid users pay resource owners for the use of their resources [1], which gives resource owners an incentive to share them. This model brings significant economic costs and benefits for both users and owners of the resources; as a result, costs and economic benefits are considered in the objective function of most scheduling algorithms [11]. A new algorithm combining genetic algorithms and gravitational attraction has been suggested to solve the task scheduling problem in the grid environment, but it ignores the cost of using resources [8]. In reference [5], a genetic algorithm is proposed in which chaotic variables are used instead of random variables to produce chromosomes; this spreads the solutions across the search space, prevents premature convergence of the algorithm, and yields better solutions in a shorter time. Reference [3] uses a gravitational-attraction algorithm for scheduling tasks in the grid.
Most of these procedures consider completion time and, in a few cases, implementation cost, but they fail to establish a balanced load among resources. Thus, Section 4 discusses a new method that addresses balance and load balancing in a grid environment.
4. THE PROPOSED ALGORITHM
The proposed scheduler uses a combination of a genetic algorithm and binary gravitational attraction to schedule tasks in the grid. Since the performance of a genetic algorithm depends largely on how chromosomes are encoded, we describe the encoding first. A chromosome represents a solution, and each solution is encoded as a row of natural numbers. For example, assume a set of n tasks T = {T1, T2, T3, ..., Tn} and a set of m resources P = {P1, P2, P3, ..., Pm}. Each gene corresponds to one task, and the chromosome length equals the number of input tasks. The content of each cell is the resource assigned to that task; its value in the chromosome can be a random number between 1 and m. Figure 1 shows an example of the chromosome encoding.
Figure 1. The chromosomes encoding in BGAGSA algorithm
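The encoding, and the random per-gene mutation described later, can be sketched in a few lines; the function names and the fixed random seeds are ours:

```python
import random

def random_chromosome(n_tasks, m_resources, rng=random.Random(42)):
    """A chromosome is a row of n_tasks genes; gene i holds the
    resource (1..m) that task i is assigned to."""
    return [rng.randint(1, m_resources) for _ in range(n_tasks)]

def mutate(chrom, m_resources, rng=random.Random(7)):
    """Mutation as described in the paper: pick one gene at random
    and replace its resource with a random value between 1 and m."""
    child = chrom[:]
    child[rng.randrange(len(child))] = rng.randint(1, m_resources)
    return child

c = random_chromosome(n_tasks=5, m_resources=3)
print(c, mutate(c, 3))
```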
4.1 The objectives and fitness function in the proposed algorithm
The first objective of the proposed algorithm is to reduce the makespan, the longest task completion time over all processors in the system. In the proposed method, the user is allowed to enter the maximum completion time and the maximum cost of task execution. Suppose that Le_i represents the length of task i and R_j is the processing speed of resource j. The execution time of task i on resource j can then be calculated by equation (8):

ExeTime(i, j) = Le_i / R_j (8)

The completion time of task i on resource j can be calculated from equation (9):

CompTime(i, j) = ExeTime(i, j) + TransferT + Wait(i, j) (9)

In the above equation, TransferT and Wait(i, j) are, respectively, the data transfer time and the waiting time before execution. Thus, the makespan of the system can be calculated using equations (10) and (11), where A_j is the set of tasks allocated to resource j:

Time(j) = Σ_{i ∈ A_j} CompTime(i, j) (10)

Makespan = Max{Time(j)}, 1 ≤ j ≤ m (11)

The second objective of our algorithm is the total cost, which must be minimized. Suppose that R_j here represents a fixed unit price for resource j and TransferC is the data transfer cost. The execution cost of task i on resource j is calculated by equation (12):

ExeCost(i, j) = ExeTime(i, j) × R_j (12)

The total cost of completing task i on resource j can be computed using equation (13):

CompCost(i, j) = ExeCost(i, j) + TransferC (13)

Therefore, the overall cost of a solution (chromosome), i.e. the cost associated with a chromosome in the population, can be expressed by equation (14):

Cost = Σ_i CompCost(i, j) (14)
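The two objectives above can be evaluated together for one chromosome in a single pass; this is a sketch under the assumption that waiting time is the sequential queueing on each resource, with our own function names:

```python
def schedule_metrics(chrom, length, speed, price,
                     transfer_t=0.0, transfer_c=0.0):
    """Makespan and total cost of one chromosome.

    chrom[i]  -- resource assigned to task i (0-based here)
    length[i] -- length Le_i of task i
    speed[j]  -- processing speed R_j of resource j
    price[j]  -- fixed unit price of resource j
    Waiting time is folded into the sequential sum per resource.
    """
    time_per_resource = [0.0] * len(speed)
    cost = 0.0
    for i, j in enumerate(chrom):
        exe = length[i] / speed[j]                  # execution time of task i on j
        time_per_resource[j] += exe + transfer_t    # completion time, summed per resource
        cost += exe * price[j] + transfer_c         # execution cost plus transfer cost
    return max(time_per_resource), cost             # makespan, overall cost

ms, cost = schedule_metrics([0, 1, 0], length=[6, 4, 2],
                            speed=[2.0, 1.0], price=[3.0, 1.0])
print(ms, cost)  # makespan 4.0 (both resources finish at 4), cost 16.0
```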
4.2 First fitness function
When submitting tasks, some users are concerned with cost and others with completion time. Accordingly, the user can enter weight values for time and cost (XT and XC) in the proposed method. These values lie in the range [0, 1] and their sum equals one. For example, if XC equals 0.7, the user is 70% concerned with cost and 30% concerned with completion time, and the scheduler should find resources for performing tasks that improve cost with weight 70% and time with weight 30%. The most important task scheduling objective is to minimize the time and cost of completing tasks. Given the above definitions, the fitness of a chromosome can be calculated in equation (15):
Fitness = XT / FTime + XC / FCost (15)
where FTime and FCost are the longest task completion time and the overall cost of the solution. As the equation shows, the smaller the values of FTime and FCost, the greater the value of the fitness function, indicating a more suitable solution to the scheduling problem. In the proposed algorithm, chromosomes (parents) are selected with a competition (tournament) operator, and crossover uses the partially mapped crossover operator. In the mutation phase, after a chromosome is selected, a gene is chosen at random and its resource field is changed to a random value between 1 and m. The main purpose of this mutation is to change the resource parameter of a task so that the task can be executed on a better processing resource.
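The competition-based parent selection mentioned above can be sketched as a standard tournament; the tournament size k is an assumption, as the paper does not state it:

```python
import random

def tournament_select(population, fitnesses, k=2, rng=random.Random(1)):
    """Tournament (competition) selection: sample k chromosomes at
    random and return the one with the highest fitness. k is an
    assumed tournament size."""
    contenders = rng.sample(range(len(population)), k)
    best = max(contenders, key=lambda idx: fitnesses[idx])
    return population[best]

pop = [[1, 2, 1], [2, 2, 3], [3, 1, 2]]
fit = [0.5, 0.9, 0.2]
print(tournament_select(pop, fit, k=3))  # with k=3 all compete: returns [2, 2, 3]
```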
4.3 Second fitness function
Since the main objective of the proposed scheduling algorithm is to minimize the total schedule length and the cost of executing tasks, several chromosomes may be found whose total schedule lengths are the same. In that case, the second fitness function selects the chromosome with the more balanced load. To calculate the load balance, we use the standard deviation of resource productivity, so we must first calculate the efficiency of the resources. We initially calculate the greatest finishing time, as in equation (16):

U = Max{T_ij + t} (16)

In the above equation, T_ij is the sub-task processing time and t is the data transmission time; their sum gives each resource's finishing time, the greatest of which is stored in the variable U. If the efficiency of resource i in the execution of task j is denoted E_ij, it can be calculated by equation (17):

E_ij = (T_ij + t) / U (17)

With U obtained from equation (16), equation (17) gives the efficiency of each resource. To obtain the standard deviation, we first calculate the mean productivity μ using equation (18):

μ = (Σ_i E_i) / m (18)

That is, we add the productivities of all resources together and divide by the number of resources m. The load balance is then the standard deviation μ2 of the resource efficiencies around μ; if μ2 = 0, the load is perfectly balanced.
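The load-balance criterion above can be sketched as follows, assuming each resource's efficiency is its finishing time normalized by the greatest finishing time; the function name is ours:

```python
import statistics

def load_balance_penalty(finish_times):
    """Second fitness criterion: standard deviation of resource
    efficiencies. Efficiency of a resource is its finishing time
    divided by the greatest finishing time U; a penalty of 0 means
    a perfectly balanced load."""
    u = max(finish_times)                       # greatest finishing time
    eff = [t / u for t in finish_times]         # efficiency of each resource
    return statistics.pstdev(eff)               # deviation around the mean

print(load_balance_penalty([10.0, 10.0, 10.0]))  # perfectly balanced -> 0.0
print(load_balance_penalty([10.0, 5.0, 1.0]))    # unbalanced -> positive
```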
4.4 Local search algorithm parameters imitating the binary gravitational attraction
At this stage, because the local search ability of the genetic algorithm is weak, solutions that are similar in terms of overall schedule length are handed to the GELS algorithm, which explores their neighborhoods. Each current solution has different neighbors, each obtained by a change to the current solution, i.e. a shift towards a neighboring solution [4]. In the proposed method, a solution represents a chromosome, and each gene of the chromosome is one dimension of the solution (the dimensions of the solution are the neighbor solutions obtained by changing the current solution). To represent the solutions, an appropriate mapping is defined between solutions and GELS agents. Solutions are encoded as an m × n matrix, where m represents the resources and n the tasks; the element Xk(i, j) indicates that agent k assigns task j to resource i. For example, tasks 4 and 5 may be executed in the same resource. Each task can be assigned to only one resource, while each resource can hold multiple tasks, executed sequentially. The encoding is updated using equation (19) for the position of each dimension [13]:
X_k(i,j)(t+1) = { 1 - X_k(i,j)(t),   if rand < |tanh(V_k(i,j)(t+1))|
                { X_k(i,j)(t),       otherwise                          (19)
In the above equation, when a resource is dedicated to a task in
X_k, the corresponding element of V_k must not assign it to any
other resource. The number of entries of the initial velocity
vector equals the number of solution dimensions; each entry is
initially assigned a random number between 1 and the maximum
initial velocity [5]. The search space is considered a set of N
masses, and the position of each mass is a point in the space
that represents a solution to the problem. A neighbor of the
current solution is a solution in which the resource assigned to
one task is changed. To do this, each dimension of the solution
is given a random initial velocity between one and the maximum
velocity; the gene with the highest velocity is then selected
and its value is changed randomly to a number between 1 and
m [5].
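The m × n encoding and the neighbor-generation step above can be sketched as follows. This is a sketch under assumptions: `encode`, `is_valid`, and `neighbor` are hypothetical helper names, and resources are indexed from 0 rather than 1 for convenience.

```python
import random

# Hypothetical m x n binary encoding: X[i][j] == 1 means task j runs on resource i.
def encode(assignment, m):
    """assignment[j] = index of the resource that task j is mapped to."""
    n = len(assignment)
    X = [[0] * n for _ in range(m)]
    for j, i in enumerate(assignment):
        X[i][j] = 1
    return X

def is_valid(X):
    """Each task is assigned to exactly one resource (every column sums to 1)."""
    return all(sum(col) == 1 for col in zip(*X))

def neighbor(assignment, velocity, m):
    """Neighbor of the current solution: pick the dimension (gene) with the
    highest velocity and re-assign its task to a different random resource."""
    k = max(range(len(velocity)), key=velocity.__getitem__)
    new = list(assignment)
    new[k] = random.choice([r for r in range(m) if r != assignment[k]])
    return new

sol = [0, 2, 1, 1, 1]            # tasks 3 and 4 share resource 1, run sequentially
print(is_valid(encode(sol, 3)))  # True
vel = [2, 7, 1, 4, 3]            # dimension 1 is fastest, so task 1 is re-assigned
print(neighbor(sol, vel, 3))
```

Because exactly one gene changes per neighbor, each neighbor differs from the current solution in a single task-to-resource assignment, as the text above describes.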
After a neighbor of the current solution is obtained, its
fitness is calculated using equation (15). If the neighboring
solution improves on the current solution, it replaces its
parent chromosome in the new population; otherwise it is not
copied into the new population, but its fitness value is kept in
order to calculate the mass of each particle. Next, the
acceleration between the neighboring solution and the current
solution is computed and added to the entry of the initial
velocity vector corresponding to the dimension from which the
neighbor was obtained, thereby updating the velocity vector.
Note that, because a range is defined for the initial velocity,
if adding the gravitational force to an entry of the velocity
vector pushes its value outside the specified range, the value
is set back into that range. Given the above, the mass of each
neighboring solution is calculated through equation (20) [13]:
m_k(t) = (F(X_k(t)) - worst(t)) / (best(t) - worst(t))    (20)

M_k(t) = m_k(t) / ∑_{q=1}^{N} m_q(t)    (21)
where F(X_k) is the fitness value of particle k, and best(t) and
worst(t) are the best and worst fitness values among all the
particles. The acceleration matrix of the K agents is calculated
through equation (22) [13]:

a_k(i,j)(t) = ∑_{q ∈ Kbest, q ≠ k} rand_q · G · (M_q(t) / (R_{kq}(t) + ε)) · (X_q(i,j)(t) - X_k(i,j)(t))    (22)
According to the above equation, mass i is drawn toward mass j
with an acceleration equal to a_k(i, j), in
which Kbest is the set of the first K agents with the best
fitness values and the greatest masses, M_q is the mass of agent
q, and G is the gravitational constant. The pseudo-code of the
proposed algorithm is presented in figure 2.
Figure 2. Pseudo-code of the proposed algorithm
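The mass and acceleration computations of equations (20)-(22) can be sketched as below, following the standard gravitational search formulation [10]. This is a minimal sketch, not the authors' code: the function names and the use of Hamming distance for R_{kq} between binary positions are assumptions.

```python
import random

def masses(fitness, eps=1e-12):
    """Equations (20)-(21): normalized gravitational mass of each agent.
    Smaller fitness is better (minimization), so the best agent gets the
    largest mass and the worst agent gets mass zero."""
    best, worst = min(fitness), max(fitness)
    raw = [(f - worst) / (best - worst - eps) for f in fitness]
    total = sum(raw) + eps
    return [m / total for m in raw]

def acceleration(X, M, kbest, G=6.672, eps=1e-12):
    """Equation (22): each agent k is pulled toward the kbest heaviest
    agents, dimension by dimension, scaled by their mass and distance."""
    N, n = len(X), len(X[0])
    heavy = sorted(range(N), key=lambda q: M[q], reverse=True)[:kbest]
    a = [[0.0] * n for _ in range(N)]
    for k in range(N):
        for q in heavy:
            if q == k:
                continue
            # Hamming distance between the two binary position vectors.
            R = sum(abs(X[q][d] - X[k][d]) for d in range(n))
            for d in range(n):
                a[k][d] += random.random() * G * M[q] / (R + eps) * (X[q][d] - X[k][d])
    return a

M = masses([1.0, 2.0, 4.0])   # agent 0 is fittest, so it is heaviest
print(M[0] > M[1] > M[2])     # True
```

Note that when two agents occupy the same position, every term (X_q - X_k) vanishes, so identical agents exert no pull on each other.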
5. DISCUSSION
The results of evaluating the proposed algorithm against the GA
[6] and Genetic-Variable Neighborhood Search [7] algorithms for
scheduling independent tasks are presented in this section. All
experiments were run on a Windows XP system with a 3 GHz CPU and
4 GB of RAM. Table (1) lists the parameter values of the genetic
and binary gravitational emulation local search algorithms.
To evaluate the proposed algorithm, a series of simulations was
run on 10 resources while varying the number of users' tasks and
the budgets payable by the customers. In the first experiment,
aimed at optimizing time, we compared the proposed algorithm
with the two other scheduling algorithms by varying the number
of tasks under a fixed budget. Each algorithm was run for 20
iterations to obtain the task execution time, since the
algorithms are stochastic and the result of each run can differ
somewhat from the previous one. The results of the first
experiment (figure 3) indicate that the proposed algorithm
outperforms the other two algorithms in terms of task completion
time. Furthermore, as the number of tasks increases, the overall
completion time increases in all algorithms.
step1: Initialization
step2: 2.1. Generate k random chromosomes of length n
       2.2. Speed_vector[1..n] = the initial velocity of each
       dimension
       2.3. Location_vector[1..n] = the initial position of each
       particle
       2.4. Set the maximums Maximum-Time (Mt) and Maximum-Cost
       (Mc) according to the user's requirement
step3: Compute the makespan and cost of all chromosomes
step4: repeat
       4.1. Evaluate all individuals using formula (15)
       4.2. Select the P/2 members of the combined population
       with minimum fitness to form the next generation
       4.3. Apply crossover & mutation
step5: Pass the best chromosomes of the genetic algorithm to
       GELS as current solutions and produce their neighboring
       chromosomes
step6: 6.1. direction = max(Speed_vector[1..n])
       6.2. Change Velocity_vector[index] of the current
       solution to a random integer between 1 and Max Velocity
       6.3. Calculate the fitness of the neighboring chromosomes
       by equation (15) and save the worst fitness
       6.4. if the current chromosome is worse than the neighbor
       chromosome, the neighbor chromosome is selected as the
       best solution
step7: 7.1. Calculate the mass of each particle using formulas
       (20)-(21)
       7.2. Calculate the acceleration matrix for the K agents
       using formula (22)
step8: 8.1. Update the agents using formula (19)
       8.2. Update Velocity_vector for each dimension using the
       acceleration matrix of the chromosome
until a terminating criterion is satisfied
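The velocity-update step (8.2) together with the range clipping described in section 4.4 can be sketched as a small helper; `update_velocity` is a hypothetical name for this sketch, not part of the authors' pseudo-code.

```python
def update_velocity(velocity, accel, max_velocity):
    """Add the gravitational acceleration to each velocity entry, then set
    any result that leaves the allowed range back into [1, max_velocity],
    mirroring the clipping rule described in section 4.4."""
    out = []
    for v, a in zip(velocity, accel):
        nv = v + a
        out.append(min(max(nv, 1), max_velocity))  # clip into the range
    return out

print(update_velocity([3, 9, 5], [4.0, 3.5, -7.0], max_velocity=10))  # [7.0, 10, 1]
```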
Figure 3. Comparison of the makespan (implementation time in seconds vs. task length restrictions) of GA, Genetic-VNS, and Genetic-GELS
Table 1. Parameter values of the genetic (GA) and gravitational search (GSA) algorithms

GA
  Number of generations        100
  Selection                    Tournament
  Crossover rate               0.85
  Mutation rate                0.02
GSA
  Initial position             Between 1 and n
  Initial velocity             Between 1 and the maximum speed
  Maximum speed                The size of the input functions
  Neighborhood radius (R)      1
  Gravitational constant (G)   6.672
International Journal of Computer Applications Technology and Research, Volume 4, Issue 1, pp. 11-17, 2015, ISSN: 2319-8656, www.ijcat.com
[Figure: max execution time (seconds) vs. number of tasks (20-60) for Genetic-VNS, Genetic-Annealing, and BGAGSA]
[Figure: max execution time vs. cost for GA, Genetic-VNS, and Genetic-GELS]
Figure 4. Results of time-optimization algorithms with
different budgets
The second experiment, also aimed at optimizing time, compares
the algorithms under different budgets. The proposed algorithm
is evaluated along with the other scheduling algorithms, with
the budget parameter as the variable. As figure 4 shows, the
Genetic-GELS algorithm performed better than the other
algorithms in all cases, and increasing the budget reduces the
execution time. In fact, the larger the budget specified by the
user, the less time the algorithms need to perform the tasks.
In the third experiment, the effect of task heterogeneity on the
performance of the proposed and competing algorithms was
investigated, with the length of the tasks considered variable.
As figure 5 shows, the execution time changes differently in
each algorithm as task heterogeneity increases. A good algorithm
is expected to exploit the heterogeneity of tasks and reduce
their execution time. As the figure clearly shows, the proposed
algorithm achieved better performance than the other algorithms.
6. CONCLUSIONS
In the proposed method, a genetic algorithm is combined with a
binary gravitational emulation local search algorithm for
scheduling in the grid environment. The proposed algorithm uses
a weighted objective function that takes into account the
relative importance of the time and cost of users' projects,
giving users more freedom in specifying them. In this objective
function, time and cost, together with their weights, are set
according to the user's perspective. The goal of the proposed
scheduler is to simultaneously minimize the completion time and
the execution cost of the different users' tasks.
The proposed scheduler was compared with the GA [6] and
Genetic-Variable Neighborhood Search [7] algorithms with respect
to time and cost. The results, examined against cost and user
requests in several charts, show that when the number of
resources is limited and the number of tasks is high, the
proposed scheduler achieves the best performance among the
compared schedulers in reducing the overall scheduling time and
cost. The experimental results show that the hybrid of the
genetic algorithm and gravitational attraction can reach high
performance in balancing cost against task completion time.
Further research could address the assignment of resources to
tasks in the proposed algorithm using an approach based on fuzzy
logic and a fuzzy inference system.
Figure 5. Results of time-optimization algorithms with
different heterogeneities
7. REFERENCES
[1] Buyya, R., Giddy, J., Abramson, D., "A Case for Economy Grid Architecture for Service-Oriented Grid Computing", 10th IEEE International Heterogeneous Computing Workshop, San Francisco, CA, April 2001.
[2] Braun, T. D., Siegel, H. J., "A Taxonomy for Describing Matching and Scheduling Heuristics for Mixed-Machine Heterogeneous Computing Systems", Proceedings of the 17th IEEE Symposium on Reliable Distributed Systems, pp. 330-335, 1998.
[3] Cruz-Chavez, M., Rodriguez-Leon, A., Avila-Melgar, E., Juarez-Perez, F., Cruz-Rosales, M., Rivera-Lopez, R., "Genetic-Annealing Algorithm in Grid Environment for Scheduling Problems", Security-Enriched Urban Computing and Smart Grid, Communications in Computer and Information Science, Springer, Vol. 78, pp. 1-9, 2010.
[5] Ghaedrahmati, V., Enayatallah Alavi, S., Attarzadeh, I., "A Reliable and Hybrid Scheduling Algorithm based on Cost and Time Balancing for Computational Grid", ACSIJ Advances in Computer Science: An International Journal, Vol. 3, Issue 3, No. 9, ISSN: 2322-5157, May 2014.
[6] Gharooni fard, G., Moein darbari, F., Deldari, H., Morvaridi, A., "Scheduling of Scientific Workflows Using a Chaos-Genetic Algorithm", International Conference on Computational Science (ICCS), pp. 1439-1448, 2010.
[7] Holland, J., "Adaptation in Natural and Artificial Systems", Ann Arbor, MI: Univ. of Michigan Press, 1975.
[8] Kardani-Moghaddam, S., Khodadadi, F., Entezari-Maleki, R., Movaghar, A., "A Hybrid Genetic Algorithm and Variable Neighborhood Search for Task Scheduling Problem in Grid Environment", Procedia Engineering, Vol. 29, pp. 3808-3814, 2012.
[9] Pooranian, Z., Harounabadi, A., Shojafar, M., Hedayat, N., "New Hybrid Algorithm for Task Scheduling in Grid Computing to Decrease Missed Task", World Academy of Science, Engineering and Technology, Vol. 55, pp. 924-928, 2011.
[10] Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S., "BGSA: Binary Gravitational Search Algorithm", Natural Computing, Vol. 9, No. 3, pp. 727-745, 2010.
[11] Shojafar, M., Pooranian, Z., Abawajy, J. H., Meybodi, M., "An Efficient Scheduling Method for Grid Systems Based on a Hierarchical Stochastic Petri Net", Journal of Computing Science and Engineering, Vol. 7, No. 1, pp. 44-52, March 2013.
[13] Young, L., McGough, S., Newhouse, S., Darlington, J., "Scheduling Architecture and Algorithms within the ICENI Grid Middleware", Proc. of the UK e-Science All Hands Meeting, pp. 5-12, Nottingham, UK, September 2003.
[14] Zhang, L., Chen, Y., Sun, R., Jing, S., Yang, B., "A Task Scheduling Algorithm Based on PSO for Grid Computing", International Journal of Computational Intelligence Research, Vol. 14, No. 1, pp. 37-43, 2008.
[15] Zarrabi, A., Samsudin, K., "Task Scheduling on Computational Grids Using Gravitational Search Algorithm", Department of Computer and Communication Systems Engineering, University Putra Malaysia, Vol. 17, Issue 3, pp. 1001-1011, ISSN: 1573-7543, December 2013.