A performance analysis of metaheuristics and hybrid metaheuristics for the traveling salesman problem is presented. Four classical single metaheuristics (genetic algorithm, memetic algorithm, iterated local search, and simulated annealing) were used, together with hybrid variants employing nine different heuristic techniques for local search, mutation, and intensification. The performance analysis was carried out with the Friedman test. For the simulated annealing and local search algorithms, statistical evidence was found that hybridization makes a difference in performance, while no such evidence was found for the genetic and memetic algorithms. Six combinations were found to improve performance, five based on local search and one on simulated annealing.
A NEW APPROACH IN DYNAMIC TRAVELING SALESMAN PROBLEM: A HYBRID OF ANT COLONY ... (ijmpict)
Swarm intelligence-based algorithms are widely used to optimize the dynamic traveling salesman problem (DTSP). In this paper we use a hybrid of Ant Colony Optimization (ACO) and gradient descent to optimize the DTSP; it differs from the standard ACO algorithm in its evaporation rate and in the use of new data. This approach prevents premature convergence, allows escape from local optima, and makes it possible for the algorithm to find better solutions. Compared with several earlier methods, the combined gradient descent and ACO algorithm shows significantly improved route optimization.
In recent years, consumers and legislation have been pushing companies to optimize their activities so as to reduce negative environmental and social impacts. On the other hand, companies must keep their total supply chain costs as low as possible to remain competitive. This work aims to develop a model of the traveling salesman problem that includes environmental impacts, and to identify, as far as possible, the contribution of genetic operator tuning and parameter setting to the success and efficiency of genetic algorithms in solving this problem, taking into account the CO2 emissions due to transport. Efficiency is measured in terms of CPU time and convergence of the solution. The best transportation policy is determined by finding a balance between financial and environmental criteria. Empirically, we demonstrate that the performance of the genetic algorithm improves markedly under certain combinations of parameters and operators, which we present in the results section.
AUTOMATIC TRANSFER RATE ADJUSTMENT FOR TRANSFER REINFORCEMENT LEARNING
This paper proposes a novel parameter for transfer reinforcement learning to avoid over-fitting when an
agent uses a transferred policy from a source task. Learning robot systems have recently been studied for
many applications, such as home robots, communication robots, and warehouse robots. However, if the
agent reuses the knowledge that has been sufficiently learned in the source task, deadlock may occur and
appropriate transfer learning may not be realized. In previous work, a parameter called the transfer rate was proposed to adjust the ratio of transfer; its contributions include avoiding deadlock in the target task. However, adjusting this parameter depends on human intuition and experience, and a method for deciding the transfer rate has not been discussed. Therefore, this paper proposes an automatic method for adjusting the transfer rate using a sigmoid function. Computer simulations are used to evaluate the effectiveness of the proposed method in improving environmental adaptation performance in a target task, i.e., a situation in which knowledge is reused.
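A sigmoid-based schedule like the one described can be sketched as follows. This is a minimal illustration only: the midpoint and steepness values are hypothetical, not taken from the paper.

```python
import math

def transfer_rate(episode, midpoint=50.0, steepness=0.1):
    """Decaying sigmoid: near 1 early (rely on the transferred source policy),
    near 0 later (rely on the target task's own learning).
    midpoint and steepness are illustrative values, not from the paper."""
    return 1.0 / (1.0 + math.exp(steepness * (episode - midpoint)))

# Early episodes lean on the source policy; later episodes on the learned one.
early = transfer_rate(0)     # close to 1
late = transfer_rate(100)    # close to 0
```

The schedule decreases monotonically, so the agent gradually shifts from reusing source-task knowledge to acting on its own learned policy.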
The Effect of Genetic Algorithm Parameters Tuning for Route Optimization in T... (Muhammad Irfan Kemal)
This study analyzes the effect of population size, crossover probability, mutation probability, and the number of iterations on the distribution mileage of Indonesia's largest logistics service provider in the Central Jakarta area, with 43 distribution locations.
Collocation Extraction Performance Ratings Using Fuzzy Logic (Waqas Tariq)
The performance of collocation extraction cannot be quantified or properly expressed by a single dimension, and it is very imprecise to interpret collocation extraction metrics without knowing which applications (users) are involved. Most of the existing collocation extraction techniques are those of Berry-Roughe, Church and Hanks, Kita, Shimohata, Blaheta and Johnson, and Pearce. Extraction techniques need to be updated frequently based on feedback from the implementation of previous policies. This feedback is usually stated in the form of ordinal ratings, e.g. “high speed”, “average performance”, “good condition”. Different people can ascribe different values to these ordinal ratings without a clear-cut reason or scientific basis, so a means is needed to transform vague ordinal ratings into more precise numerical estimates. This paper transforms the ordinal performance ratings of some collocation extraction techniques into numerical ratings using fuzzy logic. Keywords: fuzzy set theory, collocation extraction, transformation, performance techniques, criteria.
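The ordinal-to-numeric transformation described here can be sketched with triangular fuzzy sets and centroid defuzzification. The membership functions and the 0-100 scale below are hypothetical examples, not the paper's actual sets.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for ordinal performance ratings on a 0-100 scale.
RATINGS = {
    "low":     (0, 0, 50),
    "average": (25, 50, 75),
    "high":    (50, 100, 100),
}

def defuzzify(label, step=1):
    """Centroid defuzzification: turn an ordinal label into a crisp number."""
    a, b, c = RATINGS[label]
    xs = range(0, 101, step)
    num = sum(x * triangular(x, a, b, c) for x in xs)
    den = sum(triangular(x, a, b, c) for x in xs)
    return num / den
```

With these sets, "average" defuzzifies to the midpoint of the scale, while "low" and "high" land below and above it, giving each ordinal label a reproducible numeric estimate.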
A MODIFIED VORTEX SEARCH ALGORITHM FOR NUMERICAL FUNCTION OPTIMIZATION (ijaia)
The Vortex Search (VS) algorithm is a recently proposed metaheuristic inspired by the vortical flow of stirred fluids. Although VS has been shown to be a good candidate for the solution of certain optimization problems, it also has some drawbacks. In VS, candidate solutions are generated around the current best solution using a Gaussian distribution at each iteration. This keeps the algorithm simple, but it also causes problems: for functions with many local minima, generating candidates around a single point can trap the algorithm in a local minimum. Because of the adaptive step-size adjustment scheme used in VS, the locality of the generated candidates increases at each iteration, so if the algorithm cannot escape a local minimum quickly, escaping it in later iterations becomes much harder. In this study, a modified Vortex Search algorithm (MVS) is proposed to overcome this drawback: in MVS, candidate solutions are generated around several points at each iteration. Computational results show that this modification improves the global search ability of VS, and that MVS outperforms the existing VS, PSO2011, and ABC algorithms on the benchmark numerical function set.
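The core difference between VS and MVS — sampling around one center versus several — can be sketched as below. This is only an illustration of the sampling idea; the real algorithms' step-size adaptation and center-selection rules are omitted.

```python
import random

def generate_candidates(centers, sigma, n_candidates, bounds):
    """Gaussian sampling around centers (VS uses a single center, the current
    best; the MVS modification uses several). Values are clipped to bounds."""
    lo, hi = bounds
    candidates = []
    for _ in range(n_candidates):
        center = random.choice(centers)   # with one center this mimics VS
        point = [min(hi, max(lo, random.gauss(c, sigma))) for c in center]
        candidates.append(point)
    return candidates

random.seed(0)
# Two centers in 2-D, step size sigma = 0.5, search space [-5, 5].
cands = generate_candidates([[0.0, 0.0], [3.0, 3.0]], 0.5, 10, (-5.0, 5.0))
```

Sampling around multiple centers spreads the candidates over several basins at once, which is exactly what lets MVS avoid the single-point trapping the abstract describes.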
AHP technique: a way to show preferences amongst alternatives (ijsrd.com)
This article presents a review of applications of the Analytic Hierarchy Process (AHP). AHP is a multiple-criteria decision-making tool that has been used in almost all applications related to decision making. Decisions involve many intangibles that need to be traded off. AHP is a theory of measurement through pairwise comparisons that relies on the judgements of experts to derive priority scales; it is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements representing how much more one element dominates another with respect to a given attribute. The judgements may be inconsistent, and measuring inconsistency and improving the judgements, when possible, to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priorities of their parent nodes and summing over all such nodes. An illustration is also included.
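The priority-scale and consistency machinery of AHP can be sketched briefly. The row geometric mean below is a common approximation to the principal-eigenvector method, and the random index values are Saaty's standard ones for small matrices; the example judgement matrix is hypothetical.

```python
import math

def ahp_priorities(matrix):
    """Approximate AHP priority vector via the row geometric mean
    (a common stand-in for the principal eigenvector method)."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix):
    """Saaty's consistency ratio CR = CI / RI, CI = (lambda_max - n)/(n - 1)."""
    n = len(matrix)
    w = ahp_priorities(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index, small n
    return ci / ri

# Hypothetical pairwise judgements: A slightly preferred to B, strongly to C.
A = [[1.0, 2.0, 5.0],
     [0.5, 1.0, 3.0],
     [0.2, 1 / 3, 1.0]]
w = ahp_priorities(A)
```

A CR below 0.1 is conventionally taken to mean the judgements are acceptably consistent; this example matrix passes that check.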
Evolutionary computing is a research area within computer science. As the name suggests, it is a special flavour of computing that draws inspiration from the process of natural evolution. The fundamental metaphor of evolutionary computing relates this powerful natural process to a particular style of problem solving: trial and error.
Analysis strategies for constrained mixture and mixture process experiments u... (Philip Ramsey)
Approaches to analyzing constrained mixture and mixture process factor experiments. Originally presented at the European Discovery Conference in Copenhagen on 3/14/2019
To demonstrate our approaches we use Sudoku puzzles, which are an excellent test bed for evolutionary algorithms. The puzzles are accessible enough for people to enjoy, yet the more complex ones require thousands of iterations before an evolutionary algorithm finds a solution. To compare evolutionary algorithms, we could count their iterations to a solution as an indicator of relative efficiency. Evolutionary algorithms, however, include a process of random mutation of solution candidates. We show that by improving the random mutation behaviours we were able to solve problems with minimal evolutionary optimisation; experiments demonstrated that the random mutation was at times more effective at solving the harder problems than the evolutionary algorithms themselves. This implies that the quality of random mutation may have a significant impact on the performance of evolutionary algorithms on Sudoku puzzles. This random mutation may also hold promise for reuse in hybrid evolutionary algorithms.
Manager’s Preferences Modeling within Multi-Criteria Flowshop Scheduling Prob... (Waqas Tariq)
This paper proposes a metaheuristic, based on tabu search, to solve the permutation flow shop scheduling problem when several criteria are considered, such as the makespan, total flowtime, and total tardiness of jobs. The compromise programming model and the concept of satisfaction functions are used to explicitly integrate the manager's preferences. The approach has been tested in a computational experiment; it can be useful for large-scale scheduling problems, and the manager can consider additional scheduling criteria.
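The makespan criterion in a permutation flow shop can be evaluated with a simple recurrence, which any such metaheuristic needs as its inner loop. A minimal sketch with illustrative data (not from the paper):

```python
def makespan(sequence, proc_times):
    """Completion time of the last job on the last machine for a given
    job sequence. proc_times[j][m] is job j's time on machine m.
    A job starts on machine m only when both the machine is free and the
    job has finished on machine m-1."""
    n_machines = len(proc_times[0])
    finish = [0.0] * n_machines          # finish time per machine so far
    for job in sequence:
        for m in range(n_machines):
            start = max(finish[m], finish[m - 1] if m > 0 else 0.0)
            finish[m] = start + proc_times[job][m]
    return finish[-1]

# Three jobs on two machines (illustrative processing times).
times = [[3, 2], [1, 4], [2, 2]]
```

Here makespan([0, 1, 2], times) gives 11 while makespan([1, 0, 2], times) gives 9, showing how the job sequence — the decision variable of the tabu search — changes the criterion value.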
Approximation models (or surrogate models) provide an efficient substitute for expensive physical simulations and an efficient remedy for the lack of physical models of system behavior. However, it is challenging to quantify the accuracy and reliability of such approximation models in a region of interest, or over the whole domain, without additional system evaluations. Standard error measures, such as the mean squared error, the cross-validation error, and Akaike's information criterion, provide limited (often inadequate) information regarding the accuracy of the final surrogate. This paper introduces a novel, model-independent concept to quantify the level of error in the function values estimated by the final surrogate in any given region of the design domain, called Regional Error Estimation of Surrogates (REES). Assuming the full set of available sample points to be fixed, intermediate surrogates are iteratively constructed over a sample set comprising all samples outside the region of interest together with heuristic subsets of samples inside the region of interest (intermediate training points). Each intermediate surrogate is tested on the remaining sample points inside the region of interest (intermediate test points). The fraction of sample points inside the region of interest used as intermediate training points is fixed at each iteration, with the total number of iterations pre-specified. The estimated median and maximum relative errors within the region of interest for the heuristic subsets at each iteration are used to fit distributions of the median and maximum error, respectively. The estimated statistical modes of the median and maximum error, and the absolute maximum error, are then represented as functions of the density of intermediate training points using regression models. The regression models are then used to predict the expected median and maximum regional errors when all the sample points are used as training points. Standard test functions and a wind farm power generation problem illustrate the effectiveness and utility of this regional error quantification method.
The potential role of AI in the minimisation and mitigation of project delay (Pieter Rautenbach)
Artificial intelligence (AI) can have wide-reaching application within the construction industry; however, this set of technologies is currently under-exploited in practice. This paper considers the role that the application of AI can play in optimising the efficiency of project execution, and how this can potentially reduce project duration and minimise and mitigate delay on projects.
Adversarially Guided Actor-Critic, Y. Flet-Berliac et al., 2021 (Chris Ohk)
A summary of the Adversarially Guided Actor-Critic paper, presented at an RL paper review study group. AGAC combines actor-critic with GAN-inspired techniques and shows strong performance in environments where rewards are sparse and exploration is difficult. I hope many people find it helpful.
An Introduction to Reinforcement Learning - The Doors to AGI (Anirban Santara)
Reinforcement Learning (RL) is a genre of Machine Learning in which an agent learns to choose optimal actions in different states in order to reach its specified goal, solely by interacting with the environment through trial and error. Unlike supervised learning, the agent does not get examples of "correct" actions in given states as ground truth. Instead, it has to use feedback from the environment (which can be sparse and delayed) to improve its policy over time. The formulation of the RL problem closely resembles the way in which human beings learn to act in different situations. Hence it is often considered the gateway to achieving the goal of Artificial General Intelligence.
The aim of this talk is to introduce the audience to key theoretical concepts, such as the formulation of the RL problem as a Markov Decision Process (MDP) and the solution of MDPs using dynamic programming and policy gradient based algorithms. State-of-the-art deep reinforcement learning algorithms will also be covered, along with a case study of the application of reinforcement learning in robotics.
A Genetic Algorithm on Optimization Test Functions (IJMERJOURNAL)
ABSTRACT: Genetic Algorithms (GAs) have become increasingly useful over the years for solving combinatorial problems. Though they are generally accepted as good performers among metaheuristic algorithms, most works have concentrated on applications of GAs rather than theoretical justification. In this paper we examine and justify the suitability of genetic algorithms for solving complex, multi-variable, multi-modal optimization problems. To this end, a simple genetic algorithm was used to solve four standard, complicated optimization test functions, namely the Rosenbrock, Schwefel, Rastrigin, and Shubert functions; these functions are benchmarks for testing the quality of an optimization procedure in reaching a global optimum. We show that the method converges quickly to the global optima, and that the optimal values found for the Rosenbrock, Rastrigin, Schwefel, and Shubert functions are zero (0), zero (0), -418.9829, and -14.5080 respectively.
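Two of the benchmarks named above have compact closed forms, sketched here with their known global minima (the standard textbook definitions, which I assume match the variants used in the paper):

```python
import math

def rastrigin(x):
    """Rastrigin benchmark: highly multimodal, global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def rosenbrock(x):
    """Rosenbrock benchmark: a curved narrow valley, global minimum 0
    at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

Evaluating at the known optima (the origin for Rastrigin, the all-ones point for Rosenbrock) returns zero, which is how such benchmarks let one verify that an optimizer has actually reached the global optimum.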
Differential Evolution (DE) is a renowned optimization stratagem that can easily solve nonlinear and comprehensive problems. DE is a well known and uncomplicated population based probabilistic approach for comprehensive optimization. It has apparently outperformed a number of Evolutionary Algorithms and further search heuristics in the vein of Particle Swarm Optimization at what time of testing over both yardstick and actual world problems. Nevertheless, DE, like other probabilistic optimization algorithms, from time to time exhibits precipitate convergence and stagnates at suboptimal position. In order to stay away from stagnation behavior while maintaining an excellent convergence speed, an innovative search strategy is introduced, named memetic search in DE. In the planned strategy, positions update equation customized as per a memetic search stratagem. In this strategy a better solution participates more times in the position modernize procedure. The position update equation is inspired from the memetic search in artificial bee colony algorithm. The proposed strategy is named as Memetic Search in Differential Evolution (MSDE). To prove efficiency and efficacy of MSDE, it is tested over 8 benchmark optimization problems and three real world optimization problems. A comparative analysis has also been carried out among proposed MSDE and original DE. Results show that the anticipated algorithm go one better than the basic DE and its recent deviations in a good number of the experiments.
Multi-Population Methods with Adaptive Mutation for Multi-Modal Optimization ...ijscai
This paper presents an efficient scheme to locate multiple peaks on multi-modal optimization problems by
using genetic algorithms (GAs). The premature convergence problem shows due to the loss of diversity,
the multi-population technique can be applied to maintain the diversity in the population and the
convergence capacity of GAs. The proposed scheme is the combination of multi-population with adaptive
mutation operator, which determines two different mutation probabilities for different sites of the
solutions. The probabilities are updated by the fitness and distribution of solutions in the search space
during the evolution process. The experimental results demonstrate the performance of the proposed
algorithm based on a set of benchmark problems in comparison with relevant algorithms.
Hybrid Multi-Gradient Explorer Algorithm for Global Multi-Objective OptimizationeArtius, Inc.
Hybrid Multi-Gradient Explorer (HMGE) algorithm for global multi-objective
optimization of objective functions considered in a multi-dimensional domain is presented. The proposed hybrid algorithm relies on genetic variation operators for creating new solutions, but in addition to a standard random mutation operator, HMGE
uses a gradient mutation operator, which improves convergence. Thus, random mutation helps find global Pareto frontier, and gradient mutation improves convergence to the
Pareto frontier. In such a way HMGE algorithm combines advantages of both
gradient-based and GA-based optimization techniques: it is as fast as a pure gradient-based MGE algorithm, and is able to find the global Pareto frontier similar to genetic algorithms
(GA). HMGE employs Dynamically Dimensioned Response Surface Method (DDRSM) for calculating gradients. DDRSM dynamically recognizes the most significant design variables, and builds local approximations based only on the variables. This allows one to
estimate gradients by the price of 4-5 model evaluations without significant loss of accuracy. As a result, HMGE efficiently optimizes highly non-linear models with dozens and hundreds of design variables, and with multiple Pareto fronts. HMGE efficiency is 2-10
times higher when compared to the most advanced commercial GAs.
A Non-Revisiting Genetic Algorithm for Optimizing Numeric Multi-Dimensional F...ijcsa
Genetic Algorithm (GA) is a robust and popular stochastic optimization algorithm for large and complex search spaces. The major shortcomings of Genetic Algorithms are premature convergence and revisits to individual solutions in the search space. In other words, Genetic algorithm is a revisiting algorithm that escorts to duplicate function evaluations which is a clear wastage of time and computational resources. In this paper, a non-revisiting genetic algorithm with adaptive mutation is proposed for the domain of MultiDimensional numeric function optimization. In this algorithm whenever a revisit occurs, the underlined search point is replaced with a mutated version of the best/random (chosen probabilistically) individual from the GA population. Furthermore, the recommended approach is not using any extra memory resources to avoid revisits. To analyze the influence of the method, the proposed non-revisiting algorithm is evaluated using nine benchmarks functions with two and four dimensions. The performance of the proposed genetic algorithm is superior as contrasted to simple genetic algorithm as confirmed by the experimental results.
Traveling Salesman Problem (TSP) is a kind of NPHard problem which cant be solved in polynomial time for
asymptotically large values of n. In this paper a balanced combination of Genetic algorithm and Simulated Annealing is used. To
improve the performance of finding optimal solution from huge
search space, we have incorporated the use of tournament and
rank as selection operator. And Inver-over operator Mechanism
for crossover and mutation . To illustrate it more clearly an
implementation in C++ (4.9.9.2) has been done.
Index Terms—Genetic Algorithm (GA) , Simulated Annealing
(SA) , Inver-over operator , Lin-Kernighan algorithm , selection
operator , crossover operator , mutation operator.
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT...IAEME Publication
Close range photogrammetry network design is referred to the process of placing a set of
cameras in order to achieve photogrammetric tasks. The main objective of this paper is tried to find
the best location of two/three camera stations. The genetic algorithm optimization and Particle
Swarm Optimization are developed to determine the optimal camera stations for computing the three
dimensional coordinates. In this research, a mathematical model representing the genetic algorithm
optimization and Particle Swarm Optimization for the close range photogrammetry network is
developed. This paper gives also the sequence of the field operations and computational steps for this
task. A test field is included to reinforce the theoretical aspects.
Artificial Intelligence in Robot Path Planningiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Constructing a classification model is important in machine learning for a particular task. A
classification process involves assigning objects into predefined groups or classes based on a
number of observed attributes related to those objects. Artificial neural network is one of the
classification algorithms which, can be used in many application areas. This paper investigates
the potential of applying the feed forward neural network architecture for the classification of
medical datasets. Migration based differential evolution algorithm (MBDE) is chosen and
applied to feed forward neural network to enhance the learning process and the network
learning is validated in terms of convergence rate and classification accuracy. In this paper,
MBDE algorithm with various migration policies is proposed for classification problems using
medical diagnosis.
MEDICAL DIAGNOSIS CLASSIFICATION USING MIGRATION BASED DIFFERENTIAL EVOLUTION...cscpconf
Constructing a classification model is important in machine learning for a particular task. A
classification process involves assigning objects into predefined groups or classes based on a
number of observed attributes related to those objects. Artificial neural network is one of the
classification algorithms which, can be used in many application areas. This paper investigates
the potential of applying the feed forward neural network architecture for the classification of
medical datasets. Migration based differential evolution algorithm (MBDE) is chosen and
applied to feed forward neural network to enhance the learning process and the network
learning is validated in terms of convergence rate and classification accuracy. In this paper,
MBDE algorithm with various migration policies is proposed for classification problems using
medical diagnosis.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Essentials of Automations: The Art of Triggers and Actions in FME
Performance Analysis of Metaheuristics and Hybrid Metaheuristics for the Traveling Salesman Problem
David A. Gutiérrez-Hernández*,1, Marco A. Escobar**, Josué Del Valle-Hernández*, Miguel Gómez-Díaz*, José Luis Villanueva-Rodríguez*, Juan M. López-López***, Claudia Lara-Rendón*, Rossana Rodríguez-Montero*, Héctor Nava-Martínez****
*Tecnológico Nacional de México, Instituto Tecnológico de León, León, Guanajuato, 37290, México
**Universidad de La Salle Bajío campus Salamanca, Salamanca, Guanajuato, 36700, México
***Escuela Colombiana de Ingeniería Julio Garavito, Bogotá, Colombia
****Centro de Innovación Aplicada en Tecnologías Competitivas, León, Guanajuato, 37545, México
1 david.gutierrez@itleon.edu.mx
Abstract– A performance analysis of metaheuristics and hybrid metaheuristics for the traveling salesman problem is presented. Four classical metaheuristics (genetic algorithm, memetic algorithm, iterated local search, and simulated annealing) were used, together with hybrid variants that employ nine different heuristic techniques for local search, mutation, and intensification. The performance analysis was made using the Friedman test. For the simulated annealing and iterated local search algorithms, statistical evidence was found that hybridization makes a difference in performance, while no such evidence was found for the genetic and memetic algorithms. Up to six combinations were found to improve performance, five of them based on local search and one more based on simulated annealing.
Keywords— Metaheuristics; heuristics; traveling salesman problem; combinatorial optimization; Friedman test.
I. INTRODUCTION
Optimization problems with a finite but very large number of solutions are of interest in a wide range of fields. Such problems are classified as combinatorial optimization (CO) problems. In principle, the globally optimal solution can be found by enumerating and evaluating every candidate solution, but this approach rapidly becomes intractable because most solution spaces grow exponentially. Metaheuristics simplify this job, since they can find solutions close to the optimum in a reasonable time. Metaheuristics evolved because most modern problems are computationally intractable and need heuristic guidance to find good, though not necessarily optimal, solutions. Some of the most used techniques include genetic algorithms (GA), memetic algorithms (MA), iterated local search (ILS), and simulated annealing (SA).
The traveling salesman problem is a CO problem that has been studied extensively and is often used as a test for new optimization algorithms; heuristic techniques have been tested on many of its instances. In this work, a Friedman test analysis was performed to determine whether the use of heuristics for local search, mutation, and intensification has an impact on performance compared to the single classical metaheuristics, and the results are ranked according to their performance.
II. THEORETICAL DESCRIPTION
A. Heuristics
A heuristic technique is a process that offers a good solution to a particular problem, even though that solution might not be optimal. Generally speaking, these techniques are applied to problems that are difficult to solve exactly and where a quick, easy-to-compute solution is valuable (Zanakis & Evans, 1981).
The heuristics used in this work are: Random Insertion (Twors), Reverse Sequence Mutation (RSM), Thors (Abdoun, Abouchabaka, & Tajani, 2012), OPT (Blazinskas & Misevicius, 2009; Gutin & Punnen, 2007), georeferenced intersection, Centre inverse mutation, Closest insertion, and Throas mutation (Abdoun et al., 2012). These techniques are intended to help solve the traveling salesman problem (Bang-Jensen, Gutin, & Yeo, 2004; Bendall & Margot, 2006; Talbi, 2009; Dorigo & Stützle, 2010; Cook, 2011).
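As an illustration of how these mutation-style heuristics act on a tour, the following is a minimal sketch of the Reverse Sequence Mutation operator (our own sketch, not the authors' implementation): two positions are chosen at random and the sequence of cities between them is reversed, which always leaves a valid permutation.

```python
import random

def reverse_sequence_mutation(tour, rng):
    """RSM: choose two positions at random and reverse the sequence
    of cities between them, preserving the permutation property."""
    i, j = sorted(rng.sample(range(len(tour)), 2))
    mutated = tour[:]
    mutated[i:j + 1] = mutated[i:j + 1][::-1]
    return mutated

# Mutate a 10-city tour; the result is still a permutation of 0..9.
rng = random.Random(42)
child = reverse_sequence_mutation(list(range(10)), rng)
```

Because the reversed segment always contains at least two distinct cities, the mutated tour differs from its parent while remaining feasible.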
B. Metaheuristics
Metaheuristics are iterative search strategies that guide the process over the search space in the hope of finding the optimal solution (Glover, 1986). In general, metaheuristics combine basic heuristic methods in higher-level frameworks aimed at exploring a search space efficiently and effectively. Metaheuristics scale to large problems by obtaining a satisfactory solution in a reasonable time. When working with metaheuristics there is no guarantee that a global optimum, or even a solution close to it, will be found. However, these techniques have gained popularity over the last 20 years because, in several applications, they have proven efficient and effective for large, complex problems. In metaheuristic design, two opposing criteria must be balanced: an efficient exploration of the search space (diversification), and a focus on local regions where good solutions have been found (intensification).
International Journal of Computer Science and Information Security (IJCSIS),
Vol. 16, No. 6, June 2018
195 https://sites.google.com/site/ijcsis/
ISSN 1947-5500
C. Genetic Algorithms
GAs are adaptive exploration methods that can be used for search and optimization. They are inspired by the natural selection that drives the dynamics of biological populations. GAs use a probabilistic selection of individuals for the crossover operation. Replacement of individuals is generational, meaning that the children systematically replace the parents. Crossover is based on n points or a steady state, while mutation is performed as the interchange of bits or characteristics.
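The generational scheme just described can be sketched as follows (a sketch of ours, not the paper's code; the OneMax fitness function, population size, and mutation rate are illustrative choices): roulette selection, n-point crossover, bit-flip mutation, and children replacing parents each generation.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      n_points=2, p_mut=0.01, seed=0):
    """Minimal generational GA: fitness-proportional (roulette)
    selection, n-point crossover, bit-flip mutation, and children
    that systematically replace the parents each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(scores):
        # Roulette-wheel selection: probability proportional to fitness.
        r, acc = rng.uniform(0, sum(scores)), 0.0
        for ind, s in zip(pop, scores):
            acc += s
            if acc >= r:
                return ind
        return pop[-1]

    def crossover(a, b):
        # n-point crossover: alternate between parents at the cut points.
        cuts = sorted(rng.sample(range(1, n_bits), n_points))
        child, take_a, prev = [], True, 0
        for cut in cuts + [n_bits]:
            child += (a if take_a else b)[prev:cut]
            take_a, prev = not take_a, cut
        return child

    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        nxt = []
        for _ in range(pop_size):
            child = crossover(select(scores), select(scores))
            for i in range(n_bits):
                if rng.random() < p_mut:
                    child[i] ^= 1  # bit-flip mutation
            nxt.append(child)
        pop = nxt  # generational replacement
    return max(pop, key=fitness)

# OneMax (count of 1 bits) as an illustrative fitness function.
best = genetic_algorithm(sum)
```

For the TSP, the bitstring representation would be replaced by a permutation encoding with permutation-preserving operators such as those of Section II.A.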
D. Memetic Algorithms
An MA is composed of two parts: a genetic algorithm and a local search. The local search modifies an individual (or the whole population) by copying and perturbing it in order to obtain an individual with better fitness.
E. Iterated Local Search
ILS uses an embedded local-search component, iteratively restarting it from different promising areas of the search space. The solutions obtained are better than those from random restarts without heuristics.
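A minimal ILS sketch for the TSP (ours, not the paper's implementation; the 2-opt local search and double-bridge perturbation are common textbook choices, not necessarily the ones used in the experiments):

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Descend to a 2-opt local optimum: keep reversing any segment
    whose reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 1, n):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def double_bridge(tour, rng):
    """A common ILS perturbation: cut the tour into four pieces and
    reconnect them in a different order."""
    a, b, c = sorted(rng.sample(range(1, len(tour)), 3))
    return tour[:a] + tour[c:] + tour[b:c] + tour[a:b]

def iterated_local_search(dist, iterations=30, seed=0):
    rng = random.Random(seed)
    start = list(range(len(dist)))
    rng.shuffle(start)
    best = two_opt(start, dist)
    for _ in range(iterations):
        cand = two_opt(double_bridge(best, rng), dist)
        if tour_length(cand, dist) < tour_length(best, dist):
            best = cand
    return best, tour_length(best, dist)

# Six cities on a regular hexagon; the optimal closed tour is the
# perimeter, of length 6 (each side has length 1).
pts = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour, length = iterated_local_search(dist)
```

The perturbation is strong enough to escape the basin of the current local optimum, while the local search quickly repairs the tour.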
F. Simulated Annealing
SA is a local search algorithm inspired by the annealing process in metallurgy. Starting from a candidate solution, it repeatedly perturbs the current solution and accepts worse moves with a probability that decreases as a temperature parameter is lowered, which allows the search to escape local optima early on and to converge as the temperature drops.
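The acceptance rule can be sketched as follows (a minimal sketch of ours, assuming a segment-reversal perturbation and a geometric cooling schedule; these choices are illustrative, not the paper's):

```python
import math
import random

def tour_length(tour, dist):
    """Length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, t0=10.0, cooling=0.999, steps=20000, seed=1):
    """SA for the TSP: propose a random segment reversal and accept a
    worse tour with probability exp(-delta / T); T decays geometrically."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    t = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, dist) - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        t *= cooling
    return best, best_len

# Illustration on four cities at the corners of a unit square
# (the optimal closed tour has length 4).
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
best, best_len = simulated_annealing(dist)
```

Replacing the random segment reversal with a heuristic such as RSM yields the hybrid SA variants analyzed in this work.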
G. Traveling Salesman Problem
The traveling salesman problem is an NP-hard problem and one of the best-known combinatorial optimization problems. Given n cities and the geographical distances between each pair of them, the task is to find the shortest closed tour in which each city is visited exactly once.
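The intractability of exhaustive enumeration mentioned in the introduction is easy to see in code: fixing the starting city still leaves (n-1)! tours, so the following exact solver (our illustrative sketch) is only usable for very small n.

```python
from itertools import permutations

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def brute_force_tsp(dist):
    """Exact TSP by enumeration: fix city 0 and try all (n-1)!
    orderings of the remaining cities; tractable only for tiny n."""
    n = len(dist)
    best = min(permutations(range(1, n)),
               key=lambda p: tour_length((0,) + p, dist))
    return (0,) + best, tour_length((0,) + best, dist)

# Four cities at the corners of a unit square: 3! = 6 tours to check.
dist = [[0, 1, 2**0.5, 1],
        [1, 0, 1, 2**0.5],
        [2**0.5, 1, 0, 1],
        [1, 2**0.5, 1, 0]]
tour, length = brute_force_tsp(dist)
```

For the 100-city TSPLIB instances used below, 99! tours would have to be enumerated, which is why metaheuristics are used instead.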
H. Friedman's Test
The Friedman test is a multiple-comparison test whose null hypothesis is that the performance of all the algorithms under comparison is similar. It yields a ranking of the algorithms according to their performance with respect to a control algorithm. To corroborate the ranking, a post-hoc procedure is required to identify the differences between the control algorithm and the others.
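The ranking at the heart of the test can be computed as follows (a pure-Python sketch of ours; the tie-correction term of the full test is omitted for brevity): algorithms are ranked within each problem instance, and the statistic measures how far the mean ranks deviate from what equal performance would give.

```python
def average_ranks(row):
    """Rank the algorithms within one block (problem instance),
    assigning average ranks to ties; rank 1 = best (lowest) value."""
    order = sorted(range(len(row)), key=lambda j: row[j])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def friedman_statistic(results):
    """results[i][j]: score of algorithm j on instance i (lower is better).
    Returns the Friedman chi-square statistic and each algorithm's mean rank."""
    n, k = len(results), len(results[0])
    totals = [0.0] * k
    for row in results:
        for j, r in enumerate(average_ranks(row)):
            totals[j] += r
    mean = [t / n for t in totals]
    stat = 12 * n / (k * (k + 1)) * (sum(R * R for R in mean) - k * (k + 1) ** 2 / 4)
    return stat, mean

# Three instances, three algorithms; algorithm 0 is always best.
stat, mean = friedman_statistic([[1.0, 2.0, 3.0],
                                 [1.0, 2.0, 3.0],
                                 [1.0, 2.0, 3.0]])
```

In this work each block is one TSPLIB instance (n = 5) and each column one metaheuristic-heuristic combination.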
I. Holm's Procedure
Holm's procedure is used for post-hoc testing; the method is designed for multiple hypothesis testing, iteratively accepting or rejecting each hypothesis. The procedure begins by ordering the m hypotheses by their p-values; each p-value is then compared with its alpha threshold, calculated as

α_i = α / (m − i + 1)    (1)

where i is the index of the ordered p-values. If the p-value is smaller than the threshold, the hypothesis is rejected and the remaining p-values are compared using their corresponding alpha values. The procedure rejects H1 to H(i−1) until p_i > α_i.
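The step-down loop of eq. (1) is straightforward to implement; the sketch below (ours) reproduces the rejections later reported in Table 4 when fed its p-values with α = 0.01.

```python
def holm_procedure(p_values, alpha=0.01):
    """Holm step-down test: sort the m p-values ascending and compare
    the i-th (1-based) with alpha / (m - i + 1), as in eq. (1); stop at
    the first hypothesis that cannot be rejected.
    Returns one rejection flag per hypothesis, in the original order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] < alpha / (m - rank + 1):
            rejected[idx] = True
        else:
            break  # retain this and all remaining hypotheses
    return rejected

# p-values of the eight combinations reported in Table 4 (alpha = 0.01):
# only the first three fall below their adjusted thresholds.
ps = [9.303e-07, 3.609e-05, 1.811e-04, 2.9849e-03,
      2.81858e-02, 7.07011e-02, 3.016995e-01, 8.972789e-01]
flags = holm_procedure(ps)
```

The adjusted thresholds 0.01/8, 0.01/7, ... match the "Adjusted α" column of Table 4.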
Fig. 1. Multiple EAs and SAs running in parallel.
III. NUMERICAL EXPERIMENTS
Numerical experiments were performed under equal conditions, with the parameters presented in Table 1. A total of 99 metaheuristic-heuristic combinations were analyzed: 9 GA variants with heuristic mutation, 9 MA variants with heuristic mutation and intensification, 9 SA variants with heuristic local search, and 72 ILS combinations of these. The TSPLIB instances used were KROA100, KROB100, KROC100, KROD100, and KROE100.

C = 1 + 3.322 log10(n)    (2)
IV. RESULTS
Using the Friedman test, the medians were compared with α = 0.01, n = 5, k = 99, and the following hypotheses:
• H0: the algorithms offer similar results.
• Ha: the combinations offer different results.
Table 2 shows the results of testing, per metaheuristic, whether the use of different heuristics has an impact on performance. The best result shows a significant performance improvement with respect to the others and is close to the known optimal value.
Evaluating all 99 combinations together with the same α, n, and hypotheses yields a p-value of 6.80e-53, so H0 is rejected; under this test the best combination was SA with the RSM heuristic as local search. To corroborate this, the lowest 10% of the values were taken, with the results shown in Table 3. Applying the Friedman test to these with α = 0.01, n = 5, k = 9, and the same hypotheses yields a p-value of 4.41e-06, again rejecting H0 and corroborating that the best combination was SA with the RSM heuristic as local search.
Table 3 footnotes: a. p-value < α. b. p-value > α. c. Combinations with performance close to the control algorithm.
Table 1. Parameters used for numerical experiments

Parameter       | GA          | MA            | ILS       | SA
----------------|-------------|---------------|-----------|----------
Dimension       | 100         | 100           | 100       | 100
Population      | 526 a       | 526 a         | 1         | 1
Stop criteria   | 100,000 b   | 100,000 b     | 100,000 b | 100,000 b
Experiments     | 33          | 33            | 33        | 33
Selection       | Vasconcelos | Vasconcelos   | -         | -
Crossover       | k-opt c     | k-opt c       | -         | -
Mutation        | 1%          | 1%            | -         | -
Elitism         | 5%          | 5%            | -         | -
Intensification | -           | 10 iterations | -         | -
Degrees         | -           | -             | -         | 36
Mk              | -           | -             | -         | 20

a. Based on eq. (2). b. Function calls. c. k = 1.
Table 2. Friedman's test results

Parameter        | GA       | MA       | ILS                  | SA
-----------------|----------|----------|----------------------|---------
P-value          | 2.18E-02 | 4.76E-01 | 5.11E-39             | 3.61E-06
k                | 9        | 9        | 72                   | 9
Best combination | - a      | - a      | RSM-Centre Inverse b | RSM b

a. Since p-value > α, there is no evidence to reject H0; therefore, a best combination is not determined. b. Since p-value < α, H0 is rejected; the test indicates the best combination.
Table 4. Holm procedure with RSM

Combination            | z statistic | p-value   | Adjusted α | H0 rejected?
-----------------------|-------------|-----------|------------|-------------
Opt3_RSM               | 4.9057789   | 9.303E-07 | 0.00125    | yes a
Opt2_RSM               | 4.1311822   | 3.609E-05 | 0.0014285  | yes a
RSM_Opt3               | 3.7438839   | 0.0001811 | 0.0016666  | yes a
Throas_RSM c           | 2.9692872   | 0.0029849 | 0.002      | no b
RSM_Throas c           | 2.1946905   | 0.0281858 | 0.0025     | no b
RSM_Opt2 c             | 1.8073922   | 0.0707011 | 0.0033333  | no b
RSM_ClosestInsertion c | 1.0327955   | 0.3016995 | 0.005      | no b
RSM_CentreInverse c    | 0.1290994   | 0.8972789 | 0.01       | no b

a. p-value < α. b. p-value > α. c. Combinations with performance close to the control algorithm.
Fig. 2. Graph of the results.
V. CONCLUSIONS
Based on the results obtained, for the GA and MA there is no statistical evidence that applying heuristics to the mutation operator affects performance.
On the other hand, according to the Friedman test, ILS and SA are improved by using different heuristics in the local search or in the perturbation.
SA with RSM in the local search ranked as the best algorithm among the combinations under study.
After the post-hoc procedure, it is concluded that there is no statistical evidence that five of the combinations differ in final performance from the control algorithm. The best combinations are presented in Table 4.
As future work, an adequate stopping criterion will be sought, since this parameter is known to affect the performance of population-based algorithms.
ACKNOWLEDGMENT
The authors would like to acknowledge CONACYT, TecNM, and Instituto Tecnológico de León for their support.
REFERENCES
Abdoun, O., Abouchabaka, J., & Tajani, C. (2012). Analyzing the performance of mutation operators to solve the travelling salesman problem. International Journal of Emerging Sciences, 2(1), 61–77.
Bang-Jensen, J., Gutin, G., & Yeo, A. (2004). When the greedy algorithm fails. Discrete Optimization, 1(2), 121–127. http://doi.org/10.1016/j.disopt.2004.03.007
Bendall, G., & Margot, F. (2006). Greedy-type resistance of combinatorial problems. Discrete Optimization, 3(4), 288–298. http://doi.org/10.1016/j.disopt.2006.03.001
Blazinskas, A., & Misevicius, A. (2009). Combining 2-opt, 3-opt and 4-opt with K-swap-kick perturbations for the traveling salesman problem. Kaunas University of Technology, Department of Multimedia Engineering.
Cook, W. (2011). In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation. Princeton University Press.
Dorigo, M., & Stützle, T. (2010). In M. Gendreau & J.-Y. Potvin (Eds.), Handbook of Metaheuristics, International Series in Operations Research and Management Science (Vol. 146). http://doi.org/10.1007/978-1-4419-1665-5
Glover, F. (1986). Future paths for integer programming and links to artificial intelligence. Computers & Operations Research, 13(5), 533–549.
Gutin, G., & Punnen, A. (2007). The Traveling Salesman Problem and Its Variations. Combinatorial Optimization series. http://doi.org/10.1007/b101971
Talbi, E. G. (2009). Metaheuristics: From Design to Implementation. Wiley. http://doi.org/10.1002/9780470496916
Zanakis, S. H., & Evans, J. R. (1981). Heuristic "optimization": Why, when, and how to use it. Interfaces, 11(5), 84–91. http://doi.org/10.1287/inte.11.5.84