The document discusses investigations into hybrid metaheuristics, specifically local search-based hybrid metaheuristics. It provides background on metaheuristics and local search methods: local search iteratively moves to neighboring solutions based on cost until an end condition is met, while hybrid metaheuristics combine components of different metaheuristics or other optimization techniques. The document outlines the motivation for hybridization, namely combining the strengths of the constituent methods, and the guidelines used to select papers covering the different types of hybridization investigated. It then presents an outline discussing local search self-hybridization and hybridization with exact methods.
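As a concrete illustration of the local search loop described above, here is a minimal greedy hill-climbing sketch in Python; the cost function and neighborhood are toy assumptions, not taken from the document:

```python
def local_search(cost, neighbors, start, max_iters=1000):
    """Greedy local search: move to the best neighbor until no neighbor improves."""
    current = start
    for _ in range(max_iters):
        best = min(neighbors(current), key=cost)
        if cost(best) >= cost(current):   # no improving neighbor: local optimum
            break
        current = best
    return current

# Toy problem: minimize (x - 3)^2 over the integers.
result = local_search(lambda x: (x - 3) ** 2,
                      lambda x: [x - 1, x + 1],
                      start=10)
```

Because every move strictly decreases the cost, the loop terminates at a local optimum; escaping local optima is precisely what the hybridizations discussed in the document aim to address.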
Solving Multidimensional Multiple Choice Knapsack Problem By Genetic Algorith... (Shubhashis Shil)
This document summarizes a study that used a genetic algorithm to solve the multidimensional multiple choice knapsack problem (MMKP) and measured its performance against traditional approaches. The genetic algorithm was able to obtain near-optimal revenue solutions for large-scale MMKP problems in less time than traditional methods like Branch and Bound with Linear Programming (BBLP), Modified Heuristic (M-HEU), and Multiple Upgrade of Heuristic (MU-HEU). While the revenue obtained was nearly the same across all methods, the genetic algorithm had significantly better timing complexity and its effectiveness increased as the problem constraints grew larger.
Nature-Inspired Metaheuristic Algorithms (Xin-She Yang)
This chapter introduces optimization problems and nature-inspired metaheuristics. Optimization problems involve minimizing or maximizing objective functions subject to constraints. Nature-inspired metaheuristics are computational algorithms inspired by natural phenomena, such as simulated annealing, genetic algorithms, particle swarm optimization, and ant colony optimization. They provide near-optimal solutions to complex optimization problems.
Genetic algorithms are a class of evolutionary algorithms that use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover. They work by maintaining a population of potential solutions and applying genetic operators of selection, crossover and mutation to generate new populations in search of an optimal solution. A genetic algorithm begins with a randomly generated population that is evaluated and selected using a fitness function. Selected solutions then reproduce through crossover and mutation to create a new population, and the process repeats until a termination condition is reached.
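The GA loop described above (random initialization, fitness-based selection, crossover, mutation, repeat until termination) can be sketched as follows; the bit-string encoding, tournament selection, and OneMax fitness are illustrative choices, not details from the document:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02, seed=0):
    """Minimal GA over fixed-length bit strings, maximizing `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Binary tournament selection: the fitter of two random individuals
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:       # one-point crossover
                cut = rng.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Bit-flip mutation, applied gene by gene
            child = [1 - g if rng.random() < mutation_rate else g for g in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# OneMax: maximize the number of 1s in the string
best = genetic_algorithm(sum)
```

Running this on OneMax drives the population toward the all-ones string, which mirrors the described cycle of evaluation, selection, reproduction, and replacement.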
This document provides an introduction to genetic algorithms. It explains that genetic algorithms are inspired by Darwinian evolution and use processes like selection, crossover and mutation to iteratively improve a population of potential solutions. It discusses how genetic algorithms can be used for optimization problems and classification in data mining. Examples of genetic algorithm applications like the traveling salesman problem are also presented to illustrate genetic algorithm concepts and processes.
This document outlines an approach using latent feature log-linear (LFL) models and ensemble learning to predict student responses on questions for the "What do you know?" challenge on Kaggle. It describes using LFL to model dyadic prediction tasks with side information on students, questions, and student-question pairs. Models are trained on student response data and evaluated on a validation set. Ensemble methods like linear regression and gradient boosted decision trees are used to combine predictions from multiple LFL models to improve performance compared to individual models. The approach achieved strong results on the Kaggle competition.
Differential Evolution (DE) is a well-known, simple, population-based stochastic approach to global optimization that can handle nonlinear problems. In tests on both benchmark and real-world problems it has outperformed a number of evolutionary algorithms and other search heuristics such as Particle Swarm Optimization. Nevertheless, DE, like other stochastic optimization algorithms, sometimes exhibits premature convergence and stagnates at suboptimal points. To avoid stagnation while maintaining good convergence speed, an improved search strategy called memetic search in DE is introduced. In the proposed strategy the position update equation is modified according to a memetic search scheme in which better solutions participate more often in the position update procedure; the equation is inspired by the memetic search in the artificial bee colony algorithm. The proposed strategy is named Memetic Search in Differential Evolution (MSDE). To demonstrate its efficiency and efficacy, MSDE is tested on eight benchmark optimization problems and three real-world optimization problems. A comparative analysis between the proposed MSDE and the original DE shows that the proposed algorithm outperforms basic DE and its recent variants in most of the experiments.
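For reference, a minimal sketch of classic DE/rand/1/bin, the baseline that MSDE modifies, is shown below; the parameter values and test function are illustrative assumptions, not those of the paper:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Classic DE/rand/1/bin, minimizing f over a box-constrained search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct individuals, none of them the target vector
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)   # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = a[j] + F * (b[j] - c[j])     # differential mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)          # clamp to bounds
                else:
                    v = pop[i][j]                    # inherit from target
                trial.append(v)
            if f(trial) <= f(pop[i]):                # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 3)
```

MSDE's contribution, as the abstract describes it, is to bias this position update so that better solutions participate more often; the sketch above updates each target vector with uniformly chosen donors.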
Two Phase Algorithm for Solving VRPTW Problem (Waqas Tariq)
The Vehicle Routing Problem with Time Windows (VRPTW) is a well-known NP-hard combinatorial scheduling optimization problem in which the minimum number of routes must be determined to serve all customers within their specified time windows. Various analytic and heuristic approaches have been tried on such problems. In this paper we propose a two-phase method that uses genetic algorithms together with a random search incorporating simulated annealing concepts to solve the VRPTW in various scenarios.
The document discusses local search algorithms, including gradient descent, the Metropolis algorithm, simulated annealing, and Hopfield neural networks. It details how each algorithm works; for example, gradient descent takes steps proportional to the negative gradient of a function to find a local minimum. The algorithms are also related to one another: the maximum-cut problem and Hopfield neural networks both rely on state-flipping algorithms, and simulated annealing builds on the Metropolis acceptance rule. Advantages and disadvantages of local search algorithms are presented.
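A compact sketch of simulated annealing with the Metropolis acceptance rule might look like this; the cooling schedule, starting temperature, and toy objective are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.99,
                        steps=5000, seed=0):
    """Metropolis-style SA: always accept improving moves, accept worse
    moves with probability exp(-delta / T), and cool T geometrically."""
    rng = random.Random(seed)
    current, t = start, t0
    best = current
    for _ in range(steps):
        cand = neighbor(current, rng)
        delta = cost(cand) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current                # track the best state seen
        t *= cooling
    return best

# Toy problem: a bumpy 1-D function with many local minima
f = lambda x: (x - 2) ** 2 + math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
x_best = simulated_annealing(f, step, start=-5.0)
```

At high temperature the acceptance rule behaves like a random walk; as the temperature drops it degenerates into the greedy local search discussed above, which is why SA can escape the local minima that trap gradient descent.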
This document provides an overview of metaheuristics, which are high-level problem-solving techniques for optimization problems. It begins with the history and definition of metaheuristics, then discusses their main characteristics such as neighborhood structures and intensification/diversification. Various metaheuristic methods are classified and examples are given, such as evolutionary algorithms, simulated annealing, ant colony optimization, and particle swarm optimization. Real-world applications are mentioned in areas like scheduling and logistics. Advantages of metaheuristics include their adaptability, while disadvantages are their lack of optimality guarantees and theoretical foundations.
This document discusses software module clustering using genetic algorithms and hill climbing techniques. It introduces genetic algorithms and hill climbing algorithms and how they can be applied to software module clustering. Specifically, it proposes using multiple hill climbs first to gather information about the search landscape, which is then used to define "building blocks" to improve subsequent searches done by genetic algorithms. The results of empirical studies using this novel approach show it to be effective at software module clustering.
A Genetic Algorithm For Constraint Optimization Problems With Hybrid Handling... (Jim Jimenez)
This document describes a genetic algorithm for solving constrained optimization problems. It proposes handling constraints using a hybrid scheme that initially searches only the feasible space without calculating the objective function, and then uses a penalty function once the feasible space is reached. The algorithm uses special operators like mutation based on eugenics and local search of the best solutions. It was tested on 11 benchmark functions and showed competitive performance compared to other works.
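The penalty-function idea mentioned above can be sketched as follows; this is a generic static-penalty scheme for illustration, not the paper's hybrid scheme or its eugenics-based operators:

```python
def penalized_fitness(objective, constraints, penalty=1e3):
    """Penalty-function constraint handling: each violated constraint,
    written as g(x) <= 0, adds a cost proportional to its violation."""
    def fitness(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation
    return fitness

# Minimize x^2 subject to x >= 1, written as the constraint 1 - x <= 0
f = penalized_fitness(lambda x: x * x, [lambda x: 1 - x])
```

Feasible points are scored by the objective alone, while infeasible points pay a surcharge that grows with the violation, steering any minimizer (a GA included) back toward the feasible region.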
Comparison Study of Decision Tree Ensembles for Regression (Seonho Park)
Nowadays, decision tree ensemble methods are widely used for solving classification and regression problems due to their rigor and robustness. Compared with classification, their performance on regression problems has so far not been addressed in detail. In this presentation we review state-of-the-art decision tree ensemble methods in scikit-learn and xgboost for regression, and present empirical results comparing their accuracy and computational efficiency.
The document describes an ontology called Exposé that was created for machine learning experimentation. The ontology aims to formally represent key aspects of machine learning experiments such as algorithm specifications, implementations, applications, experimental contexts, evaluation functions, and structured data. Exposé builds on and extends existing ontologies for data mining and machine learning experimentation by incorporating classes and relationships to represent additional important concepts.
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm (Xin-She Yang)
This document provides an overview of nature-inspired metaheuristic algorithms for optimization. It discusses the main components of metaheuristic algorithms, including intensification and diversification. It then reviews the history and development of several important metaheuristic algorithms from the 1960s to the 1990s, including genetic algorithms, evolutionary strategies, simulated annealing, ant colony optimization, particle swarm optimization, and differential evolution. The document aims to analyze why these algorithms work and provide a unified view of metaheuristics.
Applied Artificial Intelligence Unit 4 Semester 3 MSc IT Part 2 Mumbai Univer... (Madhav Mishra)
The document discusses various topics related to evolutionary computation and artificial intelligence, including:
- Evolutionary computation concepts like genetic algorithms, genetic programming, evolutionary programming, and swarm intelligence approaches like ant colony optimization and particle swarm optimization.
- The use of intelligent agents in artificial intelligence and differences between single and multi-agent systems.
- Soft computing techniques involving fuzzy logic, machine learning, probabilistic reasoning and other approaches.
- Specific concepts discussed in more depth include genetic algorithms, genetic programming, swarm intelligence, ant colony optimization, and metaheuristics.
This document discusses machine learning systems and approaches to both classification learning and learning for problem solving. It describes five paradigms for classification learning and how human categorization differs from computational approaches. It also discusses learning for problem solving, including inducing search-control knowledge from solution paths, means-ends analysis, and constructing macro-operators from solution paths to improve problem-solving performance. Finally, it discusses human learning in problem-solving domains and how computational models address some, but not all, phenomena of human learning.
A performance analysis of metaheuristics and hybrid metaheuristics for the traveling salesman problem is presented. Four classical single metaheuristics (genetic algorithm, memetic algorithm, iterated local search, and simulated annealing) were used, along with hybrid variations using nine different heuristic techniques for the local search, the mutation, and the intensification. The performance analysis was made using the Friedman test: for the simulated annealing and iterated local search algorithms, statistical evidence was found that hybridization makes a difference in performance, while no evidence was found for the genetic and memetic algorithms. Up to six combinations were found to improve performance, five of them based on local search and one based on simulated annealing.
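The Friedman test used for the performance analysis ranks the algorithms within each problem instance and tests whether the rank sums differ. A pure-Python sketch of the statistic (ignoring tied ranks for simplicity) might look like this; the structure of the input is an assumption, not the paper's data:

```python
def friedman_statistic(results):
    """Friedman chi-square statistic. `results` is a list of blocks
    (problem instances), each a list of k treatment scores, lower = better.
    Tied scores are not handled; ranks are assigned by sort order."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for block in results:
        order = sorted(range(k), key=lambda j: block[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    # chi^2_F = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)
```

When one algorithm is always best and another always worst, the statistic reaches its maximum; when ranks are shuffled evenly across instances, it is near zero, giving no evidence of a performance difference.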
Model-Based User Interface Optimization: Part IV: ADVANCED TOPICS - At SICSA ... (Aalto University)
The document discusses optimization techniques for user interfaces, focusing on metaheuristics and ant colony optimization. Metaheuristics provide intelligent, black-box optimization by learning and updating models of the problem environment through cooperation of multiple search agents. Ant colony optimization is well-suited for user interface design as layouts are constructed iteratively. The document outlines challenges like robustness to noise, multi-objective optimization, and dynamic problems. Techniques for addressing complex tasks include decomposition, screening, space reduction, and sub-space elimination.
In real-world applications, most optimization problems involve more than one objective to be optimized. The objectives in most engineering problems are often conflicting, e.g., maximize performance, minimize cost, maximize reliability. In that case a single extreme solution will not satisfy all objective functions, and the optimal solution for one objective will not necessarily be the best solution for the other objective(s). Different solutions therefore produce trade-offs between objectives, and a set of solutions is required to represent the optima of all objectives. Multi-objective formulations are realistic models for many complex engineering optimization problems, and customized genetic algorithms have proved particularly effective at finding excellent solutions to them. A reasonable solution to a multi-objective problem is to find a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. This paper presents an overview of various multi-objective genetic algorithms developed to handle problems with multiple objectives.
Multilevel techniques for the clustering problem (csandit)
Data mining is concerned with the discovery of interesting patterns and knowledge in data repositories. Cluster analysis, one of the core methods of data mining, is the process of discovering homogeneous groups called clusters. Given a data set and a measure of similarity between data objects, the goal in most clustering algorithms is to maximize both the homogeneity within each cluster and the heterogeneity between different clusters. In this work, two multilevel algorithms for the clustering problem are introduced. The multilevel paradigm views clustering as a hierarchical optimization process that moves through different levels, evolving from a coarse-grain to a fine-grain strategy. The clustering problem is solved by first reducing it level by level to a coarser problem, on which an initial clustering is computed. The clustering of the coarser problem is then mapped back level by level, refining the intermediate clusterings obtained at the various levels, to obtain a better clustering of the original problem. A benchmark using data sets collected from a variety of domains compares the effectiveness of the hierarchical approach against its single-level counterpart.
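A highly simplified sketch of the coarsen-then-project-back idea follows: one coarsening level, no refinement step, 1-D points, and a trivial coarse clustering. Every detail here is an illustrative assumption, not the paper's algorithm:

```python
def coarsen(points):
    """One coarsening level: greedily merge each point with its nearest
    unmatched neighbor; each pair becomes one coarse point (the midpoint),
    and `mapping` records which fine points each coarse point represents."""
    unmatched = list(range(len(points)))
    coarse, mapping = [], []
    while unmatched:
        i = unmatched.pop(0)
        if unmatched:
            j = min(unmatched, key=lambda j: abs(points[i] - points[j]))
            unmatched.remove(j)
            coarse.append((points[i] + points[j]) / 2)
            mapping.append([i, j])
        else:
            coarse.append(points[i])
            mapping.append([i])
    return coarse, mapping

def project_back(coarse_labels, mapping, n):
    """Map the clustering of the coarse problem back onto the fine points."""
    labels = [0] * n
    for coarse_idx, fine_ids in enumerate(mapping):
        for i in fine_ids:
            labels[i] = coarse_labels[coarse_idx]
    return labels

pts = [0.0, 0.1, 5.0, 5.2, 9.9, 10.0]
coarse, mapping = coarsen(pts)
# Trivial coarse "clustering": threshold at the midpoint of the range
coarse_labels = [0 if c < 5.0 else 1 for c in coarse]
labels = project_back(coarse_labels, mapping, len(pts))
```

In the full multilevel scheme described above, coarsening is repeated over several levels and each projected clustering is refined before moving to the next finer level; this sketch shows only the skeleton of that process.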
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
Lecture on "Aerodynamic Design of Aircraft" at the University of Tokyo, 21 December 2015. Optimization techniques, data visualization, and their applications are included.
SEARN is an algorithm for structured prediction that casts it as a sequence of cost-sensitive classification problems. It works by learning a policy to make incremental decisions that build up the full structured output. The policy is trained through an iterative process of generating cost-sensitive examples from sample outputs produced by the current policy, training a classifier on those examples, and interpolating the new policy with the previous one. This allows SEARN to learn the structured prediction task without requiring assumptions about the output structure, unlike approaches that make independence assumptions or rely on global prediction models.
Genetic algorithms are optimization techniques inspired by biological evolution that can efficiently search large spaces to find optimal solutions; they work by evolving a population of potential solutions through mechanisms like selection, crossover and mutation. Genetic algorithms have been successfully applied to problems in many domains and are now widely used in business, science and engineering for applications like scheduling, design, control, and machine learning.
This paper proposes a parallel evolutionary algorithm to solve single variable optimization problems. Specifically:
- It presents a genetic algorithm approach that runs in parallel using a master-slave model, where the master performs genetic operations and distributes individuals to slaves for evaluation.
- The algorithm is tested on single variable optimization problems to find minimum/maximum values.
- Experimental results show the parallel genetic algorithm is effective at finding optimal solutions to these problems and represents an efficient parallel approach for optimization.
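The master-slave evaluation pattern described above can be sketched as below; threads stand in for the slave processes of the paper's model, and the population and fitness function are toy assumptions:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def evaluate_population(fitness, population, workers=4):
    """Master-slave evaluation: the master farms individuals out to worker
    threads and collects fitness values in population order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(10)] for _ in range(8)]
scores = evaluate_population(sum, pop)
```

The master performs the genetic operations serially and only the (typically expensive) fitness evaluations are distributed, which is why this model pays off when evaluation dominates the runtime; for CPU-bound fitness functions a process pool would replace the thread pool.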
Multiobjective optimization and trade offs using pareto optimality (Amogh Mundhekar)
The document discusses multi-objective optimization and various techniques used to solve multi-objective problems. It introduces concepts like Pareto optimality and Pareto frontier. It then describes various solution methods like weighted sum, normal boundary intersection, goal programming, and Pareto genetic algorithms. Genetic algorithms use concepts like fitness, reproduction, and Pareto set filtering to evolve a population towards the Pareto optimal frontier while satisfying constraints.
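Pareto set filtering, as mentioned above, reduces to extracting the non-dominated subset of a population. A minimal sketch, assuming every objective is minimized:

```python
def pareto_front(points):
    """Return the non-dominated subset of `points` (tuples of objective
    values, all minimized). p dominates q if p is <= q in every objective
    and strictly < in at least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(pts)
```

Here (3, 4) is dominated by (2, 3) and (5, 5) is dominated by every other point, so the front keeps (1, 5), (2, 3), and (4, 1): the trade-off solutions among which no objective can improve without another worsening.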
This document describes a new multi-objective evolutionary algorithm called MOSCA2. MOSCA2 improves upon an earlier algorithm called MOSCA by using subpopulations instead of clusters, truncation selection instead of random selection, adding a recombination operator, and adding a separate archive to store non-dominated solutions. The algorithm uses subpopulations, truncation selection, and a deleting procedure to maintain diversity without needing density information or niche methods. It also uses a separate archive that stores and periodically updates non-dominated solutions found, deleting some when the archive becomes full. The algorithm is capable of solving both constrained and unconstrained nonlinear multi-objective optimization problems.
initial clustering is computed. The clustering of the coarser problem is mapped back level-bylevel
to obtain a better clustering of the original problem by refining the intermediate different
clustering obtained at various levels. A benchmark using a number of data sets collected from a
variety of domains is used to compare the effectiveness of the hierarchical approach against its
single-level counterpart.
The document proposes a methodology to improve evolutionary multi-objective algorithms (EMOAs) by incorporating achievement scalarizing functions (ASFs) to provide convergence to the Pareto optimal front while maintaining diversity. The methodology executes in serial stages: running an EMOA to get a non-dominated set, clustering this set to extract a representative set, calculating pseudo-weights for the representative set, and perturbing the extreme points to generate reference points to drive the ASF towards the Pareto front over iterations until no improvements are found. Initial studies on test problems ZDT1, ZDT2 and ZDT3 show promising results, with the proposed approach finding a representative set of clustered Pareto points in fewer generations compared to NSGA
Lecture on “Aerodynamic design of Aircraft” in University of Tokyo 21st December, 2015. Optimization techniques, data-visualization and their applications are inclusive.
SEARN is an algorithm for structured prediction that casts it as a sequence of cost-sensitive classification problems. It works by learning a policy to make incremental decisions that build up the full structured output. The policy is trained through an iterative process of generating cost-sensitive examples from sample outputs produced by the current policy, training a classifier on those examples, and interpolating the new policy with the previous one. This allows SEARN to learn the structured prediction task without requiring assumptions about the output structure, unlike approaches that make independence assumptions or rely on global prediction models.
Genetic algorithms are optimization techniques inspired by biological evolution that can efficiently search large spaces to find optimal solutions; they work by evolving a population of potential solutions through mechanisms like selection, crossover and mutation. Genetic algorithms have been successfully applied to problems in many domains and are now widely used in business, science and engineering for applications like scheduling, design, control, and machine learning.
This paper proposes a parallel evolutionary algorithm to solve single variable optimization problems. Specifically:
- It presents a genetic algorithm approach that runs in parallel using a master-slave model, where the master performs genetic operations and distributes individuals to slaves for evaluation.
- The algorithm is tested on single variable optimization problems to find minimum/maximum values.
- Experimental results show the parallel genetic algorithm is effective at finding optimal solutions to these problems and represents an efficient parallel approach for optimization.
Multiobjective optimization and trade offs using pareto optimalityAmogh Mundhekar
The document discusses multi-objective optimization and various techniques used to solve multi-objective problems. It introduces concepts like Pareto optimality and Pareto frontier. It then describes various solution methods like weighted sum, normal boundary intersection, goal programming, and Pareto genetic algorithms. Genetic algorithms use concepts like fitness, reproduction, and Pareto set filtering to evolve a population towards the Pareto optimal frontier while satisfying constraints.
This document describes a new multi-objective evolutionary algorithm called MOSCA2. MOSCA2 improves upon an earlier algorithm called MOSCA by using subpopulations instead of clusters, truncation selection instead of random selection, adding a recombination operator, and adding a separate archive to store non-dominated solutions. The algorithm uses subpopulations, truncation selection, and a deleting procedure to maintain diversity without needing density information or niche methods. It also uses a separate archive that stores and periodically updates non-dominated solutions found, deleting some when the archive becomes full. The algorithm is capable of solving both constrained and unconstrained nonlinear multi-objective optimization problems.
Similar to Investigations on Local Search based Hybrid Metaheuristics (20)
Phenomics assisted breeding in crop improvementIshaGoswami9
As the population is increasing and will reach about 9 billion upto 2050. Also due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate
change, and increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional
genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding
the complex characteristics of multiple gene, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can
be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus,
high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes
during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology,
and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done using teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants that have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and slide deck that participants will be able to utilize to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
Current Ms word generated power point presentation covers major details about the micronuclei test. It's significance and assays to conduct it. It is used to detect the micronuclei formation inside the cells of nearly every multicellular organism. It's formation takes place during chromosomal sepration at metaphase.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub...Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz) I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long standing, and ongoing, scientific development as an exemplar. And so, I chose the ever evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
ESR spectroscopy in liquid food and beverages.pptxPRIYANKA PATEL
With increasing population, people need to rely on packaged food stuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve them and irradiation treatment of food is one of them. It is the most common and the most harmless method for the food preservation as it does not alter the necessary micronutrients of food materials. Although irradiated food doesn’t cause any harm to the human health but still the quality assessment of food is required to provide consumers with necessary information about the food. ESR spectroscopy is the most sophisticated way to investigate the quality of the food and the free radicals induced during the processing of the food. ESR spin trapping technique is useful for the detection of highly unstable radicals in the food. The antioxidant capability of liquid food and beverages in mainly performed by spin trapping technique.
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
hematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills
THEMATIC APPERCEPTION TEST(TAT) cognitive abilities, creativity, and critic...
Investigations on Local Search based Hybrid Metaheuristics
1. INVESTIGATIONS ON LOCAL SEARCH
BASED HYBRID METAHEURISTICS
Habilitationskolloquium
Vienna, October 1, 2014
Luca Di Gaspero
Institute of Computer Graphics and Algorithms
2. THE RESEARCH AREA: PRACTICAL
METHODS FOR COMBINATORIAL SEARCH
AND OPTIMIZATION PROBLEMS
5. HEURISTICS AND METAHEURISTICS
▶ Heuristics (Pearl 1984; Pólya 1945): a practical way to “tame”
NP-[complete|hard] search and optimization problems
▶ non-systematic solution search or construction guided by
experiential problem knowledge
▶ in general, with no guarantee (efficiency vs. completeness)
▶ Metaheuristics: a class of general-purpose heuristic methods
▶ independent (to some extent) from the specific problem
(they rely on an abstract/indirect problem representation)
▶ Local Search methods, Genetic and Evolutionary Algorithms, Ant
Colony Optimization, Particle Swarm Optimization, …
▶ date back to the 1950s (Barricelli 1954; Robbins and Monro
1951); the term was introduced by Glover in 1986
▶ a genuinely interdisciplinary research area: AI ∩ OR ∩ CS
▶ impressive success records (state of the art for Routing,
Timetabling, Scheduling, SAT)
▶ empirical algorithmics (Johnson 2002; McGeoch 2012)
8. LOCAL SEARCH METAHEURISTICS
▶ Local Search is based on the following scheme:
procedure LocalSearch(S, N, F)
  s_0 ← GenerateInitialSolution()
  while ¬StopSearch(s_i, i) do
    m ← SelectMove(s_i, F, N)
    if AcceptableMove(m, s_i, F) then
      s_{i+1} ← s_i ⊕ m
    else
      s_{i+1} ← s_i
    end if
    i ← i + 1
  end while
end procedure
▶ the problem interface is provided by a search space S, a
neighborhood relation N, and a cost function F
▶ Local Search methods differ in the following components:
▶ SelectMove: how to explore the neighborhood
▶ AcceptableMove: when to perform the move
▶ StopSearch: when to stop the search
(e.g., Hill Climbing, Simulated Annealing, Tabu Search)
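The scheme above can be turned into a minimal executable sketch. The generic skeleton mirrors the pseudocode; the instance below it (minimizing a toy quadratic with a ±1 neighborhood and hill-climbing acceptance) is a hypothetical illustration, not a problem from the talk:

```python
import random

def local_search(generate_initial, select_move, acceptable, stop, apply_move):
    """Generic local search skeleton: s_{i+1} = s_i (+) m when m is acceptable."""
    s = generate_initial()
    i = 0
    while not stop(s, i):
        m = select_move(s)
        if acceptable(m, s):
            s = apply_move(s, m)
        i += 1
    return s

# Hypothetical instance: minimize F(x) = (x - 7)^2 over the integers.
random.seed(42)
cost = lambda x: (x - 7) ** 2
best = local_search(
    generate_initial=lambda: 0,
    select_move=lambda s: random.choice([-1, +1]),   # neighborhood: x +/- 1
    acceptable=lambda m, s: cost(s + m) <= cost(s),  # hill-climbing acceptance
    stop=lambda s, i: i >= 1000,
    apply_move=lambda s, m: s + m,
)
```

Swapping the three components (SelectMove, AcceptableMove, StopSearch) is exactly what distinguishes Hill Climbing, Simulated Annealing, and Tabu Search in this view.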
11. HYBRIDIZATION
▶ Hybridization of solution methods for NP-[complete|hard]
problems is a promising research trend
▶ a skillful combination of components can lead to more
effective solution methods
▶ merging the complementary strengths of different solution
paradigms (and mitigating their weaknesses)
▶ Hybrid Metaheuristics: a combination of one metaheuristic with
components from other metaheuristics or with techniques from AI
and OR (Blum et al. 2008)
13. HYBRID METAHEURISTICS CLASSIFICATION
(Puchinger and Raidl 2005; Raidl 2006)
▶ hybridization components: a metaheuristic combined with other
metaheuristics, with other heuristic methods, with problem-specific
components, or with other OR/AI techniques (e.g., exact techniques)
▶ level of hybridization: high-level (weak coupling) vs.
low-level (strong coupling)
▶ order of execution: batch, interleaved, parallel
▶ control strategy: collaborative vs. integrative
▶ further dimensions: space decomposition, homogeneity
19. MOTIVATIONS
▶ a skillful combination of components can lead to more effective
solution methods
▶ merging the complementary strengths of different solution
paradigms (and mitigating their weaknesses)
▶ Local Search has been my area of expertise since my PhD
▶ contributing the algorithmic ideas also in the form of software
systems:
▶ EasyLocal++ (Di Gaspero and Schaerf 2003) enhancements
▶ GELATO (Cipriano, Di Gaspero, and Dovier 2013)
▶ GECODE LNS
25. PAPER SELECTION GUIDELINES
1. full coverage of the different kinds of hybridization considered
in my work
2. spanning most of the different categories reported in the Hybrid
Metaheuristics classification
3. approaches that reached new state-of-the-art results for the
problems tackled
▶ a uniform temporal coverage was not possible because of
recent consolidation
[Timeline: papers 1–6 placed on the years 2004–2014]
Figure: Journal papers, Papers in edited books, Conference papers
26. SELECTED PAPERS
1. L. Di Gaspero and A. Schaerf. A composite-neighborhood tabu search approach to the
traveling tournament problem. Journal of Heuristics, 13(2):189–207, 2007.
2. M. Chiarandini, L. Di Gaspero, S. Gualandi, and A. Schaerf. The balanced academic
curriculum problem revisited. Journal of Heuristics, 18(1):119–148, 2012.
3. R. Bellio, L. Di Gaspero, and A. Schaerf. Design and statistical analysis of a hybrid local
search algorithm for course timetabling. Journal of Scheduling, 15(1):49–61, 2012.
4. L. Di Gaspero, G. Di Tollo, A. Roli, and A. Schaerf. Hybrid metaheuristics for constrained
portfolio selection problem. Quantitative Finance, 11(10):1473–1488, 2011.
5. R. Cipriano, L. Di Gaspero, and A. Dovier. GELATO: a multi-paradigm tool for Large
Neighborhood Search. In E.-G. Talbi (ed.), Hybrid Metaheuristics, volume 434 of
Studies in Computational Intelligence, pages 389–414. Springer Verlag, 2012.
6. L. Di Gaspero, A. Rendl, and T. Urli. Constraint-based approaches for Balancing Bike
Sharing Systems. In C. Schulte (ed.), Principles and Practice of Constraint
Programming, 19th International Conference, CP 2013, Uppsala, Sweden,
September 16–20, 2013, Proceedings, volume 8124 of Lecture Notes in Computer
Science, pages 758–773. Springer Verlag, 2013.
[Timeline: papers 1–6 placed on the years 2004–2014]
Figure: Journal papers, Papers in edited books, Conference papers
27. OUTLINE
▶ Local Search self-hybridization
▶ Hybridization with Exact Methods
35. NEIGHBORHOOD STRUCTURES FOR THE TTP
▶ T is the set {0, …, n − 1} of teams, and R the set
{0, …, r − 1} of rounds
N1, SwapHomes: ⟨t1, t2⟩ swaps the home/away position of the
two games between t1 and t2
N2, SwapTeams: ⟨t1, t2⟩ swaps t1 and t2 throughout the
whole solution
N3, SwapRounds: ⟨r1, r2⟩ all matches assigned to r1 are
moved to r2, and vice versa
N4, SwapMatches: ⟨r, t1, t2⟩ the opponents of t1 and t2 in r
are exchanged
N5, SwapMatchRound: ⟨t, r1, r2⟩ the opponents of t in
rounds r1 and r2 are exchanged
▶ since N4 and N5 moves break the round-robin structure of the
tournament, a repair chain is needed
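The simplest of these moves, SwapHomes, can be sketched concretely. The signed-opponent representation below (sched[t][r] is team t's opponent in round r, positive when t plays at home) is a common convention assumed for illustration, not necessarily the paper's data structure:

```python
# Hypothetical TTP representation: sched[t][r] = opponent of team t in round r,
# positive if t plays at home, negative if away (teams are numbered from 1 here
# purely to make the sign convention work).

def swap_homes(sched, t1, t2):
    """N1, SwapHomes: flip the home/away role of the two games between t1 and t2."""
    new = [row[:] for row in sched]
    for r, opp in enumerate(sched[t1 - 1]):
        if abs(opp) == t2:                      # a game between t1 and t2 in round r
            new[t1 - 1][r] = -opp               # flip the venue for t1 ...
            new[t2 - 1][r] = -sched[t2 - 1][r]  # ... and symmetrically for t2
    return new

# 4 teams, double round robin: team 1 hosts team 2 in round 0, visits it in round 3.
sched = [
    [ 2,  3,  4, -2, -3, -4],
    [-1,  4,  3,  1, -4, -3],
    [ 4, -1, -2, -4,  1,  2],
    [-3, -2, -1,  3,  2,  1],
]
moved = swap_homes(sched, 1, 2)
```

Because only the venue flips, the move never breaks the round-robin structure, which is why N1 (unlike N4 and N5) needs no repair chain.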
36. TTP NEIGHBORHOOD ANALYSIS AND SELECTION
▶ In (Di Gaspero and Schaerf 2007) we perform a thorough
analysis of the neighborhoods in terms of search space
connectedness
▶ the outcome of the analysis was the selection of the following
composed neighborhoods:
▶ CN1 = N1 ∪ N2 ∪ N3 ∪ N4 ∪ N5
▶ CN2 = N1 ∪ N4 ∪ N5
▶ CN3 = N1 ∪ N4^{≤4} ∪ N5^{≤6}
▶ the selected neighborhoods equipped a Tabu Search (Glover
and Laguna 1997) metaheuristic
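A composed neighborhood of this kind is simply a set union of move generators; exploring CN2 means exploring all moves of N1, N4, and N5 together. A hedged sketch (the toy move generators and names are illustrative, not the paper's implementation):

```python
def union_neighborhood(*neighborhoods):
    """Composed neighborhood CN = N_a U N_b U ...: the move list is the
    concatenation of the component move lists, so each component contributes
    in proportion to its size."""
    def moves(solution):
        return [m for n in neighborhoods for m in n(solution)]
    return moves

# Hypothetical toy neighborhoods over an integer "solution".
n1 = lambda s: [("add", +1), ("add", -1)]
n4 = lambda s: [("mul", 2)]
cn = union_neighborhood(n1, n4)
```

A metaheuristic such as Tabu Search can then treat the composed neighborhood exactly like a simple one, which is what makes this a self-hybridization of Local Search.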
37. EXPERIMENTAL RESULTS
[Plots comparing the composed neighborhoods CN1–CN4]
Figure: Distribution of the ranks
Figure: Distribution of the normalized costs
38. COMPARISON WITH BEST RESULTS
(at the time of writing)

Instance  Our       Best      By
NL10      59,583    59,436    [1]
NL12      111,483   111,248   [2]
NL14      190,174   189,766   [2]
NL16      270,063   267,194   [2]
CIRC10    242       242       [1]
CIRC12    426       408       [3]
CIRC14    668       654       [3]
CIRC16    1,004     928       [3]
CIRC18    1,412     1,306     [4]
CIRC20    1,946     1,842     [3]
CON10     ∗124      —         —
CON12     ∗181      —         —
CON14     ∗252      —         —
CON16     328       ∗327      [5]
CON18     418       —         —
CON20     521       —         —
CON22     632       —         —
CON24     757       —         —
BRA24     530,043   503,158   [4]

[1] Langford, [2] Anagnostopoulos et al. 2005, [3] Lim, Rodrigues,
and Zhang 2005, [4] Araùjo et al. 2005, [5] Rasmussen and Trick
2005
∗ optimality proven by Rasmussen and Trick 2005
39. GENERALIZED LOCAL SEARCH MACHINES
(H.H. Hoos 1999; Hoos and Stützle 2004)
▶ A GLSM is a formal framework for representing Local Search
methods: it is essentially a (rich) automaton
▶ states are the basic search components
▶ transitions model the search control
▶ a transition takes place when the search component has
finished its execution, and it is labeled with a pair c/a
▶ c: the condition needed to select the transition
▶ a: the action performed on the GLSM variables
[Diagram: an example GLSM with states C1–C4 and c/a-labeled transitions]
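The automaton view can be sketched in a few lines: states are search components acting on a shared environment, and each transition is a (condition, action, next-state) triple checked after a component finishes. The run_glsm helper and the two-state toy machine below are hypothetical illustrations, not part of any cited GLSM implementation:

```python
def run_glsm(states, transitions, start, env, max_steps=100):
    """Run a GLSM-style automaton: execute the current state's search
    component, then follow the first transition whose condition c holds,
    performing its action a on the GLSM variables."""
    current = start
    for _ in range(max_steps):
        states[current](env)                 # run the search component
        for cond, action, nxt in transitions[current]:
            if cond(env):                    # c: condition to select the transition
                action(env)                  # a: action on the GLSM variables
                current = nxt
                break
        else:
            return env                       # no transition applicable: halt
    return env

# Toy machine: C1 initializes a counter, C2 increments it until it reaches 3.
states = {
    "C1": lambda env: env.update(c=0),
    "C2": lambda env: env.update(c=env["c"] + 1),
}
transitions = {
    "C1": [(lambda e: True, lambda e: None, "C2")],
    "C2": [(lambda e: e["c"] < 3, lambda e: None, "C2")],
}
env = run_glsm(states, transitions, "C1", {})
```

In a real GLSM the states would be components such as a greedy constructor or a local search run, and the conditions would test quantities like the elapsed time or the incumbent cost.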
40. GENERALIZED LOCAL SEARCH MACHINES
Hybrid Metaheuristics Classification
[Recap of the Hybrid Metaheuristics classification diagram
(Puchinger and Raidl 2005; Raidl 2006), in the context of GLSMs]
41. THE GENERALIZED BALANCED ACADEMIC CURRICULUM PROBLEM
▶ Basic features of the BACP:
▶ Courses, each with a number of credits
▶ Periods, the planning horizon divided into years and terms
▶ Load limits, i.e., min and max credits per period and min and
max courses per period
▶ Precedences between courses
▶ Assign periods to courses satisfying load limits and precedences
▶ Generalization (GBACP):
▶ Curricula, subgroups of courses (students' selections)
▶ Preferences, undesired terms for courses
▶ minimize the discrepancy from the average load per curriculum and
the number of courses in undesired terms
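The hard constraints above lend themselves to a simple violation count, the kind of quantity a local search cost function would track. A hedged sketch; the course data, limits, and function names are illustrative assumptions, not the paper's model:

```python
# Hypothetical GBACP-style constraint check: count load-limit and
# precedence violations of a course -> period assignment.

def gbacp_violations(assignment, credits, precedences, min_load, max_load, periods):
    """Number of periods outside the credit load limits, plus the number
    of precedence pairs (a, b) where a is not scheduled before b."""
    load = [0] * periods
    for course, p in assignment.items():
        load[p] += credits[course]
    load_viol = sum(1 for l in load if not (min_load <= l <= max_load))
    prec_viol = sum(1 for a, b in precedences       # a must precede b
                    if assignment[a] >= assignment[b])
    return load_viol + prec_viol

# Illustrative data echoing the example slides (CS1 precedes CS2 and DB1, etc.).
credits = {"CS1": 7, "CS2": 6, "CS3": 5, "DB1": 7}
precedences = [("CS1", "CS2"), ("CS2", "CS3"), ("CS1", "DB1")]
assignment = {"CS1": 0, "CS2": 1, "CS3": 2, "DB1": 1}
v = gbacp_violations(assignment, credits, precedences,
                     min_load=5, max_load=15, periods=3)
```

The GBACP objectives (load balance per curriculum, preference penalties) would be added as further weighted terms of the same cost function.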
43. AN EXAMPLE OF GBACP

         Term 1         Term 2
Year 1   CS1 (7)        CS2 (6)
Year 2   CS3 (5)        DB1 (7)
Year 3   DB2, SE1 (9)   SE2 (5)

Curricula: Q1 = {CS1, CS3, DB1, SE1},
Q2 = {CS1, CS2, DB2, SE2}
Preferences: CS3 not in term 2

Course  Credits
CS1     7
CS2     6
CS3     5
DB1     7
DB2     5
SE1     4
SE2     5
…       …

Precedences
CS1 ≺ CS2
CS2 ≺ CS3
CS1 ≺ DB1
… ≺ …
44. THE GENERALIZED BALANCED ACADEMIC CURRICULUM PROBLEM
Basic GLSMs

Fig. 4 The basic GLSM templates: (a) Basic, (b) Multi-start, (c) Multi-run, (d) Multi-start-multi-run.
[State-machine diagrams omitted; the transitions carry guards such as f < fbest / r ← 0 and f ≥ fbest / r ← r+1.]

Table 1 Statistics on the GBACP instances.
Instance  Periods (Years × Terms)  Courses  Curricula  Courses per curriculum (avg)
csplib8    8 (4 × 2)                46       1          46
csplib10  10 (5 × 2)                42       1          42
csplib12  12 (6 × 2)                66       1          66
UD1        9 (3 × 3)               307      37          34.62
UD2        6 (2 × 3)               268      20          27.8
UD3        9 (3 × 3)               236      31          29.81
UD4        6 (2 × 3)               139      16          25.69
UD5        6 (3 × 2)               282      31          34.32
UD6        4 (2 × 2)               264      20          27.15
(The Prerequ. and Pref. columns of the original table are not recoverable.)
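Read as code, the multi-start-multi-run template of Fig. 4(d) amounts to: re-launch a runner on the current solution, resetting an idle-run counter on improvement, and restart from a fresh solution when the counter exceeds its bound. The sketch below only illustrates that control logic, not the thesis implementation: the runner is a trivial randomized descent on a made-up one-dimensional cost function, and all names and parameter values are assumptions.

```python
import random

# Toy cost function; all names and parameters here are illustrative.
def cost(x):
    return (x - 3.0) ** 2

def init_solution(rng):
    return rng.uniform(-10.0, 10.0)

def runner(x, rng, steps=100):
    """Stand-in runner: a trivial randomized descent accepting improving steps."""
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)
        if cost(y) < cost(x):
            x = y
    return x

def multi_start_multi_run(rng, max_idle_runs=5, max_starts=10):
    """Control logic of the multi-start-multi-run GLSM (Fig. 4(d)):
    re-launch the runner on the current solution; the idle-run counter r
    is reset on improvement (f < f_best / r <- 0) and incremented
    otherwise (f >= f_best / r <- r+1); when r exceeds its bound,
    restart from a fresh initial solution."""
    best, f_best = None, float("inf")
    for _ in range(max_starts):                # multi-start
        x = init_solution(rng)
        r = 0
        while r <= max_idle_runs:              # multi-run
            x = runner(x, rng)
            f = cost(x)
            if f < f_best:
                best, f_best = x, f
                r = 0
            else:
                r += 1
    return best, f_best

sol, val = multi_start_multi_run(random.Random(42))
```

The same skeleton yields the simpler templates by degenerating one loop: (b) Multi-start is `max_idle_runs = 0`, (c) Multi-run is `max_starts = 1`.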
45. THE GENERALIZED BALANCED ACADEMIC CURRICULUM PROBLEM
Composite GLSMs (with a kicker)

Fig. 5 The composite GLSM templates (b)–(f) obtained from the multi-start-multi-run machine (a):
(a) 푅1, (b) 푅1 ▷ 퐾, (c) 푅1 ▷ 퐾+, (d) 푅1 ▷ 푅2, (e) 푅1 ▷ 푅2 ▷ 퐾, (f) 푅1 ▷ 푅2 ▷ 퐾+
[State-machine diagrams omitted; 푅1 and 푅2 are runners, 퐾 is a kicker.]
46. ALGORITHM SETTING
▶ The GLSM where equipped with a Simulated Annealing and a
Dynamic Tabu Search runner
▶ DTS features an adaptive weighting strategy for the cost
function
▶ the neighborhood considered are
MoveCourse (MC): move one course from its period to
another one
SwapCourses (SC): take two courses in different periods and
swap their periods
▶ As for the kicker we consider the intensification kicker relying
on the moves MC, SC and MC⊗SC.
29
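The two neighborhood relations can be sketched as plain generators over an assignment of courses to periods. This is only an illustration, not the actual EasyLocal++ code: the representation (a course → period dictionary) and all names are assumptions.

```python
from itertools import combinations

def move_course(assignment, n_periods):
    """MoveCourse (MC): move one course from its period to another one."""
    for course, period in assignment.items():
        for new_period in range(n_periods):
            if new_period != period:
                yield {**assignment, course: new_period}

def swap_courses(assignment):
    """SwapCourses (SC): take two courses in different periods and swap them."""
    for c1, c2 in combinations(assignment, 2):
        if assignment[c1] != assignment[c2]:
            yield {**assignment, c1: assignment[c2], c2: assignment[c1]}

# Tiny example: 3 courses over 2 periods.
a = {"CS1": 0, "CS2": 1, "DB1": 0}
mc = list(move_course(a, 2))   # each course has one alternative period -> 3 moves
sc = list(swap_courses(a))     # only pairs in different periods -> 2 swaps
```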
47. CONFIGURATION SELECTION
▶ We first tune each algorithm in isolation
▶ Afterwards, we equip the GLSMs with the best tuning found in the previous phase
▶ Kickers have been tuned in conjunction with both the SA and the DTS multi-start-multi-run machines.
52. PROBLEM DECOMPOSITION
Bi-level problem
▶ If the optimization problem can be decomposed as:

    min_{x̄, ȳ} f(x̄, ȳ)
    ȳ ∈ ℱ ⊆ ℤᵐ
    x̄ ∈ 𝒢(ȳ) ⊆ ℝⁿ

▶ then the ȳ and x̄ variables can be dealt with by different methods
▶ a metaheuristic can be more adequate for the discrete part
▶ ad-hoc methods could be applied to the continuous part
▶ once the ȳ variables are set to ȳ*, the problem reduces to

    min_{x̄} f′(x̄)
    x̄ ∈ 𝒢(ȳ*) ⊆ ℝⁿ

    where f′(x̄) = f(x̄, ȳ*)
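A minimal numeric illustration of this decomposition (toy problem and numbers invented for the purpose): the discrete ȳ is handled by enumeration, standing in for a metaheuristic, while for each fixed y the continuous inner problem has a closed-form solution.

```python
# Toy bi-level decomposition (problem and numbers invented for illustration):
# minimize f(x, y) = (x - 1)^2 + (y - 2.5)^2 with y in F = {0, ..., 5} (discrete)
# and x in G(y) = [0, y/4] (continuous, and dependent on y).

def slave(y):
    """For fixed y the inner problem min_x f'(x) has the closed-form
    solution x* = min(1, y/4): the unconstrained minimizer clamped to G(y)."""
    x = min(1.0, y / 4.0)
    return x, (x - 1.0) ** 2 + (y - 2.5) ** 2

# Master: plain enumeration of the discrete variable (a metaheuristic
# would search this space when F is too large to enumerate).
best_y, (best_x, best_f) = min(((y, slave(y)) for y in range(6)),
                               key=lambda t: t[1][1])
```

Note the interaction between the two levels: y = 2 is closest to 2.5, but y = 3 wins overall because it enlarges 𝒢(y) and lets x move closer to 1.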
53. HYBRIDIZATION BY PROBLEM DECOMPOSITION
Hybrid Metaheuristics Classification
▶ hybridization components: metaheuristics, exact techniques, other OR/AI techniques, other heuristic methods, problem-specific components
▶ level of hybridization: high-level (weak coupling) vs. low-level (strong coupling)
▶ order of execution: batch, interleaved, parallel
▶ control strategy: collaborative vs. integrative
▶ further dimensions: homogeneity, space decomposition
54. THE PORTFOLIO SELECTION PROBLEM
Unconstrained
▶ The classical model of investments (Markowitz 1952):
▶ a set of 푛 assets 푎1, …, 푎푛
▶ each asset 푎푖 has an associated return 푟푖 and variance 휎푖
▶ for each pair of assets (푎푖, 푎푗) we know the covariance 휎푖푗 of the two assets
▶ a total amount of money (normalized to 1)
▶ A portfolio is a vector of real values 푋 = {푥1, …, 푥푛}, such that:
▶ 푥푖 represents the fraction of money invested in asset 푎푖
▶ Σᵢ₌₁ⁿ 푟푖푥푖 is the return (or gain), to be maximized
▶ Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ 휎푖푗푥푖푥푗 is the variance → risk, to be minimized
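With these definitions, evaluating a candidate portfolio is just two dot products. A small sketch with made-up numbers (three assets; the returns and covariance matrix are invented for illustration):

```python
import numpy as np

# Illustrative data: returns r_i and covariance matrix sigma_ij (assumed values).
r = np.array([0.10, 0.07, 0.03])
sigma = np.array([[0.040, 0.006, 0.002],
                  [0.006, 0.010, 0.001],
                  [0.002, 0.001, 0.001]])

x = np.array([0.2, 0.3, 0.5])   # fractions of money invested; they sum to 1

ret = r @ x            # sum_i r_i x_i : the return, to be maximized
risk = x @ sigma @ x   # sum_i sum_j sigma_ij x_i x_j : the variance/risk, to minimize
```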
55. THE PORTFOLIO SELECTION PROBLEM
Constrained

    min 푓(푋) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ 휎푖푗푥푖푥푗
    s.t.  Σᵢ₌₁ⁿ 푟푖푥푖 ≥ 푅
          Σᵢ₌₁ⁿ 푥푖 = 1
          0 ≤ 푥푖 ≤ 1   (푖 = 1, …, 푛)

▶ Cardinality constraint: the portfolio size is bounded, where 푧푖 ∈ {0, 1} indicates whether asset 푎푖 is in the portfolio (푧푖 = 1 ⟺ 푥푖 > 0):
    1 ≤ 푘푚푖푛 ≤ Σᵢ 푧푖 ≤ 푘푚푎푥 ≤ 푛
▶ Quantity constraint: the quantity of each asset is bounded:
    푥푖 = 0 ∨ 휖푖 ≤ 푥푖 ≤ 훿푖   (0 ≤ 휖푖 ≤ 훿푖 ≤ 1)
56. PROBLEM DECOMPOSITION
▶ Problem decomposition:
▶ Master solver: determine the set 퐽 = {푖 | 푧푖 = 1} of assets in the solution
▶ Slave solver: find the optimal assignment of the 푥 variables subject to the constraints:
    휖푖 ≤ 푥푖 ≤ 훿푖 for 푖 ∈ 퐽
    푥푖 = 0 for 푖 ∉ 퐽
▶ Solution procedure:
▶ Master: simple Local Search on the 푧 variables (First Descent and Steepest Descent)
▶ Slave: Quadratic Programming solver (positive definite problem with fewer assets)
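The master–slave scheme can be illustrated on a tiny instance where the slave is solvable by hand: with |퐽| = 2 and the budget constraint 푥푖 + 푥푗 = 1, the slave QP reduces to a one-dimensional convex parabola. Everything below (the data, the bounds, and the exhaustive master standing in for First/Steepest Descent on 푧) is an invented sketch, not the actual solver.

```python
from itertools import combinations

# Made-up covariance matrix for 3 assets and quantity bounds (illustrative only).
sigma = [[0.040, 0.006, 0.002],
         [0.006, 0.010, 0.001],
         [0.002, 0.001, 0.030]]
eps, delta = 0.1, 0.9

def slave(i, j):
    """Slave QP for J = {i, j}: with x_i = t and x_j = 1 - t the risk
    sigma_ii*t^2 + 2*sigma_ij*t*(1-t) + sigma_jj*(1-t)^2 is a convex
    parabola in t, minimized in closed form, then clamped to the
    quantity bounds on both assets."""
    sii, sij, sjj = sigma[i][i], sigma[i][j], sigma[j][j]
    t = (sjj - sij) / (sii - 2 * sij + sjj)           # unconstrained minimizer
    t = min(max(t, max(eps, 1 - delta)), min(delta, 1 - eps))
    return sii * t**2 + 2 * sij * t * (1 - t) + sjj * (1 - t)**2, t

# Master: exhaustive search over all size-2 subsets J (the real solver
# runs a local search on the z variables instead).
best_J, (best_risk, best_t) = min(
    ((J, slave(*J)) for J in combinations(range(3), 2)),
    key=lambda p: p[1][0])
```

The decomposition pays off because each slave problem is a small, well-behaved QP, while the combinatorial choice of 퐽 is left entirely to the master.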
61. LARGE NEIGHBORHOOD SEARCH
▶ Local Search with a particular
▶ neighborhood relation: destroy and repair a significant portion
of a solution (controlled by an intensity parameter 푑)
▶ exploration strategy: use an exact method (e.g., Constraint
Programming) for the repair phase, optimizing the subproblem
(Pisinger and Ropke 2010; Shaw 1998)
procedure LargeNeighborhoodSearch(Π = ⟨푋, 퐷, 퐶, 푓⟩, 푑)
푖 ← 0
푠 ← InitializeSolution(Π)
while ¬StoppingCriterion(푠, 푖) do
Π′ ← Destroy(푠, 푑, Π)
푛 ← Repair(Π′, 푓(푠))
if SolutionAccepted(푛) then
푠 ← 푛
end if
푖 ← 푖 + 1
end while
end procedure
▶ the destroy step can be either random or based on the problem
structure (i.e., decomposition)
▶ reuse of the CP/ILP model
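The destroy-and-repair loop above can be made concrete on a toy 0/1 knapsack, with exhaustive enumeration of the destroyed variables playing the role of the exact repair step (a stand-in for the CP solver); the instance data and parameter values are invented for illustration.

```python
import random

# Toy 0/1 knapsack instance (made-up data): item values, weights, capacity.
values = [10, 7, 6, 4, 4, 3, 2, 2]
weights = [5, 4, 3, 2, 2, 2, 1, 1]
capacity = 10

def total(sol, coeff):
    return sum(c for c, s in zip(coeff, sol) if s)

def repair(sol, free):
    """Exact repair: enumerate all assignments of the destroyed (free)
    variables and keep the best feasible completion (CP/ILP stand-in)."""
    best, best_val = None, -1
    for mask in range(1 << len(free)):
        cand = list(sol)
        for b, i in enumerate(free):
            cand[i] = (mask >> b) & 1
        if total(cand, weights) <= capacity and total(cand, values) > best_val:
            best, best_val = cand, total(cand, values)
    return best

def lns(d=3, iters=50, seed=0):
    rng = random.Random(seed)
    s = [0] * len(values)                          # initial (empty) solution
    for _ in range(iters):
        free = rng.sample(range(len(values)), d)   # destroy: release d variables
        n = repair(s, free)                        # repair: exact on the subproblem
        if total(n, values) >= total(s, values):   # accept non-worsening solutions
            s = n
    return s, total(s, values)

sol, val = lns()
```

The intensity parameter 푑 governs the trade-off: a larger 푑 gives stronger repairs but makes the exact subproblem exponentially more expensive.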
64. HYBRIDIZATION BY LARGE NEIGHBORHOOD SEARCH
Hybrid Metaheuristics Classification
▶ hybridization components: metaheuristics, exact techniques, other OR/AI techniques, other heuristic methods, problem-specific components
▶ level of hybridization: high-level (weak coupling) vs. low-level (strong coupling)
▶ order of execution: batch, interleaved, parallel
▶ control strategy: collaborative vs. integrative
▶ further dimensions: homogeneity, space decomposition
66. GELATO
▶ Integrates Constraint Programming and Local Search in a single Large Neighborhood Search environment
▶ Inherits both the flexibility of CP and the efficiency of Local Search
▶ Different points of view:
▶ Gecode point of view: a library that adds Local Search features
▶ EasyLocal++ point of view: a library that adds Constraint Programming features
▶ Global point of view: an environment to model problems and hybrid solving strategies using a high-level language, and to search for solutions using Gecode and EasyLocal++ as low-level solvers
▶ Reuse of the Constraint Programming model for the Large Neighborhood Search
▶ Reuse of the EasyLocal++ Local Search metaheuristics
73. CONCLUSIONS
▶ An overview of my research on Hybrid Local Search-based
Metaheuristics
▶ Covers different hybridization methods
▶ Other studies on hybridization with different metaheuristics
▶ Two-level ACO for Haplotype Inference under pure parsimony
(Benedettini, Roli, and Di Gaspero 2008)
▶ A Hybrid ACO+CP for Balancing Bicycle Sharing Systems
(Di Gaspero, Rendl, and Urli 2013)
▶ Further studies on different problems
▶ A CP/LNS Approach for Multi-day Homecare Scheduling
Problems (Di Gaspero and Urli 2014)
▶ Future work
▶ Working on collaborative methods
▶ Investigating better LNS decomposition approaches