This document surveys nature-inspired metaheuristic algorithms for optimization and computational intelligence. It outlines the topics covered: introductions, metaheuristic algorithms, Monte Carlo methods and Markov chains, algorithm analysis, exploration and exploitation techniques, constraint handling, applications, and discussion. It also notes some key quotes: that computational science is the third paradigm of science, that all models are wrong but some are useful, and that, according to the no-free-lunch theorems, all algorithms perform equally well when averaged over all problems.
Nature-Inspired Optimization Algorithms Xin-She Yang
This document discusses nature-inspired optimization algorithms. It begins with an overview of the essence of optimization algorithms and their goal of moving to better solutions. It then discusses some issues with traditional algorithms and how nature-inspired algorithms aim to address these. Several nature-inspired algorithms are described in detail, including particle swarm optimization, firefly algorithm, cuckoo search, and bat algorithm. These are inspired by behaviors in swarms, fireflies, cuckoos, and bats respectively. Examples of applications to engineering design problems are also provided.
The document discusses ant colony optimization (ACO) algorithms. It introduces ACO as a probabilistic metaheuristic technique inspired by the behavior of ants seeking paths between their colony and food sources. It outlines the ACO metaheuristic and describes key ACO algorithms like Ant System, Ant Colony System, and MAX-MIN Ant System. The document also covers applications of ACO, advantages like inherent parallelism and efficient solutions to problems like the traveling salesman problem, and disadvantages like difficulty analyzing ACO theoretically.
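The construction step the summary describes (ants choosing the next city from pheromone strength and heuristic desirability, followed by evaporation and deposit) can be sketched as a tiny Ant System for the traveling salesman problem. This is an illustrative sketch, not the exact algorithm from the summarized slides; the function name `aco_tsp` and all parameter defaults are assumptions.

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Tiny Ant System sketch: ants build tours city by city, choosing the
    next city with probability proportional to pheromone**alpha times
    (1/distance)**beta; pheromone then evaporates and good tours deposit more."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                tour.append(rng.choices(cand, weights=weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                          # evaporation
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:                  # deposit, scaled by quality
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# Usage: four cities on a unit square; the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for (bx, by) in pts]
        for (ax, ay) in pts]
tour, length = aco_tsp(dist)
```

The pheromone matrix is the algorithm's only shared memory, which is why ACO parallelizes naturally across ants, as the summary notes.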
This document discusses particle swarm optimization (PSO), which is an optimization technique inspired by swarm intelligence and the social behavior of bird flocking or fish schooling. PSO uses a population of candidate solutions called particles that fly through the problem hyperspace, with each particle adjusting its position based on its own experience and the experience of neighboring particles. The algorithm iteratively improves the particles' positions to locate the best solution based on fitness evaluations.
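The position adjustment described above can be sketched in a few lines: each particle's velocity blends its previous motion (inertia), attraction to its own best position (cognitive term), and attraction to the swarm's best position (social term). This is a minimal sketch with standard, assumed coefficient values, not the implementation from any of the summarized presentations.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimal particle swarm optimizer (minimization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's own best
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # best seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive (own experience) + social (neighbours)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the 2-D sphere function; the optimum is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

The random factors `r1` and `r2` are what keep the swarm stochastic; without them all particles would follow identical deterministic trajectories.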
Cuckoo search is an optimization algorithm inspired by cuckoos that lay eggs in other birds' nests. It works by representing each potential solution as an "egg" in a nest, with the aim of replacing poor solutions with new, potentially better ones. There are three main rules: each cuckoo lays one egg at a time in a randomly chosen nest; the best nests carrying high-quality eggs carry over to the next generation; and some host birds can detect alien eggs and abandon the nest, requiring the cuckoo to lay again in a new nest. The algorithm uses random walks to explore the search space and find optimal solutions. It is simple to implement compared to other metaheuristic algorithms and has been successfully applied to a wide range of optimization problems.
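The three rules map directly onto code. The sketch below is one minimal reading of them, with assumed parameter values; the Lévy-flight step uses Mantegna's algorithm, which is one common choice for generating heavy-tailed random walks, not something mandated by the rules themselves.

```python
import math, random

def levy_step(rng, beta=1.5):
    """Heavy-tailed random step via Mantegna's algorithm (one common choice)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # Rule 1: a cuckoo generates a new egg (solution) by a Levy flight
        # from a random nest and drops it in another randomly chosen nest
        # if it beats the egg already there.
        i = rng.randrange(n_nests)
        new = [x + 0.01 * levy_step(rng) for x in nests[i]]
        j = rng.randrange(n_nests)
        new_fit = f(new)
        if new_fit < fit[j]:
            nests[j], fit[j] = new, new_fit
        # Rule 3: a fraction pa of the worst nests is discovered by the host,
        # abandoned, and rebuilt at random. Rule 2 (the best nests carry over)
        # holds because only the worst nests are ever replaced.
        worst_first = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in worst_first[:int(pa * n_nests)]:
            nests[k] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    b = min(range(n_nests), key=lambda k: fit[k])
    return nests[b], fit[b]

# Usage: minimize the 2-D sphere function.
best, best_val = cuckoo_search(lambda x: sum(v * v for v in x), dim=2)
```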
A presentation on PSO with videos and animations to illustrate the concept. The ppt throws light on the concept, the algo, the application and comparison of PSO with GA and DE.
Metaheuristic Algorithms: A Critical Analysis Xin-She Yang
The document discusses metaheuristic algorithms and their application to optimization problems. It provides an overview of several nature-inspired algorithms including particle swarm optimization, firefly algorithm, harmony search, and cuckoo search. It describes how these algorithms were inspired by natural phenomena like swarming behavior, flashing fireflies, and bird breeding. The document also discusses applications of these algorithms to engineering design problems like pressure vessel design and gear box design optimization.
This document discusses particle swarm optimization (PSO), which is an optimization technique inspired by swarm intelligence. It summarizes that PSO was developed in 1995 and can be applied to various search and optimization problems. PSO works by having a swarm of particles that communicate locally to find the best solution within a search space, balancing exploration and exploitation.
Particle swarm optimization is a metaheuristic algorithm inspired by the social behavior of bird flocking. It works by having a population of candidate solutions, called particles, that fly through the problem space, adjusting their positions based on their own experience and the experience of neighboring particles. Each particle keeps track of its best position and the best position of its neighbors. The algorithm iteratively updates the velocity and position of each particle to move it closer to better solutions.
Metaheuristic Optimization: Algorithm Analysis and Open Problems Xin-She Yang
The document discusses metaheuristic algorithms for optimization problems. It begins with introductions from two experts about computational science and the usefulness of models. It then provides an overview of different metaheuristic algorithms like simulated annealing, genetic algorithms, and particle swarm optimization. The document discusses how these algorithms generate new solutions through techniques like probabilistic moves, Markov chains, crossover and mutation. It provides examples and diagrams to illustrate how various metaheuristic algorithms work.
Optimization involves finding the best values for variables to minimize or maximize an objective function subject to constraints. An optimization problem consists of an objective function, variables, and constraints. The objective function expresses the performance of a system and must be minimized or maximized. Variables define the objective function and constraints. Constraints allow variables to take on certain values but exclude others to ensure feasibility. Common optimization techniques include mathematical programming, calculus methods, network methods, and meta-heuristic algorithms such as genetic algorithms, simulated annealing, and whale optimization.
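The objective/variables/constraints decomposition above can be made concrete with a toy problem: minimize x² + y² subject to x + y ≥ 1. The brute-force grid search below is purely illustrative (real solvers do far better); the function names and grid resolution are assumptions.

```python
def objective(x, y):
    return x ** 2 + y ** 2        # performance measure to be minimized

def feasible(x, y):
    return x + y >= 1.0           # constraint: excludes part of the space

# Brute-force search over a discretized variable domain; only feasible
# points compete, which is exactly how constraints exclude values.
best = min(
    ((x / 100, y / 100)
     for x in range(-200, 201) for y in range(-200, 201)
     if feasible(x / 100, y / 100)),
    key=lambda p: objective(*p),
)
# The constrained optimum is (0.5, 0.5) with objective value 0.5, whereas
# the unconstrained optimum (0, 0) is infeasible.
```

The example shows why constraints matter: they can move the optimum away from the point the objective alone would prefer.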
A brief introduction to the principles of particle swarm optimization by Rajorshi Mukherjee. This presentation was compiled from various sources (not my own work), with proper references in the bibliography section for further reading. It was prepared for submission for our college subject Soft Computing.
The document summarizes the artificial bee colony (ABC) algorithm, which was introduced in 2005 and is inspired by the foraging behavior of honeybee swarms. The ABC algorithm simulates three groups of bees - employed bees, onlookers, and scouts - to solve optimization problems. It involves phases of employed bee search, onlooker bee choice, and scout bee recruitment to balance exploration and exploitation. The ABC algorithm has few parameters and fast convergence but is limited by its initial solutions. Variations include multi-objective ABC algorithms and parameter studies on swarm size, limit, and dimension.
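The three phases the summary names (employed bees, onlookers, scouts) can be sketched as follows. This is a minimal, assumed reading of the ABC scheme, not the exact algorithm from the summarized document; the fitness transform `1/(1+v)` assumes non-negative objective values, which holds for the sphere function used here.

```python
import random

def abc(f, dim, n_sources=10, iters=100, limit=20, lo=-5.0, hi=5.0, seed=0):
    """Minimal artificial bee colony (minimization)."""
    rng = random.Random(seed)
    rand_source = lambda: [rng.uniform(lo, hi) for _ in range(dim)]
    src = [rand_source() for _ in range(n_sources)]
    val = [f(s) for s in src]
    trials = [0] * n_sources
    b = min(range(n_sources), key=lambda i: val[i])
    gbest, gbest_val = src[b][:], val[b]

    def try_neighbor(i):
        # perturb one dimension relative to a random partner source
        k, d = rng.randrange(n_sources), rng.randrange(dim)
        cand = src[i][:]
        cand[d] += rng.uniform(-1, 1) * (src[i][d] - src[k][d])
        cv = f(cand)
        if cv < val[i]:
            src[i], val[i], trials[i] = cand, cv, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                 # employed bee phase
            try_neighbor(i)
        # onlooker phase: richer sources attract more bees
        # (1/(1+v) assumes non-negative objective values)
        weights = [1.0 / (1.0 + v) for v in val]
        for _ in range(n_sources):
            try_neighbor(rng.choices(range(n_sources), weights=weights)[0])
        i = min(range(n_sources), key=lambda j: val[j])
        if val[i] < gbest_val:                     # remember the best source
            gbest, gbest_val = src[i][:], val[i]
        for i in range(n_sources):                 # scout phase
            if trials[i] > limit:                  # abandon stale sources
                src[i], trials[i] = rand_source(), 0
                val[i] = f(src[i])
    return gbest, gbest_val

# Usage: minimize the 2-D sphere function.
best, best_val = abc(lambda x: sum(v * v for v in x), dim=2)
```

The `limit` parameter is the exploration/exploitation dial the summary mentions: small values re-randomize sources aggressively, large values let the colony exploit known sources longer.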
Particle swarm optimization is a population-based stochastic optimization technique inspired by bird flocking or fish schooling. It works by having a population of candidate solutions, called particles, and moving these particles around in the search space according to simple mathematical formulae over each particle's position and velocity. Each particle keeps track of the coordinates in the problem space associated with the best solution it has achieved so far. The main idea is that information about good solutions spreads through the flock, drawing the swarm toward promising regions.
Dr. Ahmed Fouad Ali of Suez Canal University presents an overview of particle swarm optimization (PSO), a meta-heuristic optimization technique inspired by swarm intelligence in animals. PSO was proposed in 1995 by Kennedy and Eberhart and simulates the social behavior of bird flocking or fish schooling. In PSO, each potential solution is a "particle" moving in the search space, adjusting its position based on its own experience and the experience of neighboring particles. The algorithm tracks the best solution found by each particle and the best solution found by the entire swarm to guide the particles toward promising regions of the search space. PSO has the advantages of being simple to implement with few parameters to adjust, while also being effective on a wide range of optimization problems.
Particle swarm optimization (PSO) is an evolutionary computation technique for optimizing problems. It initializes a population of random solutions and searches for optima by updating generations. Each potential solution, called a particle, tracks its best solution and the overall best solution to change its velocity and position in search of better solutions. The algorithm involves initializing particles with random positions and velocities, then updating velocities and positions iteratively based on the particles' local best solution and the global best solution until termination criteria are met. PSO has advantages of being simple, quick, and effective at locating good solutions.
The document summarizes the Cuckoo Search algorithm, which is inspired by the brood parasitism behavior of some cuckoo species. It describes three key aspects of cuckoos' behavior that the algorithm is based on: 1) cuckoos lay their eggs in other birds' nests; 2) if the host bird discovers the foreign egg, it will throw it out or abandon the nest; 3) cuckoo eggs often hatch slightly earlier, allowing the cuckoo chick to evict the other eggs. The algorithm represents each solution as an "egg" in a nest - the aim is to use new solutions to replace inferior solutions. It operates according to three rules: each cuckoo lays one egg at a time in a randomly chosen nest; the best nests with high-quality eggs carry over to the next generation; and a fraction of eggs may be discovered by the host bird, in which case the nest is abandoned.
The document describes the Bees Algorithm, which is an optimization algorithm inspired by the foraging behavior of honey bees. It begins with randomly placed scout bees that evaluate potential solutions. The best sites are selected and more bees are recruited to explore near those locations. The fittest bees are kept and the process repeats until the termination criteria are met. The algorithm aims to efficiently locate good solutions, much as honey bees locate food sources. It can be applied to problems with multiple optimal solutions.
Optimization and particle swarm optimization (O & PSO) Engr Nosheen Memon
The document discusses particle swarm optimization (PSO) which is a population-based stochastic optimization technique inspired by social behavior of bird flocking or fish schooling. It summarizes PSO as follows: PSO initializes a population of random solutions and searches for optima by updating generations of candidate solutions. Each candidate is adjusted based on the best candidates in the local neighborhood and overall population. This process is repeated until a termination criterion is met.
The document discusses various optimization techniques including evolutionary computing techniques such as particle swarm optimization and genetic algorithms. It provides an overview of the goal of optimization problems and discusses black-box optimization approaches. Evolutionary algorithms and swarm intelligence techniques that are inspired by nature are also introduced. The document then focuses on particle swarm optimization, providing details on the concepts, mathematical equations, components and steps involved in PSO. It also discusses genetic algorithms at a high level.
Simulated annealing is an algorithm for finding good solutions to optimization problems, such as the traveling salesman problem, where the goal is to find the shortest route between cities. It is inspired by annealing in metalworking, where heating and controlled cooling produces strong, defect-free metal. The algorithm starts with a random solution and generates neighboring solutions, accepting worse solutions with a probability related to the cost difference and the iteration number, to avoid local optima. This allows big jumps early on, but the algorithm homes in on a local optimum over many iterations, usually finding a good enough solution. Parameters must be tuned through trial and error. Overall, simulated annealing is considered effective for optimization problems.
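The acceptance rule described above (always take improvements; take worse moves with probability tied to the cost difference and the current temperature) can be sketched generically. The geometric cooling schedule, parameter defaults, and the 1-D test function are illustrative assumptions, not values from the summarized document.

```python
import math, random

def simulated_annealing(f, x0, neighbor, t0=10.0, cooling=0.995, iters=2000, seed=0):
    """Accept improvements always; accept worse moves with probability
    exp(-delta / T), where T shrinks geometrically each iteration."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fy - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy                 # move, possibly uphill
            if fx < best_f:
                best, best_f = x, fx      # remember the best ever seen
        t *= cooling                      # geometric cooling schedule
    return best, best_f

# Usage: a 1-D function with several local minima; the global minimum is
# near x = -1.3 (value about -7.9), far from the start point x0 = 8.
f = lambda x: x * x + 10 * math.sin(x)
step = lambda x, rng: x + rng.uniform(-1, 1)
best, best_f = simulated_annealing(f, x0=8.0, neighbor=step)
```

High temperature early on lets the walk cross the barriers between basins; as `t` shrinks, uphill moves become rare and the search settles into one basin, which is the "big jumps early, fine-tuning late" behavior the summary describes.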
This presentation provides an introduction to the Particle Swarm Optimization topic, it shows the PSO basic idea, PSO parameters, advantages, limitations and the related applications.
This document discusses several metaheuristic optimization algorithms, including Ant Colony Optimization (ACO), Firefly Algorithm, Modified Firefly Algorithm, BAT Algorithm, and Artificial Bee Colony (ABC) algorithm. It provides brief overviews of each algorithm, describing how they are inspired by natural behaviors and processes and outlining their main rules and procedures. The document is presented by Dr. C. Gokul and discusses these algorithms for optimization and problem solving.
The document proposes using particle swarm optimization (PSO) for supervised hyperspectral band selection to reduce data dimensionality before classification. It describes existing band selection approaches, how PSO can be applied to band selection, and reports classification results on two hyperspectral datasets that show PSO band selection improves SVM classification accuracy over other methods.
This document provides an overview of optimization techniques. It defines optimization as identifying variable values that minimize or maximize an objective function subject to constraints. It then discusses various applications of optimization in finance, engineering, and data modeling. The document outlines different types of optimization problems and algorithms. It provides examples of unconstrained optimization algorithms like gradient descent, conjugate gradient, Newton's method, and BFGS, along with the derivative-free Nelder-Mead simplex method, and compares the performance of these algorithms on sample problems.
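Of the unconstrained methods listed, gradient descent is the simplest to illustrate: step repeatedly in the direction of steepest descent. The function name, learning rate, and quadratic example below are illustrative assumptions.

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Usage: minimize f(x, y) = (x - 3)^2 + (y + 1)^2,
# whose gradient is (2(x - 3), 2(y + 1)); the minimum is at (3, -1).
x = gradient_descent(lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)], [0.0, 0.0])
```

Unlike the metaheuristics elsewhere on this page, gradient descent is deterministic and needs derivative information, which is exactly the requirement that derivative-free methods like Nelder-Mead and the nature-inspired algorithms avoid.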
Evolutionary Symbolic Discovery for Bioinformatics, Systems and Synthetic Bi... Natalio Krasnogor
The document discusses using evolutionary symbolic discovery methods to synthesize effective energy functions for protein structure prediction and systems/synthetic biology models. It describes using genetic programming techniques to explore large combinatorial spaces of modular components and parameters to construct stochastic P systems that model cellular systems. The goal is to find structures and optimize parameters in P systems to match target models through comparing different evolutionary algorithms on test cases of increasing difficulty and dimension.
The document discusses updating the statistical mechanics curriculum to incorporate developments over the last 50 years. It notes how statistical mechanics is important to many fields beyond physics like biology, computer science, engineering, mathematics and social sciences. The document argues statistical mechanics will be taught across many fields in the next generation, and teaching it to a variety of fields enriches the subject for those with physics backgrounds.
The Advancement and Challenges in Computational Physics - Phdassistance PhD Assistance
For the last five decades, computational physics has been a valuable scientific instrument in physics. Compared with using only theoretical and experimental approaches, it has enabled physicists to understand complex problems better. For much of that time, computational physics was mostly a research activity, with relatively few organised undergraduate study programmes.
This document summarizes the relationship between systems biology and theoretical physics. It discusses how systems biology combines experimental techniques with mathematical modeling to understand biological processes, and how this field draws from both engineering and physics approaches. While engineering aims to numerically simulate biological systems, physics seeks universal principles and laws. The document reviews how concepts from physics, like statistical physics and nonlinear dynamics, have influenced systems biology research and how further integrating theoretical physics perspectives could aid understanding of biological systems.
Statistical global modeling of β^- decay half-lives systematics ... butest
This document discusses using machine learning techniques like neural networks and support vector machines to model beta decay half-lives of nuclei. It compares these statistical, data-driven models to existing theory-based global models. The machine learning procedures treat beta decay half-lives as a non-linear optimization problem solved through statistical learning. Neural networks and support vector regression machines were constructed and compared to experimental data, previous neural network results, and traditional nuclear model estimates, showing similar performance to the best global calculations.
Have We Missed Half of What the Neocortex Does? by Jeff Hawkins (12/15/2017) Numenta
This was a presentation given on December 15, 2017 at the MIT Center for Brains, Minds + Machines as part of their Brains, Minds and Machines Seminar Series.
In this talk, Jeff describes a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus of what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
Jeff discusses material from these two papers. Others can be found at https://numenta.com/papers
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URL: https://doi.org/10.3389/fncir.2017.00081
Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in the Neocortex
URL: https://doi.org/10.3389/fncir.2016.00023
This document discusses two decades of model sharing in systems biology. Over the past 20 years, there has been significant progress in developing standards and software for sharing mathematical models. This includes standards for describing models (SBML), simulations (SED-ML), and annotations (MIRIAM). Major model repositories now host thousands of shared models across various domains. Standardization has enabled large-scale model reconstruction, validation of existing models, and discovery of new models from data.
This document contains summaries of several papers related to artificial intelligence and neural networks:
1. The first paper discusses using recurrent neural networks to plan robot motions in variable environments.
2. The second paper describes using neural network models to classify brain signals and mind states from EEG data.
3. The third paper proposes using coarticulation composite models in acoustic-phonetic decoding to improve recognition rates beyond phonemes, diphones, and triphones.
Mun-Chung Tor is currently pursuing a Master's degree in Physics at King's College London. He has strong programming skills in Python, Fortran, and statistical data analysis. His dissertation focused on simulating dark matter signals using Python. He has also reviewed modern cosmology topics. Tor is highly motivated, adapts well to different work, and excels at problem solving independently or in teams. He is looking for opportunities that make use of his physics education and technical abilities.
This document presents an algorithm for interactively learning monotone Boolean functions. The algorithm is based on Hansel's lemma, which states that algorithms based on finding maximal upper zeros and minimal lower units are optimal for learning monotone Boolean functions. The algorithm allows decreasing the number of queries needed to learn non-monotone functions that can be represented as combinations of monotone functions. The effectiveness of the approach is demonstrated through computational experiments in engineering and medical applications.
This document presents an algorithm for interactively learning monotone Boolean functions from examples. The algorithm is based on Hansel's lemma, which yields the minimal worst-case number of membership queries needed to learn a monotone Boolean function of n variables. The algorithm learns the target function by finding its maximal upper zeros and minimal lower units, representing the borders of the negative and positive patterns, respectively. The algorithm is optimal in the sense that it minimizes the maximum number of queries needed for any monotone Boolean function.
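The "maximal upper zeros" and "minimal lower units" these summaries refer to can be made concrete with a small sketch. This is a brute-force illustration of the borders the algorithm targets, not the query-efficient Hansel-chain procedure itself; the target function is a hypothetical example:

```python
from itertools import product

def leq(a, b):
    # componentwise partial order on Boolean vectors: a <= b
    return all(x <= y for x, y in zip(a, b))

def borders(f, n):
    """Query f on every vector of {0,1}^n and return the maximal upper
    zeros and minimal lower units (the borders of the two classes)."""
    vecs = list(product((0, 1), repeat=n))
    zeros = [v for v in vecs if f(v) == 0]
    units = [v for v in vecs if f(v) == 1]
    max_zeros = [z for z in zeros
                 if not any(z != w and leq(z, w) for w in zeros)]
    min_units = [u for u in units
                 if not any(u != w and leq(w, u) for w in units)]
    return max_zeros, min_units

# hypothetical monotone target: f(x) = x1 AND (x2 OR x3)
target = lambda v: v[0] & (v[1] | v[2])
max_zeros, min_units = borders(target, 3)
```

For this target, the minimal lower units are (1,1,0) and (1,0,1), and the maximal upper zeros are (0,1,1) and (1,0,0); together they fully determine the monotone function, which is why the query-efficient algorithm aims at them.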
This document summarizes kernel methods in machine learning. It begins with an introductory example of using a kernel function to perform binary classification in a reproducing kernel Hilbert space. It then defines positive definite kernels and shows how they allow representing algorithms as operating in linear dot product spaces while using nonlinear kernel functions. The document covers fundamental properties of kernels, provides examples, and discusses how kernels define reproducing kernel Hilbert spaces for regularization. It overviews various kernel-based machine learning approaches and modeling structured responses using statistical models in reproducing kernel Hilbert spaces.
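The central idea above, that a kernel evaluates a dot product in a feature space without constructing that space, can be checked directly for a small case. A sketch using the homogeneous degree-2 polynomial kernel in 2-D, whose explicit feature map is known in closed form (the input vectors are illustrative):

```python
import math

def poly2_kernel(x, y):
    # homogeneous degree-2 polynomial kernel: k(x, y) = (x . y)^2
    return sum(a * b for a, b in zip(x, y)) ** 2

def feature_map(x):
    # explicit feature map for this kernel in 2-D:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return (x1 * x1, math.sqrt(2.0) * x1 * x2, x2 * x2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (3.0, -1.0)
k_implicit = poly2_kernel(x, y)                   # evaluated in input space
k_explicit = dot(feature_map(x), feature_map(y))  # same value in feature space
```

Both evaluations agree, which is the "kernel trick": any algorithm written purely in terms of dot products can be run nonlinearly by substituting k.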
The document provides an overview of neural networks including:
- Their history from early models in the 1940s to the breakthrough of backpropagation in the 1980s.
- What a neural network is and how it works at the level of individual neurons and when connected together.
- Common applications of neural networks like prediction, classification, and clustering.
- Key considerations in choosing an appropriate neural network architecture and training data for a given problem.
Xin Yao: "What can evolutionary computation do for you?" (ieee_cis_cyprus)
Evolutionary computation techniques like genetic programming and evolutionary algorithms can be used for adaptive optimization, data mining, and machine learning. They have been successfully applied to problems like modeling galaxy distributions, material modeling, constraint handling, dynamic optimization, multi-objective optimization, and ensemble learning. While evolutionary computation has had many real-world applications, challenges remain in improving theoretical foundations, scalability to large problems, dealing with dynamic and uncertain environments, and developing the ability to learn from previous optimization experiences.
This document contains notes from the first lecture of the MIT course 10.637 (quantum chemical simulation). The key points covered include:
- An introduction to atomistic and quantum chemical simulations and how they can provide insights into materials, catalysts, and chemical systems at the nanoscale.
- An overview of the course content, which will cover classical force fields, electronic structure theory, sampling methods, excited state methods and applications in various fields.
- Details on assignments, grading, and expectations upon completing the course.
- Case studies demonstrating different simulation techniques, including reaction discovery in nanoreactors, modeling protein-ligand binding, predicting singlet fission rates, and computational screening of surface catalysts.
The Advancement and Challenges in Computational Physics (PhD Assistance)
For the last five decades, computational physics has been a valuable scientific instrument, enabling physicists to understand complex problems better than theoretical and experimental approaches alone could. For much of that time it was primarily a research activity, with relatively few organised undergraduate programmes.
This document discusses systems biology and provides examples of regulatory networks and dynamics modeling in systems biology. It summarizes that systems biology aims to understand biological processes using a systems-level approach by integrating 'omics data, quantitative analysis, and computational modeling to study biological systems at various scales, from pathways to whole organisms. It also notes the rapid expansion of the field since 2000 and discusses current and future directions, including data integration, modeling dynamics, placing networks in spatial and temporal contexts, and applications to medicine.
Why Neurons Have Thousands of Synapses? A Model of Sequence Memory in the Brain (Numenta)
Presentation given by Yuwei Cui, Numenta Research Engineer at Beijing Normal University. December 2015.
Collaborators: Jeff Hawkins, Subutai Ahmad, Chetan Surpur
Similar to Nature-inspired metaheuristic algorithms for optimization and computational intelligence
Cuckoo Search Algorithm: An Introduction (Xin-She Yang)
This presentation explains the fundamental ideas of the standard Cuckoo Search (CS) algorithm and includes links to the free Matlab codes on the MathWorks File Exchange, along with animations of numerical simulations (video on YouTube). An example of multi-objective cuckoo search (MOCS) is also given, with a link to the Matlab code.
A Biologically Inspired Network Design Model (Xin-She Yang)
This document summarizes a biologically inspired network design model based on the foraging behavior of the slime mold Physarum polycephalum. The model uses a gravity model to estimate traffic flows between cities and simulates the slime mold's development of a protoplasmic network to connect food sources. It applies this approach to design transportation networks for Mexico and China, comparing the results to existing networks. The networks are evaluated based on cost, efficiency, and robustness. The model converges to solutions that balance these factors in a flexible and optimized way inspired by biological networks.
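The gravity model mentioned in the summary estimates inter-city traffic as proportional to the product of the city sizes and inversely proportional to a power of their distance. A generic sketch; the constant k, the distance exponent beta, and the populations and distances below are illustrative, not the calibrated values from the paper:

```python
def gravity_flow(pop_i, pop_j, distance, k=1.0, beta=2.0):
    """Generic gravity-model traffic estimate between two cities:
    flow ~ k * P_i * P_j / d^beta (k and beta are illustrative)."""
    return k * pop_i * pop_j / distance ** beta

# hypothetical populations (millions) and inter-city distances (km)
flow_near = gravity_flow(8.9, 2.7, 100.0)
flow_far = gravity_flow(8.9, 2.7, 400.0)
```

With beta = 2, quadrupling the distance cuts the estimated flow by a factor of sixteen, which is what makes nearby city pairs dominate the simulated food-source layout.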
Multiobjective Bat Algorithm (demo only), Xin-She Yang
The document describes a Bat Algorithm used for multi-objective optimization. It includes the pseudo code for the Bat Algorithm and describes how it generates potential solutions and updates them over iterations to find optimal trade-offs between two objectives. It also includes two objective functions used as examples to generate a Pareto front of optimal solutions.
This document contains code for a bat-inspired algorithm for continuous optimization. It includes a function that implements the bat algorithm to minimize an objective function. The bat algorithm is a metaheuristic algorithm that simulates the echolocation behavior of bats. It initializes a population of bats with random solutions and velocities, then iteratively updates the solutions and tracks the best solution found based on the objective function value.
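The loop described above can be sketched in Python. This is a minimal rendering of a bat-style algorithm, not the document's own code: the population size, frequency range, loudness/pulse-rate constants, and local-walk step size are all illustrative choices.

```python
import math
import random

def bat_algorithm(fobj, dim, lb, ub, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9, seed=1):
    """Minimal sketch of a bat algorithm for continuous minimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats   # loudness A_i, decays on acceptance
    rate = [0.5] * n_bats   # pulse emission rate r_i
    fit = [fobj(p) for p in pos]
    best_i = min(range(n_bats), key=lambda i: fit[i])
    best_pos, best_fit = pos[best_i][:], fit[best_i]
    for t in range(n_iter):
        for i in range(n_bats):
            # frequency-tuned velocity update pulls the bat toward the best
            freq = f_min + (f_max - f_min) * rng.random()
            vel[i] = [v + (b - x) * freq
                      for v, x, b in zip(vel[i], pos[i], best_pos)]
            cand = [min(max(x + v, lb), ub)
                    for x, v in zip(pos[i], vel[i])]
            if rng.random() > rate[i]:
                # local random walk around the current best solution
                cand = [min(max(b + 0.01 * rng.gauss(0.0, 1.0), lb), ub)
                        for b in best_pos]
            f_cand = fobj(cand)
            if f_cand <= fit[i] and rng.random() < loud[i]:
                pos[i], fit[i] = cand, f_cand
                loud[i] *= alpha   # grow quieter as the bat closes in
                rate[i] = 0.5 * (1.0 - math.exp(-gamma * t))
            if f_cand <= best_fit:
                best_pos, best_fit = cand[:], f_cand
    return best_pos, best_fit

sphere = lambda x: sum(v * v for v in x)
xb, fb = bat_algorithm(sphere, dim=2, lb=-5.0, ub=5.0)
```

On the sphere function the sketch drives the best solution close to the origin; the loudness/pulse-rate updates trade exploration for exploitation as the search converges.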
This document contains Matlab code that implements the firefly algorithm to solve constrained optimization problems. The firefly algorithm is used to minimize an objective function with bounds on the variables. It initializes a population of fireflies randomly within the bounds, calculates their light intensities based on the objective function, and iteratively moves the fireflies towards more intense ones while enforcing the bounds.
Flower Pollination Algorithm (Matlab code), Xin-She Yang
This document describes the flower pollination algorithm (FPA), a nature-inspired metaheuristic algorithm for optimization problems. It contains the basic components of FPA implemented in a demo program for single objective optimization of unconstrained functions. FPA mimics the pollination process of flowers, where pollen can be transported over long distances by insects or animals, and reproduced by local pollination among neighboring flowers of the same species. The demo program initializes a population of solutions, evaluates their fitness, and then iteratively updates the solutions using either long distance global pollination or local pollination until a maximum number of iterations is reached.
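The long-distance global pollination step in FPA is usually driven by Lévy flights, commonly drawn with Mantegna's algorithm. A sketch; the exponent beta = 1.5 and the step-scaling factor gamma are illustrative values, not necessarily those in the demo program:

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-distributed step via Mantegna's algorithm."""
    num = math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
    den = math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)
    sigma = (num / den) ** (1.0 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1.0 / beta)

def global_pollination(x, best, gamma=0.1, rng=random):
    # long-distance pollination: jump toward the current best solution,
    # with the jump length drawn from a heavy-tailed Levy distribution
    return [xi + gamma * levy_step(rng=rng) * (bi - xi)
            for xi, bi in zip(x, best)]

rng = random.Random(0)
x_new = global_pollination([1.0, -2.0], [0.0, 0.0], rng=rng)
```

The heavy tail means most moves are small refinements while occasional large jumps keep the search from stagnating, which is the motivation for using Lévy flights over Gaussian steps.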
Nature-Inspired Metaheuristic Algorithms (Xin-She Yang)
This chapter introduces optimization problems and nature-inspired metaheuristics. Optimization problems involve minimizing or maximizing objective functions subject to constraints. Nature-inspired metaheuristics are computational algorithms inspired by natural phenomena, such as simulated annealing, genetic algorithms, particle swarm optimization, and ant colony optimization. They provide near-optimal solutions to complex optimization problems.
Metaheuristics and Optimization in Civil Engineering (Xin-She Yang)
This document provides an overview of metaheuristic algorithms that have been applied to optimization problems in civil engineering. It discusses several commonly used metaheuristic algorithms, including genetic algorithms, simulated annealing, ant colony optimization, and particle swarm optimization. The document also provides examples of applications of these algorithms to problems in areas such as structural engineering, transportation engineering, and geotechnical engineering.
Memetic Firefly algorithm for combinatorial optimization (Xin-She Yang)
The document proposes a memetic firefly algorithm (MFFA) for solving combinatorial optimization problems, specifically graph 3-coloring problems. The MFFA represents solutions as real-valued vectors whose elements determine the order vertices are colored. A local search heuristic is also incorporated. The results of the MFFA were compared to other algorithms on random graphs, showing it performs comparably or better at finding solutions. The structure of the paper outlines the graph 3-coloring problem, describes the MFFA approach, and presents experimental results.
Two-Stage Eagle Strategy with Differential Evolution (Xin-She Yang)
The document describes a two-stage optimization strategy called the Eagle Strategy (ES) that combines global and local search algorithms to improve search efficiency. It evaluates applying ES to differential evolution (DE), a popular evolutionary algorithm. ES first uses randomization like Levy flights for global exploration, then switches to DE for intensive local search around promising solutions. The authors validate ES-DE on test functions, finding it requires only 9.7-24.9% of the function evaluations of pure DE. They also apply it to real-world pressure vessel and gearbox design problems, achieving solutions with 14.9-17.7% fewer function evaluations than pure DE.
Bat Algorithm for Multi-objective Optimisation (Xin-She Yang)
This document proposes a multi-objective bat algorithm (MOBA) to solve multi-objective optimization problems. MOBA extends the previously developed bat algorithm for single objective optimization problems. MOBA uses Pareto dominance to evaluate non-dominated solutions and find an approximation of the true Pareto front. It initializes a population of bats and updates their positions and velocities over iterations to explore the search space. The best current solutions are used to guide the bats towards non-dominated regions.
Are motorways rational from slime mould's point of view? (Xin-She Yang)
The document discusses an experiment where slime mold Physarum polycephalum was used to approximate real-world motorway networks in 14 geographical regions. Researchers represented major urban areas with food sources and inoculated the slime mold in capital cities to observe how its network of protoplasmic tubes developed. They found the slime mold networks matched the motorway networks to some degree and used various measures to determine which regions had networks best approximated by the slime mold.
Review of Metaheuristics and Generalized Evolutionary Walk Algorithm (Xin-She Yang)
This document provides an overview of nature-inspired metaheuristic algorithms for optimization. It discusses the main components of metaheuristic algorithms, including intensification and diversification. It then reviews the history and development of several important metaheuristic algorithms from the 1960s to the 1990s, including genetic algorithms, evolutionary strategies, simulated annealing, ant colony optimization, particle swarm optimization, and differential evolution. The document aims to analyze why these algorithms work and provide a unified view of metaheuristics.
This document provides a list of commonly used test functions for validating new optimization algorithms. It describes 24 test functions, including functions originally developed by De Jong, Griewank, Rastrigin, and Rosenbrock. The test functions have various properties like being unimodal, multimodal, convex, or stochastic. They serve as benchmarks for comparing how well new algorithms can find the optimal value for problems with different characteristics.
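Three of the classic benchmarks named above can be written down directly. A sketch of the standard, dimension-independent forms:

```python
import math

def rosenbrock(x):
    """Rosenbrock's banana-shaped valley; global minimum 0 at (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """Rastrigin's function; highly multimodal, global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(v * v - 10.0 * math.cos(2.0 * math.pi * v)
                               for v in x)

def griewank(x):
    """Griewank's function; global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= math.cos(v / math.sqrt(i))
    return s - p + 1.0
```

Rosenbrock is unimodal but with a long curved valley that slows gradient-style search, while Rastrigin and Griewank are multimodal, which is why the suite probes different algorithm weaknesses.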
Engineering Optimisation by Cuckoo Search (Xin-She Yang)
This document summarizes a research paper that proposes a new metaheuristic optimization algorithm called Cuckoo Search (CS). CS is inspired by the breeding behavior of some cuckoo species. The paper describes the rules and steps of the CS algorithm, compares its performance to other algorithms on standard test functions and engineering design problems, and discusses unique features of CS like Lévy flights that make it promising for further research.
A New Metaheuristic Bat-Inspired Algorithm (Xin-She Yang)
This document proposes a new metaheuristic optimization algorithm called the Bat Algorithm (BA) which is inspired by the echolocation behavior of microbats. Microbats use echolocation to detect prey and navigate in darkness by emitting ultrasonic pulses and analyzing the echo. The BA idealizes these behaviors to develop rules for how "bats" can search for the optimal solution. Key behaviors include adjusting pulse rates and loudness based on proximity to the target solution. The BA shows potential to combine advantages of other algorithms like PSO and is shown to perform well in simulations.
Eagle Strategy Using Levy Walk and Firefly Algorithms For Stochastic Optimiza... (Xin-She Yang)
This document proposes a new two-stage hybrid search method called the Eagle Strategy for solving stochastic optimization problems. The Eagle Strategy combines random search using Lévy walk with intensive local search using the Firefly Algorithm. It first uses Lévy walk to randomly explore the search space, then switches to the Firefly Algorithm to intensively search locally around good solutions. Numerical results suggest the Eagle Strategy is efficient for stochastic optimization problems.
Nature-inspired metaheuristic algorithms for optimization and computational intelligence
1. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks
Nature-Inspired Metaheuristic Algorithms
for Optimization and Computational Intelligence
Xin-She Yang
National Physical Laboratory, UK
@ FedCSIS2011
Xin-She Yang FedCSIS2011
Metaheuristics and Computational Intelligence
2. Intro
Computational science is now the third paradigm of science,
complementing theory and experiment.
- Ken Wilson (Cornell University), Nobel Laureate.
All models are wrong, but some are useful.
- George Box, Statistician
All algorithms perform equally well on average over all possible
functions. Not quite! (more later)
- No-free-lunch theorems (Wolpert & Macready)
10. Overview
Part I
Introduction
Metaheuristic Algorithms
Monte Carlo and Markov Chains
Algorithm Analysis
Part II
Exploration & Exploitation
Dealing with Constraints
Applications
Discussions & Bibliography
12. A Perfect Algorithm
What is the best relationship among E, m and c?
Initial state: m, E, c =⇒ =⇒ E = mc^2
Steepest Descent
\min t = \int_0^d \frac{ds}{v} = \int_0^d \sqrt{\frac{1 + y'^2}{2g\,[h - y(x)]}}\, dx
Steepest Descent
=⇒ d d
1 1 + y ′2
min t = ds = dx
0 v 0 2g [h − y (x)]
=⇒ =⇒
Xin-She Yang FedCSIS2011
Metaheuristics and Computational Intelligence
23. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks
A Perfect Algorithm
What is the best relationship among E, m and c?
Initial state: m, E, c =⇒ · · · =⇒ E = mc²
Steepest Descent
The brachistochrone problem: minimize the travel time
min t = ∫₀ᵈ ds/v = ∫₀ᵈ √[(1 + y′²) / (2g[h − y(x)])] dx,
whose solution is the cycloid
x = (A/2)(θ − sin θ),   y = h − (A/2)(1 − cos θ).
Computing in Reality
A Problem & Problem Solvers
⇓
Mathematical/Numerical Models
⇓
Computer & Algorithms & Programming
⇓
Validation
⇓
Results
What is an Algorithm?
Essence of an Optimization Algorithm
To move to a new, better point xᵢ₊₁ from an existing known location xᵢ.
Different algorithms use different strategies/approaches in generating these moves!
Population-based algorithms use multiple, interacting paths.
Optimization is Like Treasure Hunting
How do you find a hidden treasure of, say, 1 million dollars?
What is your best strategy?
Optimization Algorithms
Deterministic
Newton’s method (1669, published in 1711), Newton-Raphson
(1690), hill-climbing/steepest descent (Cauchy 1847),
least-squares (Gauss 1795),
linear programming (Dantzig 1947), conjugate gradient
(Lanczos et al. 1952), interior-point method (Karmarkar
1984), etc.
Stochastic/Metaheuristic
Genetic algorithms (1960s/1970s), evolution strategies
(Rechenberg & Schwefel 1960s), evolutionary programming
(Fogel et al. 1960s).
Simulated annealing (Kirkpatrick et al. 1983), Tabu search
(Glover 1980s), ant colony optimization (Dorigo 1992),
genetic programming (Koza 1992), particle swarm
optimization (Kennedy & Eberhart 1995), differential
evolution (Storn & Price 1996/1997),
harmony search (Geem et al. 2001), honeybee algorithm
(Nakrani & Tovey 2004), ..., firefly algorithm (Yang 2008),
cuckoo search (Yang & Deb 2009), ...
Steepest Descent/Hill Climbing
Gradient-Based Methods
Use gradient/derivative information – very efficient for local search.
Newton’s Method
xₙ₊₁ = xₙ − H⁻¹∇f,
where H is the Hessian matrix

H = [ ∂²f/∂x₁²     ···  ∂²f/∂x₁∂xₙ
           ⋮         ⋱        ⋮
      ∂²f/∂xₙ∂x₁   ···  ∂²f/∂xₙ²   ]
Quasi-Newton
If H is replaced by I, we have
xₙ₊₁ = xₙ − α I ∇f(xₙ).
Here α controls the step length.
Generation of new moves by gradient.
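As a concrete sketch (not from the slides), here is one Newton step for a two-variable quadratic, with the 2×2 system H d = ∇f solved by hand; the objective and starting point are illustrative assumptions:

```python
def grad_hess(x, y):
    """Gradient and Hessian of f(x, y) = (x - 3)^2 + 4(y + 1)^2 (illustrative)."""
    g = [2.0 * (x - 3.0), 8.0 * (y + 1.0)]
    H = [[2.0, 0.0], [0.0, 8.0]]
    return g, H

def newton_step(x, y):
    # x_{n+1} = x_n - H^{-1} grad f: solve the 2x2 system H d = grad f by Cramer's rule.
    g, H = grad_hess(x, y)
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    dx = ( H[1][1] * g[0] - H[0][1] * g[1]) / det
    dy = (-H[1][0] * g[0] + H[0][0] * g[1]) / det
    return x - dx, y - dy

x, y = newton_step(10.0, 10.0)   # one step suffices for a quadratic objective
print(x, y)                      # → 3.0 -1.0
```

For a quadratic objective the Hessian is constant, so a single Newton step lands exactly on the minimum; in general the step is repeated until ∇f ≈ 0.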
Steepest Descent Method (Cauchy 1847, Riemann 1863)
From the Taylor expansion of f(x) about x⁽ⁿ⁾, we have
f(x⁽ⁿ⁺¹⁾) = f(x⁽ⁿ⁾ + Δs) ≈ f(x⁽ⁿ⁾) + (∇f(x⁽ⁿ⁾))ᵀ Δs,
where Δs = x⁽ⁿ⁺¹⁾ − x⁽ⁿ⁾ is the increment vector.
To decrease f, we require
f(x⁽ⁿ⁾ + Δs) − f(x⁽ⁿ⁾) = (∇f)ᵀ Δs < 0.
Therefore, we choose
Δs = −α∇f(x⁽ⁿ⁾),
where α > 0 is the step size.
In the case of finding maxima, this method is often referred to as
hill-climbing.
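The update Δs = −α∇f(x⁽ⁿ⁾) can be sketched in a few lines; the objective, step size α and iteration count below are illustrative assumptions:

```python
def steepest_descent(grad, x0, alpha=0.1, steps=100):
    """Iterate x <- x - alpha * grad(x), i.e. the move Delta s = -alpha * grad f."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# f(x) = (x0 - 1)^2 + (x1 + 2)^2, so grad f = (2(x0 - 1), 2(x1 + 2)).
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)]
x = steepest_descent(grad, [5.0, 5.0])
print(x)   # close to the minimum at (1, -2)
```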
Conjugate Gradient (CG) Method
The CG method belongs to the Krylov subspace iteration methods. It
was pioneered by Magnus Hestenes, Eduard Stiefel and Cornelius
Lanczos in the 1950s, and was named one of the top 10 algorithms
of the 20th century.
A linear system with a symmetric positive definite matrix A,
Au = b,
is equivalent to minimizing the function
f(u) = ½ uᵀAu − bᵀu + v,
where v is a constant and can be taken to be zero. We can
easily see that ∇f(u) = 0 leads to Au = b.
CG
The theory behind these iterative methods is closely related to the
Krylov subspace Kₙ spanned by A and b, defined by
Kₙ(A, b) = {Ib, Ab, A²b, ..., Aⁿ⁻¹b},
where A⁰ = I.
If we use an iterative procedure to obtain the approximate solution
uₙ to Au = b at the nth iteration, the residual is given by
rₙ = b − Auₙ,
which is essentially the negative gradient −∇f(uₙ).
The search direction vector in the conjugate gradient method is
subsequently determined by
dₙ₊₁ = rₙ − (dₙᵀ A rₙ / dₙᵀ A dₙ) dₙ.
The solution often starts with an initial guess u₀ at n = 0, and
proceeds iteratively. The above steps can compactly be written as
uₙ₊₁ = uₙ + αₙ dₙ,   rₙ₊₁ = rₙ − αₙ A dₙ,
and
dₙ₊₁ = rₙ₊₁ + βₙ dₙ,
where
αₙ = (rₙᵀ rₙ)/(dₙᵀ A dₙ),   βₙ = (rₙ₊₁ᵀ rₙ₊₁)/(rₙᵀ rₙ).
Iterations stop when a prescribed accuracy is reached.
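The update formulas above translate directly into code; the small symmetric positive definite system below is an illustrative assumption:

```python
def conjugate_gradient(A, b, u0, tol=1e-10):
    """Solve A u = b (A symmetric positive definite) using the CG updates above."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda v, w: sum(vi * wi for vi, wi in zip(v, w))
    u = list(u0)
    r = [bi - avi for bi, avi in zip(b, mv(A, u))]    # r0 = b - A u0
    d = list(r)                                        # d0 = r0
    for _ in range(n):
        Ad = mv(A, d)
        alpha = dot(r, r) / dot(d, Ad)                 # alpha_n = r_n^T r_n / d_n^T A d_n
        u = [ui + alpha * di for ui, di in zip(u, d)]
        r_new = [ri - alpha * adi for ri, adi in zip(r, Ad)]
        if dot(r_new, r_new) < tol:                    # stop at the prescribed accuracy
            break
        beta = dot(r_new, r_new) / dot(r, r)           # beta_n = r_{n+1}^T r_{n+1} / r_n^T r_n
        d = [rni + beta * di for rni, di in zip(r_new, d)]
        r = r_new
    return u

A = [[4.0, 1.0], [1.0, 3.0]]          # symmetric positive definite
b = [1.0, 2.0]
u = conjugate_gradient(A, b, [0.0, 0.0])
print(u)   # exact solution is (1/11, 7/11)
```

For an n-dimensional SPD system, CG terminates (in exact arithmetic) in at most n iterations, which is why the loop runs at most n times.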
Gradient-free Methods
Gradient-based methods
Require derivative information; not suitable for problems
with discontinuities.
Gradient-free or derivative-free methods
BFGS, Downhill simplex, Trust-region, SQP ...
Nelder-Mead Downhill Simplex Method
The Nelder-Mead method is a downhill simplex algorithm, first
developed by J. A. Nelder and R. Mead in 1965.
A Simplex
In n-dimensional space, a simplex, which is a generalization of
a triangle in a plane, is the convex hull of n + 1 distinct points.
For simplicity, a simplex in n-dimensional space is referred to as
an n-simplex.
Downhill Simplex Method
(Figure: simplex moves – reflection xr, expansion xe and contraction xc about the centroid x̄, replacing the worst vertex xn+1.)
The first step is to rank and re-order the vertex values
f(x₁) ≤ f(x₂) ≤ ... ≤ f(xₙ₊₁),
at x₁, x₂, ..., xₙ₊₁, respectively.
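A compact sketch of the standard moves (reflection, expansion, contraction, shrink) with the usual textbook coefficients; the test function and starting simplex are illustrative assumptions, not code from the slides:

```python
def nelder_mead(f, simplex, steps=200):
    """Minimize f over points given as lists/tuples; simplex has n+1 vertices."""
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5   # reflect, expand, contract, shrink
    pts = [list(p) for p in simplex]
    n = len(pts) - 1
    for _ in range(steps):
        pts.sort(key=f)                              # rank: f(x1) <= ... <= f(x_{n+1})
        xbar = [sum(p[i] for p in pts[:-1]) / n for i in range(n)]  # centroid of best n
        worst = pts[-1]
        xr = [xbar[i] + alpha * (xbar[i] - worst[i]) for i in range(n)]  # reflection
        if f(pts[0]) <= f(xr) < f(pts[-2]):
            pts[-1] = xr
        elif f(xr) < f(pts[0]):                      # expansion
            xe = [xbar[i] + gamma * (xr[i] - xbar[i]) for i in range(n)]
            pts[-1] = xe if f(xe) < f(xr) else xr
        else:                                        # contraction toward the worst point
            xc = [xbar[i] + rho * (worst[i] - xbar[i]) for i in range(n)]
            if f(xc) < f(worst):
                pts[-1] = xc
            else:                                    # shrink everything toward the best
                pts = [pts[0]] + [[pts[0][i] + sigma * (p[i] - pts[0][i])
                                   for i in range(n)] for p in pts[1:]]
    return min(pts, key=f)

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2   # illustrative test function
best = nelder_mead(f, [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
print(best)   # close to the minimum at (1, -2)
```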
Metaheuristic
Most are nature-inspired, mimicking certain successful features in
nature.
Simulated annealing
Genetic algorithms
Ant and bee algorithms
Particle Swarm Optimization
Firefly algorithm and cuckoo search
Harmony search ...
Simulated Annealing
Metal annealing to increase strength =⇒ simulated annealing.
Probabilistic move: p ∝ exp[−E/(k_B T)].
k_B = Boltzmann constant (e.g., k_B = 1), T = temperature, E = energy.
E ∝ f(x), T = T₀ αᵗ (cooling schedule), 0 < α < 1.
T → 0 =⇒ p → 0 =⇒ hill climbing.
This is essentially a Markov chain.
Generation of new moves by Markov chain.
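This Markov-chain move can be sketched as follows, with k_B = 1; the objective, proposal width, T₀ and α are illustrative assumptions:

```python
import math, random

def simulated_annealing(f, x0, T0=10.0, alpha=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for t in range(steps):
        T = T0 * alpha ** t                       # cooling schedule T = T0 * alpha^t
        x_new = x + rng.uniform(-0.5, 0.5)        # random trial move
        dE = f(x_new) - fx                        # E is proportional to f(x)
        # accept with probability p ~ exp(-dE/T); downhill moves always accepted
        if dE < 0 or rng.random() < math.exp(-dE / T):
            x, fx = x_new, fx + dE
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

f = lambda x: (x - 2.0) ** 2                      # illustrative convex objective
best, fbest = simulated_annealing(f, x0=-5.0)
print(best, fbest)   # best ends up near x = 2
```

The same loop also escapes local minima on multimodal landscapes, provided the cooling is slow enough.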
Genetic Algorithms
Crossover and mutation.
Generation of new solutions by crossover, mutation and elitism.
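A minimal GA sketch using the three operators: one-point crossover, bit-flip mutation and elitism. The OneMax objective (maximize the number of 1-bits) and all parameter values are illustrative assumptions:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        new_pop = [pop[0][:]]                           # elitism: keep the best individual
        while len(new_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # select from the better half
            child = p1[:]
            if rng.random() < crossover_rate:            # one-point crossover
                cut = rng.randrange(1, length)
                child = p1[:cut] + p2[cut:]
            for i in range(length):                      # bit-flip mutation
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

onemax = sum                                             # fitness: number of 1-bits
best = genetic_algorithm(onemax)
print(sum(best))   # close to the maximum of 20
```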
Swarm Intelligence
Ants, bees, birds, fish ...
Simple rules lead to complex behaviour.
Cuckoo Search
Local random walk:
xᵢᵗ⁺¹ = xᵢᵗ + s ⊗ H(p_a − ε) ⊗ (xⱼᵗ − xₖᵗ),
where xᵢ, xⱼ, xₖ are three different solutions, H(u) is a Heaviside
function, ε is a random number drawn from a uniform distribution,
and s is the step size.
Global random walk via Lévy flights:
xᵢᵗ⁺¹ = xᵢᵗ + α L(s, λ),   L(s, λ) = [λΓ(λ) sin(πλ/2)/π] · 1/s^(1+λ),  (s ≫ s₀).
Generation of new moves by Lévy flights, random walk and elitism.
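The two walks can be sketched as below. The Lévy step is drawn here with a simple inverse-power (Pareto-tail) transform as a stand-in for L(s, λ), rather than Mantegna's algorithm, and all parameter values are illustrative assumptions:

```python
import random

def local_walk(xi, xj, xk, s=0.5, pa=0.25, rng=random):
    """x_i^{t+1} = x_i^t + s (x) H(pa - eps) (x) (x_j^t - x_k^t), entry-wise."""
    H = lambda u: 1.0 if u > 0 else 0.0              # Heaviside step function
    return [a + s * H(pa - rng.random()) * (b - c)
            for a, b, c in zip(xi, xj, xk)]

def levy_step(rng=random, lam=1.5, s0=0.01):
    """Draw a step with tail density ~ 1/s^(1+lam) by inverse-transform sampling."""
    return s0 * (1.0 - rng.random()) ** (-1.0 / lam)

rng = random.Random(7)
x_new = local_walk([0.0, 0.0], [1.0, 1.0], [0.5, -0.5], rng=rng)
step = levy_step(rng)
print(x_new, step)
```

The heavy 1/s^(1+λ) tail occasionally produces very long jumps, which is what gives Lévy flights their global-search character.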
Monte Carlo Methods
Almost everyone has used Monte Carlo methods in some way ...
Measure temperatures, choose a product, ...
Taste soup, wine ...
Markov Chains
Random walk – a drunkard's walk:
uₜ₊₁ = µ + uₜ + wₜ,
where wₜ is a random variable, and µ is the drift.
For example, wₜ ∼ N(0, σ²) (Gaussian).
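The drunkard's walk is straightforward to simulate; the drift, σ and path length below are illustrative assumptions:

```python
import random

def random_walk(steps=500, mu=0.0, sigma=1.0, u0=0.0, seed=42):
    """Simulate u_{t+1} = mu + u_t + w_t with Gaussian w_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    u = [u0]
    for _ in range(steps):
        u.append(mu + u[-1] + rng.gauss(0.0, sigma))
    return u

path = random_walk()
print(len(path), path[-1])
```

With σ = 0 the walk reduces to pure drift, moving by µ per step; with µ = 0 it is a driftless Brownian-like path.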
Markov chain: the next state depends only on the current state
and the transition probability:
P(i, j) ≡ P(Vₜ₊₁ = Sⱼ | V₀ = Sₚ, ..., Vₜ = Sᵢ) = P(Vₜ₊₁ = Sⱼ | Vₜ = Sᵢ),
=⇒ detailed balance Pᵢⱼ πᵢ* = Pⱼᵢ πⱼ*, where π* is the stationary probability distribution.
Example: Brownian motion
uᵢ₊₁ = µ + uᵢ + εᵢ,   εᵢ ∼ N(0, σ²).
Another example: Monopoly (board games).
Markov Chain Monte Carlo
Landmarks: the Monte Carlo method (1930s, 1945, from the 1950s),
e.g., the Metropolis algorithm (1953) and Metropolis-Hastings (1970).
Markov Chain Monte Carlo (MCMC) methods – a class of methods.
MCMC really took off in the 1990s, and is now applied to a wide range
of areas: physics, Bayesian statistics, climate change, machine learning,
finance, economics, medicine, biology, materials and engineering ...
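A minimal Metropolis sampler sketch for a 1D target density; the standard-normal target and the proposal width are illustrative assumptions:

```python
import math, random

def metropolis(log_p, x0=0.0, steps=5000, width=1.0, seed=0):
    """Metropolis algorithm: a Markov chain whose stationary distribution is p(x)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        x_new = x + rng.uniform(-width, width)        # symmetric random-walk proposal
        # accept with probability min(1, p(x_new)/p(x)), computed in log space
        if rng.random() < math.exp(min(0.0, log_p(x_new) - log_p(x))):
            x = x_new
        samples.append(x)
    return samples

log_p = lambda x: -0.5 * x * x                         # unnormalized standard normal
samples = metropolis(log_p)
mean = sum(samples) / len(samples)
print(mean)   # should be near 0
```

Only the unnormalized density is needed, which is exactly why MCMC is so useful in Bayesian statistics.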
Convergence Behaviour
As the MCMC runs, convergence may be reached
When does a chain converge? When to stop the chain ... ?
Are multiple chains better than a single chain?
Multiple, interacting chains
Multiple agents trace multiple, interacting Markov chains during
the Monte Carlo process.
Analysis
Classifications of Algorithms
Trajectory-based: hill-climbing, simulated annealing, pattern
search ...
Population-based: genetic algorithms, ant & bee algorithms,
artificial immune systems, differential evolution, PSO, HS,
FA, CS, ...
Ways of Generating New Moves/Solutions
Markov chains with different transition probabilities.
Trajectory-based =⇒ a single Markov chain;
Population-based =⇒ multiple, interacting chains.
Tabu search (with memory) =⇒ self-avoiding Markov chains.
Ergodicity
Markov Chains & Markov Processes
Most theoretical studies use Markov chains/processes as a
framework for convergence analysis.
A Markov chain is said to be regular if some positive power k
of the transition matrix P has only positive elements.
A chain is called time-homogeneous if its transition matrix P
stays the same after each step; thus the transition probability
after k steps becomes Pᵏ.
A chain is ergodic or irreducible if it is aperiodic and positive
recurrent – it is possible to reach every state from any state.
Convergence Behaviour
As k → ∞, we have the stationary probability distribution π:
π = πP =⇒ the first eigenvalue is always 1.
Asymptotic convergence to optimality:
lim_{k→∞} θₖ → θ*  (with probability one).
The rate of convergence is usually determined by the second
eigenvalue 0 < λ₂ < 1.
An algorithm can converge, but may not necessarily be efficient,
as the rate of convergence is typically low.
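The stationary distribution π = πP can be checked numerically by repeated multiplication (power iteration); the 2-state chain below is an illustrative assumption:

```python
def step(pi, P):
    """One application of pi <- pi * P for a row-stochastic matrix P."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],
     [0.2, 0.8]]          # transition matrix; its eigenvalues are 1 and 0.7
pi = [1.0, 0.0]           # start from state 0
for _ in range(200):      # error shrinks like 0.7^k: the second eigenvalue sets the rate
    pi = step(pi, P)
print(pi)                 # stationary distribution (2/3, 1/3)
```

Halving 1 − λ₂ roughly halves the per-step error reduction, which is why chains with λ₂ close to 1 converge slowly.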
Convergence of GA
Important studies by Aytug et al. (1996) [1], Aytug and Koehler
(2000) [2], Greenhalgh and Marshall (2000) [3], Gutjahr (2010) [4], etc.
The number of iterations t(ζ) in a GA with a convergence
probability of ζ can be estimated by
t(ζ) ≤ ln(1 − ζ) / ln{1 − min[(1 − µ)^(Ln), µ^(Ln)]},
where µ = mutation rate, L = string length, and n = population size.
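Plugging sample values into the bound; the parameter values are illustrative assumptions:

```python
import math

def ga_iteration_bound(zeta, mu, L, n):
    """Upper bound t(zeta) <= ln(1 - zeta) / ln(1 - min[(1 - mu)^(L n), mu^(L n)])."""
    q = min((1.0 - mu) ** (L * n), mu ** (L * n))
    return math.log(1.0 - zeta) / math.log(1.0 - q)

# e.g. 99% convergence probability, mutation rate 0.3, string length 4, population 2
t = ga_iteration_bound(zeta=0.99, mu=0.3, L=4, n=2)
print(round(t))   # a surprisingly large bound even for a tiny GA
```

Even for this tiny GA the bound is around 7 × 10⁴ iterations, consistent with the later remark that convergent chains may still need many iterations.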
[1] H. Aytug, S. Bhattacharyya and G. J. Koehler, A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets, Euro. J. Operational Research, 96, 195-201 (1996).
[2] H. Aytug and G. J. Koehler, New stopping criterion for genetic algorithms, Euro. J. Operational Research, 126, 662-674 (2000).
[3] D. Greenhalgh & S. Marshall, Convergence criteria for genetic algorithms, SIAM J. Computing, 30, 269-282 (2000).
[4] W. J. Gutjahr, Convergence Analysis of Metaheuristics, Annals of Information Systems, 10, 159-187 (2010).
Multiobjective Metaheuristics
Asymptotic convergence of metaheuristics for multiobjective
optimization (Villalobos-Arias et al. 2005) [6]
The transition matrix P of a metaheuristic algorithm has a
stationary distribution π such that
|Pᵏᵢⱼ − πⱼ| ≤ (1 − ζ)^(k−1),   ∀ i, j, (k = 1, 2, ...),
where ζ is a function of the mutation probability µ, string length L
and population size n. For example, ζ = 2nL µ^(nL), so µ < 0.5.
Note: an algorithm satisfying this condition may not converge (for
multiobjective optimization).
However, an algorithm with elitism, obeying the above condition,
does converge!
[6] M. Villalobos-Arias, C. A. Coello Coello and O. Hernández-Lerma, Asymptotic convergence of metaheuristics ...
Other results
Limited results on convergence analysis exist, concerning (finite
states/domains):
ant colony optimization,
generalized hill-climbers and simulated annealing,
best-so-far convergence of cross-entropy optimization,
nested partition method, Tabu search, and
of course, combinatorial optimization.
However, the tasks are more challenging for infinite states/domains
and continuous problems.
Many, many open problems need satisfactory answers.
Converged?
Convergence is often 'best-so-far' convergence, not necessarily at
the global optimality.
In theory, a Markov chain can converge, but the number of
iterations tends to be large.
In practice, within a finite (hopefully small) number of generations,
even if the algorithm converges, it may not reach the global optimum.
How to avoid premature convergence
Equip an algorithm with the ability to escape a local optimum
Increase diversity of the solutions
Enough randomization at the right stage
....(unknown, new) ....
Xin-She Yang FedCSIS2011
Metaheuristics and Computational Intelligence
88. Intro Classic Algorithms Metaheuristic Markov Analysis All and NFL Constraints Applications Thanks
Coffee Break (15 Minutes)
All and NFL
So many algorithms – what are the common characteristics?
What are the key components?
How to use and balance different components?
What controls the overall behaviour of an algorithm?
Exploration and Exploitation
Characteristics of Metaheuristics
Exploration and exploitation, also called diversification and intensification.
Exploitation/Intensification
Intensive local search, exploiting local information, e.g., hill-climbing.
Exploration/Diversification
Exploratory global search, using randomization/stochastic components, e.g., hill-climbing with random restart.
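These two components can be combined in a minimal sketch of hill-climbing with random restarts (an assumed toy example, not from the slides) on a one-dimensional multimodal function:

```python
import random

# Assumed multimodal test function:
# local minimum near x ~ 1.32, global minimum near x ~ -1.47.
def f(x):
    return x**4 - 4*x**2 + x

def hill_climb(f, x0, step=0.1, iters=200):
    """Exploitation: greedy local search around x0 (minimization)."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx:  # accept only improvements
            x, fx = cand, fc
    return x, fx

def random_restart_hill_climb(f, lo, hi, restarts=20):
    """Exploration: restart the greedy search from random points in [lo, hi]."""
    best_x, best_f = None, float("inf")
    for _ in range(restarts):
        x, fx = hill_climb(f, random.uniform(lo, hi))
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Pure hill-climbing from a single start is trapped by whichever basin it begins in; the random restarts restore exploration at the cost of extra function evaluations.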
Summary
[Figure: algorithms arranged along an exploration–exploitation spectrum, from uniform search (pure exploration) to steepest descent (pure exploitation). In between lie cuckoo search (CS), genetic algorithms, PSO/FA, SA, EP/ES and ant/bee algorithms; Newton-Raphson, Tabu search and Nelder-Mead sit toward the exploitation end. Which is best? Any free lunch?]
No-Free-Lunch (NFL) Theorems
Algorithm Performance
Any algorithm is as good/bad as random search, when averaged over all possible problems/functions.
Finite domains
No universally efficient algorithm!
Any free taster or dessert?
Yes and no. (more later)
NFL Theorems (Wolpert and Macready 1997)
The search space is finite (though quite large), so the space of possible "cost" values is also finite. The objective function is $f: \mathcal{X} \to \mathcal{Y}$, with $\mathcal{F} = \mathcal{Y}^{\mathcal{X}}$ (the space of all possible problems).
Assumptions: finite domain, closed under permutation (c.u.p.).
For $m$ iterations, the $m$ distinct visited points form a time-ordered set
$$d_m = \Big\{ \big(d_m^x(1), d_m^y(1)\big), \ldots, \big(d_m^x(m), d_m^y(m)\big) \Big\}.$$
The performance of an algorithm $a$ iterated $m$ times on a cost function $f$ is denoted by $P(d_m^y \mid f, m, a)$.
For any pair of algorithms $a$ and $b$, the NFL theorem states
$$\sum_f P(d_m^y \mid f, m, a) = \sum_f P(d_m^y \mid f, m, b).$$
Any algorithm is as good (bad) as a random search!
Open Problems
Framework: We need a unified framework for algorithmic analysis (e.g., convergence).
Exploration and exploitation: What is the optimal balance between these two components? (50-50, or something else?)
Performance measure: What are the best performance measures? Statistically? Why?
Convergence: Convergence analysis of algorithms for infinite, continuous domains requires systematic approaches.
More Open Problems
Free lunches: NFL results remain unproved for infinite or continuous domains and for multiobjective optimization (possible free lunches!).
What are the implications of the NFL theorems in practice?
If free lunches exist, how do we find the best algorithm(s)?
Knowledge: Does problem-specific knowledge always help to find appropriate solutions? How do we quantify such knowledge?
Intelligent algorithms: Is there any practical way to design truly intelligent, self-evolving algorithms?
Constraints
In describing optimization algorithms, we have not been concerned with constraints. Algorithms can solve both unconstrained and, more often, constrained problems.
The handling of constraints is an implementation issue, though incorrect or inefficient methods of dealing with constraints can reduce an algorithm's efficiency, or even produce wrong solutions.
Methods of handling constraints
Direct methods
Lagrange multipliers
Barrier functions
Penalty methods
Aims
Either convert a constrained problem into an unconstrained one, or change the search space into a regular domain.
Ease of programming and implementation
Improve (or at least not hinder) the efficiency of the chosen algorithm in implementation.
Scalability
The approach used should be able to deal with small, large and very large scale problems.
Common Approaches
Direct method
Simple, but not versatile and difficult to program.
Lagrange multipliers
Mainly for equality constraints.
Barrier functions
Very powerful and widely used in convex optimization.
Penalty methods
Simple, versatile, and widely used.
Others
Direct Methods
Minimize $f(x, y) = (x - 2)^2 + 4(y - 3)^2$
subject to $-x + y \le 2$, $x + 2y \le 3$.
[Figure: feasible region bounded by the lines $-x + y = 2$ and $x + 2y = 3$, with the optimal point marked at their intersection.]
Direct methods: generate solutions/points inside the feasible region!
(easy for rectangular regions)
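A minimal sketch of this direct approach (an assumed implementation, not from the slides): sample candidate points in a bounding box and reject any that fall outside the feasible region, keeping the best feasible point.

```python
import random

def f(x, y):
    return (x - 2)**2 + 4*(y - 3)**2

def feasible(x, y):
    # Constraints from the example: -x + y <= 2 and x + 2y <= 3.
    return (-x + y <= 2) and (x + 2*y <= 3)

def direct_search(samples=100000, lo=-5.0, hi=5.0, seed=1):
    """Generate random points in a bounding box; keep the best feasible one."""
    rng = random.Random(seed)
    best = None
    for _ in range(samples):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        if not feasible(x, y):
            continue  # rejection step: discard points outside the region
        val = f(x, y)
        if best is None or val < best[0]:
            best = (val, x, y)
    return best
```

The true constrained minimum is at the vertex $(-1/3, 5/3)$, where $f = 113/9 \approx 12.56$; random sampling gets close, but the wasted rejected samples illustrate why direct generation is easy mainly for simple (e.g., rectangular) regions.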
Method of Lagrange Multipliers
Maximize $f(x, y) = 10 - x^2 - (y - 2)^2$ subject to $x + 2y = 5$.
Defining a combined function $\Phi$ using a multiplier $\lambda$, we have
$$\Phi = 10 - x^2 - (y - 2)^2 + \lambda(x + 2y - 5).$$
The optimality conditions are
$$\frac{\partial \Phi}{\partial x} = -2x + \lambda = 0, \quad \frac{\partial \Phi}{\partial y} = -2(y - 2) + 2\lambda = 0, \quad \frac{\partial \Phi}{\partial \lambda} = x + 2y - 5 = 0,$$
whose solutions are
$$x = 1/5, \quad y = 12/5, \quad \lambda = 2/5, \quad \Longrightarrow \quad f_{\max} = \frac{49}{5}.$$
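The result is easy to check numerically (an added illustration, not from the slides): substitute the constraint $x = 5 - 2y$ into $f$ and maximize the resulting one-variable function over a grid.

```python
def f(x, y):
    return 10 - x**2 - (y - 2)**2

def maximize_on_line(lo=-10.0, hi=10.0, steps=200000):
    # On the constraint x + 2y = 5 we have x = 5 - 2y, so scan y over a grid.
    best_y, best_val = lo, float("-inf")
    for i in range(steps + 1):
        y = lo + (hi - lo) * i / steps
        val = f(5 - 2*y, y)
        if val > best_val:
            best_y, best_val = y, val
    return 5 - 2*best_y, best_y, best_val
```

The grid maximum agrees with the multiplier solution $x = 1/5$, $y = 12/5$, $f_{\max} = 49/5$.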
Barrier Functions
As an equality $h(x) = 0$ can be written as two inequalities $h(x) \le 0$ and $-h(x) \le 0$, we only use inequalities.
For a general optimization problem:
minimize $f(x)$, subject to $g_i(x) \le 0$ $(i = 1, 2, \ldots, N)$,
we can define an indicator or barrier function
$$I_-[u] = \begin{cases} 0 & \text{if } u \le 0, \\ \infty & \text{if } u > 0. \end{cases}$$
Not so easy to deal with numerically. Also discontinuous!
Logarithmic Barrier Functions
A log barrier function
$$\bar{I}_-(u) = -\frac{1}{t}\log(-u), \quad u < 0,$$
where $t > 0$ is an accuracy parameter (can be very large).
Then, the above minimization problem becomes
$$\text{minimize} \quad f(x) + \sum_{i=1}^{N} \bar{I}_-(g_i(x)) = f(x) - \frac{1}{t}\sum_{i=1}^{N} \log[-g_i(x)].$$
This is an unconstrained problem and easy to implement!
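As a concrete sketch (a hypothetical one-dimensional example, not from the slides): minimize $f(x) = x^2$ subject to $g(x) = 1 - x \le 0$. Increasing the accuracy parameter $t$ drives the barrier minimizer toward the true constrained optimum $x^* = 1$.

```python
import math

def barrier_objective(x, t):
    # f(x) = x**2 with constraint g(x) = 1 - x <= 0, i.e. x >= 1.
    # Barrier term: -(1/t) * log(-g(x)) = -(1/t) * log(x - 1).
    return x**2 - (1.0 / t) * math.log(x - 1.0)

def minimize_1d(obj, lo, hi, iters=200):
    # Ternary search, valid because the barrier objective is convex on (1, hi).
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if obj(m1) < obj(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

Setting the derivative $2x - 1/(t(x-1))$ to zero gives the minimizer $x_t = \big(1 + \sqrt{1 + 2/t}\big)/2$, i.e. about 1.366, 1.048 and 1.0005 for $t = 1, 10, 1000$: the barrier solution approaches $x^* = 1$ from inside the feasible region.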
Penalty Methods
For a nonlinear optimization problem with equality and inequality constraints,
$$\min_{x \in \mathbb{R}^n} f(x), \quad x = (x_1, \ldots, x_n)^T \in \mathbb{R}^n,$$
$$\text{subject to } \phi_i(x) = 0 \; (i = 1, \ldots, M), \quad \psi_j(x) \le 0 \; (j = 1, \ldots, N),$$
the idea is to define a penalty function so that the constrained problem is transformed into an unconstrained problem. Now we define
$$\Pi(x, \mu_i, \nu_j) = f(x) + \sum_{i=1}^{M} \mu_i \phi_i^2(x) + \sum_{j=1}^{N} \nu_j \psi_j^2(x),$$
where $\mu_i \gg 1$ and $\nu_j \ge 0$, which should be large enough, depending on the solution quality needed.
In addition, for simplicity of implementation, we can use $\mu = \mu_i$ for all $i$ and $\nu = \nu_j$ for all $j$. That is, we can use a simplified
$$\Pi(x, \mu, \nu) = f(x) + \mu \sum_{i=1}^{M} Q_i[\phi_i(x)]\,\phi_i^2(x) + \nu \sum_{j=1}^{N} H_j[\psi_j(x)]\,\psi_j^2(x).$$
Here the barrier/indicator-like functions are
$$H_j = \begin{cases} 0 & \text{if } \psi_j(x) \le 0, \\ 1 & \text{if } \psi_j(x) > 0, \end{cases} \qquad Q_i = \begin{cases} 0 & \text{if } \phi_i(x) = 0, \\ 1 & \text{if } \phi_i(x) \ne 0. \end{cases}$$
In general, for most applications, $\mu$ and $\nu$ can be taken as $10^{10}$ to $10^{15}$. We will use these values in most implementations.
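The simplified penalty function can be sketched on a toy problem (assumed for illustration, not from the slides): minimize $f(x, y) = x^2 + y^2$ subject to the single equality $\phi(x, y) = x + y - 1 = 0$. For an equality constraint $Q[\phi]\,\phi^2$ is simply $\phi^2$ (it vanishes exactly when $\phi = 0$), so the penalized objective can be minimized by plain gradient descent. A modest $\mu$ is used here so that gradient descent stays stable without tuning; the suggested $10^{10}$ to $10^{15}$ would need a better-conditioned solver.

```python
MU = 100.0  # modest penalty weight for this demo (slides suggest 1e10 to 1e15)

def penalized(x, y, mu=MU):
    # Pi(x, mu) = f(x) + mu * phi^2 for the equality phi = x + y - 1 = 0.
    phi = x + y - 1.0
    return x**2 + y**2 + mu * phi**2

def grad(x, y, mu=MU):
    # Gradient of the penalized objective.
    phi = x + y - 1.0
    return 2*x + 2*mu*phi, 2*y + 2*mu*phi

def minimize_penalized(lr=0.002, iters=5000, mu=MU):
    x = y = 0.0
    for _ in range(iters):
        gx, gy = grad(x, y, mu)
        x, y = x - lr*gx, y - lr*gy
    return x, y
```

The unconstrained minimizer is $x = y = \mu/(2\mu + 1) \approx 0.4975$, approaching the true constrained optimum $(1/2, 1/2)$ as $\mu$ grows; the residual constraint violation scales like $1/(2\mu + 1)$, which is why very large $\mu$ gives high solution quality.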
Pressure Vessel Design Optimization
[Figure: schematic of a cylindrical pressure vessel with thicknesses d1 and d2, radius r, and length L.]