This document provides an overview of local search algorithms, which work by iteratively improving a single current state rather than exploring the entire state space. It covers representing problems as states, defining neighbor states and objective functions, the problem of getting stuck in local optima, and techniques such as hill climbing, gradient descent, simulated annealing, and random restarts for escaping local optima. Local search is very memory efficient and often finds good, though not necessarily optimal, solutions. Design considerations such as state representation, neighborhood structure, and constraints are discussed, and pseudocode outlines are given for basic local search, tabu search, and simulated annealing wrappers.
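The summary above names simulated annealing as one technique for escaping local optima. As a minimal sketch (not the deck's own pseudocode — the `energy`/`neighbor` interface and parameter defaults here are illustrative choices):

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t0=1.0,
                        cooling=0.995, steps=5000, rng=random):
    """Minimize `energy`: always accept an improving move, and accept a
    worsening move with probability exp(-delta / T). The temperature T
    decays geometrically, so the late search behaves like hill climbing."""
    best = state
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        delta = energy(cand) - energy(state)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
            if energy(state) < energy(best):
                best = state            # remember the best state seen
        t *= cooling
    return best
```

For example, minimizing `energy = lambda x: (x - 42) ** 2` with `neighbor = lambda x, r: x + r.choice((-1, 1))` from a distant start will typically settle at or very near 42: early high-temperature steps wander past small bumps, while late low-temperature steps only go downhill.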
Chap 4 local_search
1. Local search algorithms
CS271P, Winter Quarter, 2020
Introduction to Artificial Intelligence
Prof. Richard Lathrop
Read Beforehand: R&N 4.1-4.2, 4.6
Optional: R&N 4.3-4.5
2. Local search algorithms
• In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution
– Local search: widely used for very big problems
– Returns good but not optimal solutions
– Usually very slow, but can yield good solutions if you wait
• State space = set of "complete" configurations
• Find a complete configuration satisfying constraints
– Examples: n-Queens, VLSI layout, airline flight schedules
• Local search algorithms
– Keep a single "current" state, or small set of states
– Iteratively try to improve it / them
– Very memory efficient
• keeps only one or a few states
• You control how much memory you use
3. Basic idea of local search (many variations)
// initialize to something, usually a random initial state
// alternatively, might pass in a human-generated initial state
best_found ← current_state ← RandomState()
// now do local search
loop do
  if (tired of doing it) then return best_found
  else
    current_state ← MakeNeighbor( current_state )
    if ( Cost(current_state) < Cost(best_found) ) then
      // keep best result found so far
      best_found ← current_state
Typically, “tired of doing it” means that some resource limit is exceeded,
e.g., number of iterations, wall clock time, CPU time, etc. It may also mean
that result improvements are small and infrequent, e.g., less than 0.1%
result improvement in the last week of run time.
You, as algorithm designer, write the functions named in red
(RandomState, MakeNeighbor, Cost).
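The loop above can be sketched in Python. The designer-supplied functions (RandomState, MakeNeighbor, Cost on the slide) become parameters, and an iteration limit stands in for “tired of doing it”. The toy problem at the end is illustrative only:

```python
import random

def local_search(random_state, make_neighbor, cost, max_iters=10_000):
    """Generic local search: keep one current state, remember the best seen.

    random_state, make_neighbor, and cost are supplied by you, the
    algorithm designer (the functions "named in red" on the slide).
    """
    current = best_found = random_state()
    for _ in range(max_iters):          # "tired of doing it" = iteration limit
        current = make_neighbor(current)
        if cost(current) < cost(best_found):
            best_found = current        # keep best result found so far
    return best_found

# Toy usage (illustrative): minimize f(x) = (x - 3)**2 over the integers
# by randomly stepping left or right.
result = local_search(
    random_state=lambda: random.randint(-100, 100),
    make_neighbor=lambda x: x + random.choice([-1, 1]),
    cost=lambda x: (x - 3) ** 2,
)
```

Note that the current state is allowed to get worse; only `best_found` is guaranteed monotone.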
4. Example: n-queens
• Goal: Put n queens on an n × n board with no two
queens on the same row, column, or diagonal
• Neighbor: move one queen to another row
• Search: go from one neighbor to the next…
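The slide leaves the representation open; one common choice (an assumption here, not mandated by the slide) is one queen per column, with `state[c]` giving the row of the queen in column `c`. A sketch of the state, cost, and neighbor functions:

```python
import random

# One queen per column; state[c] = row of the queen in column c.
def random_queens(n=8):
    return [random.randrange(n) for _ in range(n)]

def attacking_pairs(state):
    """Cost h: number of queen pairs attacking each other
    (same row, or on the same diagonal)."""
    n, h = len(state), 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            if (state[c1] == state[c2]
                    or abs(state[c1] - state[c2]) == c2 - c1):
                h += 1
    return h

def neighbor(state):
    """Move one queen (random column) to a different random row."""
    n = len(state)
    new = list(state)
    c = random.randrange(n)
    new[c] = random.choice([r for r in range(n) if r != state[c]])
    return new
```

This representation rules out same-column attacks by construction, so the cost only has to check rows and diagonals.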
5. Algorithm design considerations
• How do you represent your problem?
• What is a “complete state”?
• What is your objective function?
– How do you measure cost or value of a state?
– Stand on your head: cost = −value, value = −cost
• What is a “neighbor” of a state?
– Or, what is a “step” from one state to another?
– How can you compute a neighbor or a step?
• Are there any constraints you can exploit?
6. Random restart wrapper
• We’ll use stochastic local search methods
– Return different solution for each trial & initial state
• Almost every trial hits difficulties (see sequel)
– Most trials will not yield a good result (sad!)
• Using many random restarts improves your chances
– Many “shots at goal” may finally get a good one
• Restart from a random initial state, many times
– Report the best result found across many trials
7. Random restart wrapper
best_found ← RandomState() // initialize to something
// now do repeated local search
loop do
if (tired of doing it)
then return best_found
else
result ← LocalSearch( RandomState() )
if ( Cost(result) < Cost(best_found) )
// keep best result found so far
then best_found ← result
Typically, “tired of doing it” means that some resource limit has been
exceeded, e.g., number of iterations, wall clock time, CPU time, etc.
It may also mean that result improvements are small and infrequent,
e.g., less than 0.1% result improvement in the last week of run time.
You, as algorithm
designer, write
the functions
named in red.
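The wrapper pseudocode above, as a Python sketch; `local_search` is any solver that maps an initial state to a result (the slide’s LocalSearch), and the restart budget stands in for “tired of doing it”:

```python
def random_restart(local_search, random_state, cost, n_restarts=25):
    """Run local search from many random initial states; report the best
    result found across all trials ("many shots at goal")."""
    best_found = random_state()         # initialize to something
    for _ in range(n_restarts):         # "tired" = restart budget exceeded
        result = local_search(random_state())
        if cost(result) < cost(best_found):
            best_found = result         # keep best result found so far
    return best_found
```

Each trial is independent, so restarts parallelize trivially if needed.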
8. Tabu search wrapper
• Add recently visited states to a tabu-list
– Temporarily excluded from being visited again
– Forces solver away from explored regions
– Less likely to get stuck in local minima (one hopes, in principle)
• Implemented as a hash table + FIFO queue
– Unit time cost per step; constant memory cost
– You control how much memory is used
• RandomRestart( TabuSearch ( LocalSearch() ) )
9. Tabu search wrapper (inside random restart! )
best_found ← current_state ← RandomState() // initialize
loop do // now do local search
if (tired of doing it) then return best_found else
neighbor ← MakeNeighbor( current_state )
if ( neighbor is in hash_table ) then discard neighbor
else push neighbor onto fifo, pop oldest_state
remove oldest_state from hash_table, insert neighbor
current_state ← neighbor;
if ( Cost(current_state ) < Cost(best_found) )
then best_found ← current_state
[Diagram: a FIFO queue holding states from oldest to newest, paired with a
hash table answering “is this state present?”]
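A Python sketch of the tabu wrapper, using `collections.deque` as the FIFO queue and a set as the hash table (states must be hashable, e.g. tuples rather than lists). Here the tabu list has a fixed capacity and evicts the oldest state when full:

```python
from collections import deque

def tabu_search(random_state, make_neighbor, cost,
                tabu_size=100, max_iters=10_000):
    """Local search with a tabu list: recently visited states may not be
    revisited, forcing the solver away from explored regions."""
    current = best_found = random_state()
    fifo = deque([current])             # FIFO queue: oldest ... newest
    tabu = {current}                    # hash table: state present?
    for _ in range(max_iters):
        neighbor = make_neighbor(current)
        if neighbor in tabu:
            continue                    # discard tabu neighbor, try again
        fifo.append(neighbor)           # push new state
        tabu.add(neighbor)
        if len(fifo) > tabu_size:
            tabu.discard(fifo.popleft())  # pop oldest state from both
        current = neighbor
        if cost(current) < cost(best_found):
            best_found = current        # keep best result found so far
    return best_found
```

Membership tests and evictions are O(1), matching the slide’s unit time cost per step and constant memory cost.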
10. Local search algorithms
• Hill-climbing search
– Gradient descent in continuous state spaces
– Can use, e.g., Newton’s method to find roots
• Simulated annealing search
• Local beam search
• Genetic algorithms
• Linear Programming (for specialized problems)
11. Local Search Difficulties
• Problems: depending on state, can get stuck in local maxima
– Many other problems also endanger your success!!
These difficulties apply to ALL local search algorithms, and become MUCH
more difficult as the search space increases to high dimensionality.
12. Local Search Difficulties
• Ridge problem: Every neighbor appears to be downhill
– But the search space has an uphill!! (worse in high dimensions)
Ridge: fold a piece of paper and hold it tilted up at an unfavorable angle
to every possible search-space step. Every step leads downhill; but the
ridge leads uphill.
These difficulties apply to ALL local search algorithms, and become MUCH
more difficult as the search space increases to high dimensionality.
13. Hill-climbing search
“…like trying to find the top of Mount Everest in a thick fog while
suffering from amnesia”
Equivalently: “if COST[neighbor] ≥ COST[current] then return current …”
Equivalently:
“…a lowest-cost successor…”
You must shift effortlessly between maximizing value and minimizing cost
14. Example: Hill-climbing, 8-queens
h = # of pairs of queens that are attacking each other, either directly or
indirectly (cost); h = 17 for this state.
Each number indicates h if we move the queen in its column to that square;
12 (boxed) = best h among all neighbors, so select one of those moves
randomly.
15. Example: Hill-climbing, 8-queens
• A local minimum with h=1
• All one-step neighbors have
higher h values
• What can you do to get out
of this local minimum?
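One answer to the slide’s question is the random restart wrapper: restart steepest-descent hill climbing from fresh random states until a run ends with h = 0. A sketch (the `tries` budget and helper names are illustrative):

```python
import random

def h(state):
    """Pairs of queens attacking each other, directly or indirectly."""
    return sum(1
               for c1 in range(len(state))
               for c2 in range(c1 + 1, len(state))
               if state[c1] == state[c2]
               or abs(state[c1] - state[c2]) == c2 - c1)

def hill_climb(state):
    """Steepest descent on h: move one queen within its column to the best
    square; stop when no single move improves (a local minimum)."""
    state = list(state)
    n = len(state)
    while True:
        best_move, best_h = None, h(state)
        for c in range(n):
            for r in range(n):
                if r == state[c]:
                    continue
                ch = h(state[:c] + [r] + state[c + 1:])
                if ch < best_h:
                    best_move, best_h = (c, r), ch
        if best_move is None:
            return state                # local minimum (possibly h > 0)
        c, r = best_move
        state[c] = r

def solve(n=8, tries=200):
    """Escape local minima by restarting from fresh random states."""
    for _ in range(tries):
        s = hill_climb([random.randrange(n) for _ in range(n)])
        if h(s) == 0:
            return s
    return None
```

Each individual climb succeeds only a small fraction of the time, but it is cheap, so a couple hundred restarts nearly always produce a solution.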
16. Gradient descent
• Hill-climbing in continuous state spaces
• Denote “state” as θ, a vector of parameters
• Denote cost as J(θ)
• How to change θ to improve J(θ)?
• Choose a direction in which J(θ) is decreasing
• Derivative ∂J(θ)/∂θ
  • Positive => increasing cost
  • Negative => decreasing cost
The curly D (∂) means to take a derivative while holding all other
variables constant. You are not responsible for multivariate calculus,
but gradient descent is a very important method, so it is presented.
17. Gradient descent
(c) Alexander Ihler
• Gradient vector
Gradient = direction of
steepest ascent
Negative gradient =
steepest descent
Hill-climbing in continuous spaces
18. Gradient descent
Hill-climbing in continuous spaces
• Assume we have some cost function C(x1, x2, …, xn)
  and we want to minimize it over continuous variables x1, x2, …, xn
1. Compute the gradient: ∇C = (∂C/∂x1, …, ∂C/∂xn)
2. Take a small step downhill, in the direction of the negative gradient:
   x′ ← x − λ∇C(x)
3. Check if C(x′) < C(x) (or, Armijo rule, etc.)
4. If true then accept move, if not “reject” (decrease step size, etc.)
5. Repeat.
Gradient = the most direct direction up-hill in the objective
(cost) function, so its negative minimizes the cost function.
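The five steps above can be sketched directly in Python; here the step size λ is simply halved whenever a move is rejected, which is a crude stand-in for the Armijo rule mentioned on the slide:

```python
def gradient_descent(cost, grad, x0, step=0.5, max_iters=1000, tol=1e-8):
    """Step downhill along the negative gradient; accept a move only if
    the cost actually decreased, otherwise shrink the step size."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        if max(abs(gi) for gi in g) < tol:
            break                        # gradient ~ 0: (local) minimum
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        if cost(x_new) < cost(x):
            x = x_new                    # accept the downhill move
        else:
            step /= 2                    # reject; decrease the step size
    return x

# Example: J(x, y) = (x - 1)^2 + (y + 2)^2, gradient (2(x-1), 2(y+2)).
xmin = gradient_descent(
    lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2,
    lambda v: [2 * (v[0] - 1), 2 * (v[1] + 2)],
    [0.0, 0.0],
)
```

On this smooth quadratic the method converges quickly; on rough objectives it behaves as the next slide warns.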
19. Gradient descent
• How do I determine the gradient?
– Derive formula using multivariate calculus.
– Ask a mathematician or a domain expert.
– Do a literature search.
• Variations of gradient descent can improve
performance for this or that special case.
– See Numerical Recipes in C (and in other languages) by
Press, Teukolsky, Vetterling, and Flannery.
– Simulated Annealing, Linear Programming too
• Works well in smooth spaces; poorly in rough.
Hill-climbing in continuous spaces
20. Newton’s method
• Want to find the roots of f(x)
  – “Root”: value of x for which f(x) = 0
• Initialize to some point x_n
• Compute the tangent at x_n & compute x_{n+1} = where it crosses the x-axis
21. Newton’s method
• Want to find the roots of f(x)
  – “Root”: value of x for which f(x) = 0
• Initialize to some point x_n
• Compute the tangent at x_n & compute x_{n+1} = where it crosses the x-axis:
  x_{n+1} = x_n − f(x_n) / f′(x_n)
• Repeat for x_{n+1}
  – Does not always converge; sometimes unstable
  – If it converges, it is usually very fast
  – Works well for smooth, non-pathological functions, where the linearization is accurate
  – Works poorly for wiggly, ill-behaved functions, where the tangent is a poor guide to the root
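The update rule above is a few lines of Python; the function, its derivative, and the starting point are supplied by the caller:

```python
def newton(f, fprime, x, iters=50, tol=1e-12):
    """Find a root of f: repeatedly follow the tangent at x_n down to
    the x-axis, giving x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break                        # close enough to a root
        x = x - fx / fprime(x)           # where the tangent crosses the axis
    return x

# Example: a root of f(x) = x^2 - 2, starting at x0 = 1, is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

On this smooth function convergence is quadratic (the number of correct digits roughly doubles per step); on ill-behaved functions the same loop can oscillate or diverge, as the slide notes.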
22. Simulated annealing (Physics!)
• Idea: escape local maxima by allowing some "bad"
moves but gradually decrease their frequency
Improvement: Track the
BestResultFoundSoFar.
Here, this slide follows
Fig. 4.5 of the textbook,
which is simplified.
23. Typical annealing schedule
• Usually use a decaying exponential
• Axis values scaled to fit problem characteristics
(Plot: temperature T vs. time)
24. Probability( accept worse successor )
• Decreases as temperature T decreases
• Increases as |ΔE| decreases
• Sometimes, step size also decreases with T
P(accept worse successor) = e^(−|ΔE| / T):

              High T    Low T
  |ΔE| High   Medium    Low
  |ΔE| Low    High      Medium

(accept very bad moves early on; later, mainly accept “not very much worse”)
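A Python sketch in the spirit of Fig. 4.5, maximizing a value function, plus the improvement mentioned earlier of tracking the best result found so far. The decaying-exponential schedule is implemented by multiplying T by a constant each step; the particular constants are illustrative:

```python
import math
import random

def simulated_annealing(random_state, make_neighbor, value,
                        t0=10.0, cooling=0.999, t_min=1e-3):
    """Accept uphill moves always; accept a worse successor (dE < 0)
    with probability e^(dE/T), which shrinks as T decays."""
    current = best_found = random_state()
    t = t0
    while t > t_min:
        neighbor = make_neighbor(current)
        dE = value(neighbor) - value(current)
        if dE > 0 or random.random() < math.exp(dE / t):
            current = neighbor          # "bad" moves allowed, less so later
        if value(current) > value(best_found):
            best_found = current        # improvement: track best so far
        t *= cooling                    # decaying-exponential schedule
    return best_found
```

Early on (high T) this behaves almost like a random walk; near the end (low T) it behaves like pure hill climbing.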
25–26. Goal: “ratchet up” a bumpy slope
(see HW #2, prob. #5; here T = 1; the cartoon is NOT to scale)
This is an illustrative cartoon over an arbitrary (fictitious) search-space
coordinate: peaks A (Value=42), C (45), E (48), and G (51) are separated by
dips B (41), D (44), and F (47). Your “random restart wrapper” starts at A;
you want to get to the global peak G. HOW??
27. Properties of simulated annealing
• One can prove:
– If T decreases slowly enough, then simulated annealing search
will find a global optimum with probability approaching 1
– Unfortunately this can take a VERY VERY long time
– Note: in any finite search space, random guessing also will find a
global optimum with probability approaching 1
– So, ultimately this is a very weak claim
• Often works very well in practice
– But usually VERY VERY slow
• Widely used in VLSI layout, airline scheduling, etc.
28. Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are
generated
• If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
• Concentrates search effort in areas believed to be fruitful
– May lose diversity as search progresses, resulting in wasted effort
29. Local beam search
[Diagram: create k random initial states a1, b1, …, k1; generate their
children; select the k best children a2, b2, …, k2; repeat indefinitely…]
Is it better than simply running k searches? Maybe…??
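The steps in the diagram can be sketched as follows; the key difference from k independent searches is that all successors compete in one shared pool, so effort concentrates on the best regions (and, as the slide warns, diversity can collapse):

```python
import random

def local_beam_search(k, random_state, successors, value, max_iters=100):
    """Keep the k best states; each round, pool ALL successors of all k
    states and keep the k best of the combined pool."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = [s for state in beam for s in successors(state)]
        pool.extend(beam)               # allow keeping a current state
        beam = sorted(pool, key=value, reverse=True)[:k]
    return beam[0]                      # best state found

# Toy usage: maximize value(x) = -|x - 40| with successors x-1, x+1.
best = local_beam_search(
    k=3,
    random_state=lambda: random.randrange(100),
    successors=lambda x: [x - 1, x + 1],
    value=lambda x: -abs(x - 40),
)
```

With k = 1 this degenerates to greedy hill climbing; stochastic variants select the k survivors randomly in proportion to value instead of deterministically.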
30. Genetic algorithms (Darwin!!)
• A state = a string over a finite alphabet (an individual)
– A successor state is generated by combining two parent states
• Start with k randomly generated states (a population)
• Fitness function (= our heuristic objective function).
– Higher fitness values for better states.
• Select individuals for next generation based on fitness
– P(individual in next gen.) = individual fitness/total population fitness
• Crossover fit parents to yield next generation (offspring)
• Mutate the offspring randomly with some low probability
32. Genetic algorithms
• Fitness function (value): number of non-attacking pairs of
queens (min = 0, max = 8 × 7/2 = 28)
• 24/(24+23+20+11) = 31%
• 23/(24+23+20+11) = 29%; etc.
33. fitness = # non-attacking queen pairs
• Fitness function: # non-attacking queen pairs
  – min = 0, max = 8 × 7/2 = 28
• Σ_i fitness_i = 24 + 23 + 20 + 11 = 78
• P(pick child_1 for next gen.) = fitness_1 / (Σ_i fitness_i) = 24/78 ≈ 31%
• P(pick child_2 for next gen.) = fitness_2 / (Σ_i fitness_i) = 23/78 ≈ 29%; etc.
How to convert a fitness value into a probability of being in the next
generation: probability = fitness / (Σ_i fitness_i).
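The whole recipe for 8-queens fits in a short Python sketch: fitness-proportional parent selection (via `random.choices` with weights), single-point crossover, and rare random mutation. The population size, generation budget, and mutation rate are illustrative choices, not from the slides:

```python
import random

N = 8
MAX_PAIRS = N * (N - 1) // 2            # 28 for 8 queens

def fitness(state):
    """Non-attacking queen pairs (28 = solution)."""
    attacking = sum(1
                    for i in range(N) for j in range(i + 1, N)
                    if state[i] == state[j]
                    or abs(state[i] - state[j]) == j - i)
    return MAX_PAIRS - attacking

def genetic_algorithm(pop_size=50, generations=500, p_mutate=0.1):
    # start with a population of random individuals (states)
    population = [[random.randrange(N) for _ in range(N)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(s) for s in population]  # P(parent) ∝ fitness
        next_gen = []
        for _ in range(pop_size):
            mom, dad = random.choices(population, weights=weights, k=2)
            cut = random.randrange(1, N)            # single-point crossover
            child = mom[:cut] + dad[cut:]
            if random.random() < p_mutate:          # rare random mutation
                child[random.randrange(N)] = random.randrange(N)
            next_gen.append(child)
        population = next_gen
        best = max(population, key=fitness)
        if fitness(best) == MAX_PAIRS:
            break                                   # found a solution
    return best
```

Individuals are strings over the finite alphabet {0, …, 7} (one row index per column), so crossover always yields another legal string.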
34. Linear Programming
• Maximize: z = c1 x1 + c2 x2 + … + cn xn
• Primary constraints: x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
• Arbitrary additional linear constraints:
  ai1 x1 + ai2 x2 + … + ain xn ≤ ai, (ai ≥ 0)
  aj1 x1 + aj2 x2 + … + ajn xn ≥ aj ≥ 0
  bk1 x1 + bk2 x2 + … + bkn xn = bk ≥ 0
• Restricted class of linear problems.
  – Efficient for very large problems(!!) in this class.
35. Linear Programming
Efficient Optimal Solution
For a Restricted Class of Problems
•Very efficient “off-the-shelf”
solvers are available for LPs.
• They quickly solve large problems
with thousands of variables.
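In practice one reaches for an off-the-shelf solver, but the reason LPs are tractable can be shown in a few lines: an optimum of a bounded, feasible LP lies at a vertex of the feasible polytope. The tiny two-variable instance below (objective and constraints made up for illustration) is solved by brute-force vertex enumeration; real solvers (simplex, interior point) exploit the same structure without enumerating:

```python
from itertools import combinations

# Maximize z = 3*x1 + 2*x2 subject to
#   x1 + x2 <= 4,  x1 <= 2,  x1 >= 0,  x2 >= 0.
# Every constraint written uniformly as a*x1 + b*x2 <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x1, x2, eps=1e-9):
    return all(a * x1 + b * x2 <= c + eps for a, b, c in constraints)

# Candidate vertices: intersections of pairs of constraint lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                         # parallel constraint lines
    x1 = (c1 * b2 - c2 * b1) / det       # intersection via Cramer's rule
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

# The optimum is the vertex with the largest objective value.
optimum = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1])
```

Here the feasible vertices are (0,0), (2,0), (2,2), and (0,4), and the objective is maximized at (2,2) with z = 10. Enumeration is exponential in the number of constraints, which is exactly what production LP solvers avoid.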
36. Summary
• Local search maintains a complete solution
  – Maintains a complete assignment; seeks a consistent (or at least good) one
  – vs: path search maintains a consistent partial solution; seeks a complete one
  – Goal of both: a consistent & complete solution
• Types:
– hill climbing, gradient ascent
– simulated annealing, other Monte Carlo methods
– Population methods: beam search; genetic / evolutionary algorithms
– Wrappers: random restart; tabu search
• Local search often works well on very large problems
– Abandons optimality
– Always has some answer available (best found so far)
– Often requires a very long time to achieve a good result