CS 5860
Local Search
Adapted from Russell and Norvig, Artificial Intelligence,
Chapter 4
Marsland, Machine Learning, Sections 9.4-9.6
Luger, Artificial Intelligence, Chapters 3 & 4
Search in Artificial Intelligence
• In Artificial Intelligence, problem solving is often framed as searching
for solutions in a space of candidate solutions.
• For example, consider playing a one-person game like the 15-puzzle or the
8-puzzle.
• The problem is solved by “creating” the search space for the
game and then searching it for a path from the start state to the goal state.
Search space for 8-puzzle
• The search space for the 8-puzzle game is shown below.
• We can search this space in many different ways.
• Depending on the game or the problem, the search space can be
very large. For example, the game tree for chess is estimated to contain
on the order of 10^120 positions.
Search space for 8-puzzle
• Search algorithms include the so-called “uninformed” searches or exhaustive
searches: Breadth-first search, Depth-first search, etc.
• Since the search space is potentially very large, exhaustive search may not
be able to find solutions in the time available. For example, in chess, we
may have only a few minutes (say, 2 or 5 minutes) to make a move,
depending on the tournament rules.
• In such situations, we abandon exhaustive searches in favor of so-called
heuristic searches.
• In heuristic search, we assign a heuristic evaluation to every node using some
mathematical function based on the configuration of the state of the problem
(here, the state of the board).
• Heuristics may be simple or complicated. Simple but informative ones are preferred.
• Heuristic search is not perfect. It may sometimes find bad solutions, but most of
the time it does reasonably well.
• In simple heuristic search, given a board position, we can create all possible
next states and choose the one with the maximum heuristic value.
Search space for 8-puzzle
• Below are three simple heuristics for the 8-puzzle game.
• We can combine the third one with one of the first two to obtain
something still better.
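The three heuristics themselves appear only in the accompanying figure. Two standard 8-puzzle heuristics of this general kind are the misplaced-tile count and the Manhattan distance; a minimal Python sketch, assuming a flat tuple encoding of the board with 0 for the blank:

```python
# Two standard 8-puzzle heuristics (illustrative; not necessarily the
# exact three from the slide's figure). A state is a tuple of 9 entries
# read row by row, with 0 marking the blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """Count the non-blank tiles that are not in their goal cell."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """Sum over tiles of horizontal + vertical distance to the goal cell."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        row, col = divmod(i, 3)
        goal_row, goal_col = goal_pos[tile]
        total += abs(row - goal_row) + abs(col - goal_col)
    return total
```

Manhattan distance dominates the misplaced-tile count and is generally the more informative of the two.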
Using Heuristics in the 8-puzzle Game
Search space for 8-puzzle
• Below we see the search space for a simple heuristic search.
• There are more complex heuristic searches with names like A-search,
A*-search, etc.
• The goal is to search quickly and find a reasonable solution, if not the
best one.
Local Search Algorithms
• The search algorithms we have seen so far include systematic search
(breadth-first, depth-first, iterative deepening, etc.) where we look at the
entire search space in a systematic manner till we have found a goal (or all
goals, if we have to).
• We also have seen heuristic search (best-first, A*-search) where we compute
an evaluation function for each candidate node and choose the one that has
the best heuristic value (the lowest heuristic value is usually the best).
• In both systematic search and heuristic search, we have to keep track of
what’s called the frontier (some books call it open or agenda) data structure
that keeps track of all the candidate nodes for traversal in the next move.
• We make a choice from this frontier data structure to choose the next node
in the search space we go to.
• We usually have to construct the path from the start node to the goal node
to solve a problem.
Local Search or Iterative Improvement
• In many optimization problems, we need to find an
optimal path to the goal from a start state (e.g., find the
shortest path from a start city to an end city), but in
many other optimization problems, the path is
irrelevant.
• The goal state itself is the solution.
• Find optimal configuration (e.g., TSP), or
• Find configuration satisfying constraints (e.g., n-
queens)
• Here, it is not important how we get to the goal state,
we just need to know the goal state.
• In such cases, we can use iterative improvement algorithms:
keep a single “current” state, and try to improve it.
Iterative improvement
• The evaluation can be the length of the tour in the TSP
problem.
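As a concrete sketch of such an evaluation, the TSP tour length might be computed as follows (the coordinate-list representation of cities is an illustrative assumption):

```python
import math

def tour_length(cities, tour):
    """Total length of a closed tour. `cities` is a list of (x, y) points;
    `tour` is a permutation of city indices. The tour returns to its start."""
    n = len(tour)
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
        for i in range(n)
    )
```

An iterative-improvement step would then propose a small change to `tour` (e.g., reversing a segment) and keep it if `tour_length` decreases.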
Iterative improvement example: n-queens
• Goal: Put n chess queens on an n x n board, with no two
queens on the same row, column, or diagonal.
• Here, goal state is initially unknown but is specified by
constraints that it must satisfy.
An 8-queens state and its successor
Plotting evaluations of a space
• In the case of the 8-queens problem, each node has 8 x
7 = 56 successors.
• If we plot all the evaluations of successor states when we
are at a certain state, we get a graph. In this case, it will
be a 3-D graph. We will have x and y for the board
positions and z will be the evaluation.
• Similarly, for any problem, we can plot the evaluations of
all successor nodes.
Plotting evaluations of a space
• A search space can look like the graph below. Here we assume the state
space has only one dimension.
Plotting evaluations of a space
• Depending on the search/optimization problem, the
search spaces can look very bizarre/difficult to navigate.
• The vertical axis is the heuristic value; here the search space is 2-D.
Plotting evaluations of a space
• Some additional examples of search-space landscapes.
Hill climbing (or gradient ascent/descent)
• This is a very simple algorithm for finding a local
maximum.
• Iteratively maximize “value” of current state, by replacing
it by successor state that has highest value, as long as
possible.
• In other words, we start from an initial position (possibly
random), then move along the direction of steepest ascent (or
descent, i.e., the negative gradient, for minimization) for a
little while, and repeat the process.
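The loop just described can be sketched in Python; the generic `successors`/`value` interface is an illustrative assumption, not notation from the slides:

```python
def hill_climb(initial, successors, value):
    """Steepest-ascent hill climbing: repeatedly move to the best successor,
    stopping when no successor improves on the current state.
    `successors(s)` yields neighbor states; `value(s)` is the score to maximize."""
    current = initial
    while True:
        neighbors = list(successors(current))
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current  # local maximum (or the edge of a plateau)
        current = best
```

On a unimodal landscape this finds the global maximum; on the rugged landscapes discussed next, it stops at whatever local maximum it first climbs.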
We usually don’t know the underlying search space when we start solving a
problem.
How would hill climbing do on the following problems?
What we think hill-climbing looks like
What we learn hill-climbing is usually like
In other words, when we perform hill climbing, we are short-sighted; we
can’t really see far. Thus, we make locally best decisions with the hope of
finding a globally best solution. This may be an impossible task.
Problems with Hill climbing
• Local optima (maxima or minima): A local maximum is a peak that is
higher than each of its neighboring states, but lower than the global
maximum.
• Ridges: A sequence of local maxima. Ridges are very difficult to
navigate for a hill-climbing algorithm.
• Plateaus: An area of the state space where the evaluation function is
flat. It can be a flat local maximum, from which no uphill path exists,
or a shoulder, from which progress is possible.
Ridge
Statistics from a real problem: the 8-queens problem
• Starting with a randomly generated 8-queens state, hill-climbing gets
stuck 86% of the time at a local maximum, a ridge, or a plateau.
• That means it can solve only 14% of the 8-queens instances given to it.
• However, hill-climbing is very efficient.
• It takes only 4 steps on average when it succeeds.
• It takes only 3 steps when it gets stuck.
• This is very good, in some ways, in a search space of 8^8 ≈ 17
million states (each queen can be placed in 8 rows, independently of the
others: the first queen can be in 8 places, the second in 8 places, and so on).
• What we want: An algorithm that compromises on efficiency, but can
solve more problems.
Some ideas about making Hill-climbing better
• Allowing sideways moves
• When stuck on a ridge or plateau (i.e., all successors have the same
value), allow it to move anyway hoping it is a shoulder and after some
time, there will be a way up.
• How many sideways moves?
• If we always allow sideways moves when there are no uphill moves, an
infinite loop may occur if it’s not a shoulder.
• Solution
• Put a limit on the number of consecutive sideways moves allowed.
• Experience with a real problem
• The authors allowed 100 consecutive sideways moves in the 8-queens
problem.
• This raises the percentage of problems solved from 14% to 94%.
• However, now it takes on average 21 steps when successful and 64
steps when it fails.
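A sketch of hill climbing with limited sideways moves on the 8-queens problem. The representation (one queen per column, `state[c]` giving its row) and the pair-conflict evaluation are illustrative assumptions; the sideways cap of 100 matches the experiment quoted above:

```python
import random

def conflicts(state):
    """Number of attacking queen pairs; state[c] = row of the queen in column c."""
    n = len(state)
    return sum(
        1
        for c1 in range(n)
        for c2 in range(c1 + 1, n)
        if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1
    )

def hill_climb_queens(n=8, max_sideways=100):
    """Hill climbing over single-queen moves, allowing up to `max_sideways`
    consecutive equal-value moves. Returns a solved board or None if stuck."""
    state = [random.randrange(n) for _ in range(n)]
    sideways = 0
    while True:
        current = conflicts(state)
        if current == 0:
            return state
        # Evaluate every single-queen move.
        moves = []
        for col in range(n):
            old = state[col]
            for row in range(n):
                if row == old:
                    continue
                state[col] = row
                moves.append(((col, row), conflicts(state)))
            state[col] = old
        best_val = min(v for _, v in moves)
        if best_val > current:
            return None  # strict local minimum: every move makes things worse
        if best_val == current:
            sideways += 1
            if sideways > max_sideways:
                return None  # plateau, sideways budget exhausted
        else:
            sideways = 0
        (col, row), _ = random.choice([m for m in moves if m[1] == best_val])
        state[col] = row
```

Wrapping this in a handful of restarts solves essentially every instance, in line with the 94% single-run figure quoted above.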
Variants of Hill-climbing
• Stochastic hill-climbing
• Chooses at random from among the uphill moves (not necessarily the
steepest), assuming several uphill moves are possible.
• It usually converges more slowly than steepest ascent, but in some state
landscapes, it finds better solutions.
• First-choice hill climbing
• Generates successors randomly until one is generated that is better than
current state.
• This is a good strategy when a state may have hundreds or thousands of
successor states.
• Steepest ascent, hill-climbing with limited sideways moves,
stochastic hill-climbing, first-choice hill-climbing are all incomplete.
• Complete: A local search algorithm is complete if it always finds a goal if one
exists.
• Optimal: A local search algorithm is optimal if it always finds the global
maximum/minimum.
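A single first-choice step might look like this in Python (the sampling budget `max_tries` and the `random_successor` interface are illustrative assumptions):

```python
def first_choice_step(current, random_successor, value, max_tries=1000):
    """One step of first-choice hill climbing: sample random successors until
    one beats the current state. Useful when a state has so many successors
    that enumerating them all (steepest ascent) is too expensive.
    `random_successor(s)` draws one neighbor of s at random."""
    for _ in range(max_tries):
        candidate = random_successor(current)
        if value(candidate) > value(current):
            return candidate
    return current  # no improving successor found within the budget
```

Iterating this step until it returns the unchanged state gives the full first-choice hill-climbing algorithm.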
Variants of Hill-climbing
• Random-restart hill-climbing
• If you don’t succeed the first time, try, try again.
• If the first hill-climbing attempt doesn’t work, try again and again and
again!
• That is, generate random initial states and perform hill-climbing again
and again.
• The number of attempts needs to be limited; this number depends on
the problem.
• With the 8-queens problem, if the number of restarts is limited to about
25, it works very well.
• For a three-million-queens problem with restarts (the number of attempts
allowed is not stated) and sideways moves, hill-climbing finds a solution
very quickly.
• Luby et al. (1993) showed that in many cases, restarting a
randomized search after a particular fixed amount of time is much more
efficient than letting each search continue indefinitely.
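The restart loop itself is a thin wrapper; a generic sketch, assuming a hill climber that returns None on failure (this interface is an illustrative assumption):

```python
def random_restart(hill_climb, random_state, is_goal, max_restarts=25):
    """Random-restart wrapper: run hill climbing from fresh random initial
    states until a goal is found or the restart budget is exhausted.
    The default budget of 25 matches the 8-queens figure quoted above."""
    for _ in range(max_restarts):
        result = hill_climb(random_state())
        if result is not None and is_goal(result):
            return result
    return None
```

If each run succeeds with probability p, the expected number of restarts needed is 1/p, which is why even a modest per-run success rate yields a practical overall algorithm.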
Final Words on Hill-climbing
• Success of hill-climbing depends on the shape of the state-space
landscape.
• If there are few local maxima and plateaus, random-restart hill-climbing
with sideways moves works well.
• However, for many real problems, the state-space landscape is much
more rugged.
• NP-complete problems are hard because they have an exponential
number of local maxima to get stuck on.
• In spite of all these problems, random-restart hill-climbing with sideways
moves and other approximation techniques work reasonably well on such
problems.
Simulated Annealing
• A hill-climbing algorithm that never makes a “downhill”
move toward states with lower value (or higher cost) is
guaranteed to be incomplete, because it can get stuck in
a local maximum.
• In contrast, a purely random walk—that is, moving to a
successor chosen uniformly at random from the set of
successors—is complete but extremely inefficient.
• Therefore, it is reasonable to try to combine hill-climbing
with a random walk in some way to get both efficiency and
completeness.
• Simulated annealing is one such algorithm.
Simulated Annealing
• To discuss simulated annealing, we need to switch our
point of view from hill climbing to gradient descent
(i.e., minimizing cost; simply place a negative sign in
front of the cost function in hill climbing).
Simulated Annealing
• Analogy: Imagine the task of getting a ping-pong ball
into the deepest crevice in a bumpy surface.
• If we let the ball roll, it will come to rest at a local
minimum.
• If we shake the surface, we can bounce the ball out of
the local minimum.
• The trick is to shake just hard enough to bounce the ball out of local
minima, but not so hard that it gets knocked out of the global minimum.
• The simulated annealing algorithm starts by shaking hard (i.e., at a high
temperature) and then gradually reduces the intensity of the shaking
(i.e., lowers the temperature).
Minimizing energy
• In this new formulation of the problem
• We compare our state space to that of a physical system
(a bouncing ping-pong ball in a bumpy surface) that is
subject to natural laws
• We compare our value function to the overall
potential energy E of the system.
• On every update, we want ΔE ≤ 0, i.e., the energy of the
system should decrease.
• We start in a high-energy state and reduce the energy of the system
slowly, letting it randomly walk/jump around at its current energy level
(temperature) before reducing that level further.
Local Minima Problem
• Question: How do you avoid this local minimum?
[Figure: following the descent direction from the starting point leads into a
local minimum; a barrier separates it from the deeper global minimum.]
Consequences of the Occasional Ascents
• Desired effect: helps escape local optima.
• Adverse effect: might pass the global optimum after reaching it (easy to
avoid by keeping track of the best-ever state).
Simulated annealing: basic idea
• From current state, pick a random successor state
• If it has better value than current state, then “accept the
transition,” that is, use successor state as current state
(standard hill-climbing, in this case hill-descending).
• Otherwise, do not give up; instead, flip a coin and accept the transition
with a given probability (that is, accept successors that make the state
worse with a certain probability).
• In other words, although we want each step to be a good step, if
necessary we allow bad steps that take us in the opposite direction, with
non-zero probability.
Simulated annealing algorithm
• Idea: Escape local extrema by allowing “bad moves,” but gradually decrease
their size and frequency.
• Although the loop below is written as if it runs forever, the schedule
drives the temperature to 0 after a (large) finite number of iterations.
• We are working with a situation where a state with a lower energy is a
better state.
for t = 1 to ∞ do
T ← schedule(t)
if T = 0 then return current
next ← a randomly selected successor of current
ΔE ← next.value − current.value
if ΔE < 0 then current ← next
else current ← next only with probability e^(−ΔE/T)
endfor
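A Python sketch of simulated annealing for minimization; the generic interface and the geometric cooling schedule used in the example below are illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, random_successor, energy, schedule):
    """Simulated annealing for minimization: always accept downhill moves,
    and accept an uphill move of size dE > 0 with probability exp(-dE / T),
    where T = schedule(t) cools over time. Stops when T reaches 0."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = random_successor(current)
        dE = energy(nxt) - energy(current)
        if dE < 0 or random.random() < math.exp(-dE / T):
            current = nxt
        t += 1
```

In practice one also tracks the best-ever state seen, which removes the risk of wandering away from the global minimum late in the run.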
Note on simulated annealing: limit cases
• The for loop of the simulated annealing algorithm is similar to hill-climbing
(but we are working with a situation where lower energy is better.
• However, instead of picking the best move, it picks a random move (like
stochastic hill climbing).
• If the random move improves the situation, it is always accepted.
• Otherwise, the the algorithm accepts the move with some probability less
than 1.
• The probability decreases exponentially with the “badness” of the move—
the amount ΔE by which the evaluation is worsened. I.e., a less bad move
is more likely.
• The probability also decreases as the “temperature” T goes down: “bad”
moves are more likely to be allowed at the start when T is high, and they
become more unlikely as T decreases.
• If the “schedule” lowers T slowly enough, the algorithm will find a global
optimum with probability approaching 1.
Local Beam Search
• Keeping just one node in memory might seem an extreme reaction to
the problem of memory limitation, but this is exactly what we do in local
search and in simulated annealing.
• Local Beam Search algorithm keeps track of k states rather than 1.
• It begins with k randomly generated states.
• At each step, all the successors of these k states are generated.
• If any of them is a goal, the algorithm halts.
• Otherwise, it selects the best k successors from the complete list and
repeats.
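The k-state loop above can be sketched as follows (the generic interface and the iteration cap are illustrative assumptions):

```python
import heapq
import random

def local_beam_search(k, random_state, successors, value, is_goal, max_iters=100):
    """Local beam search: keep the k best states, pool all their successors
    each round, and keep the k best of the pool. Halts when a successor is
    a goal, or returns the best state found after max_iters rounds."""
    beam = [random_state() for _ in range(k)]
    for _ in range(max_iters):
        pool = []
        for state in beam:
            for nxt in successors(state):
                if is_goal(nxt):
                    return nxt
                pool.append(nxt)
        if not pool:
            return max(beam, key=value)
        beam = heapq.nlargest(k, pool, key=value)  # information shared across threads
    return max(beam, key=value)
```

The `nlargest` over the pooled successors is exactly where information passes between the parallel searches: a thread whose states score poorly contributes nothing to the next beam.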
Local Beam Search vs. Hill Climbing
• At first sight, a local beam search with k states may seem like
running k random restart hill-climbing algorithms in parallel.
• However, there is a difference.
• In parallel random-restart hill-climbing, each search process runs
independently of the others.
• In a local beam search, useful information is passed among the
parallel search threads.
• A local beam search algorithm quickly abandons unfruitful searches
and moves its resources to where the most progress is being made.
• Problem: All k states can quickly become concentrated in a small
region of the search space, making it an expensive version of hill
climbing.
• Solution: Stochastic local beam search: Instead of choosing the k
best successors, choose k successors at random.

More Related Content

Similar to 02LocalSearch.pdf

Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}FellowBuddy.com
 
hill climbing algorithm.pptx
hill climbing algorithm.pptxhill climbing algorithm.pptx
hill climbing algorithm.pptxMghooolMasier
 
AI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAsst.prof M.Gokilavani
 
Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxKirti Verma
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxCCBProduction
 
03_UninformedSearch.pdf
03_UninformedSearch.pdf03_UninformedSearch.pdf
03_UninformedSearch.pdfkaxeca4096
 
UninformedSearch (2).pptx
UninformedSearch (2).pptxUninformedSearch (2).pptx
UninformedSearch (2).pptxSankarTerli
 
Introduction to artificial intelligence
Introduction to artificial intelligenceIntroduction to artificial intelligence
Introduction to artificial intelligencerishi ram khanal
 
Search problems in Artificial Intelligence
Search problems in Artificial IntelligenceSearch problems in Artificial Intelligence
Search problems in Artificial Intelligenceananth
 
chapter 2 Problem Solving.pdf
chapter 2 Problem Solving.pdfchapter 2 Problem Solving.pdf
chapter 2 Problem Solving.pdfMeghaGupta952452
 
Heuristc Search Techniques
Heuristc Search TechniquesHeuristc Search Techniques
Heuristc Search TechniquesJismy .K.Jose
 
2-Heuristic Search.ppt
2-Heuristic Search.ppt2-Heuristic Search.ppt
2-Heuristic Search.pptMIT,Imphal
 
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCH
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCHARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCH
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCHJeff Brooks
 

Similar to 02LocalSearch.pdf (20)

Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}
 
Rai practical presentations.
Rai practical presentations.Rai practical presentations.
Rai practical presentations.
 
Lec 6 bsc csit
Lec 6 bsc csitLec 6 bsc csit
Lec 6 bsc csit
 
hill climbing algorithm.pptx
hill climbing algorithm.pptxhill climbing algorithm.pptx
hill climbing algorithm.pptx
 
searching
searchingsearching
searching
 
AI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptx
 
Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptx
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
 
Sudoku
SudokuSudoku
Sudoku
 
03_UninformedSearch.pdf
03_UninformedSearch.pdf03_UninformedSearch.pdf
03_UninformedSearch.pdf
 
UninformedSearch (2).pptx
UninformedSearch (2).pptxUninformedSearch (2).pptx
UninformedSearch (2).pptx
 
Introduction to artificial intelligence
Introduction to artificial intelligenceIntroduction to artificial intelligence
Introduction to artificial intelligence
 
Search problems in Artificial Intelligence
Search problems in Artificial IntelligenceSearch problems in Artificial Intelligence
Search problems in Artificial Intelligence
 
chapter 2 Problem Solving.pdf
chapter 2 Problem Solving.pdfchapter 2 Problem Solving.pdf
chapter 2 Problem Solving.pdf
 
Chap11 slides
Chap11 slidesChap11 slides
Chap11 slides
 
l2.pptx
l2.pptxl2.pptx
l2.pptx
 
Heuristc Search Techniques
Heuristc Search TechniquesHeuristc Search Techniques
Heuristc Search Techniques
 
Lecture 3 problem solving
Lecture 3   problem solvingLecture 3   problem solving
Lecture 3 problem solving
 
2-Heuristic Search.ppt
2-Heuristic Search.ppt2-Heuristic Search.ppt
2-Heuristic Search.ppt
 
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCH
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCHARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCH
ARTIFICIAL INTELLIGENCE PROBLEM SOLVING AND SEARCH
 

Recently uploaded

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfakmcokerachita
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfUmakantAnnand
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxRoyAbrique
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 

Recently uploaded (20)

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Class 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdfClass 11 Legal Studies Ch-1 Concept of State .pdf
Class 11 Legal Studies Ch-1 Concept of State .pdf
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 

02LocalSearch.pdf

  • 1. CS 5860 Local Search Adapted from Russell and Norvig, Artificial Intelliegence, Chapter 4 Marsland, Machine Learning, Section 9.4-9.6 Lugar, Artificial Intelligence, Chapters 3 & 4 1
  • 2. Search in Artificial Intelligence • In Artificial Intelligence, problem solving is thought of as searching for solutions in a space of solutions. • For example, consider playing a 1-person game like the game of 15- or 8-puzzle. • The problem is generated by “creating” the search space for the game and then searching it for a path from the start to the end. 2
  • 3. CS 4-5820 Search space for 8-puzzle • The search space for the 8-puzzle game is shown below. • We can search this space in many different ways. • Depending on the game or the problem, the search space can be very large. For example, the search space for chess is supposedly of size Θ(10^120). 3
  • 4. Search space for 8-puzzle • Search algorithms include the so-called “uninformed” searches or exhaustive searches: Breadth-first search, Depth-first search, etc. • Since the search space is potentially very large, it is not possible to obtain solutions in the time needed by exhaustive search. For example, in chess, we may have only a few minutes (say, 2 minutes or 5 minutes) to make a move, depending on the tournament rules. • In such situations, we abandon exhaustive searches in favor of so-called heuristic searches. • In heuristic search, we assign a heuristic evaluation to every node using some mathematical function based on the configuration of the state of the problem (here, the state of the board). • Heuristics may be simple or complicated. Simple informed ones are preferred. • Heuristic search is not perfect. It may sometimes find bad solutions, but most of the time it does reasonably well. • In simple heuristic search, given a board position, we can create all possible next states and choose the one with the maximum heuristic value. 4
  • 5. Search space for 8-puzzle • Below are three simple heuristics for the 8-puzzle game. • We can combine the third one with one of the first two to obtain something still better. 5
  • 6. Using Heuristics in the 8-puzzle Game 6
  • 7. Search space for 8-puzzle • Below we see the search space for a simple heuristic search. • There are more complex heurstic searches with names like A-search, A*- search, etc. • The goal is to search quickly and find a reasonable solution, if not the best. 7
  • 8. 8 Local Search Algorithms • The search algorithms we have seen so far include systematic search (breadth-first, depth-first, iterative deepening, etc.) where we look at the entire search space in a systematic manner till we have found a goal (or all goals, if we have to). • We also have seen heuristic search (best-first, A*-search) where we compute an evaluation function for each candidate node and choose the one that has the best heuristic value (the lowest heuristic value is usually the best). • In both systematic search and heuristic search, we have to keep track of what’s called the frontier (some books call it open or agenda) data structure that keeps track of all the candidate nodes for traversal in the next move. • We make a choice from this frontier data structure to choose the next node in the search space we go to. • We usually have to construct the path from the start node to the goal node to solve a problem.
Local Search or Iterative Improvement
• In some optimization problems, we need to find an optimal path to the goal from a start state (e.g., find the shortest path from a start city to an end city), but in many other optimization problems the path is irrelevant: the goal state itself is the solution.
• Find an optimal configuration (e.g., TSP), or
• Find a configuration satisfying constraints (e.g., n-queens).
• Here, it is not important how we get to the goal state; we just need to know the goal state.
• In such cases, we can use iterative improvement algorithms: keep a single “current” state, and try to improve it.
9
Iterative improvement
• The evaluation can be the length of the tour in the TSP problem.
10
Iterative improvement example: n-queens
• Goal: Put n chess queens on an n × n board, with no two queens on the same row, column, or diagonal.
• Here, the goal state is initially unknown but is specified by constraints that it must satisfy.
11
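One common evaluation function for n-queens (one possible choice, not the only one) counts the pairs of queens attacking each other; a goal state has zero conflicts. A minimal sketch in Python, representing a board as a tuple whose entry i is the row of the queen in column i:

```python
from itertools import combinations

def conflicts(state):
    """Count pairs of queens that attack each other.

    state[i] is the row of the queen in column i, so with one queen
    per column only row and diagonal attacks are possible.
    """
    return sum(
        1
        for c1, c2 in combinations(range(len(state)), 2)
        if state[c1] == state[c2]                  # same row
        or abs(state[c1] - state[c2]) == c2 - c1   # same diagonal
    )

# A known 8-queens solution has zero conflicts; all queens in one row
# give the worst possible score of C(8, 2) = 28 attacking pairs.
print(conflicts((0, 4, 7, 5, 2, 6, 1, 3)))  # 0
print(conflicts((0, 0, 0, 0, 0, 0, 0, 0)))  # 28
```

Iterative improvement then tries to drive this count down to zero; how we get there does not matter.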
An 8-queens state and its successor
12
Plotting evaluations of a space
• In the case of the 8-queens problem, each node has 8 × 7 = 56 successors.
• If we plot the evaluations of all successor states when we are at a certain state, we get a graph. In this case, it will be a 3-D graph: x and y for the board positions, and z for the evaluation.
• Similarly, for any problem, we can plot the evaluations of all successor nodes.
13
Plotting evaluations of a space
• A search space can look like the graph below. Here we assume the state space has only one dimension.
14
Plotting evaluations of a space
• Depending on the search/optimization problem, search spaces can look very bizarre and difficult to navigate.
• Here the vertical axis is the heuristic value and the search space is 2-D.
15
Plotting evaluations of a space
• Some additional examples of search space landscapes.
16
Hill climbing (or gradient ascent/descent)
• This is a very simple algorithm for finding a local maximum.
• Iteratively maximize the “value” of the current state by replacing it with the successor state that has the highest value, as long as possible.
• In other words, we start from an initial position (possibly random), move along the direction of steepest ascent (descent, or the negative of the gradient, for minimization) for a little while, and repeat the process.
17
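The loop above can be sketched directly, assuming the problem supplies a value function and a successor generator (the function names and the toy landscape here are illustrative):

```python
def hill_climb(state, value, successors):
    """Steepest-ascent hill climbing: move to the best successor
    until no successor is strictly better than the current state."""
    while True:
        candidates = successors(state)
        if not candidates:
            return state
        best = max(candidates, key=value)
        if value(best) <= value(state):
            return state  # local maximum (or plateau): stop
        state = best

# Toy landscape: maximize -(x - 3)^2 over the integers, stepping by 1.
value = lambda x: -(x - 3) ** 2
succ = lambda x: [x - 1, x + 1]
print(hill_climb(0, value, succ))  # climbs 0 -> 1 -> 2 -> 3, stops at 3
```

On this convex toy landscape the local maximum is also global; the following slides show why that is rarely the case in practice.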
We usually don’t know the underlying search space when we start solving a problem. How would hill-climbing do on the following problems?
What we think hill-climbing looks like vs. what hill-climbing is usually like
• In other words, when we perform hill-climbing, we are short-sighted. We can’t really see far. Thus, we make locally best decisions in the hope of finding a globally best solution. This may be an impossible task.
19
Problems with Hill climbing
• Local optima (maxima or minima): A local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum.
• Ridges: A sequence of local maxima. Ridges are very difficult for a hill-climbing algorithm to navigate.
• Plateaus: An area of the state space where the evaluation function is flat. It can be a flat local maximum, from which no uphill path exists, or a shoulder, from which progress is possible.
20
Statistics from a real problem: the 8-queens problem
• Starting from a randomly generated 8-queens state, hill-climbing gets stuck 86% of the time at a local maximum, a ridge, or a plateau.
• That means it can solve only 14% of the 8-queens instances given to it.
• However, hill-climbing is very efficient: it takes only 4 steps on average when it succeeds, and only 3 steps when it gets stuck.
• This is quite good, in some ways, in a search space of 8^8 ≈ 17 million states (each queen can be in 8 places, independently of the others).
• What we want: an algorithm that compromises on efficiency but can solve more problems.
21
Some ideas about making Hill-climbing better
• Allowing sideways moves: When stuck on a ridge or plateau (i.e., all successors have the same value), allow the search to move anyway, hoping it is a shoulder and that after some time there will be a way up.
• How many sideways moves? If we always allow sideways moves when there are no uphill moves, an infinite loop may occur if it’s not a shoulder.
• Solution: Put a limit on the number of consecutive sideways moves allowed.
• Experience with a real problem: The authors allowed 100 consecutive sideways moves in the 8-queens problem. This raises the percentage of problems solved from 14% to 94%. However, it now takes on average 21 steps when successful and 64 steps when it fails.
22
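The sideways-move idea can be bolted onto steepest ascent with a counter; a sketch (the default limit of 100 matches the slide, everything else is illustrative):

```python
def hill_climb_sideways(state, value, successors, max_sideways=100):
    """Steepest-ascent hill climbing that also accepts equally good
    successors, up to max_sideways consecutive times, hoping a flat
    region is a shoulder rather than a flat maximum."""
    sideways = 0
    while True:
        best = max(successors(state), key=value)
        if value(best) < value(state):
            return state            # strict local maximum
        if value(best) == value(state):
            sideways += 1
            if sideways > max_sideways:
                return state        # plateau budget exhausted: give up
        else:
            sideways = 0            # an uphill move resets the counter
        state = best

# A shoulder: flat from x = 1 to 3, then uphill to a peak at x = 4.
v = {0: 0, 1: 1, 2: 1, 3: 1, 4: 2}
succ = lambda x: [x + 1] if x < 4 else [x]
print(hill_climb_sideways(0, v.get, succ, max_sideways=3))  # 4
```

Plain steepest ascent would stop at x = 1 on this landscape; the sideways budget lets the search cross the shoulder and reach the peak.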
Variants of Hill-climbing
• Stochastic hill-climbing: Choose at random from among the uphill moves (assuming several uphill moves are possible), not necessarily the steepest. It usually converges more slowly than steepest ascent, but in some state landscapes it finds better solutions.
• First-choice hill-climbing: Generate successors randomly until one is generated that is better than the current state. This is a good strategy when a state may have hundreds or thousands of successor states.
• Steepest ascent, hill-climbing with limited sideways moves, stochastic hill-climbing, and first-choice hill-climbing are all incomplete.
• Complete: A local search algorithm is complete if it always finds a goal if one exists.
• Optimal: A local search algorithm is optimal if it always finds the global maximum/minimum.
23
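First-choice hill climbing only needs a way to sample one random successor; a sketch (the cap on fruitless samples is an illustrative parameter, not part of the algorithm's definition):

```python
import random

def first_choice(state, value, random_successor, max_tries=1000):
    """First-choice hill climbing: sample successors at random and take
    the first one that is strictly better; stop after max_tries
    consecutive samples fail to improve on the current state."""
    while True:
        for _ in range(max_tries):
            candidate = random_successor(state)
            if value(candidate) > value(state):
                state = candidate   # accept the first improvement found
                break
        else:
            return state            # no improving successor sampled

random.seed(0)
value = lambda x: -(x - 3) ** 2
step = lambda x: x + random.choice([-1, 1])
print(first_choice(0, value, step))  # ends at the peak, x = 3
```

Note that it never enumerates all successors, which is the point when a state has thousands of them.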
Variants of Hill-climbing
• Random-restart hill-climbing: If you don’t succeed the first time, try, try again. That is, generate random initial states and perform hill-climbing again and again.
• The number of attempts needs to be limited; this number depends on the problem.
• With the 8-queens problem, if the number of restarts is limited to about 25, it works very well.
• For a 3-million-queens problem, hill-climbing with restarts (not sure how many attempts were allowed) and sideways moves finds a solution very quickly.
• Luby et al. (1993) showed that in many cases, restarting a randomized search after a particular fixed amount of time is much more efficient than letting each search continue indefinitely.
24
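Random-restart hill climbing is just a loop over independent runs that keeps the best result; a sketch on a two-peaked toy landscape (the landscape, the restart budget of 25 echoing the slide, and all names are illustrative):

```python
import random

def hill_climb(state, value, successors):
    """Plain steepest ascent, used as the inner search."""
    while True:
        best = max(successors(state), key=value)
        if value(best) <= value(state):
            return state
        state = best

def random_restart(value, successors, random_state, target, max_restarts=25):
    """Run hill climbing from fresh random starts, keeping the best
    local maximum found; stop early once the target value is reached."""
    best = None
    for _ in range(max_restarts):
        result = hill_climb(random_state(), value, successors)
        if best is None or value(result) > value(best):
            best = result
        if value(best) >= target:
            break
    return best

# Two peaks on 0..20: a local maximum at x = 5 (value 3) and the global
# maximum at x = 15 (value 8). Climbs started in the left half get
# stuck at 5, so restarts are needed to reach 15.
value = lambda x: 3 - abs(x - 5) if x < 10 else 8 - abs(x - 15)
succ = lambda x: [max(x - 1, 0), min(x + 1, 20)]
random.seed(1)
print(random_restart(value, succ, lambda: random.randrange(21), target=8))
```

Each run is independent; no information passes between restarts, which is exactly the contrast drawn with local beam search later.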
Final Words on Hill-climbing
• The success of hill-climbing depends on the shape of the state-space landscape.
• If there are few local maxima and plateaus, random-restart hill-climbing with sideways moves works well.
• However, for many real problems, the state-space landscape is much more rugged.
• NP-hard problems are hard because they typically have an exponential number of local maxima to get stuck on.
• In spite of all these problems, random-restart hill-climbing with sideways moves and other approximation techniques work reasonably well on such problems.
25
Simulated Annealing
• A hill-climbing algorithm that never makes a “downhill” move toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck in a local maximum.
• In contrast, a purely random walk—that is, moving to a successor chosen uniformly at random from the set of successors—is complete but extremely inefficient.
• Therefore, it is reasonable to try to combine hill-climbing with a random walk in some way that yields both efficiency and completeness.
• Simulated annealing is one such algorithm.
26
Simulated Annealing
• To discuss simulated annealing, we need to switch our point of view from hill climbing to gradient descent (i.e., minimizing cost; simply place a negative sign in front of the value function used in hill climbing).
27
Simulated Annealing
• Analogy: Imagine the task of getting a ping-pong ball into the deepest crevice in a bumpy surface.
• If we let the ball roll, it will come to rest at a local minimum.
• If we shake the surface, we can bounce the ball out of the local minimum.
• The trick is to shake just hard enough to bounce the ball out of local minima, but not hard enough to bounce it out of the global minimum.
• The simulated annealing algorithm starts by shaking hard (i.e., at a high temperature) and then gradually reduces the intensity of the shaking (i.e., lowers the temperature).
28
Minimizing energy
• In this new formulation of the problem:
• We compare our state space to that of a physical system (a bouncing ping-pong ball on a bumpy surface) that is subject to natural laws.
• We compare our value function to the overall potential energy E of the system.
• On every update, we want ΔE ≤ 0, i.e., the energy of the system should decrease.
• We start in a high-energy state and reduce the energy of the system slowly, letting it randomly walk/jump around at each energy level (temperature) before reducing the temperature further.
29
Local Minima Problem
• Question: How do you avoid this local minimum?
• (Figure: descending from the starting point reaches a local minimum; a barrier separates it from the global minimum.)
30
Consequences of the Occasional Ascents
• Desired effect: helps escape the local optima.
• Adverse effect: might pass the global optimum after reaching it (easy to avoid by keeping track of the best-ever state).
31
Simulated annealing: basic idea
• From the current state, pick a random successor state.
• If it has a better value than the current state, then “accept the transition,” that is, use the successor state as the current state (standard hill-climbing; in this case, hill-descending).
• Otherwise, do not give up, but instead flip a coin and accept the transition with a given probability (that is, accept successors that make the state worse with a certain probability).
• In other words, although we want our next step to be a good step, if necessary we allow bad steps that take us in the opposite direction with a non-zero probability.
32
Simulated annealing algorithm
• Idea: Escape local extrema by allowing “bad moves,” but gradually decrease their size and frequency.
• Even though the loop says an infinite number of times, in practice it means a large number of times.
• We are working with a situation where a state with a lower energy is a better state.

for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← next.value − current.value
    if ΔE < 0 then current ← next
    else current ← next only with probability e^(−ΔE/T)
endfor
33
Note on simulated annealing: limit cases
• The for loop of the simulated annealing algorithm is similar to hill-climbing (but we are working with a situation where lower energy is better).
• However, instead of picking the best move, it picks a random move (like stochastic hill climbing).
• If the random move improves the situation, it is always accepted.
• Otherwise, the algorithm accepts the move with some probability less than 1.
• The probability decreases exponentially with the “badness” of the move—the amount ΔE by which the evaluation is worsened. I.e., a less bad move is more likely.
• The probability also decreases as the “temperature” T goes down: “bad” moves are more likely to be allowed at the start when T is high, and they become more unlikely as T decreases.
• If the “schedule” lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
34
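Under the minimization view, the algorithm can be sketched as below; the geometric cooling schedule and the toy cost function are illustrative choices, not prescribed by the slides:

```python
import math
import random

def simulated_annealing(state, cost, random_successor, schedule):
    """Minimization version: always accept a cheaper successor, and
    accept a costlier one with probability exp(-dE / T), where dE is
    the cost increase and T the current temperature."""
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return state
        nxt = random_successor(state)
        dE = cost(nxt) - cost(state)
        if dE < 0 or random.random() < math.exp(-dE / T):
            state = nxt             # accept (always, if downhill)
        t += 1

random.seed(42)
cost = lambda x: (x - 7) ** 2       # global minimum at x = 7
step = lambda x: x + random.choice([-1, 1])
# Geometric cooling that hits zero (and so terminates) after 2000 steps.
sched = lambda t: 0 if t > 2000 else 10 * (0.99 ** t)
print(simulated_annealing(0, cost, step, sched))
```

Early on, T ≈ 10 lets the state wander over the barrier-free toy landscape; by the end, T is tiny, uphill moves are effectively never accepted, and the state settles at or right next to the minimum.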
Local Beam Search
• Keeping just one node in memory might seem an extreme reaction to the problem of memory limitation, but this is what we do in local searches and in simulated annealing.
• The local beam search algorithm keeps track of k states rather than 1.
• It begins with k randomly generated states.
• At each step, all the successors of these k states are generated.
• If any of them is a goal, the algorithm halts.
• Otherwise, it selects the best k successors from the complete list and repeats.
35
Local Beam Search vs. Hill Climbing
• At first sight, a local beam search with k states may seem like running k random-restart hill-climbing searches in parallel.
• However, there is a difference. In parallel random-restart hill-climbing, each search process runs independently of the others. In a local beam search, useful information is passed among the parallel search threads.
• A local beam search algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
• Problem: All k states can quickly become concentrated in a small region of the search space, making the search an expensive version of hill climbing.
• Solution: Stochastic local beam search: instead of choosing the k best successors, choose k successors at random, with the probability of choosing a given successor being an increasing function of its value.
36
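The basic (non-stochastic) loop can be sketched as below, with k implied by the number of start states; the toy landscape and goal test are illustrative:

```python
import heapq
import random

def local_beam_search(starts, value, successors, is_goal, max_steps=100):
    """Local beam search: pool the successors of all current states and
    keep only the k best, where k = len(starts)."""
    k = len(starts)
    beam = list(starts)
    for _ in range(max_steps):
        pool = [s for state in beam for s in successors(state)]
        for s in pool:
            if is_goal(s):
                return s
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)     # best state found within the budget

# Toy: states 0..99, goal x = 50, four random starts sharing one beam.
value = lambda x: -abs(x - 50)
succ = lambda x: [max(x - 1, 0), min(x + 1, 99)]
random.seed(3)
starts = [random.randrange(100) for _ in range(4)]
print(local_beam_search(starts, value, succ, lambda x: x == 50))  # 50
```

Because all successors compete in one pool, slots migrate toward whichever starts are making progress; on this toy landscape the beam quickly concentrates near 50, which is exactly the concentration problem that stochastic beam search mitigates.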