1. Chapter 4: Informed Search and Exploration
2. Outline
   - Informed (heuristic) search strategies
     - (Greedy) best-first search
     - A* search
   - (Admissible) heuristic functions
     - Relaxed problem
     - Subproblem
   - Local search algorithms
     - Hill-climbing search
     - Simulated annealing search
     - Local beam search
     - Genetic algorithms
   - Online search *
     - Online local search
     - Learning in online search
3. Informed search strategies
   - Informed search
     - uses problem-specific knowledge beyond the problem definition
     - can find solutions more efficiently than uninformed search
   - Best-first search
     - uses an evaluation function f(n) for each node, e.g., a measure of distance to the goal; expand the node with the lowest evaluation
     - Implementation: the fringe is a queue sorted in increasing order of f-values (see the sketch below)
     - Can we really expand the best node first? No! Only the one that appears best based on f(n).
   - Heuristic function h(n)
     - estimated cost of the cheapest path from node n to a goal node
   - Specific algorithms
     - greedy best-first search
     - A* search
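The fringe-as-priority-queue implementation can be made concrete with a short sketch. This is a minimal, illustrative version, not the slides' code: it assumes a hypothetical Problem object exposing initial_state, is_goal(state), and successors(state) yielding (action, state, step_cost) triples.

```python
import heapq

def best_first_search(problem, f):
    """Expand nodes in increasing order of the evaluation function f(state, g)."""
    counter = 0  # tie-breaker so the heap never has to compare states directly
    fringe = [(f(problem.initial_state, 0), counter, problem.initial_state, 0, [])]
    best_g = {}  # state -> cheapest g seen so far (repeated-state checking)
    while fringe:
        _, _, state, g, path = heapq.heappop(fringe)
        if problem.is_goal(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue  # already reached this state more cheaply
        best_g[state] = g
        for action, nxt, cost in problem.successors(state):
            counter += 1
            heapq.heappush(fringe,
                           (f(nxt, g + cost), counter, nxt, g + cost, path + [action]))
    return None  # the fringe emptied without reaching a goal
```

Greedy best-first search and A* on the following slides are just two choices of f.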
4. Greedy best-first search
   - expands the node that appears closest to the goal: f(n) = h(n)
   - example: h_SLD(n), the straight-line-distance heuristic for the Romania route-finding problem
5. Greedy best-first search example
6. Properties of greedy best-first search
   - Complete? No – can get stuck in loops, e.g., Iasi -> Neamt -> Iasi -> Neamt; yes, complete in finite state spaces with repeated-state checking
   - Optimal? No
   - Time? O(b^m), but a good heuristic function can give dramatic improvement
   - Space? O(b^m) – keeps all nodes in memory
7. A* search
   - evaluation function f(n) = g(n) + h(n) (instantiated below)
     - g(n) = cost to reach node n
     - h(n) = estimated cost from n to the goal
     - f(n) = estimated total cost of the path through n to the goal
   - an admissible (optimistic) heuristic
     - never overestimates the cost to reach the goal, i.e., it estimates the cost of solving the problem as no greater than it actually is
     - e.g., straight-line distance never overestimates the actual road distance
   - A* using Tree-Search is optimal if h(n) is admissible
   - A* could return suboptimal solutions using Graph-Search
     - it might discard the optimal path to a repeated state if that path is not the first one generated
     - a simple fix is to discard the more expensive of any two paths found to the same node (at the cost of extra memory)
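Both algorithms drop out of the generic best_first_search sketch above by swapping the evaluation function; h_sld here is a hypothetical lookup table of straight-line distances, not data from the slides.

```python
# f(n) = h(n): greedy best-first search
greedy_f = lambda state, g: h_sld[state]

# f(n) = g(n) + h(n): A* search
a_star_f = lambda state, g: g + h_sld[state]

# e.g., solution, cost = best_first_search(problem, a_star_f)
```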
8. h_SLD(n): straight-line-distance heuristic (table of distances to the goal)
9. A* search example
10. Optimality of A*
   - Consistency (monotonicity)
     - for any node n and any successor n' of n reached by action a, the general triangle inequality among n, n', and the goal gives: h(n) <= c(n, a, n') + h(n') (checked empirically in the sketch below)
     - a consistent heuristic is also admissible
   - A* using Graph-Search is optimal if h(n) is consistent
     - the values of f(n) along any path are nondecreasing
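Consistency can be checked empirically on a finite set of states. A small sketch under the same hypothetical Problem interface as above:

```python
def is_consistent(problem, h, states):
    """Verify h(n) <= c(n, a, n') + h(n') for every edge out of the given states."""
    for s in states:
        for action, nxt, cost in problem.successors(s):
            if h(s) > cost + h(nxt) + 1e-9:  # tolerance for floating-point noise
                return False  # triangle inequality violated on edge (s, nxt)
    return True
```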
11. Properties of A*
   - Suppose C* is the cost of the optimal solution path
     - A* expands all nodes with f(n) < C*
     - A* might expand some of the nodes with f(n) = C* on the "goal contour"
     - A* expands no nodes with f(n) > C* – they are pruned!
     - Pruning: eliminating possibilities from consideration without examining them
   - A* is optimally efficient for any given heuristic function
     - no other optimal algorithm is guaranteed to expand fewer nodes than A*
     - an algorithm might miss the optimal solution if it does not expand all nodes with f(n) < C*
   - A* is complete
   - Time complexity
     - exponential: the number of nodes within the goal contour grows exponentially
   - Space complexity
     - keeps all generated nodes in memory
     - A* usually runs out of space long before it runs out of time
12. Memory-bounded heuristic search
   - Iterative-deepening A* (IDA*)
     - uses the f-value (g + h) as the cutoff instead of the depth (see the sketch below)
   - Recursive best-first search (RBFS)
     - replaces the f-value of each node along the path with the best f-value of its children
     - remembers the f-value of the best leaf in the "forgotten" subtree so it can re-expand the subtree later if necessary
     - is more efficient than IDA* but still generates excessive nodes
     - changes its mind: it goes back to pick up the second-best path when the f-value of the current best path increases as the path is extended
     - optimal if h(n) is admissible
     - space complexity is O(bd)
     - time complexity depends on the accuracy of h(n) and on how often the current best path changes
   - Both IDA* and RBFS have exponential time complexity
     - they cannot check for repeated states other than those on the current path when searching graphs – they should have used more memory (to store the nodes visited)!
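A compact IDA* sketch may help: it is depth-first iterative deepening with the f-value as the cutoff, raising the bound to the smallest f that exceeded it on the previous pass. Same hypothetical Problem interface as before, and, as the slide notes, no repeated-state checking beyond the current path.

```python
def ida_star(problem, h):
    def search(state, g, bound, path):
        f = g + h(state)
        if f > bound:
            return f, None                  # report the f-value that broke the cutoff
        if problem.is_goal(state):
            return f, path
        smallest_overrun = float('inf')
        for action, nxt, cost in problem.successors(state):
            t, solution = search(nxt, g + cost, bound, path + [action])
            if solution is not None:
                return t, solution
            smallest_overrun = min(smallest_overrun, t)
        return smallest_overrun, None

    bound = h(problem.initial_state)        # first cutoff: f of the root
    while True:
        bound, solution = search(problem.initial_state, 0, bound, [])
        if solution is not None:
            return solution
        if bound == float('inf'):
            return None                     # search space exhausted, no solution
```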
13. h_SLD(n): straight-line-distance heuristic (table of distances to the goal)
14. RBFS example
15. Memory-bounded heuristic search (cont'd)
   - SMA* – Simplified MA* (memory-bounded A*)
     - expands the best leaf node until memory is full
     - then drops the worst leaf node – the one with the highest f-value
     - regenerates a forgotten subtree only when all other paths have been shown to look worse than the path it has forgotten
     - complete and optimal if any solution is reachable within the available memory
     - might be the best general-purpose algorithm for finding optimal solutions
   - If there is no way to balance the trade-off between time and memory, drop the optimality requirement!
16. (Admissible) heuristic functions for the 8-puzzle
   - h1 = the number of misplaced tiles; here 7 tiles are out of position, so h1 = 7
   - h2 = the total Manhattan (city-block) distance; here h2 = 4+0+3+3+1+0+2+1 = 14
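Both heuristics are a few lines of Python. A sketch, assuming a state is a 9-tuple in row-major order with 0 standing for the blank:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for tile, target in zip(state, goal) if tile != 0 and tile != target)

def h2(state, goal):
    """Total Manhattan (city-block) distance of every tile from its goal square."""
    goal_pos = {tile: divmod(i, 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile != 0:
            row, col = divmod(i, 3)
            goal_row, goal_col = goal_pos[tile]
            total += abs(row - goal_row) + abs(col - goal_col)
    return total
```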
17. Effect of heuristic accuracy
   - Effective branching factor b*
     - if the total number of nodes generated by A* is N and the solution depth is d, then b* is the branching factor that a uniform tree of depth d containing N + 1 nodes would have: N + 1 = 1 + b* + (b*)^2 + ... + (b*)^d
     - a well-designed heuristic has a b* close to 1
     - h2 is better than h1 based on b*
   - Domination
     - h2 dominates h1 if h2(n) >= h1(n) for every node n
     - A* using h2 will never expand more nodes than A* using h1
     - every node with h(n) < C* - g(n) will be expanded
     - the larger the heuristic value, the better, as long as it does not overestimate!
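b* has no closed form, but since 1 + b + ... + b^d is increasing in b it can be found by bisection. A small numerical sketch, not from the slides:

```python
def effective_branching_factor(N, d, tol=1e-6):
    """Solve N + 1 = 1 + b* + (b*)**2 + ... + (b*)**d for b*."""
    def tree_size(b):                       # nodes in a uniform tree of depth d
        return d + 1 if abs(b - 1.0) < 1e-12 else (b**(d + 1) - 1) / (b - 1)
    lo, hi = 1.0, float(N + 1)              # tree_size is increasing on [1, N+1]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tree_size(mid) < N + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g., if A* generates N = 52 nodes on a depth-5 solution, b* is about 1.92
```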
18. Inventing admissible heuristic functions
   - h1 and h2 are solutions to relaxed (simplified) versions of the puzzle
     - if the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1 gives the length of the shortest solution
     - if the rules are relaxed so that a tile can move to any adjacent square, then h2 gives the length of the shortest solution
   - Relaxed problem: a problem with fewer restrictions on the actions
     - admissible heuristics for the original problem can be derived from the optimal (exact) solution cost of a relaxed problem
     - key point: the optimal solution cost of a relaxed problem is no greater than the optimal solution cost of the original problem
     - Which should we choose if none of h1, ..., hm dominates the others? We can have the best of all worlds, i.e., use whichever function is most accurate on the current node (see the sketch below)
   - Subproblem *
     - admissible heuristics for the original problem can also be derived from the solution cost of a subproblem
   - Learning from experience *
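The "best of all worlds" composite is one line: the pointwise maximum of admissible heuristics is itself admissible (each component never overestimates) and dominates every component.

```python
def h_max(*heuristics):
    """Combine admissible heuristics into one that is most accurate on each node."""
    return lambda state: max(h(state) for h in heuristics)

# e.g., h = h_max(lambda s: h1(s, goal), lambda s: h2(s, goal))
```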
19. Local search algorithms and optimization
   - Systematic search algorithms
     - designed to find a (possibly given) goal and the path to that goal
   - Local search algorithms
     - the path to the goal is irrelevant, e.g., in the n-queens problem
     - state space = set of "complete" configurations
     - keep a single "current" state and try to improve it, e.g., by moving to one of its neighbors
     - key advantages:
       - use very little (constant) memory
       - find reasonable solutions in large or infinite (continuous) state spaces
   - (Pure) optimization problems:
     - find the best state (optimal configuration) according to an objective function, e.g., reproductive fitness in Darwinian evolution – no goal test and no path cost
20. Local search example
21. Local search – the state-space landscape
   - elevation = the value of the objective function (to maximize) or of the heuristic cost function (to minimize, seeking the global minimum)
   - a complete local search algorithm finds a solution if one exists
   - an optimal local search algorithm finds a global minimum or maximum
22. Hill-climbing search
   - moves in the direction of increasing value until it reaches a "peak" (see the sketch below)
   - the current-node data structure records only the state and its objective-function value
   - neither remembers the history nor looks beyond the immediate neighbors
   - like climbing Mount Everest in thick fog with amnesia
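A minimal steepest-ascent sketch, assuming hypothetical helpers neighbors(state) (an iterable of successor states) and value(state) (the objective to maximize):

```python
def hill_climbing(state, neighbors, value):
    """Climb until no neighbor improves on the current state (a peak, possibly local)."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state  # no uphill move; this basic version takes no sideways moves
        state = best
```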
23. Hill-climbing search – example
   - complete-state formulation for 8-queens
     - the successor function returns all possible states generated by moving a single queen to another square in the same column (8 x 7 = 56 successors for each state)
     - the heuristic cost function h is the number of pairs of queens that are attacking each other
   - the best moves reduce h = 17 to h = 12; the search can end in a local minimum with h = 1
24. Hill-climbing search – greedy local search
   - Hill climbing, the greedy local search, often gets stuck
     - Local maxima: a peak that is higher than each of its neighboring states but lower than the global maximum
     - Ridges: a sequence of local maxima that is difficult to navigate
     - Plateaux: a flat area of the state-space landscape
       - a flat local maximum: no uphill exit exists
       - a shoulder: progress is still possible
   - solves only 14% of 8-queens instances, but it is fast (on average 4 steps when it succeeds (S) and 3 when it fails (F))
25. Hill-climbing search – improvements
   - Allow sideways moves, hoping that the plateau is a shoulder
     - could get stuck in an infinite loop at a flat local maximum, so limit the number of consecutive sideways moves
     - solves 94% of 8-queens instances, but slowly (on average 21 steps when it succeeds and 64 when it fails)
   - Variations
     - stochastic hill climbing: chooses among uphill moves at random; the probability of selection depends on the steepness
     - first-choice hill climbing: randomly generates successors until one better than the current state is found
   - All the hill-climbing algorithms discussed so far are incomplete
     - they fail to find a goal when one exists because they get stuck on local maxima
   - Random-restart hill climbing (see the sketch below)
     - conducts a series of hill-climbing searches from randomly generated initial states
   - Sometimes we have to give up global optimality
     - e.g., when the landscape consists of a large number of porcupines on a flat floor
     - NP-hard problems
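Random-restart is a thin wrapper around the basic climber sketched above; random_state() and is_goal(state) are assumed helpers, not from the slides.

```python
def random_restart_hill_climbing(random_state, neighbors, value, is_goal):
    """Repeat hill climbing from fresh random states until a goal-quality state appears."""
    while True:
        result = hill_climbing(random_state(), neighbors, value)
        if is_goal(result):
            return result
```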
26. Simulated annealing search
   - combines hill climbing (efficiency) with a random walk (completeness)
   - annealing: hardening metals by heating them to a high temperature and then cooling them gradually
   - analogy: getting a ping-pong ball into the deepest crevice of a bumpy surface
     - shake the surface to get the ball out of local minima
     - but not so hard as to dislodge it from the global minimum
   - simulated annealing:
     - start by shaking hard (at a high temperature) and then gradually reduce the intensity of the shaking (lower the temperature)
     - escape local minima by allowing some "bad" moves
     - but gradually reduce their size and frequency
27. Simulated annealing search – implementation
   - always accept good moves
   - accept a bad move with probability e^(dE/T), where dE < 0 is the change in value; this probability
     - decreases exponentially with the "badness" |dE| of the move
     - decreases as the "temperature" T decreases
   - finds a global optimum with probability approaching 1 if the schedule lowers T slowly enough (see the sketch below)
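A sketch of the acceptance rule described here: good moves are always taken, bad moves with probability e^(dE/T). schedule(t) is an assumed cooling schedule mapping time to temperature.

```python
import math
import random

def simulated_annealing(state, neighbors, value, schedule, max_steps=10**6):
    for t in range(1, max_steps):
        T = schedule(t)
        if T <= 1e-12:                       # temperature has effectively reached zero
            break
        nxt = random.choice(list(neighbors(state)))
        dE = value(nxt) - value(state)
        if dE > 0 or random.random() < math.exp(dE / T):
            state = nxt                      # accept: always if good, else with prob e^(dE/T)
    return state
```

A geometric schedule such as lambda t: 100 * 0.95**t is a common practical choice, though the "probability approaching 1" guarantee requires a much slower (logarithmic) cooling.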
28. Local beam search
   - Local beam search: keeps track of k states rather than just one (see the sketch below)
     - generates all the successors of all k states
     - selects the k best successors from the combined list and repeats
     - quickly abandons unfruitful searches and moves its resources to where the most progress is being made – "Come over here, the grass is greener!"
     - can suffer from a lack of diversity among the k states
   - Stochastic beam search: chooses k successors at random, with the probability of choosing a given successor being an increasing function of its value
   - analogous to natural selection: the successors (offspring) of a state (organism) populate the next generation according to its value (fitness)
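A sketch covering both variants, with a stochastic flag for the random, value-weighted selection; neighbors and value are the same assumed helpers as above.

```python
import random

def local_beam_search(states, neighbors, value, k, steps=1000, stochastic=False):
    for _ in range(steps):
        pool = [n for s in states for n in neighbors(s)]  # successors of all k states
        if not pool:
            break
        if stochastic:
            weights = [max(value(s), 1e-9) for s in pool]  # assumes values are positive
            states = random.choices(pool, weights=weights, k=k)
        else:
            states = sorted(pool, key=value, reverse=True)[:k]  # keep the k best
    return max(states, key=value)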
29. Genetic algorithms
   - Genetic algorithms (GA): successor states are generated by combining two parent states
     - population: a set of k randomly generated states
     - each state, called an individual, is represented as a string over a finite alphabet, e.g., a string of 0s and 1s; for 8-queens: 24 bits, or 8 digits giving the queens' positions
     - fitness (evaluation) function: returns higher values for better states, e.g., the number of nonattacking pairs of queens
     - pairs are randomly chosen for reproduction with probability proportional to fitness score, which avoids converging on similar individuals too early
30. Genetic algorithms (cont'd)
   - schema: a substring in which some of the positions can be left unspecified
     - instances: strings that match the schema
     - GA works best when schemas correspond to meaningful components of a solution
   - a crossover point is randomly chosen from the positions in the string (see the sketch below)
     - crossover takes larger steps in the state space early on and smaller steps later, as the population converges
   - each location is subject to random mutation with a small independent probability
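A sketch of the whole loop, with fitness-proportional selection, single-point crossover, and per-position mutation as described on these two slides. Individuals are strings over alphabet; for 8-queens they could be 8 digits giving each queen's row.

```python
import random

def genetic_algorithm(population, fitness, alphabet, generations=1000, p_mutate=0.03):
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]     # assumes fitness > 0
        next_generation = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)  # pick parents
            c = random.randrange(1, len(x))                # single crossover point
            child = x[:c] + y[c:]
            child = ''.join(gene if random.random() > p_mutate
                            else random.choice(alphabet)   # random mutation
                            for gene in child)
            next_generation.append(child)
        population = next_generation
    return max(population, key=fitness)
```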