Heuristics slides 321
 

Presentation Transcript

    • Heuristics
      Some further elaborations of the art of heuristics, and examples.
    • Goodness of heuristics
      If a heuristic is perfect, search work is proportional to the solution length:
      S = O(b*d), where b is the average branching factor and d the depth of the solution.
      If h1 and h2 are two heuristics and h1 < h2 everywhere, then A*(h2) will expand
      no more nodes than A*(h1).
      If a heuristic never overestimates by more than N% of the least cost, then the
      found solution is no more than N% over the optimal solution.
      h() = 0 is an admissible trivial heuristic, the worst of all.
      In theory we could always make a perfect heuristic by performing a full
      breadth-first search from each node, but that would be pointless.
    • Example of good heuristics for the 8-puzzle
      The transparency shows an A* search on the 8-puzzle with the evaluation
      f(n) = g(n) + h(n), where h(n) = # misplaced tiles.
    • Graph Search

      %% Original version
      function GRAPH-SEARCH(problem, fringe) returns a solution, or failure
        closed <- an empty set
        fringe <- INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
        loop do
          if EMPTY?(fringe) then return failure
          node <- REMOVE-FIRST(fringe)
          if GOAL-TEST[problem](STATE[node]) then return SOLUTION(node)
          if STATE[node] is not in closed then
            add STATE[node] to closed
            fringe <- INSERT-ALL(EXPAND(node, problem), fringe)
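      The pseudocode maps quite directly onto Java. Below is a minimal, hedged sketch
      (not the course's Astar.java): a FIFO fringe gives breadth-first behaviour, and a
      HashSet plays the role of the closed set. The tiny example graph and the class
      name are invented for illustration.

      import java.util.*;

      // Minimal sketch of GRAPH-SEARCH with a FIFO fringe (breadth-first flavour).
      // The graph, start and goal names are made up for illustration.
      public class GraphSearchSketch {
          static Map<String, List<String>> graph = Map.of(
              "A", List.of("B", "C"),
              "B", List.of("D"),
              "C", List.of("D"),
              "D", List.of());

          static List<String> graphSearch(String start, String goal) {
              Set<String> closed = new HashSet<>();              // closed <- an empty set
              Deque<List<String>> fringe = new ArrayDeque<>();   // fringe holds whole paths
              fringe.add(List.of(start));                        // INSERT(MAKE-NODE(initial))
              while (!fringe.isEmpty()) {
                  List<String> path = fringe.removeFirst();      // REMOVE-FIRST(fringe)
                  String state = path.get(path.size() - 1);
                  if (state.equals(goal)) return path;           // GOAL-TEST
                  if (closed.add(state)) {                       // not already expanded
                      for (String succ : graph.get(state)) {     // EXPAND + INSERT-ALL
                          List<String> next = new ArrayList<>(path);
                          next.add(succ);
                          fringe.addLast(next);
                      }
                  }
              }
              return null;                                       // failure
          }

          public static void main(String[] args) {
              System.out.println(graphSearch("A", "D"));         // e.g. [A, B, D]
          }
      }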
    • Heuristic Best First Search

      %% f[problem] is the heuristic selection function of the problem
      function BEST-FIRST-SEARCH([problem]) returns a solution, or failure
        OPEN <- an empty set                                          // P1
        CLOSED <- an empty set                                        // P2
        OPEN <- INSERT(MAKE-NODE(INITIAL-STATE[problem]), OPEN)       // P3
        repeat
          if EMPTY?(OPEN) then return failure                         // P4
          best <- the lowest f-valued node on OPEN                    // P5
          remove best from OPEN                                       // P6
          if GOAL-TEST[problem](STATE[best]) then return SOLUTION(best)   // P7
          for all successors M of best
            if STATE[M] is not in CLOSED then                         // P8
              OPEN <- INSERT(M, OPEN)                                 // P9
          add STATE[best] to CLOSED                                   // P10
    • Heuristic Best First Search (A*) -- Java Pseudocode

      // Instantiating OPEN, CLOSED
      OPEN = new Vector<Node>();                       // P1
      CLOSED = new Vector<Node>();                     // P2
      // Placing initial node on OPEN
      OPEN.add(0, initialnode);                        // P3

      // After the initial phase, we enter the main loop of the A* algorithm
      while (true) {
        // Check if OPEN is empty
        if (OPEN.size() == 0) {                        // P4
          System.out.println("Failure :");
          return;
        }
        // Locate next node on OPEN with lowest heuristic value
        lowIndex = 0;                                  // P5
        low = OPEN.elementAt(0).f;
        for (int i = 0; i < OPEN.size(); i++) {
          number = OPEN.elementAt(i).f;
          if (number < low) {
            lowIndex = i;
            low = number;
          }
        }
        // Move selected node from OPEN to n           // P6
        n = OPEN.elementAt(lowIndex);
        OPEN.removeElement(n);
        // Successful exit if n is goal node           // P7
        if (n.equals(goalnode)) return;
        // Retrieve all possible successors of n
        M = n.successors();
        // Compute f-, g- and h-value for each successor
        for (int i = 0; i < M.size(); i++) {
          Node s = M.elementAt(i);
          s.g = n.g + s.cost;
          s.h = s.estimate(goalnode);
          s.f = s.g + s.h;
        }
        // Augmenting OPEN with suitable nodes from M
        for (int i = 0; i < M.size(); i++)
          // Insert node into OPEN if not on CLOSED    // P8, P9
          if (!(on CLOSED))
            OPEN.add(0, M.elementAt(i));
        // Insert n into CLOSED
        CLOSED.add(0, n);                              // P10
      }
    • AStar Java Code
      See exercise 7: http://www.idi.ntnu.no/emner/tdt4136/PRO/Astar.java
    • Example: Mouse King Problem

      (1,5)       (5,5)
       ___________
      | | | | | |
      | | |X| | |
      | | |X| | |
      | | |X| | |
      |M| |X| |C|
       -----------
      (1,1)       (5,1)

      There is a 5x5 board. At (1,1) there is a mouse M which can move like a king on
      a chess board. The target is a cheese C at (5,1). There is a barrier XXXX at
      (3,1)-(3,4) which the mouse cannot go through, but the mouse's heuristic ignores
      this.
    • Heuristics for Mouse King

      public class MouseKingState extends State {
        public int[] value;
        public MouseKingState(int[] v) { value = v; }
        public boolean equals(State state) {…}
        public String toString() {…}
        public Vector<State> successors() {…}
        public int estimate(State goal) {
          MouseKingState goalstate = (MouseKingState) goal;
          int[] goalarray = goalstate.value;
          int dx = Math.abs(goalarray[0] - value[0]);
          int dy = Math.abs(goalarray[1] - value[1]);
          return Math.max(dx, dy);
        }
      } // End class MouseKingState
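      The elided successors() body could be completed roughly as sketched below:
      generate the eight king moves, discard those that leave the 5x5 board, and block
      the barrier squares at (3,1)..(3,4). This is a hypothetical completion written
      against the State/Vector interface shown above, not the original code.

      // Hypothetical sketch of successors() for MouseKingState (not the original code).
      // Assumes the 5x5 board with the barrier X at (3,1)..(3,4), as in the slide.
      public Vector<State> successors() {
          Vector<State> result = new Vector<State>();
          int[][] moves = {{-1,-1},{-1,0},{-1,1},{0,-1},{0,1},{1,-1},{1,0},{1,1}};
          for (int[] m : moves) {
              int x = value[0] + m[0];
              int y = value[1] + m[1];
              if (x < 1 || x > 5 || y < 1 || y > 5) continue;   // stay on the board
              if (x == 3 && y >= 1 && y <= 4) continue;          // barrier squares
              result.add(new MouseKingState(new int[]{x, y}));
          }
          return result;
      }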
    • Behaviour of Mouse King

      The mouse will expand the following nodes:
      1,1  2,1  2,2  2,3  1,2  1,3  2,4  1,4  2,5  3,5  4,4  4,3  4,2  5,1

      Solution path: node, f, g, h
      (1,1),4,0,4
      (2,2),4,1,3
      (2,3),5,2,3
      (2,4),6,3,3
      (3,5),8,4,4
      (4,4),8,5,3
      (4,3),8,6,2
      (4,2),8,7,1
      (5,1),8,8,0

      (1,5)            (5,5)      (1,5)            (5,5)
       _______________             _______________
      |  | 9|10|  |  |            |  |  | 5|  |  |
      | 8| 7|  |11|  |            |  | 4|  | 6|  |
      | 6| 4|  |12|  |            |  | 3|  | 7|  |
      | 5| 3|  |13|  |            |  | 2|  | 8|  |
      | 1| 2|  |  |14|            | 1|  |  |  | 9|
       ---------------             ---------------
      (1,1)            (5,1)      (1,1)            (5,1)
      Order of expansion          Solution path
    • Perfect Heuristics Behaviour

      If the heuristic had been perfect, the expansion of the nodes would have been
      equal to the solution path:
      1,1  2,2  2,3  2,4  3,5  4,4  4,3  4,2  5,1
      This means that a perfect heuristic "encodes" all the relevant knowledge of the
      problem space.

      Solution path: node, f, g, h
      (1,1),8,0,8
      (2,2),8,1,7
      (2,3),8,2,6
      (2,4),8,3,5
      (3,5),8,4,4
      (4,4),8,5,3
      (4,3),8,6,2
      (4,2),8,7,1
      (5,1),8,8,0

      (1,5)            (5,5)      (1,5)            (5,5)
       _______________             _______________
      |  |  | 5|  |  |            |  |  | 5|  |  |
      |  | 4|  | 6|  |            |  | 4|  | 6|  |
      |  | 3|  | 7|  |            |  | 3|  | 7|  |
      |  | 2|  | 8|  |            |  | 2|  | 8|  |
      | 1|  |  |  | 9|            | 1|  |  |  | 9|
       ---------------             ---------------
      (1,1)            (5,1)      (1,1)            (5,1)
      Order of expansion          Solution path
    • Monotone (consistent) heuristics

      A heuristic is monotone if the f-value is non-decreasing along any path from
      start to goal. This is fulfilled if, for every pair of nodes n -> n',
        f(n) <= f(n') = g(n') + h(n') = g(n) + cost(n,n') + h(n')
        f(n) = g(n) + h(n)
      which gives the triangle inequality
        h(n) <= cost(n,n') + h(n')
      [Figure: triangle with vertices n, n' and the goal; sides cost(n,n'), h(n')
      and h(n).]
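      Consistency can be checked mechanically on a small explicit graph by testing the
      triangle inequality on every edge. A minimal sketch, with invented edge costs and
      h-values:

      import java.util.*;

      // Sketch: verify monotonicity (consistency) on an explicit graph.
      // Edge costs and h-values are made-up illustration data.
      public class ConsistencyCheck {
          record Edge(String from, String to, double cost) {}

          public static void main(String[] args) {
              List<Edge> edges = List.of(
                  new Edge("A", "B", 1.0), new Edge("B", "G", 2.0), new Edge("A", "G", 4.0));
              Map<String, Double> h = Map.of("A", 2.5, "B", 1.5, "G", 0.0);

              for (Edge e : edges) {
                  // consistent iff h(n) <= cost(n,n') + h(n') on every edge
                  if (h.get(e.from()) > e.cost() + h.get(e.to())) {
                      System.out.println("violated on " + e.from() + " -> " + e.to());
                  }
              }
              System.out.println("check done");
          }
      }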
    • Properties of monotone heuristics

      1) All monotone heuristics are admissible.
      2) A monotone heuristic is admissible at all nodes (if h(G) = 0).
      3) If a node is expanded using a monotone heuristic, A* has found the optimal
         route to that node.
      4) Therefore, there is no need to consider a node again once it has been found.
      5) If this is assumed and the heuristic is monotone, the algorithm is still
         admissible.
      6) If the monotone assumption is not true, we risk a non-optimal solution.
      7) However, most heuristics are monotone.
    • Some more notes on heuristics

      If h1(n) and h2(n) are admissible heuristics, then the following are also
      admissible heuristics:
      • max(h1(n), h2(n))
      • α*h1(n) + β*h2(n), where α + β = 1
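      As a small illustration, both combinations are one-liners in Java; the two
      component heuristics below are placeholder lambdas standing in for any admissible
      h1 and h2.

      import java.util.function.ToDoubleFunction;

      // Sketch: combining two admissible heuristics h1 and h2 (placeholder lambdas).
      public class CombineHeuristics {
          public static void main(String[] args) {
              // States are encoded as offsets from the goal, purely for illustration.
              ToDoubleFunction<int[]> h1 = s -> Math.abs(s[0]) + Math.abs(s[1]);          // e.g. Manhattan
              ToDoubleFunction<int[]> h2 = s -> Math.max(Math.abs(s[0]), Math.abs(s[1])); // e.g. king moves

              double alpha = 0.5, beta = 0.5;   // alpha + beta = 1
              ToDoubleFunction<int[]> hMax = s -> Math.max(h1.applyAsDouble(s), h2.applyAsDouble(s));
              ToDoubleFunction<int[]> hMix = s -> alpha * h1.applyAsDouble(s) + beta * h2.applyAsDouble(s);

              int[] n = {3, 1};
              System.out.println(hMax.applyAsDouble(n));   // 4.0  (max dominates the mix)
              System.out.println(hMix.applyAsDouble(n));   // 3.5
          }
      }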
    • Monotone heuristic repair

      Suppose a heuristic h is admissible but not consistent, i.e.
        h(n) > c(n,n') + h(n'), which means f(n) > f(n')   (f is not monotone).
      In that case, f'(n') can be set to f(n) as a better heuristic (higher, but still
      an underestimate), i.e. use
        h'(n') = max(h(n'), h(n) - c(n,n'))
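      In code, the repair is typically applied when successors are generated (often
      called the pathmax trick). A minimal sketch, with a small Node stand-in for the
      node type used in the A* pseudocode above:

      // Sketch of the repair (pathmax), applied when successors are generated.
      // Node here is a minimal stand-in for the node type of the A* slide.
      class PathmaxRepair {
          static class Node { double g, h, f, cost; }

          // Raise the successor's h so that f never decreases along the path:
          // h'(n') = max(h(n'), h(n) - c(n,n'))
          static void repairSuccessor(Node n, Node s) {
              s.h = Math.max(s.h, n.h - s.cost);
              s.f = s.g + s.h;
          }
      }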
    • An example of a heuristic

      Consider a knight ("horse") on a chess board. It can move 2 squares in one
      direction and 1 to either side. The task is to get from one square to another in
      the fewest possible steps (e.g. A1 to H8).

      A proposed heuristic could be ManhattanDistance/2.
      Is it admissible? Is it monotone?
      (Actually, it is not straightforward to find a heuristic that is both admissible
      and not monotone.)

      8 | | | | | | | |*|
      7 | | | | | | | | |
      6 | | | | | | | | |
      5 | | | | | | | | |
      4 | | | | | | | | |
      3 | | | | | | | | |
      2 | | | | | | | | |
      1 |*| | | | | | | |
         A B C D E F G H
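      One way to answer the two questions empirically is to compute exact knight
      distances by breadth-first search and compare them with ManhattanDistance/2. A
      hedged sketch (0-indexed 8x8 board, start square A1); note that squares one
      knight move away already give Manhattan/2 = 1.5 against a true cost of 1, so the
      heuristic as stated overestimates unless it is rounded down.

      import java.util.*;

      // Sketch: compare the proposed knight heuristic (Manhattan distance / 2)
      // against exact knight distances found by breadth-first search.
      public class KnightHeuristicCheck {
          static final int[][] JUMPS = {{1,2},{2,1},{2,-1},{1,-2},{-1,-2},{-2,-1},{-2,1},{-1,2}};

          // Exact number of knight moves from (sx,sy) to every square, by BFS.
          static int[][] knightDistances(int sx, int sy) {
              int[][] d = new int[8][8];
              for (int[] row : d) Arrays.fill(row, -1);
              Deque<int[]> queue = new ArrayDeque<>();
              d[sx][sy] = 0;
              queue.add(new int[]{sx, sy});
              while (!queue.isEmpty()) {
                  int[] p = queue.remove();
                  for (int[] j : JUMPS) {
                      int x = p[0] + j[0], y = p[1] + j[1];
                      if (x >= 0 && x < 8 && y >= 0 && y < 8 && d[x][y] < 0) {
                          d[x][y] = d[p[0]][p[1]] + 1;
                          queue.add(new int[]{x, y});
                      }
                  }
              }
              return d;
          }

          public static void main(String[] args) {
              int[][] exact = knightDistances(0, 0);            // start at A1
              for (int x = 0; x < 8; x++)
                  for (int y = 0; y < 8; y++) {
                      double h = (x + y) / 2.0;                 // Manhattan/2 from A1
                      if (h > exact[x][y])                      // overestimate => not admissible
                          System.out.println("overestimates at " + x + "," + y
                                  + ": h=" + h + " exact=" + exact[x][y]);
                  }
          }
      }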
    • Relaxed problems

      Many heuristics can be found by using a relaxed (easier, simpler) model of the
      problem. By definition, heuristics derived from relaxed models are underestimates
      of the cost of the original problem.
      For example, straight-line distance presumes that we can move in straight lines.
      For the 8-puzzle, the heuristic
        W(n) = # misplaced tiles
      would be exact if we could move tiles freely.
      The less relaxed (and therefore better) heuristic
        P(n) = distance from home (Manhattan distance)
      would allow a tile to be moved to an adjacent square even though there may
      already be a tile there.
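      Both relaxed-model heuristics are easy to compute from a flat board
      representation. A minimal sketch, assuming boards encoded as length-9 arrays read
      row by row, with 0 for the blank:

      // Sketch: the two relaxed 8-puzzle heuristics on length-9 array boards.
      public class EightPuzzleHeuristics {
          // W(n): number of misplaced tiles (blank not counted)
          static int misplaced(int[] state, int[] goal) {
              int count = 0;
              for (int i = 0; i < 9; i++)
                  if (state[i] != 0 && state[i] != goal[i]) count++;
              return count;
          }

          // P(n): sum of Manhattan distances of each tile from its home square
          static int manhattan(int[] state, int[] goal) {
              int sum = 0;
              for (int i = 0; i < 9; i++) {
                  if (state[i] == 0) continue;
                  for (int j = 0; j < 9; j++)
                      if (goal[j] == state[i])
                          sum += Math.abs(i / 3 - j / 3) + Math.abs(i % 3 - j % 3);
              }
              return sum;
          }

          public static void main(String[] args) {
              int[] goal  = {1, 2, 3, 4, 5, 6, 7, 8, 0};
              int[] state = {1, 2, 3, 4, 5, 6, 0, 7, 8};   // two tiles shifted left
              System.out.println(misplaced(state, goal));   // 2
              System.out.println(manhattan(state, goal));   // 2
          }
      }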
    • Generalized A*

      f(n) = α*g(n) + β*h(n)

        α = β = 1        A*
        β = 0            Uniform cost
        α = 0            Greedy search
        α < 0, β = 0     Depth first
        α > β            Conservative (still admissible)
        α < β            Radical (not admissible)
        α, β depending on g, h   Dynamic
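      Read concretely, the table just changes the priority function of best-first
      search. A minimal sketch, with a stand-in SearchNode record in place of the Node
      class used earlier:

      import java.util.Comparator;
      import java.util.PriorityQueue;

      // Sketch: the generalized evaluation f(n) = alpha*g(n) + beta*h(n).
      public class GeneralizedF {
          record SearchNode(String name, double g, double h) {}

          static Comparator<SearchNode> ordering(double alpha, double beta) {
              return Comparator.comparingDouble(n -> alpha * n.g() + beta * n.h());
          }

          public static void main(String[] args) {
              // alpha = beta = 1 -> A*;  beta = 0 -> uniform cost;  alpha = 0 -> greedy
              PriorityQueue<SearchNode> open = new PriorityQueue<>(ordering(1.0, 1.0));
              open.add(new SearchNode("a", 2.0, 3.0));   // f = 5.0
              open.add(new SearchNode("b", 4.0, 0.5));   // f = 4.5
              System.out.println(open.peek().name());    // "b" comes out first under A* weights
          }
      }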
    • Learning heuristics from experience

      Where do heuristics come from? Heuristics can be learned as a computed (linear?)
      combination of features of the state.

      Example: 8-puzzle
      Features:
        x1(n): number of misplaced tiles
        x2(n): number of adjacent tiles that are also adjacent in the goal state
      Procedure: make a run of searches from 100 random start states, and let h(ni) be
      the found minimal cost.

        n1     h(n1)     x1(n1)     x2(n1)
        ...    ...       ...        ...
        n100   h(n100)   x1(n100)   x2(n100)

      From these, use regression to estimate h(n) = c1*x1(n) + c2*x2(n).
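      For two features, the regression step reduces to a 2x2 least-squares problem
      solvable with the normal equations. A minimal sketch; the three training rows are
      placeholder numbers, not real 8-puzzle statistics.

      // Sketch: fit h(n) = c1*x1(n) + c2*x2(n) by least squares (normal equations).
      // The training rows are placeholder values, not real search statistics.
      public class HeuristicRegression {
          public static void main(String[] args) {
              double[]   h = {14, 9, 5};               // observed minimal costs h(ni)
              double[][] x = {{7, 8}, {5, 4}, {3, 2}}; // features x1(ni), x2(ni)

              // Accumulate X^T X (2x2) and X^T h (2x1)
              double a11 = 0, a12 = 0, a22 = 0, b1 = 0, b2 = 0;
              for (int i = 0; i < h.length; i++) {
                  a11 += x[i][0] * x[i][0];
                  a12 += x[i][0] * x[i][1];
                  a22 += x[i][1] * x[i][1];
                  b1  += x[i][0] * h[i];
                  b2  += x[i][1] * h[i];
              }
              // Solve the 2x2 system by Cramer's rule
              double det = a11 * a22 - a12 * a12;
              double c1 = (b1 * a22 - b2 * a12) / det;
              double c2 = (a11 * b2 - a12 * b1) / det;
              System.out.println("h(n) ~ " + c1 + "*x1(n) + " + c2 + "*x2(n)");
          }
      }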
    • Learning heuristics from experience (II)

      Suppose the problem is harder than the heuristic h1 indicates, but that the extra
      hardness is assumed to be uniform over the state space. Then an improved
      heuristic can be estimated as h2(x) = α*h1(x).

      |S| | | | | | | |     Problem: move a piece from S to G using the Chess-King
      | | | | | | | | |     heuristic h1(x) = # horizontal/vertical/diagonal moves.
      | | | | | | | | |     h1(S) = 7, h1(n) = 4.
      | | | |n| | | | |
      | | | | | | | | |     Assume the problem is actually harder (in effect Manhattan
      | | | | | | | | |     distance h2(x), but we don't know that), so that g(n) = 6.
      | | | | | | | | |
      | | | | | | | |G|     We then estimate α = g(n)/(h1(S) - h1(n)), giving
                            h2(n) = g(n)/(h1(S) - h1(n)) * h1(n) = 6/(7-4) * 4 = 8  (correct)
    • Learning heuristics from experience (III)

      Suppose the problem is easier than the heuristic h1 indicates, but that the
      easiness is assumed to be uniform over the state space. Then an improved
      heuristic can be estimated as h2(x) = α*h1(x).

      |S| | | | | | | |     Problem: move a piece from S to G using the Manhattan
      | | | | | | | | |     heuristic h1(x) = # horizontal/vertical moves.
      | | | | | | | | |     h1(S) = 14, h1(n) = 8.
      | | | |n| | | | |
      | | | | | | | | |     Assume the problem is actually easier (in effect Chess-King
      | | | | | | | | |     distance h2(x), but we don't know that), so that g(n) = 3.
      | | | | | | | | |
      | | | | | | | |G|     We then estimate α = g(n)/(h1(S) - h1(n)), giving
                            h2(n) = g(n)/(h1(S) - h1(n)) * h1(n) = 3/(14-8) * 8 = 4  (correct)
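      The same scaling formula covers both slides. Written out as code and plugged with
      the numbers from the two boards:

      // Sketch: uniform rescaling of a heuristic, alpha = g(n) / (h1(S) - h1(n)),
      // using the numbers from the two boards above.
      public class HeuristicScaling {
          static double rescaled(double h1S, double h1n, double gn) {
              double alpha = gn / (h1S - h1n);   // estimated uniform hardness factor
              return alpha * h1n;                // improved estimate h2(n)
          }

          public static void main(String[] args) {
              System.out.println(rescaled(7, 4, 6));    // harder problem (slide II): 8.0
              System.out.println(rescaled(14, 8, 3));   // easier problem (slide III): 4.0
          }
      }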
    • Practical example (Bus world scenario)

      Find an optimal route from one place to another by public transport.
      Nodes: bus passing events (a bus route passes a station at a time).
      Actions:
        - enter bus
        - leave bus
        - wait
    • Bus scenario search space

      [Figure: the search space is space(2) x time(1); a diagram with a time axis and a
      space axis shows Bus 3 and Bus 5 as trajectories, with "wait" edges between them.]
    • Heuristics for bus route planner -- which route is best?

      [Figure: a route from T0 = A to T3 = Z with driving segments K1, K2, K3 and
      transfer waits T1, T2; an equivalent transfer is marked.]

      Legend:
        N   # bus transfers
        A   wait time before 1st departure
        Z   wait time before arrival
        T   sum of transfer waiting time
        K   sum of driving time
    • Planner discussion

      1. (T1 + T2) = T is critical if it rains.
      2. If Z is to be minimised, we must search backwards.
      3. There are many equivalent transfers (same T and K).
      4. In practice, A* is problematic here.
      5. The waiting time A is maybe unimportant.

      Solution: relaxation
        a) Find the routes independent of time.
        b) Eliminate equivalent transfers.
        c) For each route, find the best route plan.
        d) Keep the best of these solutions.