Module 3
PROBLEM
SOLVING
PART II
SEARCH STRATEGIES
Shiwani Gupta 3
Search strategies
• A search strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
– completeness: does it always find a solution if one exists?
– time complexity: time taken to find solution
– space complexity: maximum number of nodes in memory
– Optimality/admissibility: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
– b: maximum branching factor (state expanded to yield new states)
of the search tree
– d: depth of the least-cost solution
– m: maximum depth of the state space (may be ∞)
Basic Search Concepts
• Search tree
• Search node
• Node expansion
• Search strategy: At each stage it determines
which node to expand
Water Jug Problem
Node Data Structure
• STATE
• PARENT
• ACTION
• COST
• DEPTH
Fringe
• Set of search nodes that have not been
expanded yet
• Implemented as a queue FRINGE
– INSERT(node,FRINGE)
– REMOVE(FRINGE)
• The ordering of the nodes in FRINGE defines
the search strategy
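A minimal sketch of the two FRINGE operations (the names and the FIFO choice here are illustrative; a different ordering discipline gives a different strategy):

```python
from collections import deque

def insert(node, fringe):
    fringe.append(node)        # INSERT(node, FRINGE): add at the rear

def remove(fringe):
    return fringe.popleft()    # REMOVE(FRINGE): take from the front (FIFO)

fringe = deque()
insert("A", fringe)
insert("B", fringe)
first = remove(fringe)         # FIFO ordering: "A" comes out first
```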
Chapter 3.2
UNINFORMED SEARCH METHODS
Uninformed search strategies
(BLIND SEARCH)
Blind (or uninformed) strategies do not exploit any of the information
contained in a state
• Breadth-first search
• Depth-first search
• Depth-limited search
• Uniform Cost
• Depth First Iterative deepening search
• Bidirectional Search
Breadth-First Strategy
New nodes are inserted at the end of FRINGE (FIFO queue).
• The root node is expanded first, then all the nodes generated by the
root node are expanded next, and then their successors, and so on.
• In general, all the nodes at depth d in the search tree are expanded
before the nodes at depth d + 1.
Example trace on a binary tree with root 1, children 2 and 3, and
grandchildren 4-7:
FRINGE = (1) → (2, 3) → (3, 4, 5) → (4, 5, 6, 7)
Breadth-first search
1. Create a single member FIFO queue comprising the root node.
2. If the 1st member of the queue is the goal node, goto step 5.
3. i) If the 1st member of the queue is not the goal, then remove it from
the queue and add it to the list of visited nodes.
ii) Consider its child nodes, if any, and add them to the rear end of
the queue.
4. If the queue is empty goto step 6, else goto step 2.
5. Print success and stop.
6. Print fail and stop.
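The steps above can be sketched in Python (the example graph is illustrative, not from the slides):

```python
from collections import deque

def bfs(start, goal, successors):
    """FIFO queue plus a visited list, following steps 1-6 above."""
    queue = deque([start])
    visited = set()
    while queue:
        node = queue.popleft()                   # 1st member of the queue
        if node == goal:                         # step 2 -> step 5: success
            return True
        if node in visited:
            continue
        visited.add(node)                        # step 3(i)
        queue.extend(successors.get(node, []))   # step 3(ii): rear of queue
    return False                                 # step 4 -> step 6: fail

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}    # illustrative graph
found = bfs("A", "E", g)
```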
Breadth-first search
• If there is a solution, breadth-first search is guaranteed to find it
• if there are several solutions, breadth-first search will always find the
shallowest goal state first.
Properties of breadth-first search
• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
• Space? O(b^d) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step);
for a weighted tree it need not be optimal: a node nearer the root is not necessarily cheaper
Memory requirements are a bigger problem for BFS than the execution time
(e.g. for depth = 6: time ≈ 18 min, memory ≈ 111 MB;
for depth = 10: time ≈ 135 days, memory ≈ 1 TB)
Exponential-complexity search problems can be solved only for small instances
Advantages
• If there is a solution, BFS is guaranteed to find it
• If there is more than one solution, the shallowest (best) solution is found first
• BFS doesn't get trapped exploring a blind alley
• BFS should be used if b is small
Disadvantages
• Exponential time complexity
• Exponential space complexity
• Wasteful if all paths lead to the goal state at more or less the same depth
Applications
• To find Shortest Path
– Single Source
– All Pair
• To find all connected components in graph
• To find spanning tree of a graph
• Testing graph for bipartiteness
• Crawler in search engine
Depth-first search
• Expand deepest unexpanded node
• Implementation:
fringe = LIFO queue, i.e., put successors at front
Depth-first search
• Depth-first search always expands one of the nodes at the deepest
level of the tree. Only when the search hits a dead end (a nongoal
node with no expansion) does the search go back and expand nodes
at shallower levels.
Depth-first search
• Depth-first search can get stuck going down the wrong path.
• Many problems have very deep or even infinite search trees, so
depth-first search will never be able to recover from an unlucky
choice at one of the nodes near the top of the tree.
Depth-first search
• The search will always continue downward without backing up, even
when a shallow solution exists.
• Thus, on these problems depth-first search will either get stuck in an
infinite loop and never return a solution, or it may eventually find a
solution path that is longer than the optimal solution.
Depth-first search
• That means depth-first search is neither complete nor optimal.
• Because of this, depth-first search should be avoided for search trees
with large or infinite maximum depths.
Depth-first search
• It is also common to implement depth-first search with a recursive
function that calls itself on each of its children in turn.
Depth-first search
• Complete? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
→ complete in finite spaces
Depth-first search
• Time? O(b^m): terrible if the maximum depth m is much larger than the
branching factor b.
For problems that have many solutions, depth-first may actually be
faster than breadth-first, because it has a good chance of finding a
solution after exploring only a small portion of the whole space.
Depth-first search
• Space? O(bm), i.e., linear space!
Depth-first search needs to store only a single path from the root to a
leaf node, along with the remaining unexpanded sibling nodes for
each node on the path.
Depth-first search
• Optimal? No
Depth-first search
1. Create a single member LIFO queue (stack) comprising the root node.
2. If the 1st member of the queue is the goal node, goto step 5.
3. i) If the 1st member of the queue is not the goal node, then remove it
from the queue and add it to the list of visited nodes.
ii) Consider its child nodes, if any, and add them to the front of the queue.
4. If the queue is empty goto step 6, else goto step 2.
5. Print success and stop.
6. Print fail and stop.
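A matching Python sketch of these steps (the example graph is illustrative):

```python
def dfs(start, goal, successors):
    """LIFO stack, so children go to the front, following steps 1-6 above."""
    stack = [start]
    visited = set()
    while stack:
        node = stack.pop()                   # most recently added node first
        if node == goal:                     # step 2 -> step 5: success
            return True
        if node in visited:
            continue
        visited.add(node)                    # step 3(i)
        # step 3(ii): push children so the leftmost child is expanded next
        stack.extend(reversed(successors.get(node, [])))
    return False                             # step 4 -> step 6: fail

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
found = dfs("A", "D", g)
```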
Advantages
• Requires less memory, since only the nodes on the current path are
stored, along with the remaining unexpanded siblings on each path
• May find a solution without examining much of the search space,
reducing time
• Simple to implement
Disadvantages
• May get trapped in a blind alley
• Does not guarantee a minimal path
• Not complete, if the tree is unbounded
• Time taken for large and unbounded trees is exponentially large
Applications
• To find path between two vertices
i) Call DFS(G, u) with u as the start vertex.
ii) Use a stack S to keep track of the path between the start vertex and the current vertex.
iii) As soon as destination vertex z is encountered, return the path as the contents of the stack
• To check cycle in a graph
– A graph has cycle iff we see a back edge during DFS.
• To find connected components
– A directed graph is called strongly connected if there is a path from each vertex in the graph
to every other vertex.
Applications
• Topological sort
– Topological Sorting is mainly used for scheduling jobs from the given
dependencies among jobs. In computer science, applications of this type arise
in instruction scheduling, ordering of formula cell evaluation when
recomputing formula values in spreadsheets, logic synthesis, determining the
order of compilation tasks to perform in makefiles, data serialization, and
resolving symbol dependencies in linkers
• Finding solution in Mazes
– DFS can be adapted to find all solutions to a maze by only including nodes on
the current path in the visited set
• To find spanning tree and forest of a graph
Compare and contrast DFS, BFS
• For a tree, DFS requires less memory
• DFS is good if there are multiple solutions and you are concerned with just one solution
• DFS is not good if, in case of multiple solutions, the best one is required
• BFS doesn't get stuck in blind alleys
• BFS requires more memory
• BFS finds the shortest path first
• BFS is good in a large search space where the solution is near the root
• BFS is good if there are multiple solutions
Depth first example
The 4 Queens problem:
Note:
We can place exactly 1 Q in each row
Search starts at top row
MAX-DEPTH = 4
[Figure: a depth-first trace for the 4-Queens problem - successive board
states (a)-(i) and the corresponding search tree over states a-k,
backtracking from attacked placements until the solution is reached.]
A maze used to show brute force search
The tree for the maze in Figure
Breadth-first search of the tree
Depth-first search of the tree
DFS
• Not complete
• Not optimal
Depth-limited search
• avoids the pitfalls of depth-first search by imposing a cutoff on the
maximum depth of a path.
• Depth-limited search = depth-first search with depth limit l
i.e., nodes at depth l have no successors
• Three possible outcomes:
– Solution
– Failure (no solution)
– Cutoff (no solution within cutoff)
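A sketch distinguishing the three outcomes (the chain graph is illustrative):

```python
def depth_limited(node, goal, successors, limit):
    """Depth-first search with depth limit l, returning one of
    "solution", "failure" (no solution at all), or "cutoff"
    (no solution within the limit)."""
    if node == goal:
        return "solution"
    if limit == 0:
        return "cutoff"                  # limit reached; tree may continue
    cutoff_seen = False
    for child in successors.get(node, []):
        result = depth_limited(child, goal, successors, limit - 1)
        if result == "solution":
            return "solution"
        if result == "cutoff":
            cutoff_seen = True
    return "cutoff" if cutoff_seen else "failure"

g = {"A": ["B"], "B": ["C"]}             # chain A -> B -> C
r1 = depth_limited("A", "C", g, 1)       # C lies at depth 2, beyond l = 1
r2 = depth_limited("A", "C", g, 2)
```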
Properties of Depth-Limited search
• Complete? No for l < d; yes otherwise
• Time? 1 + b + b^2 + b^3 + … + b^l = O(b^l), since it uses DFS
• Space? O(bl), since it uses DFS
• Optimal? No
If we can find a better depth limit, then the search is more efficient.
Iterative deepening search, l = 0, 1, 2, 3
[Figure: the successive depth-limited searches of a binary tree for limits 0 through 3.]
Depth First Iterative deepening search
• We picked 19 as an "obvious" depth limit for the Romania problem, but in
fact, if we studied the map carefully, we would discover that any city can
be reached from any other city in at most 9 steps.
• This number, known as the diameter of the state space, gives us a better
depth limit, which leads to a more efficient depth-limited search.
• It sidesteps the issue of choosing the best depth limit by trying all
possible depth limits: first depth 0, then depth 1, then depth 2, and so on.
• In effect, iterative deepening combines the benefits of depth-first search
and breadth-first search.
• It is optimal and complete, like breadth-first search, but has only the
modest memory requirements of depth-first search.
• The order of expansion of states is similar to breadth-first, except that
some states are expanded multiple times.
Depth First Iterative deepening search
• Runs depth-limited search repeatedly, increasing the depth limit each time
• Equivalent to DFS in its space usage
• Combines DFS's space efficiency and BFS's completeness
• The preferred search method when there is a large search space and the
depth of the solution is not known
• Extra cost of revisiting shallow nodes on each iteration
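The driver loop above is short enough to sketch directly (the compact depth-limited helper and example graph are illustrative):

```python
def dls(node, goal, successors, limit):
    # compact depth-limited DFS used by the driver below
    if node == goal:
        return "solution"
    if limit == 0:
        return "cutoff"
    results = [dls(c, goal, successors, limit - 1)
               for c in successors.get(node, [])]
    if "solution" in results:
        return "solution"
    return "cutoff" if "cutoff" in results else "failure"

def iterative_deepening(start, goal, successors, max_limit=50):
    """Run depth-limited search with limits 0, 1, 2, ... as described."""
    for limit in range(max_limit + 1):
        result = dls(start, goal, successors, limit)
        if result != "cutoff":           # definite answer at this depth
            return result, limit
    return "cutoff", max_limit

g = {"A": ["B", "C"], "B": ["D"], "C": ["E"]}
outcome, depth = iterative_deepening("A", "E", g)
```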
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d
with branching factor b:
N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d
• Number of nodes generated in an iterative deepening search to
depth d with branching factor b:
N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1·b^d
• For b = 10, d = 5:
– N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
– N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
• Node-generation overhead = (123,456 − 111,111)/111,111 ≈ 11%
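The counts above can be verified with a few lines of arithmetic:

```python
# Node counts for depth-limited vs iterative deepening search, b = 10, d = 5
b, d = 10, 5
n_dls = sum(b ** i for i in range(d + 1))                 # b^0 + ... + b^d
n_ids = sum((d + 1 - i) * b ** i for i in range(d + 1))   # (d+1-i) * b^i
overhead = (n_ids - n_dls) / n_dls
```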
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d), the same order as BFS
• Space? O(bd), where d is the depth of the shallowest goal
• Optimal? Yes
The higher the branching factor, the lower the overhead of
repeatedly expanded states.
Iterative deepening is the preferred search method when there is a
large search space and the depth of the solution is not known.
Summary of algorithms
b is the branching factor
d is the depth of solution
m is the maximum depth of the search tree
l is the depth limit.
University Questions
• Compare different uninformed search strategies
• Explain BFS and DFS Algo
• Explain DFS on a graph
• Advantage, disadvantage and applications of DFS, BFS
• Solve 8 puzzle by DFS and BFS (assignment)
• Compare and contrast DFS and BFS
• Explain techniques to overcome the drawback of DFS and BFS
• Write short note on Iterative Deepening Search
Beyond Syllabus
• Uniform Cost
• Bidirectional
Chapter 3.2
INFORMED SEARCH METHODS
Heuristic (or informed) strategies exploit problem-specific
information to assess that one node is "more
promising" than another
Using heuristic search, we assign a quantitative value called
a heuristic value (h value) to each node. This quantitative
value shows the relative closeness of the node to the goal
state. For example, consider solving the 8-puzzle
Heuristic search
Initial and goal states for heuristic search
The heuristic values for the first step
Heuristic search for solving the 8-puzzle
Best First Search
• Combines the advantages of both DFS and BFS into a
single method.
• DFS is good because it allows a solution to be found
without all competing branches having to be
expanded.
• BFS is good because it does not get trapped on dead-end paths.
• One way of combining the two is to follow a single
path at a time, but switch paths whenever some
competing path looks more promising than the
current one does.
(Seemingly) Best-first search
• Idea: use an evaluation function f(n) for each node
– estimate of "desirability"
→Expand most desirable unexpanded node
• Implementation:
Order the nodes in fringe in decreasing order of desirability
(Priority Queue)
• Special cases:
– greedy best-first search
– A* search
Real-World Problem:
Touring in Romania
[Figure: road map of Romania with step costs in km between the cities Arad,
Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova,
Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova,
Eforie, Vaslui, Iasi and Neamt.]
Aim: find a course of action that satisfies a number of specified conditions
Greedy best-first search
(minimize estimated cost to reach a goal)
• Evaluation function f(n) = h(n) (heuristic =
estimate of cost from n to goal)
• e.g., hSLD(n) = straight-line distance from n to
Bucharest
• Greedy best-first search expands the node
that appears to be closest to goal
(cheapest/shortest path)
Romania with step costs in km
[Figure: the Romania map annotated with straight-line distances to Bucharest
(e.g. 374, 329 and 253 for Arad's successors Zerind, Timisoara and Sibiu),
followed by the successive greedy best-first expansion steps.]
Best First Search
[Figure: a five-step best-first trace on a small tree rooted at A with
children B, C, D (heuristic values 3, 5, 1); at each step the open node with
the lowest heuristic value is expanded next and its children are added to
the open list.]
Properties of greedy best-first search
• Complete? No – can get stuck in loops, e.g., Iasi →
Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic
improvement
• Space? O(b^m) – keeps all nodes in memory in the worst
case
• Optimal? No
Algorithm: Best First Search (OPEN list version)
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on
OPEN do:
a. Pick the best node on OPEN
b. Generate its successors
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN,
and record its parent.
ii. If it has been generated before, change the parent if this new
path is better than the previous one. In that case, update the
cost of getting to this node and to any successors that this node
may already have.
Algorithm: Best First Search
1. Create a single member priority queue comprising the root
node.
2. If 1st member of queue is goal node goto step 5.
3. i) If 1st member of queue is not goal node, then remove it
from queue and add it to list of visited nodes.
ii) Consider its child nodes if any, evaluate them using
evaluation function f(n)=h(n), add them to the queue and
reorder the states on queue by heuristic merit (best
leftmost).
4. If queue is empty goto step 6 else goto step 2.
5. Print success and stop.
6. Print fail and stop.
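A sketch of these steps, using a priority queue keyed on f(n) = h(n) so the "best leftmost" state is always taken first (graph and heuristic values are illustrative):

```python
import heapq

def best_first(start, goal, successors, h):
    """Priority queue ordered by heuristic merit, f(n) = h(n)."""
    queue = [(h(start), start)]
    visited = set()
    while queue:
        _, node = heapq.heappop(queue)       # best (leftmost) state
        if node == goal:                     # step 2 -> step 5: success
            return True
        if node in visited:
            continue
        visited.add(node)                    # step 3(i)
        for child in successors.get(node, []):
            heapq.heappush(queue, (h(child), child))  # step 3(ii): reorder
    return False                             # step 4 -> step 6: fail

g = {"A": ["B", "C"], "B": ["G"], "C": []}
h = {"A": 3, "B": 1, "C": 2, "G": 0}.get
found = best_first("A", "G", g, h)
```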
Applications of Best First Search
• Games (Minesweeper)
• Web Crawlers
• Task Scheduling
A* search (1968)
(minimizing the total path cost)
An A* algorithm is an admissible best-first search
algorithm that aims at minimizing the total cost
along a path from start to goal:
f*(n) = g(n) + h*(n)
where g(n) is the actual cost to reach n, h*(n) is the estimate of the cost
to reach the goal from n, and f*(n) is the estimate of the total cost along
the path through n.
A* search example
[Figure: six steps of A* on the Romania map, at each step expanding the
fringe node with the lowest f = g + h until Bucharest is reached.]
A* Algorithm
1. Start with OPEN containing only initial node. Set that node’s
g value to 0, its h* value to whatever it is, and its f* value to
h*+0 or h*. Set CLOSED to empty list.
2. Until a goal node is found, repeat the following procedure:
If there are no nodes on OPEN, report failure. Otherwise
pick the node on OPEN with the lowest f* value. Call it
BESTNODE. Remove it from OPEN. Place it in CLOSED. See if
the BESTNODE is a goal state. If so exit and report a
solution. Otherwise, generate the successors of BESTNODE
but do not set the BESTNODE to point to them yet.
A* Algorithm
1. Create a single member priority queue comprising the root node.
2. If 1st member of queue is goal node goto step 5.
3. i) If 1st member of queue is not goal node, then remove it from
queue and add it to list of visited nodes.
ii) Consider its child nodes if any, evaluate them using evaluation
function f*(n)=g(n)+h*(n), add them to the queue and reorder
the states on queue by heuristic merit (best leftmost).
4. If queue is empty goto step 6 else goto step 2.
5. Print success and stop.
6. Print fail and stop.
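The queue version above can be sketched with f*(n) = g(n) + h*(n); the successor structure (state → (child, step-cost) pairs) and the tiny graph are illustrative:

```python
import heapq

def a_star(start, goal, successors, h):
    """Priority queue keyed on f* = g + h*; re-queues a child when a
    cheaper path to it is found (the parent/cost update in step 3)."""
    queue = [(h(start), 0, start)]           # (f*, g, state)
    best_g = {start: 0}
    while queue:
        f, g, node = heapq.heappop(queue)
        if node == goal:
            return g                         # cost of the path found
        for child, step in successors.get(node, []):
            g2 = g + step
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2           # better path: update cost
                heapq.heappush(queue, (g2 + h(child), g2, child))
    return None

# tiny weighted graph where the direct edge is not the cheapest path
g = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
h = {"A": 2, "B": 1, "C": 0}.get
cost = a_star("A", "C", g, h)                # A -> B -> C costs 1 + 1 = 2
```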
Admissible heuristics
• An admissible heuristic never overestimates the
cost to reach the goal, i.e., it is optimistic
• Example: hSLD(n) (never overestimates the actual
road distance)
• Theorem: If h*(n) is admissible, A* using TREE-
SEARCH is optimal
• A heuristic is monotonic/consistent if the estimated cost never drops
between a node and its successor by more than the actual cost does:
h*(n1) − h*(n2) ≤ h(n1) − h(n2)
• A heuristic is (globally) optimistic or admissible if the estimated cost
of reaching a goal is never more than the actual cost:
h*(n) ≤ h(n)
where h*(n) is the estimate of the cost to reach the goal from n, and
h(n) is the actual (unknown) cost to reach the goal from n
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the sum over all tiles of the number of squares each tile is from its desired location)
• h1(S) = ?
• h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
• h1(S) = ? 8
• h2(S) = ? 3+1+2+2+2+3+3+2 = 18
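Both heuristics can be computed directly. The start and goal configurations below are assumed (they are the standard AIMA 8-puzzle instance, whose values match the slide's h1 = 8 and h2 = 18):

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance over tiles 1..8 on a 3x3 board."""
    dist = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return dist

# boards listed row by row; 0 is the blank square
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal = (0, 1, 2, 3, 4, 5, 6, 7, 8)
v1, v2 = h1(start, goal), h2(start, goal)
```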
Properties of A*
• Complete? Yes
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes
Optimality → Completeness (not vice versa)
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
• f(G2) = g(G2) since h(G2) = 0
• g(G2) > g(G) since G2 is suboptimal
• f(G) = g(G) since h(G) = 0
• f(G2) > f(G) from above
Optimality of A* (proof)
• Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
• f(G2) > f(G) from above
• h*(n) ≤ h(n) since h* is admissible
• g(n) + h*(n) ≤ g(n) + h(n)
• f(n) ≤ f(G), since g(n) + h(n) is the actual cost of the optimal path to the goal through n
Hence f(G2) > f(n), and A* will never select G2 for expansion
Applications of A*
• Common pathfinding problem
• Graph traversal
• Parsing in NLP
• It is a special case of Branch and Bound (BnB)
• Multiple sequence Alignment
• Dijkstra is special case of A*
– When heuristic=0; f(x)=g(x)
University Questions
• Consider a 8-puzzle problem and a heuristic function given by taking a sum of the distances
of the tiles that are in proper positions. Tiles in proper positions have value equal to zero. Use
the A* algorithm to draw a solution tree for the 8 puzzle problem. Indicate clearly the values
you consider at each step.
• How Best First Search will be applicable to 8 puzzle problem.
• When would Best First Search be worst than simple Breadth First Search.
• Best First Search uses both OPEN and CLOSED list. Describe purpose of both with example.
• Explain A* search algorithm with example.
• What is heuristics? A* search uses a combined heuristic to select the best path to follow
through the state space towards the goal. Define the two heuristics used.
• What do you mean by admissible heuristic function. Discuss with suitable example.
• What is heuristic function? How will you find suitable heuristic function? Give suitable
example.
• Describe A* algorithm with merits and demerits.
• Prove that A* search algorithm is complete and optimal among all search algorithms. Show
that A* is optimally efficient.
• Write short note on admissibility of A*.
• Prove that A* is admissible if it uses a monotone heuristic.
Assignment Question
How will A* get from Iasi to Fagaras?
Memory-bounded heuristic search
• A* may run into space limitations
• MBHS attempts to avoid space complexity
– IDA* = iterative deepening A*
– RBFS = recursive best-first search
– MA*, SMA* = memory-bounded A*
IDA*
• A* and best-first keep all nodes in memory
• Combination of A* and DFS
• IDA* (iterative deepening A*) is similar to standard
iterative deepening, except the cutoff used is the f-cost
(g + h) rather than the depth (thus it doesn't go
to the same depth everywhere in the tree)
• The cutoff value is set to the smallest f-cost of any node
that exceeded the cutoff in the previous iteration
• Complete and optimal
• Time O(b^m), Space O(mb)
PROs: IDA*
• Always finds the optimal solution, if one exists, for an
admissible heuristic
• Uses far less memory (grows only linearly), since it does not
store the whole frontier and forgets nodes beyond the current
bound
CONs: IDA*
• Between iterations, it retains only current f-cost
limit
• Since it cannot remember its history, it repeats
states
RBFS
• Recursive best-first search (RBFS) tries to mimic best-first
in linear space
• Similar to recursive DFS
• Keeps track of the f-value of the best leaf in the forgotten
subtree so that it could be reexpanded in future
• RBFS only keeps the current search path and the sibling
nodes along the path
• If the current node's f-value exceeds that limit, it unwinds back to the
alternate path
• Cost function is non monotonic
• It will often have to re-expand discarded subtrees
RBFS
[Figure: an RBFS trace; each node on the current path is annotated with the
f-value of the best alternate path, and the search unwinds to that
alternative when the current f-value exceeds it.]
RBFS
Performance Measure
• Completeness: yes
• Optimality: yes, for an admissible heuristic
• Time complexity: depends on the actual number of nodes
generated (a function of the cost function)
• Space complexity: O(bd)
Memory-bounded A*
• MA* and SMA* (simplified MA*) work like A* until
memory is full.
• SMA*: if memory is full, drop the worst leaf node
and push its f-value back to its parent
• Subtrees are regenerated only when all other paths
have been shown to be worse than the forgotten
path
• Thrashing behavior can result if memory is small
compared to size of candidate subpaths
MA*
Once a preset limit is reached (in memory), the
algorithm prunes the open list by highest f-cost
SMA*
Improves upon MA* by:
1. Using a more efficient data structure for the
open list (binary tree), sorted by f-cost and depth
2. Only maintaining two f-cost quantities (instead
of four with MA*)
3. Pruning one node at a time (the worst f-cost)
4. Retaining backed-up f-costs for pruned paths
Performance Measure
• Completeness: yes, if the available memory is sufficient to
store the shallowest solution path
• Optimality: yes, for an admissible heuristic, if enough
memory is available to store the shallowest solution path
University Questions
• Discuss blind search and informed search. Hence discuss
merits and demerits of each.
• Write note on comparative analysis of search techniques.
• Describe IDA* search algorithm giving suitable example.
• Compare the following informed search algorithms based
on all performance measures with justification:
– Greedy Best First
– A*
– Recursive Best First
• Prove that A* Search algorithm is complete and optimal
among all search algorithms.
• How Best First Search will be applicable to 8 puzzle game.
Local Search Algorithms
• Exponential growth of the solution space for most of the
practical problems
• In many optimization problems, the path to the goal is
irrelevant; the goal state itself is the solution
• State space = set of "complete" configurations
• Find configuration satisfying constraints, e.g., n-queens
• In such cases, we can use local search algorithms
• keep a single "current" state, try to improve it
Example of Local Search Algorithm:
Hill Climbing / Gradient Descent
[Figure: a solution landscape showing an initial solution, the
neighbourhood of the solution, a local optimum and the global optimum.]
4 - Queens
• States: 4 queens in 4 columns (256 states)
• Neighborhood Operators: move queen in column
• Goal test: no attacks
• Evaluation: h(n) = number of attacks
Advantages of Local search
• Two advantages
– Use little memory
– More applicable in searching large/infinite search space
• Completeness: a local search algorithm is complete if it is guaranteed to find a solution when one exists
• Optimality: it is optimal if the solution it finds is a global optimum
• Local search algorithms are also useful for optimization
problems
• Goal: find a state such that the objective function is
optimized
Application of local search algorithms
• planning, scheduling, routing, configuration, protein
folding, etc.
• In fact, many (most?) large Operation Research
problems are solved using local search methods
(e.g., airline scheduling).
Problems in Hill-Climbing search
depending on initial state, can get stuck in local maxima
Drawbacks
• Local maxima/ foothill: a local maximum is a peak that is lower than
the highest peak in the state space. Once on a local maximum, the
algorithm will halt even though the solution may be far from
satisfactory. (Maze: may have to move AWAY from goal to find (best)
solution)
• Plateau: a plateau is an area of the state space where the evaluation
function is essentially flat. The search will conduct a random walk. (8-
puzzle: perhaps no action will change # of tiles out of place)
• Ridges: a sequence of local maxima which cannot be searched in a
single move
Hill Climbing - Algorithm
1. Pick a random point in the search space
2. Consider all the neighbours of the current state
3. Choose the neighbour with the best quality and
move to that state
4. Repeat steps 2 and 3 until all the neighbouring states
are of lower quality
5. Return the current state as the solution state
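These steps can be sketched directly (the 1-D landscape below is illustrative):

```python
def hill_climb(initial, neighbours, quality):
    """Greedy ascent following steps 1-5 above: stop when every
    neighbour is of lower (or equal) quality."""
    current = initial
    while True:
        best = max(neighbours(current), key=quality, default=current)
        if quality(best) <= quality(current):
            return current               # step 5: current state is the answer
        current = best                   # step 3: move to the best neighbour

# illustrative 1-D landscape: maximise -(x - 3)^2 over the integers
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```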
Hill-Climbing Search
"Like climbing Everest in thick fog with amnesia"
Solutions
• Stochastic hill-climbing
• Chose at random from among the uphill moves.
• First-choice hill climbing
• Generates successors randomly until one is generated that is better than
current state
• Simulated annealing
• Use conventional hill-climbing style techniques, but occasionally take a
step in a direction other than that in which there is improvement (downhill
moves). As time passes, the probability that a down-hill step is taken is
gradually reduced and the size of any down-hill step taken is decreased.
• Local beam search
• Run k random starting points in parallel, always keeping the k most
promising states
• Random-restart hill climbing
• Simply restart at a new random state after a pre-defined number of steps.
• Genetic Algorithms
Local Beam search
• Keep track of k states instead of one
– Initially: k randomly selected states
– Next: determine all successors of k states
– If any of successors is goal → finished
– Else select k best from successors and repeat.
• Major difference with random-restart search
– Information is shared among k search threads.
• Can suffer from lack of diversity.
– Stochastic beam search
• choose k successors proportional to state quality.
Simulated Annealing
• analogy to annealing in solids
• If you heat a solid past melting point and then cool
it, the structural properties of the solid depend on
the rate of cooling. If the liquid is cooled slowly
enough, large crystals will be formed. However, if
the liquid is cooled quickly the crystals will contain
imperfections.
• The idea is to use simulated annealing to search for
feasible solutions and converge to an optimal
solution.
Simulated Annealing vs Hill climbing
• Simulated annealing allows worse moves (of lesser quality) to be taken
some of the time; that is, it allows some downhill
steps so that it can escape from local maxima
• Unlike hill climbing, simulated annealing chooses a
random move from the neighbourhood (recall that
hill climbing chooses the best move from all those
available
• If the move is better than its current position, then
simulated annealing will always take it. If the move
is worse (i.e. of lesser quality), then it will be accepted
based on some probability
Search using Simulated Annealing
Basic ideas:
– like hill-climbing identify the quality of the local improvements
– instead of picking the best move, pick one randomly
– say the change in the objective function is ∆E
– if ∆E is positive, then move to that state
– otherwise:
• move to this state with probability e^(∆E/T)
• thus worse moves (very large negative ∆E) are executed less
often
– however, there is always a chance of escaping from local maxima
– over time, make it less likely to accept locally bad moves
– (Can also make the size of the move random as well, i.e., allow
“large” steps in state space)
• Annealing = physical process of cooling a
liquid or metal until particles achieve a certain
frozen crystal state
• Simulated Annealing:
– free variables are like particles
– seek “low energy” (high quality) configuration
– get this by slowly reducing the temperature T, which
controls how randomly the particles move around
Simulated Annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to temperature
  local variables: current, a node
                   next, a node
                   T, a "temperature" controlling the probability of downward steps
  current ← MAKE-NODE(INITIAL-STATE[problem])
  for t ← 1 to ∞ do
    T ← schedule[t]
    if T = 0 then return current
    next ← a randomly selected successor of current
    ∆E ← VALUE[next] − VALUE[current]
    if ∆E > 0 then current ← next
    else current ← next only with probability e^(∆E/T)
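A Python transcription of this pseudocode; the successor function, value function and cooling schedule in the example run are illustrative choices, not from the slides:

```python
import math
import random

def simulated_annealing(initial, successor, value, schedule):
    """Always accept uphill moves; accept a downhill move with
    probability e^(dE/T), where T = schedule(t)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = successor(current)
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

# illustrative run: maximise -(x - 3)^2 with geometric cooling
random.seed(0)
result = simulated_annealing(
    0,
    lambda x: x + random.choice([-1, 1]),    # random neighbour
    lambda x: -(x - 3) ** 2,
    lambda t: 0 if t > 2000 else 0.99 ** t,  # T -> 0 forces termination
)
```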
– Let's say there are 3 moves available, with changes in the objective function of d1
= -0.1, d2 = 0.5, d3 = -5. (Let T = 1.)
– pick a move randomly:
• if d2 is picked, move there (it improves the objective).
• if d1 or d3 is picked, probability of move = exp(d/T)
• move 1: prob1 = exp(-0.1) ≈ 0.9,
– i.e., about 90% of the time we will accept this move
• move 3: prob3 = exp(-5) ≈ 0.007
– i.e., less than 1% of the time we will accept this move
– T = “temperature” parameter
• high T => probability of a “locally bad” move is higher
• low T => probability of a “locally bad” move is lower
• typically, T is decreased as the algorithm runs longer
– i.e., there is a “temperature schedule”
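The acceptance probabilities in this example can be checked directly (the deltas d1–d3 are the ones from the example above):

```python
import math

T = 1.0
deltas = {"d1": -0.1, "d2": 0.5, "d3": -5.0}

def acceptance_probability(d, T):
    """Probability of accepting a move whose objective change is d at temperature T."""
    return 1.0 if d > 0 else math.exp(d / T)

probs = {name: acceptance_probability(d, T) for name, d in deltas.items()}
# d2 is an improvement, so it is always accepted; d1 is accepted ~90% of
# the time, while d3 is accepted less than 1% of the time at T = 1.
```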
Genetic Algorithms
• A different approach from the other search algorithms
– A successor state is generated by combining two parent states
• A state is represented as a string over a finite alphabet (e.g. binary)
– 8-queens
• State = position of 8 queens, each in its own column
=> 8 × log2(8) = 24 bits (for a binary representation)
• Start with k randomly generated states (the population)
• Evaluation function (fitness function):
– Higher values for better states.
– Unlike a heuristic function, where lower is better; e.g., # non-attacking pairs in 8-
queens
• Produce the next generation of states by “simulated evolution”
– Random selection
– Crossover
– Random mutation
Genetic Algorithms
Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
• 24/(24+23+20+11) = 31%
• 23/(24+23+20+11) = 29%, etc.
(Figure: four states for the 8-queens problem; 2 pairs of 2 states are randomly
selected based on fitness, random crossover points are selected, new states are
produced after crossover, and random mutation is applied.)
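The fitness function above can be written out directly. As in the slides, a state records one queen per column; representing it as a list where `state[i]` is the row of the queen in column `i` is an encoding choice made here for illustration.

```python
from itertools import combinations

def fitness(state):
    """Number of non-attacking queen pairs; state[i] = row of the queen in column i."""
    n = len(state)
    attacking = sum(
        1
        for (c1, c2) in combinations(range(n), 2)
        if state[c1] == state[c2]                  # same row
        or abs(state[c1] - state[c2]) == c2 - c1   # same diagonal
    )
    return n * (n - 1) // 2 - attacking

# A solved 8-queens board scores the maximum of 28 non-attacking pairs.
solution = [2, 4, 6, 8, 3, 1, 7, 5]  # rows per column, a known solution
```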
Genetic Algorithm pseudocode
function GENETIC-ALGORITHM(population, FITNESS-FN) returns an individual
inputs: population, a set of individuals
        FITNESS-FN, a function which determines the quality of an individual
repeat
    new_population ← empty set
    loop for i from 1 to SIZE(population) do
        x ← RANDOM-SELECTION(population, FITNESS-FN)
        y ← RANDOM-SELECTION(population, FITNESS-FN)
        child ← REPRODUCE(x, y)
        if (small random probability) then child ← MUTATE(child)
        add child to new_population
    population ← new_population
until some individual is fit enough, or enough time has elapsed
return the best individual
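A minimal Python sketch of this pseudocode, applied to a toy bit-string ("one-max") problem. The fitness function, parameter values, and the fixed generation count (instead of the "fit enough" test) are illustrative assumptions.

```python
import random

def reproduce(x, y):
    """Single-point crossover of two equal-length strings."""
    c = random.randrange(1, len(x))
    return x[:c] + y[c:]

def mutate(child):
    """Flip one randomly chosen bit."""
    i = random.randrange(len(child))
    return child[:i] + ("1" if child[i] == "0" else "0") + child[i + 1:]

def genetic_algorithm(population, fitness_fn, generations=200, p_mutate=0.1):
    for _ in range(generations):
        # fitness-proportionate ("roulette wheel") random selection;
        # +1 keeps every weight positive
        weights = [fitness_fn(p) + 1 for p in population]
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)
            child = reproduce(x, y)
            if random.random() < p_mutate:
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness_fn)

# Toy "one-max" problem: fitness = number of 1 bits in the string.
one_max = lambda s: s.count("1")
population = ["".join(random.choice("01") for _ in range(12)) for _ in range(20)]
best = genetic_algorithm(population, one_max)
```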
Comments on Genetic Algorithms
• Positive points
– Random exploration can find solutions that local search can’t
• (via crossover, primarily)
– Appealing connection to human evolution
• E.g., see the related area of genetic programming
• Negative points
– Large number of “tunable” parameters
• Difficult to replicate performance from one problem to another
– Lack of good empirical studies comparing them to simpler methods
– Useful on some (small?) set of problems, but no convincing
evidence that GAs are better than hill-climbing with random restarts
in general
Example: Searching for a computer
program (‘Genetic Programming’)
Extension of GA for evolving computer programs
– represent programs as LISP expressions
– e.g.
(IF (GT (x) (0)) (x) (-x)), drawn as an expression tree:
        IF
      /  |  \
    GT   x   -x
   /  \
  x    0
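The same program tree can be represented and evaluated with a small interpreter; the nested-tuple encoding and the mini-interpreter below are illustrative sketches, not part of the slides.

```python
# Programs are nested tuples: (operator, *children); leaves are numbers or "x".
def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    if op == "NEG":
        return -evaluate(args[0], x)
    if op == "GT":
        return evaluate(args[0], x) > evaluate(args[1], x)
    if op == "IF":
        cond, then, other = args
        return evaluate(then, x) if evaluate(cond, x) else evaluate(other, x)
    raise ValueError(f"unknown operator: {op}")

# (IF (GT x 0) x (-x)) -- i.e., the absolute-value function
program = ("IF", ("GT", "x", 0), "x", ("NEG", "x"))
```

Genetic programming evolves such trees with crossover (swapping subtrees between parents) and mutation (replacing a subtree), with fitness measured by running the program on test cases.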
Electronic Filter
Circuit Design
• Individuals are programs that transform a beginning circuit into a final circuit
by adding/removing components and connections
• Fitness: computed by simulating the circuit
• A population of 640,000 has been run on a parallel processor
• After 137 generations, the discovered circuits exhibited performance
competitive with the best human designs
University Questions
• Define heuristic function. Give an example heuristics function for
Blocks World problem.
• Find heuristics value for a particular state of the Blocks World
Problem.
• What are the problems / frustrations that occur in Hill Climbing
Technique? Illustrate with an example.
• Write a short note on genetic algorithm.
• Explain Hill Climbing algorithm with an example.
• Explain Local Beam Search Algorithm in detail.
Shiwani Gupta 140

More Related Content

What's hot

Lecture 17 Iterative Deepening a star algorithm
Lecture 17 Iterative Deepening a star algorithmLecture 17 Iterative Deepening a star algorithm
Lecture 17 Iterative Deepening a star algorithmHema Kashyap
 
P, NP, NP-Complete, and NP-Hard
P, NP, NP-Complete, and NP-HardP, NP, NP-Complete, and NP-Hard
P, NP, NP-Complete, and NP-HardAnimesh Chaturvedi
 
Artificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesArtificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesDr. C.V. Suresh Babu
 
Page replacement algorithms
Page replacement algorithmsPage replacement algorithms
Page replacement algorithmsPiyush Rochwani
 
Depth First Search ( DFS )
Depth First Search ( DFS )Depth First Search ( DFS )
Depth First Search ( DFS )Sazzad Hossain
 
I. AO* SEARCH ALGORITHM
I. AO* SEARCH ALGORITHMI. AO* SEARCH ALGORITHM
I. AO* SEARCH ALGORITHMvikas dhakane
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AIAmey Kerkar
 
Pumping lemma Theory Of Automata
Pumping lemma Theory Of AutomataPumping lemma Theory Of Automata
Pumping lemma Theory Of Automatahafizhamza0322
 
NFA Converted to DFA , Minimization of DFA , Transition Diagram
NFA Converted to DFA , Minimization of DFA , Transition DiagramNFA Converted to DFA , Minimization of DFA , Transition Diagram
NFA Converted to DFA , Minimization of DFA , Transition DiagramAbdullah Jan
 
Computer architecture page replacement algorithms
Computer architecture page replacement algorithmsComputer architecture page replacement algorithms
Computer architecture page replacement algorithmsMazin Alwaaly
 
Breadth First Search & Depth First Search
Breadth First Search & Depth First SearchBreadth First Search & Depth First Search
Breadth First Search & Depth First SearchKevin Jadiya
 
Methods for handling deadlock
Methods for handling deadlockMethods for handling deadlock
Methods for handling deadlocksangrampatil81
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmHema Kashyap
 
Structure of the page table
Structure of the page tableStructure of the page table
Structure of the page tableduvvuru madhuri
 

What's hot (20)

Lecture 17 Iterative Deepening a star algorithm
Lecture 17 Iterative Deepening a star algorithmLecture 17 Iterative Deepening a star algorithm
Lecture 17 Iterative Deepening a star algorithm
 
P, NP, NP-Complete, and NP-Hard
P, NP, NP-Complete, and NP-HardP, NP, NP-Complete, and NP-Hard
P, NP, NP-Complete, and NP-Hard
 
Artificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesArtificial Intelligence Searching Techniques
Artificial Intelligence Searching Techniques
 
Hill climbing algorithm
Hill climbing algorithmHill climbing algorithm
Hill climbing algorithm
 
Page replacement algorithms
Page replacement algorithmsPage replacement algorithms
Page replacement algorithms
 
Depth First Search ( DFS )
Depth First Search ( DFS )Depth First Search ( DFS )
Depth First Search ( DFS )
 
I. AO* SEARCH ALGORITHM
I. AO* SEARCH ALGORITHMI. AO* SEARCH ALGORITHM
I. AO* SEARCH ALGORITHM
 
Automata theory
Automata theoryAutomata theory
Automata theory
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AI
 
Pumping lemma Theory Of Automata
Pumping lemma Theory Of AutomataPumping lemma Theory Of Automata
Pumping lemma Theory Of Automata
 
NFA to DFA
NFA to DFANFA to DFA
NFA to DFA
 
NFA Converted to DFA , Minimization of DFA , Transition Diagram
NFA Converted to DFA , Minimization of DFA , Transition DiagramNFA Converted to DFA , Minimization of DFA , Transition Diagram
NFA Converted to DFA , Minimization of DFA , Transition Diagram
 
Computer architecture page replacement algorithms
Computer architecture page replacement algorithmsComputer architecture page replacement algorithms
Computer architecture page replacement algorithms
 
Sorting network
Sorting networkSorting network
Sorting network
 
Operating System: Deadlock
Operating System: DeadlockOperating System: Deadlock
Operating System: Deadlock
 
Breadth First Search & Depth First Search
Breadth First Search & Depth First SearchBreadth First Search & Depth First Search
Breadth First Search & Depth First Search
 
Methods for handling deadlock
Methods for handling deadlockMethods for handling deadlock
Methods for handling deadlock
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithm
 
Structure of the page table
Structure of the page tableStructure of the page table
Structure of the page table
 
Hill climbing
Hill climbingHill climbing
Hill climbing
 

Similar to Search strategies

Search strategies BFS, DFS
Search strategies BFS, DFSSearch strategies BFS, DFS
Search strategies BFS, DFSKuppusamy P
 
uninformed search part 1.pptx
uninformed search part 1.pptxuninformed search part 1.pptx
uninformed search part 1.pptxMUZAMILALI48
 
Artificial intelligence(05)
Artificial intelligence(05)Artificial intelligence(05)
Artificial intelligence(05)Nazir Ahmed
 
Lecture 3 Problem Solving, DFS, BFS, IDF.pptx
Lecture 3 Problem Solving, DFS, BFS, IDF.pptxLecture 3 Problem Solving, DFS, BFS, IDF.pptx
Lecture 3 Problem Solving, DFS, BFS, IDF.pptxBcsf19m502MUJAHIDALI
 
searching technique
searching techniquesearching technique
searching techniquecolleges
 
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxPPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxRaviKiranVarma4
 
Lecture 08 uninformed search techniques
Lecture 08 uninformed search techniquesLecture 08 uninformed search techniques
Lecture 08 uninformed search techniquesHema Kashyap
 
RPT_AI_03_PartB_UNINFORMED_FINAL.pptx
RPT_AI_03_PartB_UNINFORMED_FINAL.pptxRPT_AI_03_PartB_UNINFORMED_FINAL.pptx
RPT_AI_03_PartB_UNINFORMED_FINAL.pptxRahulkumarTivarekar1
 
Uninformed search /Blind search in AI
Uninformed search /Blind search in AIUninformed search /Blind search in AI
Uninformed search /Blind search in AIKirti Verma
 
Artificial intelligence topic for the btech studentCT II.pptx
Artificial intelligence topic for the btech studentCT II.pptxArtificial intelligence topic for the btech studentCT II.pptx
Artificial intelligence topic for the btech studentCT II.pptxbharatipatel22
 

Similar to Search strategies (20)

Search strategies BFS, DFS
Search strategies BFS, DFSSearch strategies BFS, DFS
Search strategies BFS, DFS
 
Final slide4 (bsc csit) chapter 4
Final slide4 (bsc csit) chapter 4Final slide4 (bsc csit) chapter 4
Final slide4 (bsc csit) chapter 4
 
uninformed search part 1.pptx
uninformed search part 1.pptxuninformed search part 1.pptx
uninformed search part 1.pptx
 
Artificial intelligence(05)
Artificial intelligence(05)Artificial intelligence(05)
Artificial intelligence(05)
 
Lecture 3 Problem Solving, DFS, BFS, IDF.pptx
Lecture 3 Problem Solving, DFS, BFS, IDF.pptxLecture 3 Problem Solving, DFS, BFS, IDF.pptx
Lecture 3 Problem Solving, DFS, BFS, IDF.pptx
 
Ai unit-4
Ai unit-4Ai unit-4
Ai unit-4
 
searching technique
searching techniquesearching technique
searching technique
 
AI(Module1).pptx
AI(Module1).pptxAI(Module1).pptx
AI(Module1).pptx
 
2.uninformed search
2.uninformed search2.uninformed search
2.uninformed search
 
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxPPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
 
Lecture 08 uninformed search techniques
Lecture 08 uninformed search techniquesLecture 08 uninformed search techniques
Lecture 08 uninformed search techniques
 
Chapter 3.pptx
Chapter 3.pptxChapter 3.pptx
Chapter 3.pptx
 
RPT_AI_03_PartB_UNINFORMED_FINAL.pptx
RPT_AI_03_PartB_UNINFORMED_FINAL.pptxRPT_AI_03_PartB_UNINFORMED_FINAL.pptx
RPT_AI_03_PartB_UNINFORMED_FINAL.pptx
 
3-uninformed-search.ppt
3-uninformed-search.ppt3-uninformed-search.ppt
3-uninformed-search.ppt
 
NEW-II.pptx
NEW-II.pptxNEW-II.pptx
NEW-II.pptx
 
Uninformed search /Blind search in AI
Uninformed search /Blind search in AIUninformed search /Blind search in AI
Uninformed search /Blind search in AI
 
Week 7.pdf
Week 7.pdfWeek 7.pdf
Week 7.pdf
 
Chap11 slides
Chap11 slidesChap11 slides
Chap11 slides
 
Artificial intelligence topic for the btech studentCT II.pptx
Artificial intelligence topic for the btech studentCT II.pptxArtificial intelligence topic for the btech studentCT II.pptx
Artificial intelligence topic for the btech studentCT II.pptx
 
NEW-II.pptx
NEW-II.pptxNEW-II.pptx
NEW-II.pptx
 

More from Shiwani Gupta

module6_stringmatchingalgorithm_2022.pdf
module6_stringmatchingalgorithm_2022.pdfmodule6_stringmatchingalgorithm_2022.pdf
module6_stringmatchingalgorithm_2022.pdfShiwani Gupta
 
module5_backtrackingnbranchnbound_2022.pdf
module5_backtrackingnbranchnbound_2022.pdfmodule5_backtrackingnbranchnbound_2022.pdf
module5_backtrackingnbranchnbound_2022.pdfShiwani Gupta
 
module4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfmodule4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfShiwani Gupta
 
module3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdfmodule3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdfShiwani Gupta
 
module2_dIVIDEncONQUER_2022.pdf
module2_dIVIDEncONQUER_2022.pdfmodule2_dIVIDEncONQUER_2022.pdf
module2_dIVIDEncONQUER_2022.pdfShiwani Gupta
 
module1_Introductiontoalgorithms_2022.pdf
module1_Introductiontoalgorithms_2022.pdfmodule1_Introductiontoalgorithms_2022.pdf
module1_Introductiontoalgorithms_2022.pdfShiwani Gupta
 
ML MODULE 1_slideshare.pdf
ML MODULE 1_slideshare.pdfML MODULE 1_slideshare.pdf
ML MODULE 1_slideshare.pdfShiwani Gupta
 
Functionsandpigeonholeprinciple
FunctionsandpigeonholeprincipleFunctionsandpigeonholeprinciple
FunctionsandpigeonholeprincipleShiwani Gupta
 
Uncertain knowledge and reasoning
Uncertain knowledge and reasoningUncertain knowledge and reasoning
Uncertain knowledge and reasoningShiwani Gupta
 

More from Shiwani Gupta (20)

ML MODULE 6.pdf
ML MODULE 6.pdfML MODULE 6.pdf
ML MODULE 6.pdf
 
ML MODULE 5.pdf
ML MODULE 5.pdfML MODULE 5.pdf
ML MODULE 5.pdf
 
ML MODULE 4.pdf
ML MODULE 4.pdfML MODULE 4.pdf
ML MODULE 4.pdf
 
module6_stringmatchingalgorithm_2022.pdf
module6_stringmatchingalgorithm_2022.pdfmodule6_stringmatchingalgorithm_2022.pdf
module6_stringmatchingalgorithm_2022.pdf
 
module5_backtrackingnbranchnbound_2022.pdf
module5_backtrackingnbranchnbound_2022.pdfmodule5_backtrackingnbranchnbound_2022.pdf
module5_backtrackingnbranchnbound_2022.pdf
 
module4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfmodule4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdf
 
module3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdfmodule3_Greedymethod_2022.pdf
module3_Greedymethod_2022.pdf
 
module2_dIVIDEncONQUER_2022.pdf
module2_dIVIDEncONQUER_2022.pdfmodule2_dIVIDEncONQUER_2022.pdf
module2_dIVIDEncONQUER_2022.pdf
 
module1_Introductiontoalgorithms_2022.pdf
module1_Introductiontoalgorithms_2022.pdfmodule1_Introductiontoalgorithms_2022.pdf
module1_Introductiontoalgorithms_2022.pdf
 
ML MODULE 1_slideshare.pdf
ML MODULE 1_slideshare.pdfML MODULE 1_slideshare.pdf
ML MODULE 1_slideshare.pdf
 
ML MODULE 2.pdf
ML MODULE 2.pdfML MODULE 2.pdf
ML MODULE 2.pdf
 
ML Module 3.pdf
ML Module 3.pdfML Module 3.pdf
ML Module 3.pdf
 
Problem formulation
Problem formulationProblem formulation
Problem formulation
 
Simplex method
Simplex methodSimplex method
Simplex method
 
Functionsandpigeonholeprinciple
FunctionsandpigeonholeprincipleFunctionsandpigeonholeprinciple
Functionsandpigeonholeprinciple
 
Relations
RelationsRelations
Relations
 
Logic
LogicLogic
Logic
 
Set theory
Set theorySet theory
Set theory
 
Uncertain knowledge and reasoning
Uncertain knowledge and reasoningUncertain knowledge and reasoning
Uncertain knowledge and reasoning
 
Introduction to ai
Introduction to aiIntroduction to ai
Introduction to ai
 

Recently uploaded

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college projectTonystark477637
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsRussian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 

Recently uploaded (20)

UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college project
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur EscortsRussian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
Russian Call Girls in Nagpur Grishma Call 7001035870 Meet With Nagpur Escorts
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 

Search strategies

  • 3. Shiwani Gupta 3 Search strategies • A search strategy is defined by picking the order of node expansion • Strategies are evaluated along the following dimensions: – completeness: does it always find a solution if one exists? – time complexity: time taken to find solution – space complexity: maximum number of nodes in memory – Optimality/admissibility: does it always find a least-cost solution? • Time and space complexity are measured in terms of – b: maximum branching factor (state expanded to yield new states) of the search tree – d: depth of the least-cost solution – m: maximum depth of the state space (may be ∞)
  • 4. Basic Search Concepts • Search tree • Search node • Node expansion • Search strategy: At each stage it determines which node to expand Shiwani Gupta 4
  • 6. Node Data Structure • STATE • PARENT • ACTION • COST • DEPTH Shiwani Gupta 6
  • 7. Fringe • Set of search nodes that have not been expanded yet • Implemented as a queue FRINGE – INSERT(node,FRINGE) – REMOVE(FRINGE) • The ordering of the nodes in FRINGE defines the search strategy Shiwani Gupta 7
  • 9. Shiwani Gupta 9 Uninformed search strategies (BLIND SEARCH) Blind (or uninformed) strategies do not exploit any of the information contained in a state • Breadth-first search • Depth-first search • Depth-limited search • Uniform Cost • Depth First Iterative deepening search • Bidirectional Search
  • 10. Breadth-First Strategy New nodes are inserted at the end of FRINGE (FIFO Queue) 2 3 4 5 1 6 7 FRINGE = (1) • The root node is expanded first, then all the nodes generated by the root node are expanded next, and then their successors, and so on. • In general, all the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.Shiwani Gupta 10
  • 11. Breadth-First Strategy New nodes are inserted at the end of FRINGE FRINGE = (2, 3)2 3 4 5 1 6 7 Shiwani Gupta 11
  • 12. Breadth-First Strategy New nodes are inserted at the end of FRINGE FRINGE = (3, 4, 5)2 3 4 5 1 6 7 Shiwani Gupta 12
  • 13. Breadth-First Strategy New nodes are inserted at the end of FRINGE FRINGE = (4, 5, 6, 7)2 3 4 5 1 6 7 Shiwani Gupta 13
  • 14. Shiwani Gupta 14 Breadth-first search 1. Create a single member LIFO queue comprising of root node. 2. If 1st member of queue is goal node then goto step 5. 3. i) If 1st member of queue is not goal, then remove it from queue and add it to list of visited nodes. ii) Consider its child nodes if any and add them to the rear end of the queue. 4. If queue empty goto step 6 else goto step 2. 5. Print success and stop. 6. Print fail and stop.
  • 15. Shiwani Gupta 15 Breadth-first search • If there is a solution, breadth-first search is guaranteed to find it • if there are several solutions, breadth-first search will always find the shallowest goal state first.
  • 16. Shiwani Gupta 16 Properties of breadth-first search • Complete? Yes (if b is finite) • Time? 1+b+b2+b3+… +bd = O(bd) • Space? O(bd) (keeps every node in memory) • Optimal? Yes (if cost = 1 per step) for weighted tree; not necessary that node nearer to root is cheaper too Memory requirements are a bigger problem for BFS than the execution time (eg. for depth=6 nodes= time=18 min. memory=111MB for depth=10 nodes= time=135 days memory=1TB) Exponential complexity search problems can be solved for only small instances
  • 17. Advantages • If there is sol, BFS is guaranteed to find • If more than one sol., then best sol. found • BFS doesn’t get trapped exploring blind alley • BFS should be used if b small Shiwani Gupta 18 Disadvantages • Exponential Time Complexity • Exponential Space Complexity • Wasteful if all paths lead to goal state at more or less same depth
  • 18. Applications • To find Shortest Path – Single Source – All Pair • To find all connected components in graph • To find spanning tree of a graph • Testing graph for bipartiteness • Crawler in search engine Shiwani Gupta 19
  • 19. Shiwani Gupta 20 Depth-first search • Expand deepest unexpanded node • Implementation: fringe = LIFO queue, i.e., put successors at front
  • 20. Shiwani Gupta 21 Depth-first search • Depth-first search always expands one of the nodes at the deepest level of the tree. Only when the search hits a dead end (a nongoal node with no expansion) does the search go back and expand nodes at shallower levels.
  • 21. Shiwani Gupta 22 Depth-first search • Depth-first search can get stuck going down the wrong path. • Many problems have very deep or even infinite search trees, so depth-first search will never be able to recover from an unlucky choice at one of the nodes near the top of the tree.
  • 22. Shiwani Gupta 23 Depth-first search • The search will always continue downward without backing up, even when a shallow solution exists. • Thus, on these problems depth-first search will either get stuck in an infinite loop and never return a solution, or it may eventually find a solution path that is longer than the optimal solution.
  • 23. Shiwani Gupta 24 Depth-first search • That means depth-first search is neither complete nor optimal. • Because of this, depth-first search should be avoided for search trees with large or infinite maximum depths.
  • 25. Shiwani Gupta 26 Depth-first search • It is also common to implement depth-first search with a recursive function that calls itself on each of its children in turn.
  • 26. Shiwani Gupta 27 Depth-first search • Complete? No: fails in infinite-depth spaces, spaces with loops Modify to avoid repeated states along path → complete in finite spaces
  • 27. Shiwani Gupta 28 Depth-first search • Time? O(bm): terrible if maximum depth m is much larger than branching factor b. For problems that have many solutions, depth-first may actually be faster than breadth-first, because it has a good chance of finding a solution after exploring only a small portion of the whole space.
  • 28. Shiwani Gupta 29 Depth-first search • Space? O(bm), i.e., linear space! Depth-first search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
  • 29. Shiwani Gupta 30 Depth-first search • Optimal? No
  • 30. Shiwani Gupta 31 Depth-first search 1. Create a single-member queue comprising the root node (the queue is used as a stack: new nodes go to the front). 2. If the 1st member of the queue is the goal node, go to step 5. 3. i) If the 1st member of the queue is not the goal node, remove it from the queue and add it to the list of visited nodes. ii) Consider its child nodes, if any, and add them to the front of the queue. 4. If the queue is empty, go to step 6; else go to step 2. 5. Print success and stop. 6. Print fail and stop.
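The steps above can be sketched in Python. The graph, node names, and goal are hypothetical examples; the "queue with children added to the front" is represented by a Python list used as a LIFO stack:

```python
def dfs(graph, start, goal):
    """Depth-first search on a graph given as an adjacency dict.

    New child nodes are pushed onto a stack, so the most recently
    generated (deepest) node is always expanded next.
    """
    stack = [(start, [start])]          # (node, path from root)
    visited = set()
    while stack:
        node, path = stack.pop()        # remove most recently added node
        if node == goal:
            return path                 # success: path from start to goal
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                stack.append((child, path + [child]))
    return None                         # failure: stack exhausted

# Hypothetical graph for illustration
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs(g, 'A', 'E'))  # ['A', 'C', 'E']
```

Note that only the current path and the unexpanded siblings sit on the stack, which is why the space cost stays O(bm).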
  • 31. Advantages • Requires less memory, since only the nodes on the current path are stored, along with the remaining unexpanded siblings on each path. • May find a solution without examining much of the search space, reducing time • Simple to implement Shiwani Gupta 32 Disadvantages • May get trapped in a blind alley • Does not guarantee a minimal path • Not complete if the tree is unbounded • Time taken for large and unbounded trees is exponentially large
  • 32. Applications • To find path between two vertices i) Call DFS(G, u) with u as the start vertex. ii) Use a stack S to keep track of the path between the start vertex and the current vertex. iii) As soon as destination vertex z is encountered, return the path as the contents of the stack • To check cycle in a graph – A graph has cycle iff we see a back edge during DFS. • To find connected components – A directed graph is called strongly connected if there is a path from each vertex in the graph to every other vertex. Shiwani Gupta 33
  • 33. Applications • Topological sort – Topological Sorting is mainly used for scheduling jobs from the given dependencies among jobs. In computer science, applications of this type arise in instruction scheduling, ordering of formula cell evaluation when recomputing formula values in spreadsheets, logic synthesis, determining the order of compilation tasks to perform in makefiles, data serialization, and resolving symbol dependencies in linkers • Finding solution in Mazes – DFS can be adapted to find all solutions to a maze by only including nodes on the current path in the visited set • To find spanning tree and forest of a graph Shiwani Gupta 34
  • 34. Compare and contrast DFS, BFS • For a tree, DFS requires less memory • DFS is good if there are multiple solutions and you are concerned with finding just one • DFS is not good when, among multiple solutions, the best one is required Shiwani Gupta 35 • BFS doesn’t get stuck in blind alleys • BFS requires more memory • BFS finds the shortest path first • BFS is good in a large search space where the solution is near the root • BFS is good if there are multiple solutions
  • 35. Depth first example — the 4 Queens problem. [Figure: depth-first search tree through states a–k, placing queens row by row and backtracking at dead ends until a solution is reached.] Note: We can place exactly 1 Q in each row. Search starts at the top row. MAX-DEPTH = 4 Shiwani Gupta 36
  • 36. A maze used to show brute force search Shiwani Gupta 37
  • 37. The tree for the maze in Figure Shiwani Gupta 38
  • 38. Breadth-first search of the tree Shiwani Gupta 39
  • 39. Depth-first search of the tree Shiwani Gupta 40
  • 40. DFS • Not complete • Not optimal Shiwani Gupta 41
  • 41. Shiwani Gupta 42 Depth-limited search • avoids the pitfalls of depth-first search by imposing a cutoff on the maximum depth of a path. • Depth-limited search = depth-first search with depth limit l i.e., nodes at depth l have no successors • Three possible outcomes: – Solution – Failure (no solution) – Cutoff (no solution within cutoff)
  • 42. Shiwani Gupta 43 Properties of Depth-Limited search • Complete? No for l < d, yes otherwise • Time? 1 + b + b^2 + b^3 + … + b^l = O(b^l), since it uses DFS • Space? O(bl), since it uses DFS • Optimal? No If we can find a better depth limit, the search becomes more efficient.
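A minimal recursive sketch of depth-limited search, distinguishing the three outcomes above (solution, failure, cutoff); the example graph is hypothetical:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS with a depth cutoff: returns a path, 'cutoff', or None (failure)."""
    if path is None:
        path = [node]
    if node == goal:
        return path                     # outcome 1: solution
    if limit == 0:
        return 'cutoff'                 # outcome 3: no solution within cutoff
    cutoff_occurred = False
    for child in graph.get(node, []):
        if child in path:               # avoid loops along the current path
            continue
        result = depth_limited_search(graph, child, goal, limit - 1, path + [child])
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None  # outcome 2: failure

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(depth_limited_search(g, 'A', 'E', 1))  # 'cutoff' — E lies at depth 2
print(depth_limited_search(g, 'A', 'E', 2))  # ['A', 'C', 'E']
```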
  • 43. Shiwani Gupta 52 Iterative deepening search l =0
  • 44. Shiwani Gupta 53 Iterative deepening search l =1
  • 45. Shiwani Gupta 54 Iterative deepening search l =2
  • 46. Shiwani Gupta 55 Iterative deepening search l =3
  • 47. Shiwani Gupta 56 Depth First Iterative deepening search • We picked 19 as an "obvious" depth limit for the Romania problem, but in fact if we studied the map carefully, we would discover that any city can be reached from any other city in at most 9 steps. • This number, known as the diameter of the state space, gives us a better depth limit, which leads to a more efficient depth-limited search. • Iterative deepening sidesteps the issue of choosing the best depth limit by trying all possible depth limits: first depth 0, then depth 1, then depth 2, and so on. • In effect, iterative deepening combines the benefits of depth-first search and breadth-first search. • It is optimal and complete, like breadth-first search, but has only the modest memory requirements of depth-first search. • The order of expansion of states is similar to breadth-first, except that some states are expanded multiple times.
  • 48. Shiwani Gupta 57 Depth First Iterative deepening search • Runs Depth Limited search repeatedly, increasing the depth limit each time • Equivalent to DFS in memory use • Combines DFS’s space efficiency and BFS’s completeness • Preferred search method when there is a large search space and the depth of the solution is not known • Extra cost: revisiting states on every iteration
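Iterative deepening can be sketched by wrapping a depth-limited DFS in a loop over increasing limits; the graph and the `max_depth` safety bound below are hypothetical:

```python
def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limit 0, 1, 2, ... until a solution appears."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            if child not in path:                    # avoid loops on this path
                found = dls(child, limit - 1, path + [child])
                if found:
                    return found
        return None

    for limit in range(max_depth + 1):               # try every depth limit in turn
        result = dls(start, limit, [start])
        if result:
            return result
    return None

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(iterative_deepening_search(g, 'A', 'E'))  # ['A', 'C', 'E'], found at limit 2
```

Shallow levels are re-generated on every iteration, which is the "extra cost of revisiting" noted above.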
  • 49. Shiwani Gupta 58 Iterative deepening search • Number of nodes generated in a depth-limited search to depth d with branching factor b: NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d • Number of nodes generated in an iterative deepening search to depth d with branching factor b: NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d • For b = 10, d = 5, – NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111 – NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456 • Node-generation overhead = (123,456 - 111,111)/111,111 ≈ 11%
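The node counts above can be checked directly:

```python
# Node counts for depth-limited vs iterative deepening search (b = 10, d = 5)
b, d = 10, 5
n_dls = sum(b**i for i in range(d + 1))                  # b^0 + b^1 + ... + b^d
n_ids = sum((d + 1 - i) * b**i for i in range(d + 1))    # (d+1)b^0 + d*b^1 + ... + 1*b^d
print(n_dls)                                  # 111111
print(n_ids)                                  # 123456
print(round(100 * (n_ids - n_dls) / n_dls))   # 11 (percent overhead)
```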
  • 50. Shiwani Gupta 59 Properties of iterative deepening search • Complete? Yes • Time? (d+1)b^0 + d·b^1 + (d-1)b^2 + … + b^d = O(b^d), less than BFS • Space? O(bd), where d is the depth of the shallowest goal • Optimal? Yes The higher the branching factor, the lower the overhead of repeatedly expanded states Iterative deepening is the preferred search method when there is a large search space and the depth of the solution is not known.
  • 51. Shiwani Gupta 60 Summary of algorithms b is the branching factor, d is the depth of the solution, m is the maximum depth of the search tree, l is the depth limit.
  • 52. Shiwani Gupta 61 University Questions • Compare different uninformed search strategies • Explain BFS and DFS Algo • Explain DFS on a graph • Advantage, disadvantage and applications of DFS, BFS • Solve 8 puzzle by DFS and BFS (assignment) • Compare and contrast DFS and BFS • Explain techniques to overcome the drawback of DFS and BFS • Write short note on Iterative Deepening Search Beyond Syllabus • Uniform Cost • Bidirectional
  • 53. Chapter 3.2 INFORMED SEARCH METHODS Heuristic (or informed) strategies exploit such information to assess that one node is “more promising” than another
  • 54. Using heuristic search, we assign a quantitative value called a heuristic value (h value) to each node. This quantitative value shows the relative closeness of the node to the goal state. For example, consider solving the 8-puzzle Heuristic search Initial and goal states for heuristic search Shiwani Gupta 63
  • 55. The heuristic values for the first step Shiwani Gupta 64
  • 56. Heuristic search for solving the 8-puzzle Shiwani Gupta 65
  • 57. Best First Search • Combines the advantages of both DFS and BFS into a single method. • DFS is good because it allows a solution to be found without all competing branches having to be expanded. • BFS is good because it does not get trapped exploring dead-end paths. • One way of combining the two is to follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one does. Shiwani Gupta 66
  • 58. Shiwani Gupta 67 (Seemingly) Best-first search • Idea: use an evaluation function f(n) for each node – estimate of "desirability" →Expand most desirable unexpanded node • Implementation: Order the nodes in fringe in decreasing order of desirability (Priority Queue) • Special cases: – greedy best-first search – A* search
  • 59. 68 Real-World Problem: Touring in Romania [Figure: road map of Romania showing the cities — Arad, Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, Neamt — and the step costs in km between them.] Aim: find a course of action that satisfies a number of specified conditions Shiwani Gupta
  • 60. Shiwani Gupta 69 Greedy best-first search (minimize estimated cost to reach a goal) • Evaluation function f(n) = h(n) (heuristic = estimate of cost from n to goal) • e.g., hSLD(n) = straight-line distance from n to Bucharest • Greedy best-first search expands the node that appears to be closest to goal (cheapest/shortest path)
  • 61. 70 Romania with step costs in km [Figure: map fragment annotated with straight-line distances to Bucharest, e.g. 374, 329, 253.] Shiwani Gupta
  • 66. Best First Search [Figure: Steps 1–5 of a best-first expansion from root A — at each step the open node with the lowest heuristic value is expanded and its children are added to the queue.] Shiwani Gupta 75
  • 67. Shiwani Gupta 76 Properties of greedy best-first search • Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → • Time? O(b^m), but a good heuristic can give dramatic improvement • Space? O(b^m) – keeps all nodes in memory in the worst case • Optimal? No
  • 68. Algorithm: Best First Search (OPEN-list version) 1. Start with OPEN containing just the initial state 2. Until a goal is found or there are no nodes left on OPEN do: a. Pick the best node on OPEN b. Generate its successors c. For each successor do: i. If it has not been generated before, evaluate it, add it to OPEN, and record its parent. ii. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have. Shiwani Gupta 77
  • 69. Algorithm: Best First Search 1. Create single member priority queue comprising of root node. 2. If 1st member of queue is goal node goto step 5. 3. i) If 1st member of queue is not goal node, then remove it from queue and add it to list of visited nodes. ii) Consider its child nodes if any, evaluate them using evaluation function f(n)=h(n), add them to the queue and reorder the states on queue by heuristic merit (best leftmost). 4. If queue is empty goto step 6 else goto step 2. 5. Print success and stop. 6. Print fail and stop. Shiwani Gupta 78
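The algorithm above can be sketched with a priority queue ordered by f(n) = h(n); the graph and heuristic values below are hypothetical:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Best-first search ordered purely by the heuristic f(n) = h(n)."""
    frontier = [(h[start], start, [start])]      # priority queue keyed on h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # lowest-h node expanded first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(frontier, (h[child], child, path + [child]))
    return None

# Hypothetical graph and straight-line-style heuristic values
g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(greedy_best_first(g, h, 'A', 'D'))  # ['A', 'C', 'D']
```

The heap plays the role of the OPEN list; the `visited` set plays the role of CLOSED.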
  • 70. Applications of Best First Search • Games (Minesweeper) • Web Crawlers • Task Scheduling Shiwani Gupta 79
  • 71. Shiwani Gupta 81 An A* algorithm is an admissible best-first search algorithm that aims at minimizing the total cost along a path from start to goal: f*(n) = g(n) + h*(n), where g(n) is the actual cost to reach n, h*(n) is the estimate of the cost to reach the goal from n, and f*(n) is the estimate of the total cost along the path through n. A* search (1968) (minimizing the total path cost)
  • 72. Shiwani Gupta 82–87 A* search example [Figures: six slides showing successive expansion steps of A* on the Romania map.]
  • 78. A* Algorithm 1. Start with OPEN containing only initial node. Set that node’s g value to 0, its h* value to whatever it is, and its f* value to h*+0 or h*. Set CLOSED to empty list. 2. Until a goal node is found, repeat the following procedure: If there are no nodes on OPEN, report failure. Otherwise pick the node on OPEN with the lowest f* value. Call it BESTNODE. Remove it from OPEN. Place it in CLOSED. See if the BESTNODE is a goal state. If so exit and report a solution. Otherwise, generate the successors of BESTNODE but do not set the BESTNODE to point to them yet. Shiwani Gupta 88
  • 79. A* Algorithm 1. Create single member priority queue comprising of root node. 2. If 1st member of queue is goal node goto step 5. 3. i) If 1st member of queue is not goal node, then remove it from queue and add it to list of visited nodes. ii) Consider its child nodes if any, evaluate them using evaluation function f*(n)=g(n)+h*(n), add them to the queue and reorder the states on queue by heuristic merit (best leftmost). 4. If queue is empty goto step 6 else goto step 2. 5. Print success and stop. 6. Print fail and stop. Shiwani Gupta 89
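The queue-based version above can be sketched with a priority queue keyed on f*(n) = g(n) + h*(n); the weighted graph and (admissible) heuristic values are hypothetical:

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: order the frontier by f(n) = g(n) + h(n), where g is path cost so far.

    graph maps node -> list of (child, step_cost) pairs; h maps node -> heuristic.
    """
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}                                  # cheapest known g per node
    while frontier:
        f, g_cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g_cost
        if node in best_g and best_g[node] <= g_cost:
            continue                             # a better path was already found
        best_g[node] = g_cost
        for child, step in graph.get(node, []):
            g2 = g_cost + step
            heapq.heappush(frontier, (g2 + h[child], g2, child, path + [child]))
    return None, float('inf')

# Hypothetical weighted graph with an admissible heuristic
g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}
print(a_star(g, h, 'A', 'D'))  # (['A', 'B', 'C', 'D'], 3)
```

With h admissible, the first time the goal is popped its path cost is optimal — here cost 3 via B and C rather than the direct but more expensive edges.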
  • 80. Shiwani Gupta 90 Admissible heuristics • An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic • Example: hSLD(n) (never overestimates the actual road distance) • Theorem: If h*(n) is admissible, A* using TREE-SEARCH is optimal
  • 81. Shiwani Gupta 91 • A heuristic is monotonic/consistent if the estimated cost of reaching any node is always less than the actual cost: h*(n1) – h*(n2) ≤ h(n1) – h(n2) • A heuristic is (globally) optimistic or admissible if the estimated cost of reaching a goal is always less than the actual cost: h*(n) ≤ h(n), where h*(n) is the estimate of the cost to reach the goal from n and h(n) is the actual (unknown) cost to reach the goal from n
  • 82. Shiwani Gupta 92 Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares misplaced from desired location of each tile) • h1(S) = ? • h2(S) = ?
  • 83. Shiwani Gupta 93 Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) • h1(S) = ? 8 • h2(S) = ? 3+1+2+2+2+3+3+2 = 18
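The two values can be reproduced in code. The start and goal states below are the standard textbook example (an assumption, since the slide's figure is not shown), read row by row with 0 for the blank:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square (3x3 board)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

# Assumed states: the standard 8-puzzle example from the textbook figure
start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
goal  = (0, 1, 2, 3, 4, 5, 6, 7, 8)
print(h1(start, goal))  # 8  — every tile is misplaced
print(h2(start, goal))  # 18 — 3+1+2+2+2+3+3+2
```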
  • 84. Shiwani Gupta 94 Properties of A* • Complete? Yes • Time? Exponential • Space? Keeps all nodes in memory • Optimal? Yes Optimality → Completeness (not vice versa)
  • 85. Shiwani Gupta 95 Optimality of A* (proof) • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. • f(G2) = g(G2) since h(G2) = 0 • g(G2) > g(G) since G2 is suboptimal • f(G) = g(G) since h(G) = 0 • f(G2) > f(G) from above
  • 86. Shiwani Gupta 96 Optimality of A* (proof) • Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. • f(G2) > f(G) from above • h*(n) ≤ h(n) since h* is admissible • g(n) + h*(n) ≤ g(n) + h(n), i.e., f(n) ≤ f(G), since g(n) + h(n) is the actual cost to the goal through n Hence f(G2) > f(n), and A* will never select G2 for expansion
  • 87. Applications of A* • Common pathfinding problems • Graph traversal • Parsing in NLP • It is a special case of branch and bound (BnB) • Multiple sequence alignment • Dijkstra’s algorithm is a special case of A* – when the heuristic is 0, f(x) = g(x) Shiwani Gupta 98
  • 88. University Questions • Consider a 8-puzzle problem and a heuristic function given by taking a sum of the distances of the tiles that are in proper positions. Tiles in proper positions have value equal to zero. Use the A* algorithm to draw a solution tree for the 8 puzzle problem. Indicate clearly the values you consider at each step. • How Best First Search will be applicable to 8 puzzle problem. • When would Best First Search be worst than simple Breadth First Search. • Best First Search uses both OPEN and CLOSED list. Describe purpose of both with example. • Explain A* search algorithm with example. • What is heuristics? A* search uses a combined heuristic to select the best path to follow through the state space towards the goal. Define the two heuristics used. • What do you mean by admissible heuristic function. Discuss with suitable example. • What is heuristic function? How will you find suitable heuristic function? Give suitable example. • Describe A* algorithm with merits and demerits.. • Prove that A* search algorithm is complete and optimal among all search algorithms. Show that A* is optimally efficient. • Write short note on admissibility of A*. • Prove that A* is admissible if it uses a monotone heuristic. Shiwani Gupta 99
  • 89. Shiwani Gupta 100 Assignment Question How will A* get from Iasi to Fagaras?
  • 90. Shiwani Gupta 102 Memory-bounded heuristic search • A* may run into space limitations • MBHS attempts to avoid space complexity – IDA* = iterative deepening A* – RBFS = recursive best-first search – MA*, SMA* = memory-bounded A*
  • 91. Shiwani Gupta 103 IDA* • A* and best-first keep all nodes in memory • Combination of A* and DFS • IDA* (iterative deepening A*) is similar to standard iterative deepening, except the cutoff used is the f-cost (g+h), rather than the depth (thus it doesn’t go to the same depth everywhere in the tree) • The cutoff value is set to the smallest f-cost of any node that exceeded the cutoff in the previous iteration • Complete and Optimal • Time O(b^m), Space O(mb)
  • 92. PROs: IDA* • Always finds the optimal solution, if one exists, for an admissible heuristic • Uses far less memory, which grows only linearly, since it does not store visited nodes and forgets a path once it exceeds the current bound Shiwani Gupta 104
  • 93. CONs: IDA* • Between iterations, it retains only the current f-cost limit • Since it cannot remember its history, it repeats states Shiwani Gupta 105
  • 94. Shiwani Gupta 106 RBFS • Recursive best-first search (RBFS) tries to mimic best-first search in linear space • Similar to recursive DFS • Keeps track of the f-value of the best leaf in the forgotten subtree so that it can be re-expanded in the future • RBFS only keeps the current search path and the sibling nodes along the path • If the current node exceeds that limit, it unwinds back to the alternative path • The cost function is non-monotonic • It will often have to re-expand discarded subtrees
  • 95. Shiwani Gupta 107 RBFS f-value of best alternate path
  • 98. Performance Measure • Completeness: yes • Optimality: yes, for an admissible heuristic • Time Complexity: actual number of nodes generated (depends on the cost function) • Space Complexity: O(bd) Shiwani Gupta 110
  • 99. Shiwani Gupta 112 Memory-bounded A* • MA* and SMA* (simplified MA*) work like A* until memory is full. • SMA*: if memory is full, drop the worst leaf node and push its f-value back to its parent • Subtrees are regenerated only when all other paths have been shown to be worse than the forgotten path • Thrashing behavior can result if memory is small compared to size of candidate subpaths
  • 100. MA* Once a preset limit is reached (in memory), the algorithm prunes the open list by highest f-cost Shiwani Gupta 113 SMA* Improves upon MA* by: 1. Using a more efficient data structure for the open list (binary tree), sorted by f-cost and depth 2. Only maintaining two f-cost quantities (instead of four with MA*) 3. Pruning one node at a time (the worst f-cost) 4. Retaining backed-up f-costs for pruned paths
  • 101. Performance Measure • Completeness: yes (if available memory is sufficient to store the shallowest solution path) • Optimality: yes, for an admissible heuristic (if enough memory is available to store the shallowest solution path) Shiwani Gupta 114
  • 102. University Questions • Discuss blind search and informed search. Hence discuss merits and demerits of each. • Write note on comparative analysis of search techniques. • Describe IDA* search algorithm giving suitable example. • Compare the following informed search algorithms based on all performance measures with justification: – Greedy Best First – A* – Recursive Best First • Prove that A* Search algorithm is complete and optimal among all search algorithms. • How Best First Search will be applicable to 8 puzzle game. Shiwani Gupta 115
  • 103. Shiwani Gupta 116 Local Search Algorithms • Exponential growth of the solution space for most of the practical problems • In many optimization problems, the path to the goal is irrelevant; the goal state itself is the solution • State space = set of "complete" configurations • Find configuration satisfying constraints, e.g., n-queens • In such cases, we can use local search algorithms • keep a single "current" state, try to improve it
  • 104. Shiwani Gupta 117 Example of Local Search Algorithm: Hill Climbing / Gradient Descent [Figure: objective landscape showing an initial solution, the neighbourhood of the current solution, a local optimum, and the global optimum.]
  • 105. Shiwani Gupta 118 4 - Queens • States: 4 queens in 4 columns (256 states) • Neighborhood Operators: move queen in column • Goal test: no attacks • Evaluation: h(n) = number of attacks
  • 106. Shiwani Gupta 119 Advantages of Local search • Two advantages – Use little memory – More applicable in searching large/infinite search space • Completeness: guaranteed to find a solution if exists • Optimality: solution found is optimal • Local search algorithms are also useful for optimization problems • Goal: find a state such that the objective function is optimized
  • 107. Shiwani Gupta 120 Application of local search algorithms • planning, scheduling, routing, configuration, protein folding, etc. • In fact, many (most?) large Operation Research problems are solved using local search methods (e.g., airline scheduling).
  • 108. Shiwani Gupta 121 Problems in Hill-Climbing search depending on initial state, can get stuck in local maxima
  • 109. Shiwani Gupta 122 Drawbacks • Local maxima / foothill: a local maximum is a peak that is lower than the highest peak in the state space. Once on a local maximum, the algorithm will halt even though the solution may be far from satisfactory. (Maze: may have to move AWAY from goal to find (best) solution) • Plateau: a plateau is an area of the state space where the evaluation function is essentially flat. The search will conduct a random walk. (8-puzzle: perhaps no action will change # of tiles out of place) • Ridges: a sequence of local maxima which cannot be searched in a single move
  • 110. Shiwani Gupta 124 Hill Climbing - Algorithm 1. Pick a random point in the search space 2. Consider all the neighbours of the current state 3. Choose the neighbour with the best quality and move to that state 4. Repeat steps 2 and 3 until all the neighbouring states are of lower quality 5. Return the current state as the solution state
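The steps above as a minimal steepest-ascent sketch; the objective function and integer neighbourhood are toy examples chosen for illustration:

```python
def hill_climb(evaluate, neighbours, start):
    """Steepest-ascent hill climbing: move to the best neighbour until
    no neighbour is better than the current state."""
    current = start
    while True:
        best = max(neighbours(current), key=evaluate)   # best-quality neighbour
        if evaluate(best) <= evaluate(current):
            return current              # local (possibly global) maximum
        current = best

# Toy objective: maximise -(x - 3)^2 over the integers
evaluate = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(evaluate, neighbours, 0))  # 3
```

On a multi-peaked objective, this same loop would stop at whichever peak the start point leads to — the local-maximum drawback listed above.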
  • 111. Shiwani Gupta 125 Hill-Climbing Search "Like climbing Everest in thick fog with amnesia"
  • 112. Shiwani Gupta 126 Solutions • Stochastic hill-climbing – choose at random from among the uphill moves. • First-choice hill climbing – generates successors randomly until one is generated that is better than the current state. • Simulated annealing – use conventional hill-climbing style techniques, but occasionally take a step in a direction other than that in which there is improvement (downhill moves). As time passes, the probability that a downhill step is taken is gradually reduced and the size of any downhill step taken is decreased. • Local beam search – run k random starting points in parallel, always keeping the k most promising states. • Random-restart hill climbing – simply restart at a new random state after a pre-defined number of steps. • Genetic Algorithms
  • 113. Local Beam search • Keep track of k states instead of one – Initially: k randomly selected states – Next: determine all successors of k states – If any of successors is goal → finished – Else select k best from successors and repeat. • Major difference with random-restart search – Information is shared among k search threads. • Can suffer from lack of diversity. – Stochastic beam search • choose k successors proportional to state quality. Shiwani Gupta 127
  • 114. Simulated Annealing • analogy to annealing in solids • If you heat a solid past melting point and then cool it, the structural properties of the solid depend on the rate of cooling. If the liquid is cooled slowly enough, large crystals will be formed. However, if the liquid is cooled quickly the crystals will contain imperfections. • idea is to use simulated annealing to search for feasible solutions and converge to an optimal solution. Shiwani Gupta 128
  • 115. Simulated Annealing vs Hill climbing • allowing worse moves (lesser quality) to be taken some of the time. That is, it allows some uphill steps so that it can escape from local minima • Unlike hill climbing, simulated annealing chooses a random move from the neighbourhood (recall that hill climbing chooses the best move from all those available • If the move is better than its current position then simulated annealing will always take it. If the move is worse (i.e. lesser quality) then it will be accepted based on some probabilityShiwani Gupta 129
  • 116. Search using Simulated Annealing Basic ideas: – like hill-climbing, identify the quality of the local improvements – instead of picking the best move, pick one randomly – say the change in objective function is ∆E – if ∆E is positive, then move to that state – otherwise: • move to this state with probability proportional to e^(∆E/T) • thus worse moves (very large negative ∆E) are executed less often – however, there is always a chance of escaping from local maxima – over time, make it less likely to accept locally bad moves – (Can also make the size of the move random as well, i.e., allow “large” steps in state space) Shiwani Gupta 130
  • 117. • Annealing = physical process of cooling a liquid or metal until particles achieve a certain frozen crystal state • Simulated Annealing: – free variables are like particles – seek “low energy” (high quality) configuration – get this by slowly reducing temperature T, which particles move around randomly Shiwani Gupta 131
  • 118. Simulated Annealing function SIMULATED-ANNEALING( problem, schedule) return a solution state input: problem, a problem schedule, a mapping from time to temperature local variables: current, a node. next, a node. T, a “temperature” controlling the probability of downward steps current  MAKE-NODE(INITIAL-STATE[problem]) for t  1 to ∞ do T  schedule[t] if T = 0 then return current next  a randomly selected successor of current ∆E  VALUE[next] - VALUE[current] if ∆E > 0 then current  next else current  next only with probability e∆E /T Shiwani Gupta 132
  • 119. – Let’s say there are 3 moves available, with changes in the objective function of d1 = -0.1, d2 = 0.5, d3 = -5. (Let T = 1). – pick a move randomly: • if d2 is picked, move there. • if d1 or d3 are picked, probability of move = exp(d/T) • move 1: prob1 = exp(-0.1) ≈ 0.9, – i.e., 90% of the time we will accept this move • move 3: prob3 = exp(-5) ≈ 0.007 – i.e., less than 1% of the time we will accept this move – T = “temperature” parameter • high T => probability of “locally bad” move is higher • low T => probability of “locally bad” move is lower • typically, T is decreased as the algorithm runs longer – i.e., there is a “temperature schedule” Shiwani Gupta 133
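A compact Python sketch of the pseudocode above, using a geometric cooling schedule; the toy objective, neighbour function, and schedule parameters are illustrative assumptions rather than part of the original:

```python
import math
import random

def simulated_annealing(evaluate, neighbour, start, t0=1.0, cooling=0.995, steps=5000):
    """Pick a random neighbour; always accept improvements, and accept
    worse moves with probability exp(dE/T); lower T geometrically over time."""
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbour(current)
        d_e = evaluate(nxt) - evaluate(current)
        if d_e > 0 or random.random() < math.exp(d_e / t):
            current = nxt               # uphill move, or an accepted downhill move
        t *= cooling                    # temperature schedule
    return current

# Toy objective: maximise -(x - 3)^2 over the integers
random.seed(0)
evaluate = lambda x: -(x - 3) ** 2
neighbour = lambda x: x + random.choice([-1, 1])
result = simulated_annealing(evaluate, neighbour, 0)
print(result)
```

Early on (high T) downhill moves are accepted often; by the end T is tiny and the loop behaves like stochastic hill climbing, settling on the maximum.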
  • 120. Genetic Algorithms • Different approach to other search algorithms – A successor state is generated by combining two parent states • A state is represented as a string over a finite alphabet (e.g. binary) – 8-queens • State = position of 8 queens each in a column => 8 x log(8) bits = 24 bits (for binary representation) • Start with k randomly generated states (population) • Evaluation function (fitness function). – Higher values for better states. – Opposite to heuristic function, e.g., # non-attacking pairs in 8- queens • Produce the next generation of states by “simulated evolution” – Random selection – Crossover – Random mutation Shiwani Gupta 134
  • 121. Genetic Algorithms Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28) • 24/(24+23+20+11) = 31% • 23/(24+23+20+11) = 29% etc 4 states for 8-queens problem 2 pairs of 2 states randomly selected based on fitness. Random crossover points selected New states after crossover Random mutation applied Shiwani Gupta 135
  • 122. Genetic Algorithm pseudocode function GENETIC_ALGORITHM( population, FITNESS-FN) return an individual input: population, a set of individuals FITNESS-FN, a function which determines the quality of the individual repeat new_population  empty set loop for i from 1 to SIZE(population) do x  RANDOM_SELECTION(population, FITNESS_FN) y  RANDOM_SELECTION(population, FITNESS_FN) child  REPRODUCE(x,y) if (small random probability) then child  MUTATE(child ) add child to new_population population  new_population until some individual is fit enough or enough time has elapsed return the best individual Shiwani Gupta 136
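The pseudocode above can be sketched concretely on a toy "all ones" bit-string problem; the population size, mutation rate, and generation count are illustrative choices, not values from the slides:

```python
import random

def genetic_algorithm(population, fitness, mutate, generations=100, p_mutate=0.1):
    """One generation = fitness-proportional selection, single-point
    crossover, and occasional random mutation."""
    for _ in range(generations):
        weights = [fitness(ind) for ind in population]
        new_population = []
        for _ in range(len(population)):
            x, y = random.choices(population, weights=weights, k=2)  # select parents
            cut = random.randrange(1, len(x))                        # crossover point
            child = x[:cut] + y[cut:]
            if random.random() < p_mutate:                           # small probability
                child = mutate(child)
            new_population.append(child)
        population = new_population
    return max(population, key=fitness)

# Toy problem: evolve an 8-bit string of all 1s (fitness = number of 1s)
random.seed(1)

def flip_random_bit(s):
    i = random.randrange(len(s))
    return s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]

pop = [''.join(random.choice('01') for _ in range(8)) for _ in range(20)]
best = genetic_algorithm(pop, lambda s: s.count('1'), flip_random_bit)
print(best, best.count('1'))
```

For 8-queens the string would instead encode one queen position per column, with fitness counting non-attacking pairs, as in the example on the previous slides.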
  • 123. Comments on Genetic Algorithms • Positive points – Random exploration can find solutions that local search can’t • (via crossover primarily) – Appealing connection to human evolution • E.g., see related area of genetic programming • Negative points – Large number of “tunable” parameters • Difficult to replicate performance from one problem to another – Lack of good empirical studies comparing to simpler methods – Useful on some (small?) set of problems but no convincing evidence that GAs are better than hill-climbing w/r random restarts in general Shiwani Gupta 137
  • 124. Shiwani Gupta 138 Example: Searching for a computer program (‘Genetic Programming’) Extension of GA for evolving computer programs – represent programs as LISP expressions – e.g. (IF (GT (x) (0)) (x) (-x)) [Figure: the same expression drawn as a parse tree with IF at the root.]
  • 125. Shiwani Gupta 139 Electronic Filter Circuit Design • Individuals are programs that transform a beginning circuit to a final circuit by adding/subtracting components and connections • Fitness: computed by simulating the circuit • A population of 640,000 has been run on a parallel processor • After 137 generations, the discovered circuits exhibited performance competitive with the best human designs
  • 126. University Questions • Define heuristic function. Give an example heuristics function for Blocks World problem. • Find heuristics value for a particular state of the Blocks World Problem. • What are the problems / frustrations that occur in Hill Climbing Technique? Illustrate with an example. • Write a short note on genetic algorithm. • Explain Hill Climbing algorithm with an example. • Explain Local Beam Search Algorithm in detail. Shiwani Gupta 140