Lecture # 2
Solving Problems
By:
Syed Ali Raza
Outline
 Problem-solving agents
 Problem types
 Problem formulation
 Example problems
 Basic search algorithms
Introduction
 Goal-based agents can succeed by considering future
actions and desirability of their outcomes.
 A problem-solving agent is a goal-based agent that
decides what to do by finding sequences of actions that
lead to desirable states
Problem solving
 We want:
 To automatically solve a problem
 We need:
 A representation of the problem
 Algorithms that use some strategy to solve the problem
defined in that representation
Problem representation
 General:
– State space: a problem is divided into a set of
resolution steps from the initial state to the goal state
– Reduction to sub-problems: a problem is arranged
into a hierarchy of sub-problems
States
 A problem is defined by its elements and their
relations.
 At each instant in the resolution of a problem,
those elements have specific descriptors (how do we
select them?) and relations.
 A state is a representation of those elements in a
given moment.
 Two special states are defined:
– Initial state (starting point)
– Final state (goal state)
State modification: successor function
 A successor function is needed to move between
different states.
 A successor function is a description of the possible
actions: a set of operators. It is a transformation
function on a state representation, which converts it
into another state.
 The successor function defines a relation of
accessibility among states.
 Representation of the successor function:
 Conditions of applicability
 Transformation function
State space
 The state space is the set of all states reachable
from the initial state.
 It forms a graph (or map) in which the nodes are
states and the arcs between nodes are actions.
 A path in the state space is a sequence of states
connected by a sequence of actions.
 The solution of the problem is part of the map
formed by the state space.
Problem solution
 A solution in the state space is a path from the
initial state to a goal state.
 Path/solution cost: function that assigns a
numeric cost to each path, the cost of applying the
operators to the states.
 Solution quality is measured by the path cost
function, and an optimal solution has the lowest
path cost among all solutions.
 Solutions: any, an optimal one, all. Cost is
important depending on the problem and the type
of solution sought.
Problem description
 Components:
 State space (explicitly or implicitly defined)
 Initial state
 Goal state (or the conditions it has to fulfill)
 Available actions (operators to change state)
 Restrictions (e.g., cost)
 Elements of the domain which are relevant to the
problem (e.g., incomplete knowledge of the starting
point)
 Type of solution:
 Sequence of operators or goal state
 Any, an optimal one (cost definition needed), all
Problem solving agents
 Intelligent agents are supposed to maximize their performance measure.
 This can be simplified if the agent can adopt a goal and aim at
satisfying it.
 Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving
 A goal is a set of states. The agent’s task is to find out which sequence of
actions will get it to a goal state
 Problem formulation is the process of deciding what sorts of actions
and states to consider, given a goal
Contd..
 An agent with several immediate options of unknown value can decide
what to do by first examining different possible sequences of actions
that lead to states of known value, and then choosing the best sequence,
 Looking for such a sequence is called search,
 A search algorithm takes a problem as input and returns a solution in
the form of action sequence,
 Once a solution is found, the actions it recommends can be carried out –
the execution phase.
Contd..
 “formulate, search, execute” design for the agent,
 After formulating a goal and a problem to solve the agent calls a search
procedure to solve it,
 It then uses the solution to guide its actions, doing whatever the
solution recommends as the next thing to do (typically the first action in
the sequence),
 Then removing that step from the sequence,
 Once the solution has been executed, the agent will formulate a new
goal.
Problem-solving agents
Problem types
 Deterministic, fully observable → single-state problem
 Agent knows exactly which state it will be in;
 Non-observable → sensorless problem (conformant
problem)
 Agent may have no idea where it is;
 Nondeterministic and/or partially observable →
contingency problem
 percepts provide new information about current state
 Unknown state space → exploration problem
Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
 Formulate goal:
 be in Bucharest
 Formulate problem:
 states: various cities
 actions: drive between cities
 Find solution:
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: Romania
Well-defined problems and solutions
 A problem can be defined formally by four components
 Initial state that the agent starts in – e.g. In(Arad)
 A description of the possible actions available to the agent
– Successor function – returns a set of <action, successor> pairs
– e.g. {<Go(Sibiu),In(Sibiu)>, <Go(Timisoara),In(Timisoara)>, <Go(Zerind), In(Zerind)>}
– Initial state and the successor function define the state space (a graph in which the nodes are
states and the arcs between nodes are actions). A path in state space is a sequence of states
connected by a sequence of actions
 Goal test determines whether a given state is a goal state – e.g. {In(Bucharest)}
 Path cost function that assigns a numeric cost to each path. The cost of a path can be described
as the sum of the costs of the individual actions along the path – step cost – e.g. the time to reach
Bucharest
Single-state problem formulation
A problem is defined by four items:
1. initial state e.g., "at Arad"
2. actions or successor function S(x) = set of action–state pairs
 e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, can be
 explicit, e.g., x = "at Bucharest"
4. path cost (additive)
 e.g., sum of distances, number of actions executed, etc.
 step cost, assumed to be ≥ 0
 A solution is a sequence of actions leading from the initial state to a goal state
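The four-component formulation above can be sketched in code. This is a minimal illustration using the Romania example; the `Problem` class, its method names, and the partial road network below are my own illustrative choices, not part of the lecture.

```python
# A sketch of the four-component problem definition: initial state,
# successor function, goal test, path cost. Road data is a partial,
# hypothetical subset of the Romania map.

ROADS = {  # successor function: state -> list of <action, successor> pairs
    "Arad": [("Go(Sibiu)", "Sibiu"), ("Go(Timisoara)", "Timisoara"), ("Go(Zerind)", "Zerind")],
    "Sibiu": [("Go(Fagaras)", "Fagaras"), ("Go(Arad)", "Arad")],
    "Fagaras": [("Go(Bucharest)", "Bucharest"), ("Go(Sibiu)", "Sibiu")],
    "Timisoara": [("Go(Arad)", "Arad")],
    "Zerind": [("Go(Arad)", "Arad")],
    "Bucharest": [],
}

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial          # initial state, e.g. In(Arad)
        self.goal = goal

    def successors(self, state):        # the successor function S(x)
        return ROADS[state]

    def goal_test(self, state):         # goal test
        return state == self.goal

    def path_cost(self, path):          # additive path cost: 1 per step here
        return len(path) - 1

problem = Problem("Arad", "Bucharest")
print(problem.successors("Arad"))
```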
Example: The 8-puzzle
 states?
 actions?
 goal test?
 path cost?
Example: The 8-puzzle
 states? locations of tiles
 actions? move blank left, right, up, down
 goal test? = goal state (given)
 path cost? 1 per move
Tree search algorithms
 Basic idea:
 offline, simulated exploration of state space by
generating successors of already-explored states
(a.k.a. expanding states)
Tree search example
Search strategies
 A search strategy is defined by picking the order of node
expansion
 Strategies are evaluated along the following dimensions:
 completeness: does it always find a solution if one exists?
 time complexity: number of nodes generated
 space complexity: maximum number of nodes in memory
 optimality: does it always find a least-cost solution?
 Time and space complexity are measured in terms of
 b: maximum branching factor of the search tree
 d: depth of the least-cost solution
 m: maximum depth of the state space (may be ∞)
Uninformed search algorithms
Uninformed search strategies
 Uninformed search strategies use only the information
available in the problem definition
 Breadth-first search
 Uniform-cost search
 Depth-first search
 Depth-limited search
 Iterative deepening search
Breadth-first search
 The root node is expanded first, then all the successors of the root
node, and their successors and so on
 In general, all the nodes are expanded at a given depth in the search tree
before any nodes at the next level are expanded
 Expand shallowest unexpanded node
 Implementation:
– fringe is a FIFO queue,
– the nodes that are visited first will be expanded first
–All newly generated successors will be put at the end of the queue
– Shallow nodes are expanded before deeper nodes
[Search tree used in the following trace — level 0: A (initial state); level 1: B C D; level 2: E F G H I J; level 3: K L M N O P Q R (N is the goal state); level 4: S T U]
Fringe: A (FIFO)
Successors: B,C,D
Visited:
Breadth-first Search
The fringe is the data structure we use to store all of the
nodes that have been generated
Fringe: B,C,D (FIFO)
Successors: E,F
Visited: A
Next node
Breadth-first Search
Fringe: C,D,E,F (FIFO)
Successors: G,H
Visited: A, B
Next node
Breadth-first Search
Fringe: D,E,F,G,H (FIFO)
Successors: I,J
Visited: A, B, C
Next node
Breadth-first Search
Fringe: E,F,G,H,I,J (FIFO)
Successors: K,L
Visited: A, B, C, D
Next node
Breadth-first Search
Fringe: F,G,H,I,J,K,L (FIFO)
Successors: M
Visited: A, B, C, D, E
Next node
Breadth-first Search
Fringe: G,H,I,J,K,L,M (FIFO)
Successors: N
Visited: A, B, C, D, E, F
Next node
Breadth-first Search
Fringe: H,I,J,K,L,M,N (FIFO)
Successors: O
Visited: A, B, C, D, E, F, G
Next node
Breadth-first Search
Fringe: I,J,K,L,M,N,O (FIFO)
Successors: P,Q
Visited: A, B, C, D, E, F, G, H
Next node
Breadth-first Search
Fringe: J,K,L,M,N,O,P,Q (FIFO)
Successors: R
Next node
Visited: A, B, C, D, E, F, G, H, I
Breadth-first Search
Fringe: K,L,M,N,O,P,Q,R (FIFO)
Successors: S
Next node
Visited: A, B, C, D, E, F, G, H, I, J
Breadth-first Search
Fringe: L,M,N,O,P,Q,R,S (FIFO)
Successors: T
Next node
Visited: A, B, C, D, E, F, G, H, I, J, K
Breadth-first Search
Fringe: M,N,O,P,Q,R,S,T (FIFO)
Successors:
Next node
Visited: A, B, C, D, E, F, G, H, I, J, K, L
Breadth-first Search
Fringe: N,O,P,Q,R,S,T (FIFO)
Successors:
Next node
Visited: A, B, C, D, E, F, G, H, I, J, K, L, M
Goal state achieved
Breadth-first Search
Breadth-first Search
Algorithm BREADTH: breadth-first search in state space
Let fringe be a list containing the initial state
Loop:
 if fringe is empty, return failure
 Node ← remove-first(fringe)
 if Node is a goal,
 then return the path from initial state to Node
 else generate all successors of Node, and
 add the generated nodes to the back of fringe
End Loop
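The algorithm above can be sketched as runnable code. The tree below is reconstructed from the slide trace (root A, goal N); treat it as illustrative lecture data.

```python
from collections import deque

# Breadth-first search: FIFO fringe of paths; new successors go to the
# back, so shallow nodes are expanded before deeper nodes.

TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def breadth_first_search(start, goal):
    fringe = deque([[start]])            # FIFO queue of paths
    while fringe:                        # if fringe is empty -> failure
        path = fringe.popleft()          # Node <- remove-first(fringe)
        node = path[-1]
        if node == goal:                 # goal test
            return path                  # path from initial state to Node
        for succ in TREE.get(node, []):  # generate all successors
            fringe.append(path + [succ]) # add to the BACK of the fringe
    return None

print(breadth_first_search("A", "N"))    # ['A', 'C', 'G', 'N']
```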
Properties of breadth-first search
 Complete? Yes (if b is finite)
 Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
 Space? O(b^(d+1)) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)
 Space is the bigger problem (more than time)
Uniform-cost search (UCS)
 Uniform cost search is a search algorithm used to traverse
weighted trees and graphs and find the shortest path.
 Uniform Cost Search or UCS begins at a root node and
continually expands nodes, taking the node with the smallest
total cost from the root, until it reaches the goal state.
 Uniform cost search doesn't care about how many steps a
path has, only the total cost of the path.
 UCS with all path costs equal to one is identical to breadth-first
search.
Uniform Cost Search
[Example graph used in the trace: S → A (cost 1), S → B (cost 5), S → C (cost 15), A → D (cost 10), B → D (cost 5)]
Similar to BFS except that it sorts (in ascending order) the
nodes in the fringe according to the cost of the node,
where cost is the path cost.
Uniform Cost Search
Fringe = [S0]
Next Node = Head of Fringe = S; S is not the goal
Successors(S) = {C,B,A} = expand(S), but sort them according to path cost.
Updated Fringe = [A1, B5, C15]
Uniform Cost Search
Fringe = [A1, B5, C15]
Next Node = Head of Fringe = A; A is not the goal
Successors(A) = {D} = expand(A)
Sort the queue according to path cost.
Updated Fringe = [B5, D11, C15]
Uniform Cost Search
Fringe = [B5, D11, C15]
Next Node = Head of Fringe = B; B is not the goal
Successors(B) = {D} = expand(B)
Sort the queue according to path cost.
Updated Fringe = [D10, D11, C15]
Uniform Cost Search
Fringe = [D10, D11, C15]
Next Node = Head of Fringe = D
D is a GOAL (cost 10 = 5 + 5)
Always finds the cheapest solution.
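The worked trace above can be reproduced with a priority queue. The graph is the one reconstructed from the trace (S-A: 1, S-B: 5, S-C: 15, A-D: 10, B-D: 5).

```python
import heapq

# Uniform-cost search: fringe is a priority queue ordered by path cost;
# the goal test is applied when a node is removed for expansion.

GRAPH = {
    "S": [("A", 1), ("B", 5), ("C", 15)],
    "A": [("D", 10)],
    "B": [("D", 5)],
    "C": [],
    "D": [],
}

def uniform_cost_search(start, goal):
    fringe = [(0, [start])]                       # (path cost, path)
    while fringe:
        cost, path = heapq.heappop(fringe)        # cheapest node first
        node = path[-1]
        if node == goal:                          # test on expansion, not generation
            return cost, path
        for succ, step in GRAPH.get(node, []):
            heapq.heappush(fringe, (cost + step, path + [succ]))
    return None

print(uniform_cost_search("S", "D"))              # (10, ['S', 'B', 'D'])
```

As in the trace, the path S, A, D (cost 11) is generated but never returned, because S, B, D (cost 10) reaches the head of the queue first.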
Uniform-cost search
 Expand least-cost unexpanded node
 Implementation:
 fringe = queue ordered by path cost
 Equivalent to breadth-first if step costs all equal
 Complete? Yes, if step cost ≥ ε
 Time? # of nodes with g ≤ cost of optimal solution, O(b^(1 + ⌊C*/ε⌋)), where
C* is the cost of the optimal solution
 Space? # of nodes with g ≤ cost of optimal solution, O(b^(1 + ⌊C*/ε⌋))
 Optimal? Yes – nodes expanded in increasing order of g(n)
Depth First Search - Method
 Expand Root Node First
 Explore one branch of the tree before exploring another
branch
 If a leaf node does not represent a goal state, the search
backtracks up to the next highest node that has an
unexplored path
DFS
 Depth-first search (DFS) is an algorithm for
traversing or searching a tree or graph.
One starts at the root (selecting some node as the root
in the graph case) and explores as far as possible along
each branch before backtracking.
DFS
 DFS is an uninformed search that starts from the root node
of the search tree and goes deeper and deeper until a
goal node is found, or until it hits a node that has no
children. Then the search backtracks, returning to the
most recent node it hasn't finished exploring.
[Search tree as in the breadth-first example — initial state: A, goal state: N]
Fringe: A (LIFO)
Successors: B,C,D
Visited:
Depth-First Search
Fringe: B,C,D (LIFO)
Successors: E,F
Visited: A
Depth-First Search
Fringe: E,F,C,D (LIFO)
Successors: K,L
Visited: A, B
Depth-First Search
Fringe: K,L,F,C,D (LIFO)
Successors: S
Visited: A, B, E
Depth-First Search
Fringe: S,L,F,C,D (LIFO)
Successors:
Visited: A, B, E, K
Depth-First Search
Fringe: L,F,C,D (LIFO)
Successors: T
Backtracking
Visited: A, B, E, K, S
Depth-First Search
Fringe: T,F,C,D (LIFO)
Successors:
Visited: A, B, E, K, S, L
Depth-First Search
Fringe: F,C,D (LIFO)
Successors: M
Backtracking
Visited: A, B, E, K, S, L, T
Depth-First Search
Fringe: M,C,D (LIFO)
Successors:
Visited: A, B, E, K, S, L, T, F
Depth-First Search
Fringe: C,D (LIFO)
Successors: G,H
Backtracking
Visited: A, B, E, K, S, L, T, F, M
Depth-First Search
Fringe: G,H,D (LIFO)
Successors: N
Visited: A, B, E, K, S, L, T, F, M, C
Depth-First Search
Fringe: N,H,D (LIFO)
Successors:
Goal state achieved
Visited: A, B, E, K, S, L, T, F, M, C, G
Finished search
Depth-First Search
Depth First Search
Let fringe be a list containing the initial state
Loop:
 if fringe is empty, return failure
 Node ← remove-first(fringe)
 if Node is a goal,
 then return the path from initial state to Node
 else generate all successors of Node, and
 add the generated nodes to the front of fringe
End Loop
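The only change from the breadth-first sketch is where new nodes enter the fringe: the front instead of the back. Using the same reconstructed lecture tree (root A, goal N):

```python
# Depth-first search: LIFO fringe; successors are placed at the FRONT,
# so one branch is explored fully before its siblings.

TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def depth_first_search(start, goal):
    fringe = [[start]]                       # list of paths, treated as a stack
    while fringe:
        path = fringe.pop(0)                 # Node <- remove-first(fringe)
        node = path[-1]
        if node == goal:
            return path
        successors = [path + [s] for s in TREE.get(node, [])]
        fringe = successors + fringe         # add new nodes to the FRONT
    return None

print(depth_first_search("A", "N"))          # ['A', 'C', 'G', 'N']
```

The nodes are expanded in the order of the trace above (A, B, E, K, S, L, T, F, M, C, G) before the goal N is reached.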
Properties of depth-first search
 Complete? No: fails in infinite-depth spaces,
spaces with loops
 Modify to avoid repeated states along path
 complete in finite spaces
 Time? O(b^m): terrible if m is much larger than d
 Space? O(bm), i.e., linear space!
 Optimal? No
Depth limited search
 Like Depth first search, but the search is limited to a
predefined depth.
 The depth of each state is recorded as it is generated. When
picking the next state to expand, only those with depth less than or
equal to the current depth are expanded.
 Once all the nodes of a given depth are explored, the current
depth is incremented.
Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
 Recursive implementation:
1. Determine the vertex where the search should start and assign the maximum
search depth
2. Check if the current vertex is the goal state
 If yes: return
 If not: continue
3. Check if the current vertex is within the maximum search depth
 If not: do nothing
 If yes: expand the vertex and save all of its successors in a stack
4. Call DLS recursively for all vertices of the stack and go back to Step 2
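A simplified recursive sketch of the steps above, on the reconstructed lecture tree (goal N sits at depth 3, so a limit of 2 fails and a limit of 3 succeeds):

```python
# Recursive depth-limited search: goal check first, then the depth
# limit; recursion replaces the explicit successor stack.

TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def depth_limited_search(node, goal, limit):
    if node == goal:                     # Step 2: goal check
        return [node]
    if limit == 0:                       # Step 3: outside the depth limit
        return None
    for succ in TREE.get(node, []):      # expand and recurse on successors
        result = depth_limited_search(succ, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

print(depth_limited_search("A", "N", 2))   # None -- N is at depth 3
print(depth_limited_search("A", "N", 3))   # ['A', 'C', 'G', 'N']
```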
Iterative deepening search
until solution found do
 DFS with depth cutoff c
 c = c + 1
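The loop above is just depth-limited search restarted with a growing cutoff. A self-contained sketch on the same reconstructed lecture tree:

```python
# Iterative deepening: run depth-limited search with cutoff
# c = 0, 1, 2, ... until a solution is found.

TREE = {
    "A": ["B", "C", "D"], "B": ["E", "F"], "C": ["G", "H"], "D": ["I", "J"],
    "E": ["K", "L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P", "Q"],
    "J": ["R"], "K": ["S"], "L": ["T"],
}

def dls(node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for succ in TREE.get(node, []):
        result = dls(succ, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(start, goal, max_depth=50):
    for c in range(max_depth + 1):       # c = 0, 1, 2, ... until solution found
        result = dls(start, goal, c)
        if result is not None:
            return result
    return None

print(iterative_deepening_search("A", "N"))   # ['A', 'C', 'G', 'N']
```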
Iterative deepening search with l = 0, 1, 2, 3
Iterative deepening search
 Number of nodes generated in a depth-limited search to depth d
with branching factor b:
NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
 Number of nodes generated in an iterative deepening search to
depth d with branching factor b:
NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1b^d
 For b = 10, d = 5:
 NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
 NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
 Overhead = (123,456 − 111,111)/111,111 = 11%
Properties of iterative deepening search
 Complete? Yes
 Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)
 Space? O(bd)
 Optimal? Yes, if step cost = 1
Summary of algorithms
Informed search algorithms
Outline
 Best-first search
 Greedy best-first search
 A* search
 Heuristics
 Local search algorithms
 Hill-climbing search
 Simulated annealing search
 Local beam search
 Genetic algorithms
Best-first search
 Idea: use an evaluation function f(n) for each node
 f(n) provides an estimate for the total cost.
 Expand the node n with smallest f(n).
 Implementation:
Order the nodes in the fringe in increasing order of cost.
 Special cases:
 greedy best-first search
 A* search
Romania with straight-line dist.
Greedy best-first search
 f(n) = estimate of cost from n to goal
 e.g., hSLD(n) = straight-line distance from n to
Bucharest
 Greedy best-first search expands the node that appears
to be closest to goal.
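A sketch of greedy best-first search on the Romania example, ordering the fringe by h(n) alone. The straight-line distances and road connections below are the standard textbook values, assumed here since the slide itself does not list them.

```python
import heapq

# Greedy best-first search: priority queue ordered by h(n) only;
# g(n), the cost already paid, is ignored.

H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Oradea": 380,
         "Bucharest": 0}

ROADS = {
    "Arad": ["Sibiu", "Timisoara", "Zerind"],
    "Sibiu": ["Arad", "Fagaras", "Oradea", "Rimnicu Vilcea"],
    "Fagaras": ["Sibiu", "Bucharest"],
    "Rimnicu Vilcea": ["Sibiu", "Pitesti"],
    "Pitesti": ["Rimnicu Vilcea", "Bucharest"],
    "Timisoara": ["Arad"], "Zerind": ["Arad"], "Oradea": ["Sibiu"],
    "Bucharest": [],
}

def greedy_best_first(start, goal):
    fringe = [(H_SLD[start], [start])]       # ordered by h(n) only
    visited = set()                          # avoid looping between cities
    while fringe:
        _, path = heapq.heappop(fringe)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for succ in ROADS[node]:
            heapq.heappush(fringe, (H_SLD[succ], path + [succ]))
    return None

print(greedy_best_first("Arad", "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Note that this route (via Fagaras) is not the cheapest one; greedy search picks whatever looks closest to the goal.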
Greedy best-first search example
Properties of greedy best-first search
 Complete? No – can get stuck in loops.
 Time? O(b^m), but a good heuristic can give dramatic
improvement
 Space? O(b^m) – keeps all nodes in memory
 Optimal? No
A* search
 Idea: avoid expanding paths that are already expensive
 Evaluation function f(n) = g(n) + h(n)
 g(n) = cost so far to reach n
 h(n) = estimated cost from n to goal
 f(n) = estimated total cost of path through n to goal
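A sketch of A* on the same Romania fragment, ordering the fringe by f(n) = g(n) + h(n). Road distances and the straight-line heuristic are the standard textbook values, assumed here rather than taken from this slide.

```python
import heapq

# A* search: expand the node with the smallest f = g + h.

H_SLD = {"Arad": 366, "Sibiu": 253, "Fagaras": 176, "Rimnicu Vilcea": 193,
         "Pitesti": 100, "Timisoara": 329, "Zerind": 374, "Bucharest": 0}

ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Timisoara": [("Arad", 118)], "Zerind": [("Arad", 75)],
    "Bucharest": [],
}

def a_star(start, goal):
    fringe = [(H_SLD[start], 0, [start])]            # (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)           # smallest f = g + h first
        node = path[-1]
        if node == goal:
            return g, path
        for succ, step in ROADS[node]:
            g2 = g + step                            # cost so far to reach succ
            heapq.heappush(fringe, (g2 + H_SLD[succ], g2, path + [succ]))
    return None

print(a_star("Arad", "Bucharest"))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

Unlike greedy best-first search, A* finds the cheaper route through Rimnicu Vilcea and Pitesti (418) instead of the seemingly direct one through Fagaras (450).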
A* search example
Admissible heuristics
 A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
 An admissible heuristic never overestimates the cost to
reach the goal, i.e., it is optimistic
 Example: hSLD(n) (never overestimates the actual road
distance)
 Theorem: If h(n) is admissible, A* using TREE-SEARCH
is optimal
Optimality of A* (proof)
 Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
We want to prove:
f(n) < f(G2)
(then A* will prefer n over G2)
 f(G2) = g(G2) since h(G2) = 0
 f(G) = g(G) since h(G) = 0
 g(G2) > g(G) since G2 is suboptimal
 f(G2) > f(G) from above
Optimality of A* (proof)
 Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path
to an optimal goal G.
 f(G2) > f(G) copied from last slide
 h(n) ≤ h*(n) since h is admissible (under-estimate)
 g(n) + h(n) ≤ g(n) + h*(n) from above
 f(n) ≤ f(G) since g(n)+h(n)=f(n) & g(n)+h*(n)=f(G)
 f(n) < f(G2) from top line.
Hence: n is preferred over G2
Consistent heuristics
 A heuristic is consistent if for every node n, every successor n' of n
generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
 If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n) = f(n)
f(n’) ≥ f(n)
 Theorem:
If h(n) is consistent, A* using GRAPH-SEARCH is optimal
It’s the triangle
inequality !
Optimality of A*
 A* expands nodes in order of increasing f value.
 Gradually adds "f-contours" of nodes
 Contour i contains all nodes with f ≤ f_i, where f_i < f_(i+1)
Properties of A*
 Complete? Yes (unless there are infinitely many nodes with
f ≤ f(G), i.e. path-cost > ε)
 Time/Space? Exponential,
except if |h(n) − h*(n)| ≤ O(log h*(n))
 Optimal? Yes
 Optimally Efficient: Yes (no algorithm with the same
heuristic is guaranteed to expand fewer nodes)
Memory-Bounded Heuristic Search:
Recursive Best-First Search (RBFS)
 How can we solve the memory problem for A* search?
 Idea: Try something like depth first search, but let’s not
forget everything about the branches we have partially
explored.
 We remember the best f-value we have found so far in the
branch we are deleting.
RBFS:
RBFS changes its mind very often in practice. This is
because f = g + h becomes more accurate (less optimistic)
as we approach the goal. Hence, higher-level nodes have
smaller f-values and will be explored first.
Problem: We should keep in memory whatever we can.
[Figure annotation: RBFS backs up the best alternative f-value
over fringe nodes that are not children — "do I want to back up?"]
Simple Memory Bounded A*
 This is like A*, but when memory is full we delete the worst
node (largest f-value).
 Like RBFS, we remember the best descendent in the branch
we delete.
 If there is a tie (equal f-values), we delete the oldest nodes
first.
 Simple-MBA* finds the optimal reachable solution given the
memory constraint.
 Time can still be exponential.
Admissible heuristics
E.g., for the 8-puzzle:
 h1(n) = number of misplaced tiles
 h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
 h1(S) = ?
 h2(S) = ?
Admissible heuristics
E.g., for the 8-puzzle:
 h1(n) = number of misplaced tiles
 h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
 h1(S) = ? 8
 h2(S) = ? 3+1+2+2+2+3+3+2 = 18
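The two heuristics can be computed directly. The start state below is the standard textbook 8-puzzle example that these values (h1 = 8, h2 = 18) come from; I am assuming that state here, since the slide's figure is not included.

```python
# h1 = number of misplaced tiles, h2 = total Manhattan distance,
# for the assumed start state (0 marks the blank):
#     7 2 4        goal:  0 1 2
#     5 0 6               3 4 5
#     8 3 1               6 7 8

START = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank does not count)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        gi = GOAL.index(tile)                       # goal square of this tile
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total

print(h1(START), h2(START))   # 8 18
```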
Dominance
 If h2(n) ≥ h1(n) for all n (both admissible)
 then h2 dominates h1
 h2 is better for search: it is guaranteed to expand
fewer nodes.
 Typical search costs (average number of nodes
expanded):
 d=12 IDS = 3,644,035 nodes
A*(h1) = 227 nodes
A*(h2) = 73 nodes
 d=24 IDS = too many nodes
A*(h1) = 39,135 nodes
A*(h2) = 1,641 nodes
Relaxed problems
 A problem with fewer restrictions on the actions is
called a relaxed problem
 The cost of an optimal solution to a relaxed problem
is an admissible heuristic for the original problem
 If the rules of the 8-puzzle are relaxed so that a tile
can move anywhere, then h1(n) gives the shortest
solution
 If the rules are relaxed so that a tile can move to any
adjacent square, then h2(n) gives the shortest solution
Local search algorithms
 State space = set of "complete" configurations
 Keep only current node in memory
 Local search is useful for solving optimization
problems:
 Often it is easy to find a solution
 But hard to find the best solution
Example: n-queens
 Put n queens on an n × n board with no two queens on
the same row, column, or diagonal
Hill Climbing
 Generate-and-test + direction to move.
 Heuristic function to estimate how close a given state is to a
goal state.
Hill Climbing
 Hill climbing is an optimization technique for solving
computationally hard problems.
 Used in problems with “the property that the state description
itself contains all the information”
 The algorithm is memory efficient since it does not maintain a
search tree
 Hill climbing attempts to iteratively improve the current state by
means of an evaluation function
 Searching for a goal state = Climbing to the top of a hill
Simple Hill Climbing
Algorithm
1. determine successors of current state
2. choose successor of maximum goodness
3. if goodness of best successor is less than current state's
goodness, stop
4. otherwise make best successor the current state and go
to step 1
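The four steps above can be sketched on a toy problem. The objective and the successor scheme (neighbors x − 1 and x + 1 of an integer x) are my own illustrative choices, not from the lecture.

```python
# Simple hill climbing on goodness(x) = -(x - 7)**2, maximized at x = 7.

def goodness(x):
    return -(x - 7) ** 2

def hill_climb(start):
    current = start
    while True:
        successors = [current - 1, current + 1]      # 1. determine successors
        best = max(successors, key=goodness)         # 2. choose max goodness
        if goodness(best) <= goodness(current):      # 3. no improvement: stop
            return current
        current = best                               # 4. move and repeat

print(hill_climb(0))    # 7
print(hill_climb(20))   # 7
```

On this smooth one-dimensional objective the climb always reaches the peak; the local-maximum and plateau problems discussed next arise on rougher landscapes.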
Hill Climbing (Gradient Search)
Considers all the moves from the current state.
Selects the best one as the next state.
Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its neighbours, but not
better than some other states far away.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all
neighbouring states have the same value.
Hill-climbing search
 Problem: depending on initial state, can get stuck in local maxima
Hill Climbing: Conclusion
Can be very inefficient in a large, rough problem
space.
A more global heuristic can help, at the price of greater
computational complexity.
Often useful when combined with other methods that
get it started in the right general neighbourhood.
Simulated annealing search
function SIM-ANNEALING(problem, schedule)
 current = INITIAL-STATE(problem)
 loop do (for t = 1, 2, …)
 temperature = schedule[t]
 if temperature = 0 then return current
 next = randomly selected successor of current
 diff = VALUE(next) - VALUE(current)
 if diff > 0
 then current = next
 else current = next only with probability e^(diff/temperature)
 end
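A runnable sketch of the algorithm above on a toy one-dimensional problem. The objective, the geometric cooling schedule, and the best-so-far bookkeeping are illustrative assumptions; the acceptance rule for a worse move, e^(diff/temperature), is the one in the pseudocode.

```python
import math
import random

# Simulated annealing: maximize VALUE(x) = -(x - 5)**2 over integers
# 0..10. Uphill moves are always accepted; downhill moves are accepted
# with probability e^(diff / temperature), which shrinks as we cool.

def value(x):
    return -(x - 5) ** 2

def simulated_annealing(start, t0=10.0, cooling=0.95, steps=500):
    random.seed(0)                        # deterministic for the demo
    current = start
    best = start                          # best-so-far bookkeeping (an addition)
    temperature = t0
    for _ in range(steps):
        if temperature < 1e-6:            # schedule exhausted
            break
        nxt = min(10, max(0, current + random.choice([-1, 1])))
        diff = value(nxt) - value(current)
        if diff > 0 or random.random() < math.exp(diff / temperature):
            current = nxt
        if value(current) > value(best):
            best = current
        temperature *= cooling            # geometric cooling schedule
    return best

result = simulated_annealing(0)
print(result, value(result))
```

Early on (high temperature) almost any move is accepted; as the temperature drops, the walk settles near the optimum at x = 5.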
Local beam search
 Keep track of k states rather than just one.
 Start with k randomly generated states.
 At each iteration, all the successors of all k states
are generated.
 If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
Genetic algorithms
 A successor state is generated by combining two parent
states
 Start with k randomly generated states (population)
 A state is represented as a string over a finite alphabet
(often a string of 0s and 1s)
 Evaluation function (fitness function). Higher values for
better states.
 Produce the next generation of states by selection,
crossover, and mutation
 Fitness function: number of non-attacking pairs of queens
(min = 0, max = 8 × 7/2 = 28)
 Probability of being regenerated in the next generation is
proportional to fitness:
 24/(24+23+20+11) = 31%
 23/(24+23+20+11) = 29%, etc.
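The fitness-proportional selection step above is a short computation; the four fitness values are the ones from the slide's 8-queens population.

```python
# Selection probability proportional to fitness for the 8-queens
# population from the slide (fitness = non-attacking pairs of queens).

fitnesses = [24, 23, 20, 11]
total = sum(fitnesses)                      # 78

probs = [f / total for f in fitnesses]
for f, p in zip(fitnesses, probs):
    print(f"{f}/{total} = {p:.0%}")         # 31%, 29%, 26%, 14%
```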
End of Lecture # 2
Software Engineering and Project Management - Software Testing + Agile Method...
Prakhyath Rai
 
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELDEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
ijaia
 
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
upoux
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
KrishnaveniKrishnara1
 
Generative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdfGenerative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdf
mahaffeycheryld
 
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
Yasser Mahgoub
 
Gas agency management system project report.pdf
Gas agency management system project report.pdfGas agency management system project report.pdf
Gas agency management system project report.pdf
Kamal Acharya
 
132/33KV substation case study Presentation
132/33KV substation case study Presentation132/33KV substation case study Presentation
132/33KV substation case study Presentation
kandramariana6
 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
UReason
 
Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...
bijceesjournal
 
An Introduction to the Compiler Designss
An Introduction to the Compiler DesignssAn Introduction to the Compiler Designss
An Introduction to the Compiler Designss
ElakkiaU
 
Computational Engineering IITH Presentation
Computational Engineering IITH PresentationComputational Engineering IITH Presentation
Computational Engineering IITH Presentation
co23btech11018
 
Object Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOADObject Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOAD
PreethaV16
 

Recently uploaded (20)

morris_worm_intro_and_source_code_analysis_.pdf
morris_worm_intro_and_source_code_analysis_.pdfmorris_worm_intro_and_source_code_analysis_.pdf
morris_worm_intro_and_source_code_analysis_.pdf
 
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
 
Mechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdfMechanical Engineering on AAI Summer Training Report-003.pdf
Mechanical Engineering on AAI Summer Training Report-003.pdf
 
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
1FIDIC-CONSTRUCTION-CONTRACT-2ND-ED-2017-RED-BOOK.pdf
 
Welding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdfWelding Metallurgy Ferrous Materials.pdf
Welding Metallurgy Ferrous Materials.pdf
 
TIME TABLE MANAGEMENT SYSTEM testing.pptx
TIME TABLE MANAGEMENT SYSTEM testing.pptxTIME TABLE MANAGEMENT SYSTEM testing.pptx
TIME TABLE MANAGEMENT SYSTEM testing.pptx
 
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
一比一原版(osu毕业证书)美国俄勒冈州立大学毕业证如何办理
 
Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...Software Engineering and Project Management - Software Testing + Agile Method...
Software Engineering and Project Management - Software Testing + Agile Method...
 
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELDEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
 
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
一比一原版(uofo毕业证书)美国俄勒冈大学毕业证如何办理
 
22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
 
Generative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdfGenerative AI Use cases applications solutions and implementation.pdf
Generative AI Use cases applications solutions and implementation.pdf
 
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
2008 BUILDING CONSTRUCTION Illustrated - Ching Chapter 02 The Building.pdf
 
Gas agency management system project report.pdf
Gas agency management system project report.pdfGas agency management system project report.pdf
Gas agency management system project report.pdf
 
132/33KV substation case study Presentation
132/33KV substation case study Presentation132/33KV substation case study Presentation
132/33KV substation case study Presentation
 
Data Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason WebinarData Driven Maintenance | UReason Webinar
Data Driven Maintenance | UReason Webinar
 
Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...Comparative analysis between traditional aquaponics and reconstructed aquapon...
Comparative analysis between traditional aquaponics and reconstructed aquapon...
 
An Introduction to the Compiler Designss
An Introduction to the Compiler DesignssAn Introduction to the Compiler Designss
An Introduction to the Compiler Designss
 
Computational Engineering IITH Presentation
Computational Engineering IITH PresentationComputational Engineering IITH Presentation
Computational Engineering IITH Presentation
 
Object Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOADObject Oriented Analysis and Design - OOAD
Object Oriented Analysis and Design - OOAD
 

Lec#2

  • 1.
  • 2. Lecture # 2 Solving Problems By:- Syed Ali Raza
  • 3. Outline  Problem-solving agents  Problem types  Problem formulation  Example problems  Basic search algorithms
  • 4. Introduction  Goal-based agents can succeed by considering future actions and desirability of their outcomes.  Problem solving agent is a goal-based agent that decides what to do by finding sequences of actions that lead to desirable states
  • 5. Problem solving  We want:  To automatically solve a problem  We need:  A representation of the problem  Algorithms that use some strategy to solve the problem defined in that representation
  • 6. Problem representation  General: – State space: a problem is divided into a set of resolution steps from the initial state to the goal state – Reduction to sub-problems: a problem is arranged into a hierarchy of sub-problems
  • 7. States  A problem is defined by its elements and their relations.  In each instant of the resolution of a problem, those elements have specific descriptors (How to select them?) and relations.  A state is a representation of those elements in a given moment.  Two special states are defined: – Initial state (starting point) – Final state (goal state)
  • 8. State modification: successor function  A successor function is needed to move between different states.  A successor function is a description of possible actions, a set of operators. It is a transformation function on a state representation, which convert it into another state.  The successor function defines a relation of accessibility among states.  Representation of the successor function:  Conditions of applicability  Transformation function
  • 9. State space  The state space is the set of all states reachable from the initial state.  It forms a graph (or map) in which the nodes are states and the arcs between nodes are actions.  A path in the state space is a sequence of states connected by a sequence of actions.  The solution of the problem is part of the map formed by the state space.
  • 10. Problem solution  A solution in the state space is a path from the initial state to a goal state.  Path/solution cost: a function that assigns a numeric cost to each path – the cost of applying the operators to the states.  Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.  Solutions sought: any, an optimal one, or all. Cost matters to different degrees depending on the problem and the type of solution sought.
  • 11. Problem description  Components:  State space (explicitly or implicitly defined)  Initial state  Goal state (or the conditions it has to fulfill)  Available actions (operators to change state)  Restrictions (e.g., cost)  Elements of the domain which are relevant to the problem (e.g., incomplete knowledge of the starting point)  Type of solution:  Sequence of operators or goal state  Any, an optimal one (cost definition needed), all
  • 12. Problem solving agents  Intelligent agents are supposed to maximize their performance measure.  This can be simplified if the agent can adopt a goal and aim at satisfying it.  Goal formulation, based on the current situation and the agent’s performance measure, is the first step in problem solving  Goal is a set of states. The agent’s task is to find out which sequence of actions will get it to a goal state  Problem formulation is the process of deciding what sorts of actions and states to consider, given a goal
  • 13. Contd..  An agent with several immediate options of unknown value can decide what to do by first examining different possible sequences of actions that lead to states of known value, and then choosing the best sequence.  Looking for such a sequence is called search.  A search algorithm takes a problem as input and returns a solution in the form of an action sequence.  Once a solution is found, the actions it recommends can be carried out – the execution phase.
  • 14. Contd..  This is the “formulate, search, execute” design for the agent.  After formulating a goal and a problem to solve, the agent calls a search procedure to solve it.  It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically the first action in the sequence), then removing that step from the sequence.  Once the solution has been executed, the agent formulates a new goal.
  • 16. Problem types  Deterministic, fully observable  single-state problem  Agent knows exactly which state it will be in;  Non-observable  sensorless problem (conformant problem)  Agent may have no idea where it is;  Nondeterministic and/or partially observable  contingency problem  percepts provide new information about the current state  Unknown state space  exploration problem
  • 17. Example: Romania On holiday in Romania; currently in Arad. Flight leaves tomorrow from Bucharest  Formulate goal:  be in Bucharest  Formulate problem:  states: various cities  actions: drive between cities  Find solution:  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  • 19. Well-defined problems and solutions  A problem can be defined formally by four components  Initial state that the agent starts in – e.g. In(Arad)  A description of the possible actions available to the agent – Successor function – returns a set of <action, successor> pairs – e.g. {<Go(Sibiu),In(Sibiu)>, <Go(Timisoara),In(Timisoara)>, <Go(Zerind), In(Zerind)>} – The initial state and the successor function define the state space (a graph in which the nodes are states and the arcs between nodes are actions). A path in state space is a sequence of states connected by a sequence of actions  Goal test determines whether a given state is a goal state – e.g. {In(Bucharest)}  Path cost function that assigns a numeric cost to each path. The cost of a path can be described as the sum of the costs of the individual actions along the path – step cost – e.g. time to drive to Bucharest
  • 20. Single-state problem formulation A problem is defined by four items: 1. initial state e.g., "at Arad" 2. actions or successor function S(x) = set of action–state pairs  e.g., S(Arad) = {<Arad → Zerind, Zerind>, … } 3. goal test, can be  explicit, e.g., x = "at Bucharest" 4. path cost (additive)  e.g., sum of distances, number of actions executed, etc.  step cost, assumed to be ≥ 0  A solution is a sequence of actions leading from the initial state to a goal state
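As an illustration only (not part of the slides), the four-item formulation can be sketched in Python. The `Problem` class, the tiny road map, and the unit step cost below are all assumptions made for the example:

```python
# A minimal sketch of the four-component problem definition.
# The city names and roads are a small hypothetical subset of the
# Romania map used in the lecture; costs are set to 1 for simplicity.
class Problem:
    def __init__(self, initial, goal, successors, step_cost):
        self.initial = initial        # initial state, e.g. "Arad"
        self.goal = goal              # goal state, e.g. "Bucharest"
        self.successors = successors  # state -> list of (action, state)
        self.step_cost = step_cost    # (state, action, state) -> cost

    def goal_test(self, state):
        return state == self.goal

roads = {
    "Arad": [("Go(Sibiu)", "Sibiu"), ("Go(Zerind)", "Zerind")],
    "Sibiu": [("Go(Fagaras)", "Fagaras")],
    "Fagaras": [("Go(Bucharest)", "Bucharest")],
    "Zerind": [], "Bucharest": [],
}
problem = Problem("Arad", "Bucharest",
                  lambda s: roads[s],
                  lambda s, a, s2: 1)
```

Any of the search algorithms later in the lecture can be written against this interface: they only need the initial state, the successor function, and the goal test.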
  • 21. Example: The 8-puzzle  states?  actions?  goal test?  path cost?
  • 22. Example: The 8-puzzle  states? locations of tiles  actions? move blank left, right, up, down  goal test? = goal state (given)  path cost? 1 per move
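The answers above can be made concrete in a short sketch. The tuple encoding (9 tiles read row by row, with 0 for the blank) and the particular goal layout are assumptions for illustration:

```python
# 8-puzzle formulation sketch: a state is a tuple of 9 tiles
# (0 = blank) read row by row; actions slide the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # an assumed goal layout

MOVES = {"up": -3, "down": 3, "left": -1, "right": 1}

def successors(state):
    b = state.index(0)  # position of the blank, 0..8
    result = []
    for action, delta in MOVES.items():
        t = b + delta
        if t < 0 or t > 8:
            continue
        # left/right moves must stay on the same row
        if delta in (-1, 1) and t // 3 != b // 3:
            continue
        s = list(state)
        s[b], s[t] = s[t], s[b]  # slide the tile into the blank
        result.append((action, tuple(s)))
    return result

def goal_test(state):
    return state == GOAL
```

The path cost of 1 per move is implicit: every successor is one action away, so the cost of a path is its length.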
  • 23. Tree search algorithms  Basic idea:  offline, simulated exploration of state space by generating successors of already-explored states (a.k.a.~expanding states)
  • 27. Search strategies  A search strategy is defined by picking the order of node expansion  Strategies are evaluated along the following dimensions:  completeness: does it always find a solution if one exists?  time complexity: number of nodes generated  space complexity: maximum number of nodes in memory  optimality: does it always find a least-cost solution?  Time and space complexity are measured in terms of  b: maximum branching factor of the search tree  d: depth of the least-cost solution  m: maximum depth of the state space (may be ∞)
  • 29. Uninformed search strategies  Uninformed search strategies use only the information available in the problem definition  Breadth-first search  Uniform-cost search  Depth-first search  Depth-limited search  Iterative deepening search
  • 30. Breadth-first search  The root node is expanded first, then all the successors of the root node, and their successors, and so on  In general, all the nodes at a given depth in the search tree are expanded before any nodes at the next level are expanded  Expand shallowest unexpanded node  Implementation: – fringe is a FIFO queue – the nodes that are visited first will be expanded first – all newly generated successors will be put at the end of the queue – shallow nodes are expanded before deeper nodes
  • 31. A B C D E F G H I J K L M N O P Q R S T U Initial state Goal state Fringe: A (FIFO) Successors: B,C,D Visited: Breadth-first Search The fringe is the data structure we use to store all of the nodes that have been generated
  • 32. A B C D E F G H I J K L M N O P Q R S T U Fringe: B,C,D (FIFO) Successors: E,F Visited: A Next node Breadth-first Search
  • 33. A B C D E F G H I J K L M N O P Q R S T U Fringe: C,D,E,F (FIFO) Successors: G,H Visited: A, B Next node Breadth-first Search
  • 34. A B C D E F G H I J K L M N O P Q R S T U Fringe: D,E,F,G,H (FIFO) Successors: I,J Visited: A, B, C Next node Breadth-first Search
  • 35. A B C D E F G H I J K L M N O P Q R S T U Fringe: E,F,G,H,I,J (FIFO) Successors: K,L Visited: A, B, C, D Next node Breadth-first Search
  • 36. A B C D E F G H I J K L M N O P Q R S T U Fringe: F,G,H,I,J,K,L (FIFO) Successors: M Visited: A, B, C, D, E Next node Breadth-first Search
  • 37. A B C D E F G H I J K L M N O P Q R S T U Fringe: G,H,I,J,K,L,M (FIFO) Successors: N Visited: A, B, C, D, E, F Next node Breadth-first Search
  • 38. A B C D E F G H I J K L M N O P Q R S T U Fringe: H,I,J,K,L,M,N (FIFO) Successors: O Visited: A, B, C, D, E, F, G Next node Breadth-first Search
  • 39. A B C D E F G H I J K L M N O P Q R S T U Fringe: I,J,K,L,M,N,O (FIFO) Successors: P,Q Visited: A, B, C, D, E, F, G, H Next node Breadth-first Search
  • 40. A B C D E F G H I J K L M N O P Q R S T U Fringe: J,K,L,M,N,O,P,Q (FIFO) Successors: R Next node Visited: A, B, C, D, E, F, G, H, I Breadth-first Search
  • 41. A B C D E F G H I J K L M N O P Q R S T U Fringe: K,L,M,N,O,P,Q,R (FIFO) Successors: S Next node Visited: A, B, C, D, E, F, G, H, I, J Breadth-first Search
  • 42. A B C D E F G H I J K L M N O P Q R S T U Fringe: L,M,N,O,P,Q,R,S (FIFO) Successors: T Next node Visited: A, B, C, D, E, F, G, H, I, J, K Breadth-first Search
  • 43. A B C D E F G H I J K L M N O P Q R S T U Fringe: M,N,O,P,Q,R,S,T (FIFO) Successors: Next node Visited: A, B, C, D, E, F, G, H, I, J, K, L Breadth-first Search
  • 44. A B C D E F G H I J K L M N O P Q R S T U Fringe: N,O,P,Q,R,S,T (FIFO) Successors: Next node Visited: A, B, C, D, E, F, G, H, I, J, K, L, M Goal state achieved Breadth-first Search
  • 45. Breadth-first Search Algorithm BREADTH: Breadth-first search in state space
    Let fringe be a list containing the initial state
    Loop
      if fringe is empty, return failure
      Node ← remove-first(fringe)
      if Node is a goal, return the path from the initial state to Node
      else generate all successors of Node and merge the newly generated
           nodes into fringe (add generated nodes to the back of fringe)
    End Loop
  • 46. Properties of breadth-first search  Complete? Yes (if b is finite)  Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))  Space? O(b^(d+1)) (keeps every node in memory)  Optimal? Yes (if cost = 1 per step)  Space is the bigger problem (more than time)
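The BREADTH algorithm above can be transcribed almost directly into Python; the toy graph below is hypothetical, chosen only to show the FIFO behaviour:

```python
from collections import deque

# Breadth-first search: the fringe is a FIFO queue of paths,
# so shallow nodes are expanded before deeper ones.
def breadth_first_search(initial, goal_test, successors):
    fringe = deque([[initial]])  # queue of paths from the initial state
    visited = {initial}
    while fringe:
        path = fringe.popleft()  # remove-first (FIFO)
        node = path[-1]
        if goal_test(node):
            return path          # path from initial state to the goal
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])  # add to the back
    return None                  # failure

# Hypothetical example graph (in the spirit of the slides' A..U tree).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
path = breadth_first_search("A", lambda n: n == "E", lambda n: graph[n])
```

Storing whole paths in the queue is what makes the memory cost O(b^(d+1)) in the worst case, matching the property noted above.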
  • 47. Uniform-cost search (UCS)  Uniform cost search is a search algorithm used to traverse, and find the shortest path in weighted trees and graphs.  Uniform Cost Search or UCS begins at a root node and will continually expand nodes, taking the node with the smallest total cost from the root until it reaches the goal state.  Uniform cost search doesn't care about how many steps a path has, only the total cost of the path.  UCS with all path costs equal to one is identical to breadth first search.
  • 48. Uniform Cost Search S A B C D 1 5 15 5 5 10 Similar to BFS except that it sorts (ascending order) the nodes in the fringe according to the cost of the node. where cost is the path cost.
  • 49. S A B C D 1 5 15 5 5 10 Uniform Cost Search Fringe = [S0] Next Node=Head of Fringe=S, S is not goal Successor(S)={C,B,A}=expand(S) but sort them according to path cost. Updated Fringe=[A1,B5,C15] Queue Queue
  • 50. S A B C 1 5 15 D 5 5 10 Uniform Cost Search Fringe = [A1,B5,C15] Next Node=Head of Fringe=A, A is not goal Successor(A)={D}=expand(A) Sort the queue according to path cost. Updated Fringe=[B5,D11,C15]
  • 51. DS A B C 1 5 15 5 5 10 Uniform Cost Search Fringe = [B5,D11,C15] Next Node=Head of Fringe=B, B is not goal Successor(B)={D}=expand(B) Sort the queue according to path cost. Updated Fringe=[D10,D11,C15]
  • 52. Uniform Cost Search Fringe = [D10,D11,C15] Next Node=Head of Fringe=D, D is a GOAL (cost 10 = 5+5), reached via the path S–B–D S A B C D 1 5 15 5 5 10 Always finds the cheapest solution
  • 53. Uniform Cost Search (example search tree with nodes A–H)
  • 54. Uniform Cost Search (the same tree with nodes expanded in cost order)
  • 56. Uniform-cost search  Expand least-cost unexpanded node  Implementation:  fringe = queue ordered by path cost  Equivalent to breadth-first if step costs are all equal  Complete? Yes, if step cost ≥ ε  Time? # of nodes with g ≤ cost of optimal solution, O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution  Space? # of nodes with g ≤ cost of optimal solution, O(b^(1 + ⌊C*/ε⌋))  Optimal? Yes – nodes expanded in increasing order of g(n)
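The S/A/B/C/D example from the preceding slides can be replayed with a small priority-queue sketch (a hypothetical implementation, not code from the slides):

```python
import heapq

# Uniform-cost search: the fringe is a priority queue ordered by
# path cost g(n), so the cheapest frontier node is expanded first.
def uniform_cost_search(start, goal, edges):
    fringe = [(0, [start])]  # entries are (path cost, path)
    while fringe:
        cost, path = heapq.heappop(fringe)  # cheapest node first
        node = path[-1]
        if node == goal:
            return cost, path
        for child, step in edges.get(node, []):
            heapq.heappush(fringe, (cost + step, path + [child]))
    return None

# The weighted graph from the slides: S->A=1, S->B=5, S->C=15,
# A->D=10, B->D=5.
edges = {
    "S": [("A", 1), ("B", 5), ("C", 15)],
    "A": [("D", 10)],
    "B": [("D", 5)],
}
cost, path = uniform_cost_search("S", "D", edges)
```

As slide 52 shows, the goal D is reached at cost 10 via S–B–D, even though the S–A branch looked cheaper at first (cost 1): UCS cares only about total path cost, not the number of steps.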
  • 57. Depth First Search - Method  Expand Root Node First  Explore one branch of the tree before exploring another branch  If a leaf node does not represent a goal state, search backtracks up to the next highest node that has an unexplored path
  • 58. DFS  Depth-first search (DFS) is an algorithm for traversing or searching a tree or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
  • 59. DFS  DFS is an uninformed search that starts from the root node of the search tree and goes deeper and deeper until a goal node is found, or until it hits a node that has no children. Then the search backtracks, returning to the most recent node it hasn't finished exploring.
  • 60. A B C D E F G H I J K L M N O P Q R S T U Initial state Goal state Fringe: A (LIFO) Successors: B,C,D Visited: Depth-First Search
  • 61. A B C D E F G H I J K L M N O P Q R S T U Fringe: B,C,D (LIFO) Successors: E,F Visited: A Depth-First Search
  • 62. A B C D E F G H I J K L M N O P Q R S T U Fringe: E,F,C,D (LIFO) Successors: K,L Visited: A, B Depth-First Search
  • 63. A B C D E F G H I J K L M N O P Q R S T U Fringe: K,L,F,C,D (LIFO) Successors: S Visited: A, B, E Depth-First Search
  • 64. A B C D E F G H I J K L M N O P Q R S T U Fringe: S,L,F,C,D (LIFO) Successors: Visited: A, B, E, K Depth-First Search
  • 65. A B C D E F G H I J K L M N O P Q R S T U Fringe: L,F,C,D (LIFO) Successors: T Backtracking Visited: A, B, E, K, S Depth-First Search
  • 66. A B C D E F G H I J K L M N O P Q R S T U Fringe: T,F,C,D (LIFO) Successors: Visited: A, B, E, K, S, L Depth-First Search
  • 67. A B C D E F G H I J K L M N O P Q R S T U Fringe: F,C,D (LIFO) Successors: M Depth-First Search Backtracking Visited: A, B, E, K, S, L, T
  • 68. A B C D E F G H I J K L M N O P Q R S T U Fringe: M,C,D (LIFO) Successors: Visited: A, B, E, K, S, L, T, F Depth-First Search
  • 69. A B C D E F G H I J K L M N O P Q R S T U Fringe: C,D (LIFO) Successors: G,H Backtracking Visited: A, B, E, K, S, L, T, F, M Depth-First Search
  • 70. A B C D E F G H I J K L M N O P Q R S T U Fringe: G,H,D (LIFO) Successors: N Visited: A, B, E, K, S, L, T, F, M, C Depth-First Search
  • 71. A B C D E F G H I J K L M N O P Q R S T U Fringe: N,H,D (LIFO) Successors: Finished search U Goal state achieved Visited: A, B, E, K, S, L, T, F, M, C, G Depth-First Search
  • 72. Depth-First Search Algorithm
    Let fringe be a list containing the initial state
    Loop
      if fringe is empty, return failure
      Node ← remove-first(fringe)
      if Node is a goal, return the path from the initial state to Node
      else generate all successors of Node and merge the newly generated
           nodes into fringe (add generated nodes to the front of fringe)
    End Loop
  • 73. Properties of depth-first search  Complete? No: fails in infinite-depth spaces and spaces with loops  Modify to avoid repeated states along the path  complete in finite spaces  Time? O(b^m): terrible if m is much larger than d  Space? O(bm), i.e., linear space!  Optimal? No
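DFS uses the same skeleton as BREADTH, with the fringe as a LIFO stack: newly generated successors go on the front, so one branch is explored fully before backtracking. A sketch on the same hypothetical graph used earlier:

```python
# Depth-first search: the fringe is a stack of paths (LIFO).
def depth_first_search(initial, goal_test, successors):
    fringe = [[initial]]  # stack of paths
    while fringe:
        path = fringe.pop()  # remove-first (LIFO: most recent path)
        node = path[-1]
        if goal_test(node):
            return path
        # reversed() so the first-listed successor is expanded first
        for child in reversed(successors(node)):
            if child not in path:  # avoid loops along the current path
                fringe.append(path + [child])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
path = depth_first_search("A", lambda n: n == "E", lambda n: graph[n])
```

The `child not in path` check is the "avoid repeated states along the path" modification noted above; without it, DFS can loop forever in graphs with cycles.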
  • 74. Depth limited search  Like depth-first search, but the search is limited to a predefined depth.  The depth of each state is recorded as it is generated. When picking the next state to expand, only those with depth less than or equal to the current depth limit are expanded.  Once all the nodes of a given depth are explored, the current depth limit is incremented.
  • 75. Depth-limited search = depth-first search with depth limit l, i.e., nodes at depth l have no successors  Recursive implementation:
    1. Determine the vertex where the search should start and assign the maximum search depth
    2. Check if the current vertex is the goal state
       If not: do nothing
       If yes: return
    3. Check if the current vertex is within the maximum search depth
       If not: do nothing
       If yes: expand the vertex and save all of its successors in a stack
    4. Call DLS recursively for all vertices of the stack and go back to Step 2
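The recursive steps above can be sketched directly; the graph and goal are assumptions for the example:

```python
# Depth-limited search: depth-first search that treats nodes at
# the depth limit as having no successors. Returns a path to the
# goal, or None on cutoff/failure.
def depth_limited_search(node, goal_test, successors, limit):
    if goal_test(node):
        return [node]
    if limit == 0:
        return None  # depth cutoff reached; do not expand further
    for child in successors(node):
        result = depth_limited_search(child, goal_test, successors,
                                      limit - 1)
        if result is not None:
            return [node] + result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
succ = lambda n: graph[n]
```

With limit 1 the goal D (at depth 2) is cut off; with limit 2 it is found. This is exactly the failure mode iterative deepening fixes by retrying with larger limits.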
  • 76. Iterative deepening search
    c ← 0
    until solution found do
      DFS with depth cutoff c
      c ← c + 1
  • 81. Iterative deepening search  Number of nodes generated in a depth-limited search to depth d with branching factor b: N_DLS = b^0 + b^1 + b^2 + … + b^(d−2) + b^(d−1) + b^d  Number of nodes generated in an iterative deepening search to depth d with branching factor b: N_IDS = (d+1)b^0 + d·b^1 + (d−1)b^2 + … + 3b^(d−2) + 2b^(d−1) + 1b^d  For b = 10, d = 5:  N_DLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111  N_IDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456  Overhead = (123,456 − 111,111)/111,111 = 11%
  • 82. Properties of iterative deepening search  Complete? Yes  Time? (d+1)b^0 + d·b^1 + (d−1)b^2 + … + b^d = O(b^d)  Space? O(bd)  Optimal? Yes, if step cost = 1
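Iterative deepening is just the "c ← c + 1" loop around depth-limited DFS. A sketch (the depth cap on the loop is an assumption added so the function terminates on unsolvable problems):

```python
# Depth-limited DFS, reused by the iterative-deepening loop below.
def depth_limited(node, goal_test, successors, limit):
    if goal_test(node):
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        found = depth_limited(child, goal_test, successors, limit - 1)
        if found is not None:
            return [node] + found
    return None

# Iterative deepening: DFS with cutoff c = 0, 1, 2, ...
def iterative_deepening_search(initial, goal_test, successors,
                               max_depth=50):
    for c in range(max_depth + 1):  # c = c + 1 each round
        result = depth_limited(initial, goal_test, successors, c)
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
path = iterative_deepening_search("A", lambda n: n == "D",
                                  lambda n: graph[n])
```

Re-expanding shallow nodes each round is what the ~11% overhead calculation above quantifies: the wasted work is small because the deepest level dominates the node count.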
  • 85. Outline  Best-first search  Greedy best-first search  A* search  Heuristics  Local search algorithms  Hill-climbing search  Simulated annealing search  Local beam search  Genetic algorithms
  • 86. Best-first search  Idea: use an evaluation function f(n) for each node  f(n) provides an estimate for the total cost.  Expand the node n with the smallest f(n).  Implementation: Order the nodes in the fringe in increasing order of cost.  Special cases:  greedy best-first search  A* search
  • 88. Greedy best-first search  f(n) = estimate of cost from n to goal  e.g., fSLD(n) = straight-line distance from n to Bucharest  Greedy best-first search expands the node that appears to be closest to goal.
  • 93. Properties of greedy best-first search  Complete? No – can get stuck in loops.  Time? O(b^m), but a good heuristic can give dramatic improvement  Space? O(b^m) – keeps all nodes in memory  Optimal? No
  • 94. A* search  Idea: avoid expanding paths that are already expensive  Evaluation function f(n) = g(n) + h(n)  g(n) = cost so far to reach n  h(n) = estimated cost from n to goal  f(n) = estimated total cost of path through n to goal
  • 101. Admissible heuristics  A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.  An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic  Example: hSLD(n) (never overestimates the actual road distance)  Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
  • 102. Optimality of A* (proof)  Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G. We want to prove: f(n) < f(G2) (then A* will prefer n over G2)  f(G2) = g(G2) since h(G2) = 0  f(G) = g(G) since h(G) = 0  g(G2) > g(G) since G2 is suboptimal  f(G2) > f(G) from above
  • 103. Optimality of A* (proof)  Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.  f(G2) > f(G) copied from last slide  h(n) ≤ h*(n) since h is admissible (under-estimate)  g(n) + h(n) ≤ g(n) + h*(n) from above  f(n) ≤ f(G) since g(n)+h(n)=f(n) & g(n)+h*(n)=f(G)  f(n) < f(G2) from top line. Hence: n is preferred over G2
  • 104. Consistent heuristics  A heuristic is consistent if for every node n, every successor n' of n generated by any action a, h(n) ≤ c(n,a,n') + h(n')  If h is consistent, we have f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') ≥ g(n) + h(n) = f(n) f(n’) ≥ f(n)  Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal It’s the triangle inequality !
  • 105. Optimality of A*  A* expands nodes in order of increasing f value.  Gradually adds "f-contours" of nodes  Contour i contains all nodes with f ≤ f_i, where f_i < f_(i+1)
  • 106. Properties of A*  Complete? Yes (unless there are infinitely many nodes with f ≤ f(G), i.e. path-cost > ε)  Time/Space? Exponential, except if |h(n) − h*(n)| ≤ O(log h*(n))  Optimal? Yes  Optimally Efficient: Yes (no algorithm with the same heuristic is guaranteed to expand fewer nodes)
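A* with f(n) = g(n) + h(n) can be sketched on a small assumed graph. The edges and the heuristic table below are hypothetical, chosen so that h is admissible (h never exceeds the true remaining cost, and h(goal) = 0):

```python
import heapq

# A* search: expand the frontier node with the smallest
# f = g (cost so far) + h (estimated cost to goal).
def a_star(start, goal, edges, h):
    fringe = [(h(start), 0, [start])]  # entries are (f, g, path)
    while fringe:
        f, g, path = heapq.heappop(fringe)  # smallest f first
        node = path[-1]
        if node == goal:
            return g, path
        for child, step in edges.get(node, []):
            g2 = g + step
            heapq.heappush(fringe, (g2 + h(child), g2, path + [child]))
    return None

# Assumed toy graph and admissible heuristic estimates.
edges = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}.get
cost, path = a_star("S", "G", edges, h)
```

Here the cheap-looking first step S–A (g = 1) is passed over because f(A) = 1 + 5 = 6 exceeds f along the S–B route, so A* returns the optimal path S–B–G at cost 5, as the optimality argument above guarantees for admissible h.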
  • 107. Memory Bounded Heuristic Search: Recursive Best-First Search (RBFS)  How can we solve the memory problem for A* search?  Idea: Try something like depth-first search, but let’s not forget everything about the branches we have partially explored.  We remember the best f-value we have found so far in the branch we are deleting.
  • 108. RBFS: RBFS changes its mind very often in practice. This is because the f = g + h values become more accurate (less optimistic) as we approach the goal. Hence, higher-level nodes have smaller f-values and will be explored first. Problem: We should keep in memory whatever we can. best alternative over fringe nodes, which are not children: do I want to back up?
  • 109. Simple Memory Bounded A*  This is like A*, but when memory is full we delete the worst node (largest f-value).  Like RBFS, we remember the best descendant in the branch we delete.  If there is a tie (equal f-values), we delete the oldest nodes first.  Simple MBA* finds the optimal reachable solution given the memory constraint.  Time can still be exponential.
  • 110. Admissible heuristics E.g., for the 8-puzzle:  h1(n) = number of misplaced tiles  h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)  h1(S) = ?  h2(S) = ?
  • 111. Admissible heuristics E.g., for the 8-puzzle:  h1(n) = number of misplaced tiles  h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile)  h1(S) = ? 8  h2(S) = ? 3+1+2+2+2+3+3+2 = 18
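The two heuristics above can be written in a few lines for states encoded as tuples of 9 tiles read row by row (0 for the blank); this encoding and the goal layout are assumptions for the sketch:

```python
# Two admissible 8-puzzle heuristics.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # an assumed goal layout

def h1(state):
    # h1: number of misplaced tiles (the blank is not counted)
    return sum(1 for i, t in enumerate(state)
               if t != 0 and t != GOAL[i])

def h2(state):
    # h2: total Manhattan distance of each tile from its goal square
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)
        total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return total
```

For every tile, the Manhattan distance is at least 1 whenever the tile is misplaced, so h2(n) ≥ h1(n) for all n — the dominance relation discussed on the next slide.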
  • 112. Dominance  If h2(n) ≥ h1(n) for all n (both admissible)  then h2 dominates h1  h2 is better for search: it is guaranteed to expand fewer nodes.  Typical search costs (average number of nodes expanded):  d=12 IDS = 3,644,035 nodes A*(h1) = 227 nodes A*(h2) = 73 nodes  d=24 IDS = too many nodes A*(h1) = 39,135 nodes A*(h2) = 1,641 nodes
  • 113. Relaxed problems  A problem with fewer restrictions on the actions is called a relaxed problem  The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem  If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution  If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution
  • 114. Local search algorithms  State space = set of "complete" configurations  Keep only current node in memory  Local search is useful for solving optimization problems:  Often it is easy to find a solution  But hard to find the best solution
  • 115. Example: n-queens  Put n queens on an n × n board with no two queens on the same row, column, or diagonal
  • 116. Hill Climbing  Generate-and-test + direction to move.  Heuristic function to estimate how close a given state is to a goal state.
  • 117. Hill Climbing  Hill climbing is an optimization technique for solving computationally hard problems.  Used in problems with “the property that the state description itself contains all the information”  The algorithm is memory efficient since it does not maintain a search tree  Hill climbing attempts to iteratively improve the current state by means of an evaluation function  Searching for a goal state = Climbing to the top of a hill
  • 118. Simple Hill Climbing Algorithm 1. determine successors of current state 2. choose successor of maximum goodness 3. if goodness of best successor is less than current state's goodness, stop 4. otherwise make best successor the current state and go to step 1
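The four steps above can be sketched as steepest-ascent hill climbing. The toy objective f(x) = −(x − 3)² and the integer neighbourhood are assumptions chosen only to make the sketch runnable.

```python
def hill_climb(state, successors, value):
    """Steepest-ascent hill climbing: stop when no successor improves."""
    while True:
        neighbours = successors(state)            # step 1: generate successors
        if not neighbours:
            return state
        best = max(neighbours, key=value)         # step 2: pick the best one
        if value(best) <= value(state):           # step 3: no uphill move left
            return state
        state = best                              # step 4: move and repeat

# Toy example (an assumption): maximise f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
result = hill_climb(0, lambda x: [x - 1, x + 1], f)
print(result)  # climbs from 0 to the maximum at x = 3
```

On this smooth one-dimensional function the climb always succeeds; the following slides show why rough landscapes (local maxima, plateaux) break this.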
  • 119. Hill Climbing (Gradient Search) Considers all the moves from the current state. Selects the best one as the next state.
  • 120. Hill Climbing: Disadvantages Local maximum A state that is better than all of its neighbours, but not better than some other states far away.
  • 121. Hill Climbing: Disadvantages Plateau A flat area of the search space in which all neighbouring states have the same value.
  • 122. Hill-climbing search  Problem: depending on initial state, can get stuck in local maxima
  • 123. Hill Climbing: Conclusion  Can be very inefficient in a large, rough problem space.  A more global heuristic may help, at the price of greater computational complexity.  Often useful when combined with other methods that get it started in the right general neighbourhood.
  • 124. Simulated annealing search function SIMULATED-ANNEALING(problem, schedule)  current = INITIAL-STATE(problem)  for t = 1 to ∞ do  temperature = schedule[t]  if temperature = 0 then return current  next = randomly selected successor of current  ΔE = VALUE(next) − VALUE(current)  if ΔE > 0 then current = next  else current = next only with probability e^(ΔE/temperature)
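A minimal, runnable sketch of the pseudocode above, assuming a maximisation problem; the toy objective f(x) = −x² and the linear cooling schedule are assumptions, not part of the slides.

```python
import math
import random

def simulated_annealing(state, successor, value, schedule):
    """Sketch of simulated annealing; `schedule` maps step t to a temperature."""
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:
            return state
        nxt = successor(state)
        diff = value(nxt) - value(state)
        # Always accept uphill moves; accept downhill moves with
        # probability e^(diff / T), which shrinks as T cools.
        if diff > 0 or random.random() < math.exp(diff / T):
            state = nxt
        t += 1

# Toy run (assumed problem): maximise f(x) = -x^2 over the integers.
random.seed(0)
best = simulated_annealing(
    10,
    lambda x: x + random.choice([-1, 1]),
    lambda x: -x * x,
    lambda t: max(0.0, 5.0 * (1 - t / 1000)),  # linear cooling to zero
)
print(best)
```

Early on (high T) bad moves are accepted almost freely, which lets the search escape local maxima; as T falls it behaves more and more like hill climbing.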
  • 125. Local beam search  Keep track of k states rather than just one.  Start with k randomly generated states.  At each iteration, all the successors of all k states are generated.  If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
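The steps above can be sketched as follows; the toy problem (finding x = 7 on the integer line from three starting points) is an assumption to make the sketch runnable.

```python
import heapq

def beam_search(starts, successors, value, is_goal, steps=100):
    """Local beam search keeping k = len(starts) states per iteration."""
    k = len(starts)
    states = list(starts)
    for _ in range(steps):
        for s in states:
            if is_goal(s):
                return s
        # Generate the successors of ALL k states, pooled together ...
        pool = [c for s in states for c in successors(s)]
        if not pool:
            break
        # ... and keep only the k best successors from the complete list.
        states = heapq.nlargest(k, pool, key=value)
    return max(states, key=value)

# Toy example (assumed): step toward x = 7 from several starts.
result = beam_search(
    [0, 20, -5],
    lambda x: [x - 1, x + 1],
    lambda x: -abs(x - 7),
    lambda x: x == 7,
)
print(result)  # 7
```

Unlike running k independent hill climbs, the shared pool lets promising branches absorb the whole beam: here the start at 0 quickly crowds out the starts at 20 and −5.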
  • 126. Genetic algorithms  A successor state is generated by combining two parent states  Start with k randomly generated states (population)  A state is represented as a string over a finite alphabet (often a string of 0s and 1s)  Evaluation function (fitness function). Higher values for better states.  Produce the next generation of states by selection, crossover, and mutation
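Crossover and mutation on string-encoded states might look like the sketch below; the parent states, alphabet, and mutation rate are assumptions for illustration, not values from the slides.

```python
import random

def crossover(x, y):
    """One-point crossover: child takes a prefix of x and the suffix of y."""
    c = random.randrange(1, len(x))   # crossover point, never 0 or len(x)
    return x[:c] + y[c:]

def mutate(state, alphabet, rate=0.1):
    """Replace each position with a random symbol with probability `rate`."""
    return [random.choice(alphabet) if random.random() < rate else s
            for s in state]

# Assumed parents: 8-queens states encoded as row numbers, one per column.
random.seed(1)
child = crossover([3, 2, 7, 5, 2, 4, 1, 1], [2, 4, 4, 1, 5, 1, 2, 4])
mutant = mutate(child, alphabet=list(range(1, 9)), rate=0.1)
print(child)   # prefix from parent 1, suffix from parent 2
print(mutant)
```

Selection (not shown) would draw parents from the population with probability proportional to fitness, as the next slide illustrates.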
  • 127. Example: 8-queens  Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)  A state's probability of being selected for the next generation is proportional to its fitness, e.g. 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
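The fitness function above can be written directly. The board encoding (state[i] = row of the queen in column i) and the example solution board are assumptions for illustration.

```python
from itertools import combinations

def fitness(state):
    """Non-attacking pairs of queens; state[i] = row of the queen in column i."""
    n = len(state)
    attacking = sum(
        1
        for i, j in combinations(range(n), 2)
        if state[i] == state[j]                  # same row
        or abs(state[i] - state[j]) == j - i     # same diagonal
    )
    return n * (n - 1) // 2 - attacking          # 28 - attacking for n = 8

# A known 8-queens solution (rows 1-8, one queen per column): all 28 pairs
# are non-attacking, so its fitness is the maximum.
perfect = (2, 4, 6, 8, 3, 1, 7, 5)
print(fitness(perfect))  # 28
```

At the other extreme, a state with all eight queens in the same row has all 28 pairs attacking and fitness 0, so it would almost never be selected for reproduction.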